id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2302.14556 | An Alternative to Cells for Selective Execution of Data Science
Pipelines | Data Scientists often use notebooks to develop Data Science (DS) pipelines,
particularly since they allow to selectively execute parts of the pipeline.
However, notebooks for DS have many well-known flaws. We focus on the following
ones in this paper: (1) Notebooks can become littered with code cells that are
not part of the main DS pipeline but exist solely to make decisions (e.g.
listing the columns of a tabular dataset). (2) While users are allowed to
execute cells in any order, not every ordering is correct, because a cell can
depend on declarations from other cells. (3) After making changes to a cell,
this cell and all cells that depend on changed declarations must be rerun. (4)
Changes to external values necessitate partial re-execution of the notebook.
(5) Since cells are the smallest unit of execution, code that is unaffected by
changes, can inadvertently be re-executed.
To solve these issues, we propose to replace cells as the basis for the
selective execution of DS pipelines. Instead, we suggest populating a
context-menu for variables with actions fitting their type (like listing
columns if the variable is a tabular dataset). These actions are executed based
on a data-flow analysis to ensure dependencies between variables are respected
and results are updated properly after changes. Our solution separates pipeline
code from decision making code and automates dependency management, thus
reducing clutter and the risk of making errors. | Lars Reimann, Günter Kniesel-Wünsche | 2023-02-28T13:21:04Z | http://arxiv.org/abs/2302.14556v2 | # An Alternative to Cells for Selective Execution of Data Science Pipelines
###### Abstract
Data Scientists often use notebooks to develop Data Science (DS) pipelines, particularly since they allow to selectively execute parts of the pipeline. However, notebooks for DS have many well-known flaws. We focus on the following ones in this paper: (1) Notebooks can become littered with code cells that are not part of the main DS pipeline but exist solely to make decisions (e.g. listing the columns of a tabular dataset). (2) While users are allowed to execute cells in any order, not every ordering is correct, because a cell can depend on declarations from other cells. (3) After making changes to a cell, this cell and all cells that depend on changed declarations must be rerun. (4) Changes to external values necessitate partial re-execution of the notebook. (5) Since cells are the smallest unit of execution, code that is unaffected by changes, can inadvertently be re-executed.
To solve these issues, we propose to replace cells as the basis for the selective execution of DS pipelines. Instead, we suggest populating a context-menu for variables with actions fitting their type (like listing columns if the variable is a tabular dataset). These actions are executed based on a data-flow analysis to ensure dependencies between variables are respected and results are updated properly after changes. Our solution separates pipeline code from decision making code and automates dependency management, thus reducing clutter and the risk of making errors.
Notebook, Usability, Data Science, Machine Learning
## I Introduction
Notebooks allow users to write code in cells that can be executed independently, in any order. Execution results, such as visualizations, are commonly shown close to the code cells that produced them. Code cells can be interspersed with text cells, to offer explanations or document decisions, which follows the paradigm of literate programming [1]. Overall, this makes notebooks well suited for the explorative development process of Data Science (DS) pipelines [2]. Here, developers often write small pieces of code, run them, and use the results to change existing code or decide what to implement next.
Various flavors of notebooks exist, with Jupyter Notebook [3] being most popular according to the 2021 Kaggle Survey on DS [4]. Jupyter Notebook requires language-specific _kernels_ to execute code and to power IDE features like auto-completion, but is otherwise language-agnostic. Since Jupyter Notebook is a web application, it can run locally or be hosted as a service, like Google Colaboratory [5], Kaggle [6], or Amazon SageMaker [7]. Some integrated development environments (IDEs) like PyCharm [8] or Visual Studio Code [9] incorporate Jupyter Notebook. JupyterLab [10] is eventually meant to replace the default Jupyter Notebook GUI. The issues we discuss in this paper are independent from a specific notebook variant, however, since they stem from the core concept of notebooks: Cells.
Fig. 1 shows an example of the typical cell structure of a notebook for DS1 using Python as the programming language. Text cells (blue) and code cells (white) alternate. The code cells (1) read a CSV file containing the training data, (2) view the dataset to gain a general understanding and to know which attributes it has, (3) separate feature vector and target, (4) configure a model (support vector machine for classification), and train it2. For the sake of brevity, we omit result cells.
Footnote 1: Based on a popular notebook from Kaggle ([https://www.kaggle.com/code/startupsci/titanic-data-science-solutions](https://www.kaggle.com/code/startupsci/titanic-data-science-solutions)).
Footnote 2: The _fit_ method in this example is supposed to return a new trained model rather than mutate the untrained model stored in _svc_.
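For concreteness, the four code cells could look as follows. This is a minimal sketch, assuming the pandas and scikit-learn APIs, a local file train.csv, and numeric feature columns (the real Titanic data would first need its string columns encoded); the file name is illustrative.

```python
import pandas as pd
from sklearn.svm import SVC

# Cell 1: read the CSV file containing the training data
train_df = pd.read_csv("train.csv")

# Cell 2: value inspection, used only to decide what to write next
train_df.head()

# Cell 3: separate feature vector and target
X_train = train_df.drop("Survived", axis=1)
y_train = train_df["Survived"]

# Cell 4: configure and train the model
svc = SVC()
trained_svc = svc.fit(X_train, y_train)  # note: scikit-learn's fit mutates svc and returns it,
                                         # unlike the fit assumed in Footnote 2
```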
Based on Fig. 1 we can illustrate the problems we want to discuss in this paper:
1. Actual pipeline code (Cells 1, 3, 4) is mixed with value inspection code (Cell 2). The value inspection code exists solely to write more pipeline code (here Cell 3, which needs the name of the target column). Keeping value inspection cells in the notebook afterwards increases clutter [11, 12] and negatively impacts performance, since they might get re-executed unnecessarily.

Fig. 1: Example for a typical cell structure of a DS notebook with text cells (blue) and code cells (white). Result cells are omitted.
2. Users can execute cells in any order. This manipulates the internal state of the notebook (e.g. contained variables and their assigned values), which is maintained throughout the entire session. However, not all orderings are correct [11, 13, 14, 15, 16, 17, 18]: For example, executing Cell 3 first on a blank state is erroneous, since this cell depends on _train_df_, which is computed in Cell 1.
3. Code changes partially invalidate the internal notebook state and results cells [11, 13, 14, 15, 16, 17, 18]. Say, after executing the entire notebook from Fig. 1, we add more data preparation steps to Cell 3, before assigning _X_train_ and _y_train. Now, the values of _X_train_ and _y_train_ in the internal notebook state are outdated, so Cell 3 must be rerun. However, _trained_svc_, which depends on these values, is also outdated, so Cell 4 must be rerun, too. In large notebooks, keeping track of all cells that must be rerun gets complicated, leading to developers frequently rerunning the entire notebook to be safe [16], wasting computing and development time.
4. Even without code changes, notebook state can become stale: For example, the dataset we read from disk in Cell 1 might change. Like in Problem 3, the notebook needs to be partially re-executed to account for such an event.
5. Since cells are the smallest unit of execution, even code that is unaffected by changes is re-executed. In the example from Problem 3, we rerun Cell 4 completely, although the value of _svc_ is still valid. Calling the _SVC_ constructor is fast, but for the same reason long-running data preparation or model training operations might be rerun unnecessarily, drastically slowing the feedback loop. This leads to an "expand then reduce" pattern [11], where developers first write small code cells, which can be executed independently, to iterate quickly and later combine them into bigger cells. This defeats the purpose of cells as a means to group logically related code together.
## II State of the Art
A natural solution to ensure code gets executed in the correct order (Problem 2) and gets re-executed after changes (Problem 3) is to derive dependencies between cells using data-flow analysis and run a cell only after all cells it depends on: Dataflow notebooks [19] assign a unique identifier to cells, which stays the same even as the code in the cell changes. These IDs are used to describe the dependencies between cells. When a cell gets executed, the system ensures that all upstream dependencies are available already. Nodebook [20] keeps track of inputs and outputs of cells. Inputs are determined by parsing the code in the cell, while outputs are discovered by comparing the internal state of the notebook after the cell was run to the state before. [14] describes how data-flow analysis can be used to create a polished version of a notebook that only contains the code needed to produce the results that the user selected. NBSAFETY [17] uses data-flow analysis to detect unsafe interactions with the cells in a notebook and offer resolution advisories. To achieve this, NBSAFETY uses a mix of dynamic and static analysis. ReSplit [21] analyzes definition-usage chains between cells and within the same cell and then suggests an alternative mapping of code to cells to ensure that tightly coupled code resides in the same cell.
However, none of the existing approaches solve Problems 1, 4, and 5: They do not handle changes to values outside the notebook (Problem 4), use complete cells as the smallest unit of execution thus failing to avoid re-execution of parts unaffected by changes (Problem 5), and do not even try to address the tangling of pipeline and value inspection code (Problem 1).
## III Research Questions
This brings us to the following research questions:
* **RQ 1**: How can we separate pipeline code and value inspection code?
* **RQ 2**: How can we ensure changes to external values trigger re-execution of code that depends on them?
* **RQ 3**: How can we avoid unnecessarily executing code?
RQ 1 aims to reduce clutter, RQ 2 is about correctness and RQ 3 addresses performance.
## IV Approach Overview
To solve these issues, we propose to keep cells only as a means to connect code to related results and instructional or documenting text, but abandon them as the basis for the selective execution of DS pipelines. Instead, we suggest to
1. address RQ 1 by introducing different roles for code cells (Sec. V), or by statically typing variables and letting the development environment dynamically populate a context-menu for variables with actions that can be run on a variable of the respective type (Sec. VI),
2. address RQ 2 and RQ 3 by using static code analysis to derive a correct and minimal _execution plan_ for a selected action (Sec. VII).
An execution plan is _correct_, if it contains all operations that must be executed and guarantees that each operation is executed only when all its input values are up-to-date, including external values (RQ 2). It is _minimal_ (RQ 3), if it contains no operations that would only affect already up-to-date values or would compute values that are not accessed by other operations in the graph.
## V Tags for Code Cells
For teaching materials [22] or documentation, keeping value inspection cells in the notebook can be helpful, to outline each step of the development process of a DS pipeline, including decision-making [11]. However, value inspection code is then scattered across the entire notebook, creating clutter, and re-executed each time the notebook is run, wasting time.
To separate pipeline code and value inspection code (RQ 1) in this case, we suggest to tag code cells by their purpose, as "pipeline" or "inspection". Fig. 2 shows this for the start of the notebook from Fig. 1 with pipeline cells (white) and
inspection cells (yellow). With a filter, a user can then narrow down the cells and show, say, just the pipeline cells. Only cells that are currently shown are executed. Text cells can be tagged and filtered in the same way.
## VI Context-Sensitive Inspection of Values
Outside of educational notebooks, value inspection cells are often executed only once, to decide what to do next. For example, after running Cell 2 in Fig. 1 we know that the target attribute is called "Survived" and can use this knowledge to write Cell 3. Leaving Cell 2 in the notebook has little documentation value, since we already manifested the extracted information in Cell 3.
In contexts where inspection cells have no documentation effect, we suggest to avoid writing them in the first place, by additionally offering inspection actions in a context-menu for variables (Fig. 3). This way, notebooks can contain only pipeline code, if desired, completely eliminating the tangling with inspection code (RQ 1).
To know which actions can be triggered on a variable, we need to know its type. Ideally, the type should be known statically, so we do not need to run code to determine which actions can be triggered on a variable. Moreover, the type should be inferred, so users can concentrate on writing their pipeline code as they are used to.
When a user selects an action, code that implements the action is executed in the background (see Sec. VII for details). Results of value inspection actions can be closed after inspection or kept outside the main flow of the notebook entirely in separate tabs, windows, or Sticky Cells [23] that float on top of the notebook and maintain their position even when the notebook is scrolled. Fig. 4 shows a mockup of the potential output for the "Show dataset" action from Fig. 3. The data is displayed in an interactive table that the user can filter (funnel icon) or sort (arrow icons). Additional actions can be triggered directly from this view, without the need to go back to the context menu, e.g. for generating a histogram of a column (chart icons).
## VII Minimal Execution
_Data-flow:_ As described in Sec. VI, we want to trigger context-sensitive actions on _variables_ without executing unnecessary code (RQ 3). For example, if the user selects the "Show dataset" action on the variable X_train from Fig. 1, it is a waste of time to execute the entire Cell 3, since the value of y_train is never used. This requires a fine-grained data-flow graph that focuses on individual operations rather than cells. Fig. 5 shows the data-flow graph for the example notebook from Fig. 1. From this graph we can derive that we only need to evaluate the calls to read_csv and drop to compute X_train, leading to the simple execution plan in Fig. 6.
_Purity:_ We can derive an execution plan for the entire notebook up to the fit call, based on the data-flow graph from Fig. 5. The graph tells us that we need to run the entire notebook, since fit depends on all other operations. However, we can further optimize if we know which operations are _pure_. A pure operation has no side effects and its outputs only depend on its parameters captured in the data-flow graph. Impure operations may read or modify external state (files), or global state (global variables or object attributes). Any pure operation can be executed _in parallel_ to any other _independent_ operation (that is, an operation on a different path of the dataflow graph). In contrast, independent impure operations must be executed according to their textual order. For this, additional edges that reflect the textual order are added between independent impure operations, yielding the complete execution plan. In our example, all operations are pure, except read_csv, which reads a file from disk and is impure because the file contents might change although the value of the read operation's path parameter stays the same. Adding purity information (green background) and impurity information (red background) leads to the extended data-flow graph shown in Fig. 7. From it we can deduce that the calls to read_csv and SVC can be run in parallel, as well as the calls to drop and keep. If the calls to drop and keep were impure and the call to drop occurred textually before the one to keep, an additional edge from drop to keep would be added in Fig. 7.
Fig. 2: Distinguishing pipeline code cells (white), and value inspection code cells (yellow) at the start of the notebook from Fig. 1.

Fig. 3: Replacing value inspection cells by type-specific inspection actions in the context-menu of variables.

Fig. 4: Mockup for the interactive output of the “Show dataset” action. Users can filter the table (funnel icon), sort it by a column (arrow icons), generate histograms (chart icons), or trigger other inspection and analysis actions from this view without having to write any code.

_Execution:_ The extended dataflow graph serves as an execution plan for computing the value of some variable x, when the user triggers an action (Sec. VI) on _x_:
1. Build the dataflow graph.
2. Extend it by purity information for operations.
3. Extend it by textual order edges between independent impure operations.
4. Eliminate operations that have no path to \(x\).
5. Start executing (in any order or in parallel) nodes that have no incoming edges. After execution, delete the respective node and its outgoing edges.
6. Repeat from Step 5, until the graph is empty, at which point the value of \(x\) is available.
The first three steps ensure correctness: the first ensures that data dependencies are respected, while the second and third preserve the order of impure operations. The fourth step removes all operations that have no effect on the value we want to compute, a first contribution towards minimality (RQ 3).
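A compact sketch of Steps 1 to 4 is shown below, using Python's standard graphlib module. It is not the authors' implementation: the operation names reuse the example from Fig. 1, the deps encoding is our own, and the textual-order edges are added conservatively by simply chaining all impure operations.

```python
from graphlib import TopologicalSorter

def execution_plan(deps, impure, order, target):
    """deps: operation -> data-flow predecessors; impure: set of impure operations;
    order: all operations in textual order; target: operation whose output we want."""
    graph = {op: set(deps.get(op, ())) for op in order}
    # Step 3 (simplified): chain impure operations in textual order
    impure_ops = [op for op in order if op in impure]
    for earlier, later in zip(impure_ops, impure_ops[1:]):
        graph[later].add(earlier)
    # Step 4: keep only operations with a path to the target
    needed, stack = {target}, [target]
    while stack:
        for pred in graph[stack.pop()]:
            if pred not in needed:
                needed.add(pred)
                stack.append(pred)
    return TopologicalSorter({op: graph[op] & needed for op in needed})

# The pipeline from Fig. 1, asking only for X_train: just read_csv and drop are run.
deps = {"drop": ["read_csv"], "keep": ["read_csv"], "fit": ["SVC", "drop", "keep"]}
plan = execution_plan(deps, {"read_csv"}, ["read_csv", "drop", "keep", "SVC", "fit"], "drop")
print(list(plan.static_order()))  # ['read_csv', 'drop']
```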
_Re-Execution:_ Let us now assume the initial execution plan from Fig. 7 has been run, so the internal notebook state contains the current values of all variables. Let us further assume that the user has subsequently edited the call of the _drop_ operation, to remove additional attributes from the feature vector after inspecting _X_train_. Now, if the user requests the system to update the state of the notebook, we want to ensure that all code that is affected by the change gets rerun, without re-executing code unnecessarily (RQ 3). As with any rerun, we also need to take into account potential changes to external state, which can affect impure operations (RQ 2).
Fig. 8 illustrates the affected part of the notebook, after editing the _drop_ call. The changed operation is marked by a red border. _Potentially stale_ variables are indicated by yellow question marks:
* X_train, the output of the edited _drop_ call,
* _train_df_, the output of the impure _read_csv_ call: because the data-flow graph does not show the hidden dependencies of impure operations, we must always assume that something might have changed,
* y_train, the output of the call to keep that depends on the potentially stale value _train_df_.
All potentially stale values are indicated by a question mark in Fig. 8. The call to SVC is unaffected by the change and its output is still _up-to-date_. Therefore they are not included in the re-execution plan.
The red labels in Fig. 8 reflect the assumption that we must always rerun edited operations, impure operations, and pure operations with at least one potentially stale argument. Potentially stale arguments must be recomputed before they are used, which leads to the shown ordering.
A possibly more efficient approach is illustrated in Fig. 9. The yellow question marks on the calls to keep and _fit_ indicate that these operations do not necessarily need re-execution, if recomputing their _potentially_ stale arguments produces the same values as before. For this _non-staleness check_, we can store the previous value and compare it to the new one. If it didn't change, we mark the argument as up-to-date. If all inputs of a pure operation are up-to-date, we do not need to re-run it and can mark its output as up-to-date. In the plan from Fig. 9 the operations _drop_ and keep can be run in parallel after _read_csv_. If the dataset returned by _read_csv_ is unchanged, we can skip the call to keep and can mark y_train as up-to-date.
This approach can eliminate expensive re-executions at the cost of remembering previous values and comparing them to new ones. For large datasets or models, such comparisons can be costly, too. Thus, the efficiency of the approach critically depends on the trade-off between re-execution time and comparison time. Even expensive comparisons might pay off, because they can eliminate re-execution of many downstream pure operations, including the followup checking of each of their results.
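The following sketch illustrates this trade-off. It is an assumption-laden outline rather than the proposed system: plan, run, pure, cache, and same are all names we introduce here, with plan a dependency-ordered list of (operation, inputs), run a callable executing a single operation, and same whatever (possibly expensive) equality check suits the data. An edited operation would be evicted from cache before calling rerun.

```python
def rerun(plan, run, pure, cache, same):
    """Re-execute a plan, skipping pure operations whose inputs turn out to be unchanged."""
    changed = set()  # operations whose new output differs from the cached one
    for op, inputs in plan:
        if op in pure and op in cache and not any(i in changed for i in inputs):
            continue  # all inputs up-to-date: keep cache[op] and skip re-execution
        value = run(op)
        if op not in cache or not same(cache[op], value):
            changed.add(op)  # downstream consumers see a potentially changed input
        cache[op] = value
    return changed
```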
Fig. 5: Data-flow graph for pipeline code from Fig. 1. Rounded boxes represent operations and edges represent data-flow. Each edge is labelled by the name of the variable through which the respective data flows.

Fig. 6: Execution plan to view the X_train dataset. The numbers in the boxes indicate the implicit order of the operations.

Fig. 7: Extended data-flow graph for initial run of the notebook. Numbers indicate sequential execution, letters indicate potential parallelism.

Fig. 8: Re-execution plan with question marks indicating potentially stale values and red numbered labels indicating re-execution order of operations.

Fig. 9: Re-execution plan with dynamic non-staleness checks. The question marks on an operation indicate that it is executed only if the dynamic non-staleness check fails for one of its potentially stale input values.

In this regard, an impure operation like _read_csv_ is a particularly worthwhile target for optimization: (1) It usually occurs right at the start of a pipeline, so many other operations depend on it and must be repeated if read_csv yields new results. (2) The returned data is large, so an equality check is extremely expensive.
We can fortunately avoid executing file operations subject to a simple check and the storing of minimal extra information: The time of the last modification of each accessed file. If we call read_csv with the same path argument and the respective file still has the same last modified time, the operation returns the same result. We can, therefore, treat the last modified time internally as an extra, hidden argument, and the modified read_csv operation as pure. This principle can be generalized to other cases, e.g. a random number generator that has a fixed seed is no longer a source of impurity. This way, some often occurring cases of impurity can be eliminated, accounting for noticeable re-execution speedups. It allows us to treat all operations shown in Fig. 1 as pure.
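A minimal sketch of this idea, assuming pandas; the wrapper name cached_read_csv is ours. The cache key includes the file's last-modified time, so the wrapped read behaves like a pure operation as long as the file is untouched.

```python
import os
import pandas as pd

_read_cache = {}

def cached_read_csv(path):
    """read_csv keyed on (path, last-modified time): the same key implies the same result."""
    key = (os.path.abspath(path), os.path.getmtime(path))
    if key not in _read_cache:
        _read_cache[key] = pd.read_csv(path)
    return _read_cache[key]
```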
## VIII Technical Challenges and Solutions
The concept of notebooks builds on simple language-specific kernels. Our proposal requires additional kernel features, namely static typing for the context menu (Sec. VI), data-flow analysis to derive a correct but minimal execution plan (Sec. VII), and purity inference as a basis for exploiting parallelism. Implementing our proposal for arbitrary languages would require significant additional work from kernel developers. For some languages, it would even be impossible to implement our extensions fully. Python, for instance, allows dynamically importing modules via importlib.import_module(name) [24]. Reading the module name from an encrypted file would prevent static analysis of the module.
Instead of building on an existing language, we developed Safe-DS [25], a domain specific language for implementing DS pipelines. Safe-DS can integrate existing Python libraries via a simple stub language in a statically type-safe way. This lets the compiler infer the type information that we need to provide context-specific actions on variables. The design of Safe-DS as a language for creating pipelines rather than implementing the algorithms used in the pipeline makes data-flow analysis easy, due to the lack of loops, recursion, and conditionals, which guarantees an acyclic data-flow graph. Safe-DS additionally offers annotations to mark operations as pure, empowering the compiler to prune the execution plan even in cases where purity cannot be inferred. Lastly, Safe-DS has a standard library that prefers pure operations.
## IX Limitations
Generalizing our approach to notebooks written in arbitrary languages can be tricky. It depends on the quality of static analysis for types, data-flow and purity available for the respective language. If at least data-flow analysis works, one can alternatively annotate libraries with types and purity information manually. But for non-trivial libraries, this may quickly exceed available time. However, one can use a combined approach that performs a static analysis and then lets developers review and complement its results by annotating library elements with non-inferrable type or purity information [26]. Such combined tools limit the amount of manual annotation, thus making it possible to exploit the full potential of our approach in spite of the limits of static analysis.
## X Future Plans
_Prototype:_ We currently extend Safe-DS3 by the discussed computation of execution graphs based on data-flow analysis and purity information. An experimental IDE for Safe-DS is already available as an extension for Visual Studio Code4. After evolving Safe-DS, we will extend its IDE by the proposed, type-based context menu for inspection of variables, including presentation of tabular datasets (Fig. 4), to eliminate inspection code from the pipeline.
Footnote 3: [https://github.com/Safe-DS/DSL](https://github.com/Safe-DS/DSL)
Footnote 4: [https://marketplace.visualstudio.com/items?itemName=safe-ds.safe-ds](https://marketplace.visualstudio.com/items?itemName=safe-ds.safe-ds)
_Usability study:_ Once the prototype is complete, it will be compared quantitatively and qualitatively to Jupyter Notebook. We will measure effects on code comprehension (by separating value inspection and pipeline code), correctness (by enforcing execution of code in the right order and re-execution of changed code), and performance (by avoiding re-execution of unchanged code).
_Purity Inference for Python:_ Moreover, we plan to add purity inference (pure, impure, unknown) to the API-Editor [26], use it to provide the unknown information, and let purity information be reflected in the Safe-DS stubs that it creates. We will also use its ability to automatically create wrappers that implement a uniform DS API on the basis of existing Python libraries.
## XI Conclusions
In this paper, we presented an alternative to cells as a basis to selectively execute parts of a DS pipeline. We use static code analysis to derive a fine-grained data-flow graph that focuses on individual operations rather than cells (Sec. VII). From this, we derive an execution plan that is _correct_ (contains all operations and runs them in a valid order) and _minimal_ (does not contain unnecessary operations). Information about the purity of operations is used to parallelize the execution plan, improving performance.
We completely eliminate the need for users to _write_ value inspection code, by providing context-menu actions on variables to inspect their values (Sec. VI). In an educational context, where value inspection steps must be contained explicitly in the notebook, we propose to tag respective cells, so that they can be eliminated when running a pipeline (Sec. V).
Overall, we expect that our approach will significantly speed up DS pipeline development, by (1) avoiding bugs resulting from access to stale notebook state, (2) rerunning only the code that must be rerun to update stale state, and (3) eliminating the need to write and understand inspection code. We will verify this claim in a usability study once the prototype for Safe-DS is complete.
## Acknowledgments
This work was partially funded by the Federal Ministry of Education and Research (BMBF), Germany under the Simple-ML project (grant 01IS18054). We thank the reviewers for their numerous constructive comments and suggestions.
|
2309.06220 | Minimisation of 2-coverings of genus 2 Jacobians | An important problem in computational arithmetic geometry is to find changes
of coordinates to simplify a system of polynomial equations with rational
coefficients. This is tackled by a combination of two techniques, called
minimisation and reduction. We give an algorithm for minimising certain pairs
of quadratic forms, subject to the constraint that the first quadratic form is
fixed. This has applications to 2-descent on the Jacobian of a genus 2 curve. | Tom Fisher, Mengzhen Liu | 2023-09-12T13:39:02Z | http://arxiv.org/abs/2309.06220v1 | # Minimisation of 2-coverings of genus 2 Jacobians
###### Abstract.
An important problem in computational arithmetic geometry is to find changes of coordinates to simplify a system of polynomial equations with rational coefficients. This is tackled by a combination of two techniques, called minimisation and reduction. We give an algorithm for minimising certain pairs of quadratic forms, subject to the constraint that the first quadratic form is fixed. This has applications to 2-descent on the Jacobian of a genus 2 curve.
## 1. Introduction
### Models for \(2\)-coverings
We work over a field \(K\) with \(\operatorname{char}(K)\neq 2\). Let \(C\) be a smooth curve of genus 2 with equation \(y^{2}=f(x)=f_{6}x^{6}+f_{5}x^{5}+\ldots+f_{1}x+f_{0}\) where \(f\in K[x]\) is a polynomial of degree 6. We fix throughout the polynomial
\[G=z_{12}z_{34}-z_{13}z_{24}+z_{23}z_{14}.\]
The following two definitions are based on those in [FY, Section 2.4].
**Definition 1.1**.: A _model_ (for a 2-covering of the Jacobian of \(C\)) is a pair \((\lambda,H)\) where \(\lambda\in K^{\times}\) and \(H\in K[z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}]\) is a quadratic form satisfying
\[\det(\lambda x\mathbf{G}-\mathbf{H})=-\lambda^{6}f_{6}^{-1}f(x)\]
where \(\mathbf{G}\) and \(\mathbf{H}\) are the matrices of second partial derivatives of \(G\) and \(H\).
We identify the space of column vectors of length 6 and the space of \(4\times 4\) alternating matrices via the map
\[A:z=\begin{pmatrix}z_{12}\\ z_{13}\\ z_{23}\\ z_{14}\\ z_{24}\\ z_{34}\end{pmatrix}\mapsto\begin{pmatrix}0&z_{12}&z_{13}&z_{14}\\ -z_{12}&0&z_{23}&z_{24}\\ -z_{13}&-z_{23}&0&z_{34}\\ -z_{14}&-z_{24}&-z_{34}&0\end{pmatrix}\]
so that \(G(z)\) is the Pfaffian of \(A(z)\). Then each \(4\times 4\) matrix \(P\) uniquely determines a \(6\times 6\) matrix \(\wedge^{2}P\) such that
\[PA(z)P^{T}=A((\wedge^{2}P)z)\]
for all column vectors \(z\). For \(F\in K[x_{1},\dots,x_{n}]\) and \(M\in\operatorname{GL}_{n}(K)\) we write \(F\circ M\) for the polynomial satisfying \((F\circ M)(x)=F(Mx)\) for all columns vectors \(x\). The Pfaffian \(\operatorname{Pf}(A)\) of an alternating matrix \(A\) has the properties that \(\operatorname{Pf}(A)^{2}=\det(A)\) and \(\operatorname{Pf}(PAP^{T})=(\det P)\operatorname{Pf}(A)\). The latter tells us that \(G\circ\wedge^{2}P=(\det P)G\). It is also not hard to show that \(\det(\wedge^{2}P)=(\det P)^{3}\).
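The matrix \(\wedge^{2}P\) can be computed column by column from the defining identity \(PA(z)P^{T}=A((\wedge^{2}P)z)\). The following SymPy sketch (ours, for illustration; the names IDX, A, wedge2 are our own) does this and checks the two stated properties for the matrix \(P\) from Example 1.3 below.

```python
import sympy as sp

IDX = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]  # order z12, z13, z23, z14, z24, z34

def A(z):
    """The alternating 4x4 matrix A(z); its Pfaffian is G(z)."""
    M = sp.zeros(4, 4)
    for k, (i, j) in enumerate(IDX):
        M[i, j], M[j, i] = z[k], -z[k]
    return M

def wedge2(P):
    """The 6x6 matrix L with P*A(z)*P^T = A(L*z), found by applying the map to basis vectors."""
    cols = []
    for k in range(6):
        e = [sp.Integer(1 if m == k else 0) for m in range(6)]
        B = P * A(e) * P.T
        cols.append([B[i, j] for (i, j) in IDX])
    return sp.Matrix(cols).T

# Check G o wedge2(P) = det(P)*G and det(wedge2(P)) = det(P)^3 for the matrix P of (1) below.
z = sp.Matrix(sp.symbols('z12 z13 z23 z14 z24 z34'))
G = lambda v: v[0]*v[5] - v[1]*v[4] + v[2]*v[3]
P = sp.Matrix([[2, -19, 2, 5], [4, 4, -31, 38], [2, 2, 37, 40], [-7, -7, -14, 7]])
assert sp.expand(G(wedge2(P) * z) - P.det() * G(z)) == 0
assert wedge2(P).det() == P.det() ** 3
```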
**Definition 1.2**.: Two models are \(K\)_-equivalent_ if they are in the same orbit for the action of \(K^{\times}\times\operatorname{PGL}_{4}(K)\) via
\[(c,P):(\lambda,H)\mapsto\left(c\lambda,\frac{c}{\det P}H\circ\wedge^{2}P \right).\]
It may be checked using the above observations that this is a well defined (right) group action on the space of models (for a fixed choice of genus 2 curve \(C\)).
**Example 1.3**.: Let \(C/\mathbb{Q}\) be the genus 2 curve given by \(y^{2}=f(x)\) where
\[f(x)=-28x^{6}+84x^{5}-323x^{4}+506x^{3}-471x^{2}+232x-60.\]
One of the elements of the 2-Selmer group of \(\operatorname{Jac}C\) is represented by the model
\[(\lambda_{1},H_{1})=(42336,\ 25128z_{12}^{2}+24480z_{12}z_{13}+14031z_{12}z_{23}+15408z_{12}z_{14}\] \[\qquad+13959z_{12}z_{24}+25407z_{12}z_{34}+2232z_{13}^{2}-16407z_{13}z_{23}+4464z_{13}z_{14}\] \[\qquad-22815z_{13}z_{24}+1161z_{13}z_{34}+2329z_{23}^{2}+15282z_{23}z_{14}+7687z_{23}z_{24}\] \[\qquad-19547z_{23}z_{34}-2304z_{14}^{2}-17838z_{14}z_{24}-22590z_{14}z_{34}-134z_{24}^{2}\] \[\qquad+41978z_{24}z_{34}-99584z_{34}^{2}).\]
Applying the transformation \((c,P)\) with \(c=1/3024\) and
\[P=\begin{pmatrix}2&-19&2&5\\ 4&4&-31&38\\ 2&2&37&40\\ -7&-7&-14&7\end{pmatrix} \tag{1}\]
gives the \(\mathbb{Q}\)-equivalent model
\[(\lambda_{2},H_{2})=(14,\ z_{12}z_{23}+2z_{12}z_{14}-z_{12}z_{24}+ 8z_{12}z_{34}-7z_{13}^{2}-13z_{13}z_{23}\] \[\qquad-12z_{13}z_{14}-15z_{13}z_{24}-20z_{13}z_{34}-5z_{23}^{2}-2 z_{23}z_{14}-25z_{23}z_{24}\] \[\qquad-59z_{23}z_{34}-4z_{14}^{2}-14z_{14}z_{24}-18z_{14}z_{34}+ 17z_{24}^{2}-37z_{24}z_{34}-11z_{34}^{2}).\]
### Relation to previous work
The change of coordinates (1) was found by a combination of two techniques, called minimisation and reduction. _Minimisation_ seeks to remove prime factors from a suitably defined invariant (usually the discriminant). The prototype example is using Tate's algorithm to compute a minimal Weierstrass equation for an elliptic curve. _Reduction_ seeks to make a final unimodular substitution so that the coefficients are as small as possible. The prototype example is the reduction algorithm for positive definite binary quadratic forms.
Algorithms for minimising and reducing 2-, 3-, 4- and 5-coverings of elliptic curves are given by Cremona, Fisher and Stoll [CFS], and Fisher [F], building on earlier work of Birch and Swinnerton-Dyer [BSD] for 2-coverings. Algorithms for minimising some other representations associated to genus 1 curves are given by Fisher and Radicevic [FR]. A general framework for minimising hypersurfaces is described by Kollar [K], and this has been refined by Elsenhans and Stoll [ES]; in particular they give practical algorithms for plane curves (of arbitrary degree) and for cubic surfaces. Algorithms for minimising Weierstrass equations for general hyperelliptic curves are given by Q. Liu [L].
In this paper we give an algorithm for minimising 2-coverings of genus 2 Jacobians. These are represented by pairs of quadratic forms (see Definition 1.1) where the first quadratic form is fixed. We only consider minimisation and not reduction, since the latter is already treated in [FY, Remark 4.3].
Our minimisation algorithm plays a key role in the work of the first author and Jiali Yan [FY] on computing the Cassels-Tate pairing on the 2-Selmer group of a genus 2 Jacobian. Indeed the method presented in _loc. cit._ for computing the Cassels-Tate pairing relies on being able to find rational points on certain twisted Kummer surfaces. Minimising and reducing our representatives for the 2-Selmer group elements simplifies the equations for these surfaces, and so makes it more likely that we will be able to find such rational points.
Earlier works on minimisation (see in particular [CFS]) considered both minimisation theorems (i.e., general bounds on the minimal discriminant) and minimisation algorithms (i.e., practical methods for finding a minimal model equivalent to a given one). For 2-coverings of hyperelliptic Jacobians, some minimisation theorems have already been proved; see the papers of Bhargava and Gross [BG, Section 8], and Shankar and Wang [SW, Section 2.4]. We will not revisit these results, as our focus is on the minimisation algorithms.
**Remark 1.4**.: As noted in [CF, Lemma 17.1.1], [FH, Section 19.1] and [FY, Section 2.4] the quadratic form \(G=z_{12}z_{34}-z_{13}z_{24}+z_{23}z_{14}\) has two algebraic families of 3-dimensional isotropic subspaces. Moreover, the transformations considered in Definition 1.2 do not describe the full projective orthogonal group of \(G\), but only the index 2 subgroup that preserves (rather than swaps over) these two algebraic
families. Restricting attention to this index \(2\) subgroup (when defining equivalence) makes no difference to the minimisation problem (see Remark 3.2), but as explained in [FY, Sections 2.4 and 2.5] it is important in the context of \(2\)-descent, since it means we can distinguish between elements of the \(2\)-Selmer group with the same image in the fake \(2\)-Selmer group.
Some Magma [BCP] code accompanying this article, including an implementation of our algorithm, will be made available from the first author's website.
### Acknowledgements
This work originated as a summer project carried out by the second author and supervised by the first author. We thank the Research in the CMS Programme for their support.
## 2. Statement of the algorithm
We keep the notation of Section 1.1, but now let \(K\) be a field with discrete valuation \(v:K^{\times}\to\mathbb{Z}\), valuation ring \(\mathcal{O}_{K}\), uniformiser \(\pi\), and residue field \(k\). If \(F\) is a polynomial with coefficients in \(K\) then we write \(v(F)\) for the minimum of the valuations of its coefficients.
**Definition 2.1**.: A model \((\lambda,H)\) is _integral_ if \(v(H)\geqslant 0\). It is _minimal_ if \(v(\lambda)\) is minimal among all \(K\)-equivalent integral models.
Using the action of \(K^{\times}\) (see Definition 1.2) to clear denominators it is clear that any model is \(K\)-equivalent to an integral model. By Definition 1.1 we have \(v(\lambda)\geqslant(v(f_{6})-v(f_{i}))/(6-i)\) for all \(i=0,1,\ldots,5\). We cannot have \(f_{0}=\ldots=f_{5}=0\) since \(C\) is a smooth curve of genus \(2\). Therefore \(v(\lambda)\) is bounded below, and minimal models exist.
It also follows from Definition 1.1 that if \(v(f_{6})=v(\operatorname{disc}f)=0\) then any integral model \((\lambda,H)\) has \(v(\lambda)\geqslant 0\). Therefore, in global applications, minimality is automatic at all but a finite set of primes, which we may determine by factoring.
Returning to the local situation, there is an evident recursive algorithm for computing minimal models if we can solve the following problem.
### Minimisation problem
Given an integral quadratic form \(H\in\mathcal{O}_{K}[z_{12},\ldots,z_{34}]\) determine whether there exists \(P\in\operatorname{PGL}_{4}(K)\) such that
\[v\left(\frac{1}{\det P}H\circ\wedge^{2}P\right)>0\]
and find such a matrix \(P\) if it exists.
Our solution to this problem (see Algorithm 2.4) is an iterative procedure that computes the required transformation as a composition of simpler transformations. These simpler transformations are either given by a matrix in \(\operatorname{GL}_{4}(\mathcal{O}_{K})\), in which
case we call the transformation an _integral change of coordinates_, or given by one of the following operations, corresponding to \(P=\operatorname{Diag}(1,1,1,\pi),\operatorname{Diag}(1,1,\pi,\pi)\) or \(\operatorname{Diag}(1,\pi,\pi,\pi)\).
**Definition 2.2**.: We define the following three operations on quadratic forms \(H\):
* Operation 1. Replace \(H\) by \(\frac{1}{\pi}H(z_{12},z_{13},z_{23},\pi z_{14},\pi z_{24},\pi z_{34})\),
* Operation 2. Replace \(H\) by \(H(\pi^{-1}z_{12},z_{13},z_{23},z_{14},z_{24},\pi z_{34})\),
* Operation 3. Replace \(H\) by \(\frac{1}{\pi}H(z_{12},z_{13},\pi z_{23},z_{14},\pi z_{24},\pi z_{34})\),
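As a sketch (in SymPy, for illustration only; the helper names op1, op2, op3 are ours), the three operations are plain substitutions on the quadratic form, each corresponding to the diagonal matrix named above:

```python
import sympy as sp

z12, z13, z23, z14, z24, z34 = sp.symbols('z12 z13 z23 z14 z24 z34')

def op1(H, pi):
    # corresponds to P = Diag(1, 1, 1, pi)
    return sp.expand(H.subs({z14: pi*z14, z24: pi*z24, z34: pi*z34}, simultaneous=True) / pi)

def op2(H, pi):
    # corresponds to P = Diag(1, 1, pi, pi)
    return sp.expand(H.subs({z12: z12/pi, z34: pi*z34}, simultaneous=True))

def op3(H, pi):
    # corresponds to P = Diag(1, pi, pi, pi)
    return sp.expand(H.subs({z23: pi*z23, z24: pi*z24, z34: pi*z34}, simultaneous=True) / pi)
```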
The following algorithm suggests some transformations that we might try applying to \(H\). In applications \(W\subset k^{6}\) will be a subspace determined by the reduction of \(H\) mod \(\pi\). We write \(e_{12},e_{13},e_{23},e_{14},e_{24},e_{34}\) for the standard basis of \(k^{6}\), and identify the dual basis with \(z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}\).
**Algorithm 2.3**.: (Subalgorithm to suggest some transformations.) We take as input an integral quadratic form \(H\in\mathcal{O}_{K}[z_{12},\ldots,z_{34}]\) and a vector space \(W\subset k^{6}\) that is isotropic for \(G\). When we make an integral change of coordinates we apply the same transformation (or rather its reduction mod \(\pi\)) to \(W\). The output is either one or two transformations \(P\in\operatorname{PGL}_{4}(K)\).
* If \(\dim W=1\) then make an integral change of coordinates so that \(W=\langle e_{12}\rangle\). Then apply Operation 2.
* If \(\dim W=2\) then make an integral change of coordinates so that \(W=\langle e_{12},e_{13}\rangle\). Then apply either Operation 1 or Operation 3.
* If \(\dim W=3\) then either make an integral change of coordinates so that \(W=\langle e_{12},e_{13},e_{23}\rangle\) and apply Operation 1, or make an integral change of coordinates so that \(W=\langle e_{12},e_{13},e_{14}\rangle\) and apply Operation 3.
We write \(\overline{H}\in k[z_{12},\ldots,z_{34}]\) for the reduction of \(H\) mod \(\pi\). If \(\operatorname{char}(k)\neq 2\) then the rank and kernel of \(\overline{H}\) are defined as the rank and kernel of the corresponding \(6\times 6\) symmetric matrix. If \(\operatorname{char}(k)=2\) then we assume that \(k\) is perfect, so that
\[\overline{H}=\frac{\partial\overline{H}}{\partial z_{12}}=\ldots=\frac{ \partial\overline{H}}{\partial z_{34}}=0\]
defines a \(k\)-vector space, which we call \(\ker\overline{H}\). We then define
\[\operatorname{rank}\overline{H}=6-\dim\ker\overline{H}.\]
We continue to write \(G\) for the reduction of \(G\) mod \(\pi\), as it should always be clear from the context which of these we mean.
**Algorithm 2.4**.: (Minimisation algorithm.) We take as input an integral quadratic form \(H\in\mathcal{O}_{K}[z_{12},\ldots,z_{34}]\). The output is TRUE/FALSE according as whether
there exists \(P\in\operatorname{PGL}_{4}(K)\) such that
\[v\left(\frac{1}{\det P}H\circ\wedge^{2}P\right)>0.\]
1. Compute \(r=\operatorname{rank}\overline{H}\). If \(r=0\) then return TRUE.
2. If \(r=1\) then try making an integral change of coordinates so that \(\overline{H}=z_{34}^{2}\). If the reductions of \(G\) and \(\pi^{-1}H(z_{12},\dots,z_{24},0)\) mod \(\pi\) have a common \(3\)-dimensional isotropic subspace \(W\subset\ker\overline{H}\), then (since running Algorithm 2.3 on any such subspace \(W\) gives \(v(H)>0\)) return TRUE.
3. If \(r=2\) then try running Algorithm 2.3 on each codimension \(1\) subspace \(W\subset\ker\overline{H}\) that is isotropic for \(G\). If one of the suggested transformations gives \(v(H)>0\) then return TRUE.
4. If \(r\in\{1,2\}\) and \(\overline{H}\) factors as a product of linear forms defined over \(k\), say \(\overline{H}=\ell_{1}\ell_{2}\), then for each \(i=1,2\) try making an integral change of coordinates so that \(\ell_{i}=z_{34}\) and then apply Operation 2. If at least one of these transformations gives \(v(H)\geqslant 0\) then select one with \(\operatorname{rank}\overline{H}\) as small as possible and go to Step 1.
5. If \(r\in\{2,3,4,5\}\) then try running Algorithm 2.3 on \(W=\ker\overline{H}\) if this subspace is isotropic for \(G\), and otherwise on each codimension \(1\) subspace \(W\subset\ker\overline{H}\) that is isotropic for \(G\). If at least one of the suggested transformations gives \(v(H)\geqslant 0\) then select one with \(\operatorname{rank}\overline{H}\) as small as possible and go to Step 1.
6. If this step is reached, or if after visiting Step 1 the first time and returning to it a further \(4\) times we still do not have \(v(H)>0\), then return FALSE.
There is no difficulty in modifying the algorithm so that when it returns TRUE the corresponding transformation \(P\in\operatorname{PGL}_{4}(K)\) is also returned. In Section 3 we give further details of the implementation, in particular explaining how we make the integral changes of coordinates, and giving further details of Step 2. In Sections 4 and 5 we prove that Algorithm 2.4 is correct.
## 3. Remarks on implementation
In Algorithms 2.3 and 2.4 we are asked to try making various integral changes of coordinates. It is important to realise that we are restricted to considering matrices of the form \(\wedge^{2}P\) for \(P\in\operatorname{GL}_{4}(\mathcal{O}_{K})\), and not general elements of \(\operatorname{GL}_{6}(\mathcal{O}_{K})\). Therefore some care is required both in determining whether a suitable transformation exists, and in finding one when it does.
Since the natural map \(\operatorname{GL}_{4}(\mathcal{O}_{K})\to\operatorname{GL}_{4}(k)\) is surjective, we may concentrate on the mod \(\pi\) situation here. Notice however that in the global application with
\(K=\mathbb{Q}\) and \(v=v_{p}\) it is better to use the surjectivity of \(\operatorname{SL}_{4}(\mathbb{Z})\to\operatorname{SL}_{4}(\mathbb{Z}/p\mathbb{Z})\), so that minimisation at \(p\) does not interfere with minimisation at other primes.
Let \(k^{4}\) have basis \(e_{1},\ldots,e_{4}\). We identify \(\wedge^{2}k^{4}=k^{6}\) via \(e_{i}\wedge e_{j}\mapsto e_{ij}\). Each linear subspace \(W\subset k^{6}\) determines a linear subspace \(V_{0}\subset k^{4}\) given by
\[V_{0}=\{v\in k^{4}\mid v\wedge w=0\text{ for all }w\in W\}\]
where \(\wedge\) is the natural map \(k^{4}\times\wedge^{2}k^{4}\to\wedge^{3}k^{4}\). Let \(V_{1}\) be the analogue of \(V_{0}\) when \(W\) is replaced by its orthogonal complement with respect to \(G\).
**Lemma 3.1**.: _Let \(W\subset k^{6}\) be a subspace, and let \(P\in\operatorname{GL}_{4}(k)\)._
* _If_ \(\dim W=1\) _then_ \(\wedge^{2}P\) _sends_ \(W\) _to_ \(\langle e_{12}\rangle\) _if and only if_ \(P\) _sends_ \(V_{0}\) _to_ \(\langle e_{1},e_{2}\rangle\)_._
* _If_ \(\dim W=2\) _or_ \(3\) _then_ \(\wedge^{2}P\) _sends_ \(W\) _to a subspace of_ \(\langle e_{12},e_{13},e_{14}\rangle\) _if and only if_ \(P\) _sends_ \(V_{0}\) _to_ \(\langle e_{1}\rangle\)_._
* _If_ \(\dim W=5\) _then_ \(\wedge^{2}P\) _sends_ \(W\) _to_ \(\langle e_{12},e_{13},e_{14},e_{23},e_{24}\rangle\) _if and only if_ \(P\) _sends_ \(V_{1}\) _to_ \(\langle e_{1},e_{2}\rangle\)_._
Proof.: In (i) we have \(W=\langle e_{12}\rangle\) if and only if \(V_{0}=\langle e_{1},e_{2}\rangle\), and in (ii) we have \(W\subset\langle e_{12},e_{13},e_{14}\rangle\) if and only if \(V_{0}=\langle e_{1}\rangle\). Since the definition of \(V_{0}\) in terms of \(W\) behaves well under all changes of coordinates this proves (i) and (ii). As noted in Section 1.1, all transformations of the form \(\wedge^{2}P\) preserve \(G\) (up to a scalar multiple). Therefore (iii) follows from (i) on replacing \(W\) by its orthogonal complement with respect to \(G\).
**Remark 3.2**.: Let \(\mathbf{G}\) be the matrix of second partial derivatives of \(G\), i.e., the \(6\times 6\) matrix with entries \(1,-1,1,1,-1,1\) on the antidiagonal. A direct calculation shows that for any \(4\times 4\) matrix \(P\) we have
\[\wedge^{2}(\operatorname{adj}(P)^{T})=(\det P)\mathbf{G}(\wedge^{2}P)\mathbf{ G}.\]
Letting \(\operatorname{PGL}_{4}\) act on the space of quadratic forms via \(P:H\mapsto\frac{1}{\det P}H\circ\wedge^{2}P\), this tells us that applying \(P\) to a quadratic form \(H(z_{12},z_{13},z_{23},z_{14},z_{24},z_{34})\) has the same effect as applying \(P^{-T}\) to its _dual quadratic form_ which we define to be \(H(z_{34},-z_{24},z_{14},z_{23},-z_{13},z_{12})\). We note that the substitution used to replace \(H\) by its dual swaps over the two families of isotropic subspaces in Remark 1.4.
We find the changes of coordinates in Algorithm 2.3 by using Lemma 3.1(i) and (ii), and the analogue of (ii) after passing to the dual as in Remark 3.2. We find the changes of coordinates in Steps 2 and 4 of Algorithm 2.4 using Lemma 3.1(iii).
**Remark 3.3**.: In Step 2 of Algorithm 2.4 we must find if possible a \(3\)-dimensional subspace \(W\subset\langle e_{12},e_{13},e_{14},e_{23},e_{24}\rangle\) that is isotropic for both \(G\) and \(\overline{H}_{1}\) where
\[H_{1}(z_{12},\ldots,z_{24})=\pi^{-1}H(z_{12},\ldots,z_{24},0).\]
To be isotropic for \(G\) we need that \(\langle e_{12}\rangle\subset W\). So such a subspace \(W\) can only exist if \(\overline{H}_{1}(1,0,\ldots,0)=0\). We assume that this is the case and write
\[\overline{H}_{1}(z_{12},\ldots,z_{24})=z_{12}h_{1}(z_{13},z_{23},z_{14},z_{24}) +h_{2}(z_{13},z_{23},z_{14},z_{24})\]
where \(h_{i}\) is a homogeneous polynomial of degree \(i\). Our problem reduces to that of finding a line contained in
\[\{z_{13}z_{24}-z_{23}z_{14}=h_{1}=h_{2}=0\}\subset\mathbb{P}^{3}.\]
The well known description of the lines on \(\{z_{13}z_{24}-z_{23}z_{14}=0\}\subset\mathbb{P}^{3}\) suggests that we substitute \((z_{13},z_{23},z_{14},z_{24})=(x_{1}y_{1},x_{1}y_{2},x_{2}y_{1},x_{2}y_{2})\) into \(h_{1}\) and \(h_{2}\), take the GCD, and factor into irreducibles. The lines of interest now correspond to linear factors of the form \(\alpha x_{1}+\beta x_{2}\) or \(\gamma y_{1}+\delta y_{2}\).
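The search for such linear factors can be scripted directly. A small SymPy sketch follows, working over \(\mathbb{Q}\) for illustration (over a finite residue field the gcd and factorisation would have to be taken mod \(p\)); the names SEGRE and candidate_lines, and the example forms at the end, are ours and purely hypothetical.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
z13, z23, z14, z24 = sp.symbols('z13 z23 z14 z24')
SEGRE = {z13: x1*y1, z23: x1*y2, z14: x2*y1, z24: x2*y2}

def candidate_lines(h1, h2):
    """Linear factors of gcd(h1, h2) after the substitution; they are automatically of the
    form a*x1 + b*x2 or c*y1 + d*y2 and correspond to the lines we are looking for."""
    g = sp.gcd(h1.subs(SEGRE), h2.subs(SEGRE))
    factors = sp.factor_list(g)[1]
    return [f for f, _ in factors if sp.Poly(f, x1, x2, y1, y2).total_degree() == 1]

# made-up example: h1 = z13 + z23 and h2 = z13*z14 + z23*z14 give the factors x1 and y1 + y2
print(candidate_lines(z13 + z23, z13*z14 + z23*z14))
```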
**Remark 3.4**.: In Steps 3 and 5 of Algorithm 2.4, when \(\ker\overline{H}\) is not itself isotropic for \(G\), we must find all codimension 1 subspaces of \(\ker\overline{H}\) that are isotropic for \(G\). Since the restriction of \(G\) to \(\ker\overline{H}\) is a non-zero quadratic form, it can have at most two linear factors. There are therefore at most two codimension 1 subspaces we need to consider. In particular, the number of times that Algorithm 2.4 applies one of the operations in Definition 2.2 is uniformly bounded.
## 4. Weights and Admissibility
Let \(H\in\mathcal{O}_{K}[u_{0},\ldots,u_{5}]\) be an integral quadratic form and suppose that there exists \(P\in\operatorname{GL}_{4}(K)\) such that
\[v\left(\frac{1}{\det P}H\circ\wedge^{2}P\right)>0.\]
Then \(P\) is equivalent to a matrix in Smith normal form, say
\[P=U\text{Diag}(\pi^{w_{1}},\pi^{w_{2}},\pi^{w_{3}},\pi^{w_{4}})V\]
for some \(U,V\in\operatorname{GL}_{4}(\mathcal{O}_{K})\) and \(w_{1},w_{2},w_{3},w_{4}\in\mathbb{Z}\). We say that the weight \(w=(w_{1},w_{2},w_{3},w_{4})\) is _admissible_ for \(H\). It is clear that permuting the entries of \(w\), or adding the same integer to all entries, has no effect on admissibility.
**Definition 4.1**.: The weight \(w=(w_{1},w_{2},w_{3},w_{4})\)_dominates_ the weight \(w^{\prime}=(w^{\prime}_{1},w^{\prime}_{2},w^{\prime}_{3},w^{\prime}_{4})\) if
\[\begin{split}\max(1+w_{1}+w_{2}+w_{3}+w_{4}-w_{i}-w_{j}-w_{k}-w_{ l},0)\\ \geqslant\max(1+w^{\prime}_{1}+w^{\prime}_{2}+w^{\prime}_{3}+w^{ \prime}_{4}-w^{\prime}_{i}-w^{\prime}_{j}-w^{\prime}_{k}-w^{\prime}_{l},0)\end{split} \tag{2}\]
for all \(1\leqslant i<j\leqslant 4\) and \(1\leqslant k<l\leqslant 4\).
This definition is motivated by the fact that if \(w\) dominates \(w^{\prime}\) and \(w\) is admissible for \(H\) then \(w^{\prime}\) is admissible for \(H\). Our next lemma shows that (for the
purpose of proving that Algorithm 2.4 is correct) it suffices to consider finitely many (in fact 12) weights.
**Lemma 4.2**.: _Every weight \(w=(0,a,b,c)\in\mathbb{Z}^{4}\) with \(0\leqslant a\leqslant b\leqslant c\) dominates one of the following weights_
\[(0,0,0,0),\ (0,0,0,1),\ (0,1,1,1),\ (0,0,1,1),\ (0,0,1,2),\ (0,1,2,2),\] \[(0,1,1,2),\ (0,1,1,3),\ (0,2,2,3),\ (0,1,2,3),\ (0,1,2,4),\ (0,2,3,4).\]
Proof.: We list the pairs \((i,j)\) and \((k,l)\) in Definition 4.1 in the order \((1,2)\), \((1,3)\), \((2,3)\), \((1,4)\), \((2,4)\), \((3,4)\). Taking \(w=(0,a,b,c)\), the left hand side of (2) is \(\max(\xi,0)\) where \(\xi\) runs over the entries of the following symmetric matrix.
\[\begin{bmatrix}1+b+c-a&1+c&1+c-a&1+b&1+b-a&1\\ 1+c&1+a+c-b&1+c-b&1+a&1&1+a-b\\ 1+c-a&1+c-b&1+c-a-b&1&1-a&1-b\\ 1+b&1+a&1&1+a+b-c&1+b-c&1+a-c\\ 1+b-a&1&1-a&1+b-c&1+b-a-c&1-c\\ 1&1+a-b&1-b&1+a-c&1-c&1+a-b-c\end{bmatrix}\]
We divide into 8 cases according as to which of the inequalities \(0\leqslant a\leqslant b\leqslant c\) are equalities. In fact we make the following more precise claims.
* If \(0=a=b=c\) then \(w=(0,0,0,0)\).
* If \(0=a=b<c\) then \(w\) dominates \((0,0,0,1)\).
* If \(0=a<b=c\) then \(w\) dominates \((0,0,1,1)\).
* If \(0=a<b<c\) then \(w\) dominates \((0,0,1,2)\).
* If \(0<a=b=c\) then \(w\) dominates \((0,1,1,1)\).
* If \(0<a=b<c\) then \(w\) dominates \((0,1,1,3)\), \((0,1,1,2)\) or \((0,2,2,3)\).
* If \(0<a<b=c\) then \(w\) dominates \((0,1,2,2)\).
* If \(0<a<b<c\) then \(w\) dominates \((0,1,2,4)\), \((0,1,2,3)\) or \((0,2,3,4)\).
In each case where we list three possibilities, we further claim that these correspond to the subcases \(a+b<c\), \(a+b=c\) and \(a+b>c\) (in that order).
Since the proofs are very similar, we give details in just one case. So suppose that \(0<a<b<c\) and \(a+b=c\). Then we have \(a\geqslant 1\), \(b\geqslant 2\), \(c\geqslant 3\), \(b-a\geqslant 1\), \(c-a\geqslant 2\) and \(c-b\geqslant 1\). Listing the pairs \((i,j)\) and \((k,l)\) in the same order as
before, the left hand side of (2) is at least
\[\begin{bmatrix}5&4&3&3&2&1\\ 4&3&2&2&1&0\\ 3&2&1&1&0&0\\ 3&2&1&1&0&0\\ 2&1&0&0&0&0\\ 1&0&0&0&0&0\end{bmatrix}\]
with equality if \((a,b,c)=(1,2,3)\). Therefore \(w\) dominates \((0,1,2,3)\).
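The domination condition is easy to check by machine. The following is our own Python spot-check, not part of the paper: it encodes Definition 4.1 and verifies the statement of Lemma 4.2 for all weights \((0,a,b,c)\) with \(c\leqslant 5\).

```python
from itertools import combinations

PAIRS = list(combinations(range(4), 2))

def profile(w):
    """The values max(1 + sum(w) - w_i - w_j - w_k - w_l, 0) over all pairs (i,j), (k,l)."""
    s = sum(w)
    return [max(1 + s - w[i] - w[j] - w[k] - w[l], 0)
            for (i, j) in PAIRS for (k, l) in PAIRS]

def dominates(w, wp):
    return all(a >= b for a, b in zip(profile(w), profile(wp)))

BASIC = [(0,0,0,0), (0,0,0,1), (0,1,1,1), (0,0,1,1), (0,0,1,2), (0,1,2,2),
         (0,1,1,2), (0,1,1,3), (0,2,2,3), (0,1,2,3), (0,1,2,4), (0,2,3,4)]

for a in range(6):
    for b in range(a, 6):
        for c in range(b, 6):
            assert any(dominates((0, a, b, c), w) for w in BASIC)
```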
Our next remark further reduces the number of weights we must consider.
**Remark 4.3**.: It is clear from Remark 3.2 that if \(w\in\mathbb{Z}^{4}\) is admissible for \(H\) then \(-w\) is admissible for the dual of \(H\). We say that the weights \(w\) and \(-w\) (or any weights equivalent to these, in the sense of permuting the entries, or adding the same integer to all entries) are _dual_. The list of 12 weights in Lemma 4.2 consists of 4 dual pairs \((0,a,b,c)\) and \((0,c-b,c-a,c)\) with \(a+b\neq c\), and 4 self-dual weights \((0,a,b,a+b)\).
## 5. Completion of the proof
In this section we complete the proof that Algorithm 2.4 is correct.
We first note that if \(H\) and \(H^{\prime}\) are related by an integral change of coordinates, and the algorithm works for \(H\) then it works for \(H^{\prime}\). This is because before applying Operations 1, 2 or 3 we always make an integral change of coordinates that, by Lemma 3.1, is unique up to an element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) whose reduction mod \(\pi\) preserves a suitable subspace of \(k^{4}\). The following elementary lemma then shows that the transformed quadratic forms are again related by an integral change of coordinates.
**Lemma 5.1**.: _Let \(\alpha=\operatorname{Diag}(I_{r},\pi I_{4-r})\) and \(P\in\operatorname{GL}_{4}(\mathcal{O}_{K})\). Then \(P\in\alpha\operatorname{GL}_{4}(\mathcal{O}_{K})\alpha^{-1}\) if and only if the reduction of \(P\) mod \(\pi\) preserves the subspace \(\langle e_{1},\dots,e_{r}\rangle\)._
Proof.: This is [CFS, Lemma 4.1].
Let \(H\in\mathcal{O}_{K}[z_{12},\dots,z_{34}]\) be a quadratic form. If there exists \(P\in\operatorname{PGL}_{4}(K)\) such that
\[v\left(\frac{1}{\det P}H\circ\wedge^{2}\!P\right)>0, \tag{3}\]
then, as explained in Section 4, one of the 12 weights in Lemma 4.2 is admissible for \(H\). Since the analysis for dual weights (see Remark 4.3) is essentially identical,
we only need to consider one weight from each dual pair. It therefore suffices to consider the 8 weights listed in the table below.
In the case of weight \((w_{1},\ldots,w_{4})\) we may suppose, by an integral change of coordinates, that (3) holds with \(P=\operatorname{Diag}(\pi^{w_{1}},\ldots,\pi^{w_{4}})\). This implies certain lower bounds on the valuations of the coefficients of \(H\). To specify these (in a way that is valid even when \(\operatorname{char}(k)=2\)), we relabel the variables \(z_{12},z_{13},z_{23},z_{14},z_{24},z_{34}\) as \(z_{1},\ldots,z_{6}\) and write \(H=\sum_{i\leqslant j}H_{ij}z_{i}z_{j}\). We also put \(H_{ji}=H_{ij}\). Then the lower bounds on the \(v(H_{ij})\) are as recorded in the table.
**Case 1:** \((0,0,0,0)\)
\[\begin{bmatrix}1&1&1&1&1&1\\ 1&1&1&1&1&1\\ 1&1&1&1&1&1\\ 1&1&1&1&1&1\\ 1&1&1&1&1&1\\ 1&1&1&1&1&1\end{bmatrix}\qquad r=0\]
**Case 2:** \((0,0,0,1)\)
\[\begin{bmatrix}2&2&2&1&1&1\\ 2&2&2&1&1&1\\ 2&2&2&1&1&1\\ 1&1&1&0&0&0\\ 1&1&1&0&0&0\\ 1&1&1&0&0&0\end{bmatrix}\qquad r=1,2,3\]
**Case 3:** \((0,0,1,1)\)
\[\begin{bmatrix}3&2&2&2&2&1\\ 2&1&1&1&1&0\\ 2&1&1&1&1&0\\ 2&1&1&1&1&0\\ 2&1&1&1&1&0\\ 1&0&0&0&0&0\end{bmatrix}\qquad r=1,2\]
**Case 4:** \((0,1,1,2)\)
\[\begin{bmatrix}3&3&2&2&1&1\\ 3&3&2&2&1&1\\ 2&2&1&1&0&0\\ 2&2&1&1&0&0\\ 1&1&0&0&0&0\\ 1&1&0&0&0&0\end{bmatrix}\qquad r=1,2,3,4\]
**Case 5:** \((0,0,1,2)\)
\[\begin{bmatrix}4&3&3&2&2&1\\ 3&2&2&1&1&0\\ 3&2&2&1&1&0\\ 2&1&1&0&0&0\\ 2&1&1&0&0&0\\ 1&0&0&0&0&0\end{bmatrix}\qquad r=3,4\]
**Case 6:** \((0,1,1,3)\)
\[\begin{bmatrix}4&4&3&2&1&1\\ 4&4&3&2&1&1\\ 3&3&2&1&0&0\\ 2&2&1&0&0&0\\ 1&1&0&0&0&0\\ 1&1&0&0&0&0\end{bmatrix}\qquad r=3,4\]
**Case 7:** \((0,1,2,3)\)
\[\begin{bmatrix}5&4&3&3&2&1\\ 4&3&2&2&1&0\\ 3&2&1&1&0&0\\ 3&2&1&1&0&0\\ 2&1&0&0&0&0\\ 1&0&0&0&0&0\end{bmatrix}\qquad r=5\]
**Case 8:** \((0,1,2,4)\)
\[\begin{bmatrix}6&5&4&3&2&1\\ 5&4&3&2&1&0\\ 4&3&2&1&0&0\\ 3&2&1&0&0&0\\ 2&1&0&0&0&0\\ 1&0&0&0&0&0\end{bmatrix}\]
In our analysis of each case, we will assume we are not in an earlier case. The possibilities for \(r=\operatorname{rank}\overline{H}\) will be justified below, but are recorded in the table for convenience. We complete the proof that Algorithm 2.4 is correct by going through the 8 cases. In fact we show that if the cases are grouped as
\[\begin{array}{c|c|c|c|c}\text{Case 1}&\text{Case 2}&\text{Case 4}&\text{Case 5}&\text{Case 7}\\ &\text{Case 3}&&\text{Case 6}&\text{Case 8}\end{array}\]
then at each iteration of the algorithm we move at least one column to the left. Therefore, if after visiting Step 1 the first time and returning to it a further 4 times we still do not have \(v(H)>0\) then the algorithm is correct to return FALSE.
**Case 1:**\(w=(0,0,0,0)\). In this case we already have \(v(H)>0\), so \(r=0\) and we are done by Step 1.
**Case 2:**\(w=(0,0,0,1)\). We see from the table that \(\langle e_{12},e_{13},e_{23}\rangle\subset\ker\overline{H}\) and so \(r\leqslant 3\). We cannot have \(r=0\), otherwise we would be in Case 1. If \(r=1\) then we are done by Step 2. If \(r=2\) then we are done by Step 3. If \(r=3\) then Step 5 directly applies Operation 1. (By "directly" we mean that there is no preliminary integral change of coordinates.) Since this gives \(v(H)>0\) we are in Case 1 on the next iteration.
**Case 3:**\(w=(0,0,1,1)\). We see from the table that \(\overline{H}=\ell z_{34}\) for some linear form \(\ell\). One of the transformations considered in Step 4 is to directly apply Operation 2. Since this gives \(v(H)>0\) we are in Case 1 on the next iteration.
**Case 4:**\(w=(0,1,1,2)\). We see from the table that \(\langle e_{12},e_{13}\rangle\subset\ker\overline{H}\) and so \(r\leqslant 4\). If \(r=4\) then Step 5 directly applies Operation 1 or Operation 3. Then on the next iteration either \((0,0,0,1)\) or \((0,1,1,1)\) is admissible, which means we are in Case 2 or its dual. If \(r\leqslant 3\) then by applying a block diagonal element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) with blocks of sizes \(1\), \(2\) and \(1\), we may suppose that \(H_{35}\equiv H_{45}\equiv 0\pmod{\pi}\). If \(r=3\) then \(\ker\overline{H}=\langle e_{12},e_{13},ae_{23}+be_{14}\rangle\) for some \(a,b\in k\). If \(a=0\) or \(b=0\) then \(\ker\overline{H}\) is isotropic for \(G\). Otherwise \(\langle e_{12},e_{13}\rangle\) is the unique codimension \(1\) isotropic subspace. Either way, Step 5 directly applies Operation 1 or Operation 3, and we are done as before.
We now suppose that \(r\leqslant 2\) and divide into the following cases.
* Suppose that \(H_{36}\not\equiv 0\pmod{\pi}\) and \(H_{46}\not\equiv 0\pmod{\pi}\). Since \(r\leqslant 2\) we have \(\overline{H}=\ell z_{34}\) for some linear form \(\ell\). Since there is no integral change of coordinates taking \(\ell\) to \(z_{34}\) the only possible outcome of Step 4 is to directly apply Operation 2. This brings us to Case 3.
* Suppose that \(H_{36}\equiv 0\pmod{\pi}\) and \(H_{46}\not\equiv 0\pmod{\pi}\). Then \(v(H_{33})=1\), otherwise we would be in Case 2. We again have \(\overline{H}=\ell z_{34}\) for some linear form \(\ell\). Although there does now exist an integral change of coordinates taking \(\ell\) to \(z_{34}\), following this up with Operation 2 does not preserve that \(v(H)\geqslant 0\). So again the only possible outcome of Step 4 is to directly apply Operation 2. This brings us to Case 3.
* Suppose that \(H_{36}\not\equiv 0\pmod{\pi}\) and \(H_{46}\equiv 0\pmod{\pi}\). This is essentially the same as the previous case by duality.
* Suppose that \(H_{36}\equiv H_{46}\equiv 0\pmod{\pi}\). Then \(\overline{H}\) is a quadratic form in \(z_{24}\) and \(z_{34}\) only. If this factors over \(k\) then either of the transformations in Step 4 brings us to Case 3. Otherwise we proceed to Step 5 which directly
applies Operation 1 or Operation 3. As before, this brings us to Case 2 or its dual.
**Case 5:**\(w=(0,0,1,2)\). Applying a block diagonal element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) with blocks of sizes 2, 1 and 1, we may suppose that \(H_{26}\equiv 0\pmod{\pi}\). Then \(H_{36}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 2) and \(H_{44},H_{45},H_{55}\) cannot all vanish mod \(\pi\) (otherwise we would be in Case 3). Therefore \(\langle e_{12},e_{13}\rangle\subset\ker\overline{H}\) and \(r=3\) or 4. The only 3-dimensional isotropic subspaces for \(G\) that contain \(\langle e_{12},e_{13}\rangle\) are \(\langle e_{12},e_{13},e_{23}\rangle\) and \(\langle e_{12},e_{13},e_{14}\rangle\). Therefore one of the transformations considered in Step 5 is to directly apply Operation 1 or Operation 3 (the latter only being a possibility if \(H_{44}\equiv 0\pmod{\pi}\)). It follows that at the next iteration we have \(r\leqslant 2\), and so are in Case 4 or earlier.
**Case 6:**\(w=(0,1,1,3)\). Applying a block diagonal element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) with blocks of sizes 1, 2 and 1, we may suppose that \(H_{15}\equiv 0\pmod{\pi^{2}}\). We have \(H_{44}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 4) and \(H_{35}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 5). Therefore \(\langle e_{12},e_{13}\rangle\subset\ker\overline{H}\) and \(r=3\) or 4. Exactly as in Case 5 we find that at the next iteration we have \(r\leqslant 2\), and so are in Case 4 or earlier.
**Case 7:**\(w=(0,1,2,3)\). We have \(H_{26}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 4), and \(H_{35},H_{45},H_{55}\) cannot all vanish mod \(\pi\) (otherwise we would be in Case 3). Therefore \(r=3\) or 4, and \(\langle e_{12}\rangle\subset\ker\overline{H}\subset\langle e_{12},e_{13},e_{23 },e_{14}\rangle\).
If \(r=4\) then \(\ker\overline{H}=\langle e_{12},ae_{13}+be_{23}+ce_{14}\rangle\) for some \(a,b,c\in k\). If \(b,c\neq 0\) then \(\langle e_{12}\rangle\) is the unique codimension 1 subspace of \(\ker\overline{H}\) that is isotropic for \(G\). Therefore, Step 5 directly applies Operation 2, which brings us to Case 4. If \(b=0\) then \(c\neq 0\), and by applying a block diagonal element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) with blocks of sizes 1, 1 and 2, we may suppose that \(a=0\). Then the 3-dimensional isotropic subspaces for \(G\) containing \(\ker\overline{H}=\langle e_{12},e_{14}\rangle\) are \(\langle e_{12},e_{13},e_{14}\rangle\) and \(\langle e_{12},e_{14},e_{24}\rangle\). Step 5 applies either \(\operatorname{Diag}(1,\pi,\pi,\pi)\) or \(\operatorname{Diag}(1,1,\pi,1)\) bringing us to Case 5 or Case 6. The case \(c=0\) is similar by duality.
If \(r=3\) then \(\ker\overline{H}=\langle e_{12},e_{23}+ae_{13},e_{14}+be_{13}\rangle\) for some \(a,b\in k\). By applying a block diagonal element of \(\operatorname{GL}_{4}(\mathcal{O}_{K})\) with blocks of sizes 2 and 2, we may suppose that \(a=b=0\). Then \(H_{35}\equiv H_{36}\equiv H_{45}\equiv H_{46}\equiv 0\pmod{\pi}\) and \(H_{55}\not\equiv 0\pmod{\pi}\). The codimension 1 subspaces of \(\ker\overline{H}=\langle e_{12},e_{23},e_{14}\rangle\) that are isotropic for \(G\) are \(\langle e_{12},e_{23}\rangle\) and \(\langle e_{12},e_{14}\rangle\). The 3-dimensional isotropic subspaces for \(G\) containing one of these spaces are
\[\langle e_{12},e_{13},e_{23}\rangle,\ \langle e_{12},e_{13},e_{14}\rangle,\ \langle e _{12},e_{23},e_{24}\rangle,\ \langle e_{12},e_{14},e_{24}\rangle.\]
The first two of these correspond to directly applying Operation 1 or Operation 3, which brings us to Case 5 or its dual. The last two correspond to transformations which fail to preserve that \(v(H)\geqslant 0\), and so cannot be selected by Step 5.
**Case 8:**\(w=(0,1,2,4)\). We have \(H_{35}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 5), \(H_{26}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 6), and \(H_{44}\not\equiv 0\pmod{\pi}\) (otherwise we would be in Case 7). Therefore \(r=5\) and \(\ker\overline{H}=\langle e_{12}\rangle\). Step 5 directly applies Operation 2 which brings us to Case 6.
**Example 5.2**.: We give three examples where Algorithm 2.4 takes the maximum of 4 iterations to give \(v(H)>0\). The first two examples start in Case 7, with \(\operatorname{rank}\overline{H}=3\) or 4, and the final one starts in Case 8. In the first two examples there are two choices on the first iteration. We made an arbitrary choice in each case, but in fact with the other choices the algorithm would still have taken 4 iterations.
Let \(K=\mathbb{Q}\) and \(v=v_{p}\) for any choice of prime number \(p\). An arrow labelled \((w_{1},\ldots,w_{4})\) indicates that we replace \(H\) by \(\frac{1}{\det P}H\circ\wedge^{2}P\) where \(P=\operatorname{Diag}(p^{w_{1}},\ldots,p^{w_{4}})\).
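Each arrow in the chains displayed below can be reproduced mechanically; the following short SymPy sketch (our own illustration, not part of the paper — the helper `apply_weight` is ours) verifies the first chain.

```python
from sympy import symbols, expand

p = 5  # any prime works; the displayed valuations are powers of p
z12, z13, z23, z14, z24, z34 = symbols('z12 z13 z23 z14 z24 z34')
pairs = {z12: (0, 1), z13: (0, 2), z23: (1, 2), z14: (0, 3), z24: (1, 3), z34: (2, 3)}

def apply_weight(H, w):
    """Replace H by (1/det P) * (H o wedge^2 P) for P = Diag(p^w1, ..., p^w4)."""
    sub = {z: z * p**(w[i] + w[j]) for z, (i, j) in pairs.items()}
    return expand(H.subs(sub, simultaneous=True) / p**sum(w))

H = p**5*z12**2 + z13*z34 + p*z23**2 + p*z14**2 + z24**2
for w in [(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1)]:
    H = apply_weight(H, w)
print(H)  # every coefficient is divisible by p, i.e. v(H) > 0
```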
\[p^{5}z_{12}^{2}+z_{13}z_{34}+pz_{23}^{2}+pz_{14}^{2}+z_{24}^{2} \stackrel{{(0,0,0,1)}}{{\longrightarrow}}p^{4}z_{12}^{2}+z_{ 13}z_{34}+z_{23}^{2}+p^{2}z_{14}^{2}+pz_{24}^{2}\] \[\stackrel{{(0,0,1,0)}}{{\longrightarrow}}p^{3}z_{12 }^{2}+pz_{13}z_{34}+pz_{23}^{2}+pz_{14}^{2}+z_{24}^{2}\] \[\stackrel{{(0,1,0,1)}}{{\longrightarrow}}p^{3}z_{1 2}^{2}+z_{13}z_{34}+pz_{23}^{2}+pz_{14}^{2}+p^{2}z_{24}^{2}\] \[\stackrel{{(0,0,1,1)}}{{\longrightarrow}}p(z_{12 }^{2}+z_{13}z_{34}+z_{23}^{2}+z_{14}^{2}+pz_{24}^{2}).\]
\[p^{5}z_{12}^{2}+z_{13}z_{34}+pz_{23}^{2}+z_{14}z_{24} \stackrel{{(0,0,1)}}{{\longrightarrow}}p^{4}z_{12}^{2}+z_{ 13}z_{34}+z_{23}^{2}+pz_{14}z_{24}\] \[\stackrel{{(0,0,1,0)}}{{\longrightarrow}}p^{3}z_{1 2}^{2}+pz_{13}z_{34}+pz_{23}^{2}+z_{14}z_{24}\] \[\stackrel{{(0,1,0,1)}}{{\longrightarrow}}p^{3}z_{1 2}^{2}+z_{13}z_{34}+pz_{23}^{2}+pz_{14}z_{24}\] \[\stackrel{{(0,0,1,1)}}{{\longrightarrow}}p(z_{12 }^{2}+z_{13}z_{34}+z_{23}^{2}+z_{14}z_{24}).\]
\[p^{6}z_{12}^{2}+z_{13}z_{34}+z_{23}z_{24}+z_{14}^{2} \stackrel{{(0,0,1,1)}}{{\longrightarrow}}p^{4}z_{1 2}^{2}+pz_{13}z_{34}+z_{23}z_{24}+z_{14}^{2}\] \[\stackrel{{(0,0,0,1)}}{{\longrightarrow}}p^{3}z_{1 2}^{2}+pz_{13}z_{34}+z_{23}z_{24}+pz_{14}^{2}\] \[\stackrel{{(0,1,0,1)}}{{\longrightarrow}}p^{3}z_{1 2}^{2}+z_{13}z_{34}+pz_{23}z_{24}+pz_{14}^{2}\] \[\stackrel{{(0,0,1,1)}}{{\longrightarrow}}p(z_{12 }^{2}+z_{13}z_{34}+z_{23}z_{24}+z_{14}^{2}).\] |
2309.10442 | Essential ideal transforms | In this research we generalize some concepts in local
cohomology, such as the contravariant functor $ext$, the covariant functor $Ext$,
the covariant functor $Tor$ and ideal transforms, to $e$-exact sequences. The
$e$-exact sequence was introduced by Akray and Zebari \cite{AZ} in 2020. We
obtain that, for a torsion-free module $B$, $_eext^n_R(P,B)=0$, while
$_eExt^n_R(A,E)=0$ for every module $A$. Also, for any torsion-free module $B$
we have an $e$-exact sequence $0\to \Gamma_{a}(B) \to B\to D_{a}(B)\to
H^1_{a}(B)\to 0$ and an isomorphism between $B$ and $r D_{a}(B)$. Finally, we
generalize the Mayer-Vietoris sequence to $e$-exact sequences in essential local
cohomology and obtain special $e$-exact sequences. | Runak H. Mustafa, Ismael Akray | 2023-09-19T09:04:00Z | http://arxiv.org/abs/2309.10442v1 | # Essential ideal transforms
###### Abstract.
In this research we generalize some concepts in local cohomology, such as the contravariant functor \(ext\), the covariant functor \(Ext\), the covariant functor \(Tor\) and ideal transforms, to \(e\)-exact sequences. The \(e\)-exact sequence was introduced by Akray and Zebari [1] in 2020. We obtain that, for a torsion-free module \(B\), \({}_{e}ext_{R}^{n}(P,B)=0\), while \({}_{e}Ext_{R}^{n}(A,E)=0\) for every module \(A\). Also, for any torsion-free module \(B\) we have an \(e\)-exact sequence \(0\rightarrow\Gamma_{a}(B)\to B\to D_{a}(B)\to H_{a}^{1}(B)\to 0\) and an isomorphism between \(B\) and \(rD_{a}(B)\). Finally, we generalize the Mayer-Vietoris sequence to \(e\)-exact sequences in essential local cohomology and obtain special \(e\)-exact sequences.
Key words and phrases: Contravariant and covariant essential derived functor, essential projective, essential injective, essential ideal transforms. 2020 Mathematics Subject Classification: 13C11, 46M18, 13D45.
## 1. Introduction
Throughout this article, \(R\) will denote a Noetherian domain and \(B\) a torsion-free e-injective \(R\)-module. In 1972, R. S. Mishra introduced a generalization of split sequences, where a semi-sequence \(M_{i-1}\stackrel{{ f_{i-1}}}{{\rightarrow}}M_{i}\stackrel{{ f_{i}}}{{\rightarrow}}M_{i+1}\) is called semi-split if \(Ker(f_{i})\) is a direct summand of \(M_{i}\) [8]. So a semi-split sequence is split if and only if it is exact. In 1999, Davvaz and Parnian-Goramaleky introduced a generalization of exact sequences, calling it a \(U\)-exact sequence [6]. A submodule \(N\) of an \(R\)-module \(M\) is called essential or large in \(M\) if it has non-zero intersection with every non-zero submodule of \(M\); this is denoted by \(N\leqslant_{e}M\). Akray and Zebari in 2020 [1] gave another generalization of exact sequences of modules: instead of the equality of \(Im(f)\) with \(Ker(g)\), they required \(Im(f)\) to be a large (essential) submodule of \(Ker(g)\) in a sequence \(A\stackrel{{ f}}{{\rightarrow}}D\stackrel{{ g}}{{\rightarrow}}C\), and called such a sequence an essential exact sequence or simply an \(e\)-exact sequence. Equivalently, a sequence of \(R\)-modules and \(R\)-morphisms \(\cdots\to N_{i-1}\stackrel{{ f_{i-1}}}{{\rightarrow}}N_{i}\stackrel{{ f_{i}}}{{\rightarrow}}N_{i+1}\rightarrow\dots\) is said to be essential exact (e-exact) at \(N_{i}\) if \(Im(f_{i-1})\leqslant_{e}Ker(f_{i})\), and to be e-exact if it is e-exact at \(N_{i}\) for all \(i\). In particular, a sequence of \(R\)-modules and \(R\)-morphisms
\(0\to L\xrightarrow{f_{1}}M\xrightarrow{f_{2}}N\to 0\) is a short e-exact sequence if and only if \(Ker(f_{1})=0\), \(Im(f_{1})\leqslant_{e}Ker(f_{2})\) and \(Im(f_{2})\leqslant_{e}N\). They studied some basic properties of \(e\)-exact sequences and established their connection with notions in module theory and homological algebra [2]. Also, F. Campanini and A. Facchini worked on \(e\)-exact sequences and studied the relation of \(e\)-exactness with some related functors, like the functor defined from the category of \(R\)-modules to the spectral category of \(R\)-modules and the localization functor with respect to the singular torsion theory [5]. Furthermore, Akray and R. H. Mustafa in 2023 introduced and proved further properties of \(e\)-exact sequences, and we will restrict our discussion to their applications to both injective modules and the torsion functor of local cohomology [3]. Local cohomology was introduced by Grothendieck in a seminar at Harvard in 1961 and written up by Hartshorne in 1967. Since then, this subject has been studied by Hartshorne and numerous authors, even in recent years; see [9], [4] and [7].
In this research we generalize some concepts in local cohomology, such as the contravariant functor \({}_{e}ext\), the covariant functor \({}_{e}Ext\) and ideal transforms, to \(e\)-exact sequences.
In section two, we describe the concepts \({}_{e}ext_{R}^{n}\) and \({}_{e}Ext_{R}^{n}\) and investigate the properties of each of them. For example, if \(0\to A^{\prime}\to A\to A^{\prime\prime}\to 0\) is an \(e\)-exact sequence of \(R\)-modules, then there is a long \(e\)-exact sequence \(0\to r_{1}Hom(A^{\prime\prime},B)\to r_{2}Hom(A,B)\to r_{3}Hom(A^{\prime},B)\to {}_{e}ext_{R}^{1}(A^{\prime\prime},B)\to{}_{e}ext_{R}^{1}(A,B)\to\dots\) for some nonzero elements \(r_{1},r_{2},r_{3}\in R\); also, for any \(R\)-module \(A\) and any \(e\)-injective \(R\)-module \(E\), we have \({}_{e}Ext_{R}^{n}(A,E)=0\) for all \(n\geq 1\).
In section three, we construct the essential ideal transform and find new \(e\)-exact sequences by generalizing the idea of the Mayer-Vietoris sequence. We prove that for any torsion-free \(R\)-module \(B\) there exists \(0\neq r\in R\) such that \(\epsilon^{*}:B\to rD_{a}(B)\) is an isomorphism if and only if \(\Gamma_{a}(B)=H_{a}^{1}(B)=0\), and we also show that there is an \(e\)-exact sequence \(0\to D_{r(a+b)}(B)\to D_{a}(B)\bigoplus D_{b}(B)\to D_{a\cap b}(B)\to rH_{r(a+b)}^{2}(B)\to rH_{a}^{2}(B)\bigoplus rH_{b}^{2}(B)\to rH_{a\cap b}^{2}(B)\to\dots\).
## 2. Contravariant and covariant right essential derived functors
### Contravariant essential derived functor
In this subsection we describe contravariant right derived functors \(ext_{R}^{n}\) on \(e\)-projective resolutions, calling them contravariant essential derived functors (briefly \({}_{e}ext_{R}^{n}\)), and discuss some of their properties. We first present
some definitions that are central to our aim, namely essential injective and essential projective modules, as follows:
**Definition 2.1**.: [3] An \(R\)-module \(E\) is e-injective if it satisfies the following condition: for any monic \(f_{1}:A_{1}\to A_{2}\) and any map \(f_{2}:A_{1}\to E\), there exist \(0\neq r\in R\) and \(f_{3}:A_{2}\to E\) such that \(f_{3}f_{1}=rf_{2}\).
In this case, we say the map \(f_{3}\) essentially extends the map \(f_{2}\). An \(e\)-injective module need not be injective; for example, the \(\mathbb{Z}\)-module \(\mathbb{Z}\) is an \(e\)-injective module, but it is not injective, see [3, Example 2.3].
**Definition 2.2**.: [1] An \(e\)-exact sequence \(0\to A\overset{i}{\to}B\overset{p}{\to}C\to 0\) is \(e\)-split if there exist \(0\neq s\in R\) and a morphism \(j:C\to B\,(\text{or }f:B\to A)\) such that \(pj=sI_{C}\) (or \(\,fi=sI_{A}\)).
**Definition 2.3**.: [1] An \(R\)-module \(P\) is \(e\)-projective if it satisfies the following condition: for any \(e\)-epic map \(f_{1}:A_{1}\to A_{2}\) and any map \(f_{2}:P\to A_{2}\), there exist \(0\neq r\in R\) and \(f_{3}:P\to A_{1}\) such that \(f_{1}f_{3}=rf_{2}\).
The following example shows that an \(e\)-projective module may not be projective.
**Example 2.4**.: Consider the \(e\)-exact sequence of \(\frac{\mathbb{Z}}{16\mathbb{Z}}\)-modules \(0\to\frac{4\mathbb{Z}}{16\mathbb{Z}}\overset{f_{1}}{\to}\frac{\mathbb{Z}}{16\mathbb{Z}}\overset{g}{\to}\frac{2\mathbb{Z}}{16\mathbb{Z}}\to 0\), where \(f_{1}(x+16\mathbb{Z})=x+16\mathbb{Z}\) and \(g(x+16\mathbb{Z})=4x+16\mathbb{Z}\). It is \(e\)-split, because there is a map \(h:\frac{2\mathbb{Z}}{16\mathbb{Z}}\to\frac{\mathbb{Z}}{16\mathbb{Z}}\), \(h(x+16\mathbb{Z})=x+16\mathbb{Z}\), such that \(g\circ h(x+16\mathbb{Z})=g(x+16\mathbb{Z})=4x+16\mathbb{Z}=4I_{\frac{2\mathbb{Z}}{16\mathbb{Z}}}(x+16\mathbb{Z})\). Thus \(\frac{2\mathbb{Z}}{16\mathbb{Z}}\) is \(e\)-projective as a \(\frac{\mathbb{Z}}{16\mathbb{Z}}\)-module, while it is not projective.
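A quick numeric check of this example (our own illustration, not part of the paper; the map names mirror those above) can be run as follows.

```python
# Verify Example 2.4 numerically in Z/16Z.
Z16 = range(16)
g = lambda x: (4 * x) % 16          # Z/16Z -> 2Z/16Z, g(x) = 4x
h = lambda x: x % 16                # candidate e-splitting 2Z/16Z -> Z/16Z

# g∘h = 4·id on 2Z/16Z, so the sequence e-splits with s = 4:
assert all(g(h(x)) == (4 * x) % 16 for x in Z16 if x % 2 == 0)

im_g = {g(x) for x in Z16}          # image of g is 4Z/16Z, a proper submodule of 2Z/16Z
assert im_g == {0, 4, 8, 12}
# Im(g) is essential in 2Z/16Z: it meets every nonzero submodule <2>, <4>, <8>.
for d in (2, 4, 8):
    submodule = {(k * d) % 16 for k in range(16)}
    assert im_g & (submodule - {0})
print("Example 2.4 verified: the sequence is e-split and e-exact, but g is not surjective")
```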
**Definition 2.5**.: An e-projective resolution of an \(R\)-module \(A\) is an e-exact sequence \(\cdots\to P_{n+1}\stackrel{{ d_{n+1}}}{{\rightarrow}}P_{n}\rightarrow\cdots\stackrel{{ d_{2}}}{{\rightarrow}}P_{1}\stackrel{{ d_{1}}}{{\rightarrow}}P_{0}\stackrel{{ d_{0}}}{{\rightarrow}}A\to 0\) where each \(P_{n}\) is an e-projective \(R\)-module. If \(T\) is a contravariant functor, then \((R^{n}T)A=H^{n}(TP_{A})=\frac{Ker\,Td_{n+1}}{Im\,Td_{n}}\), where \(P_{A}:\cdots\to P_{n+1}\to P_{n}\rightarrow\cdots\to P_{1}\stackrel{{ d_{1}}}{{\rightarrow}}P_{0}\to 0\) is the deleted \(e\)-projective resolution of \(A\). In particular, we put \(T=Hom(-,B)\) and define \({}_{e}ext_{R}^{n}(-,B)=R^{n}T\). Then \({}_{e}ext_{R}^{n}(A,B)=H^{n}(Hom_{R}(P_{A},B))\), which means \({}_{e}ext_{R}^{n}(A,B)=\frac{Ker\,d^{n^{*}}}{Im\,d^{(n-1)^{*}}}\),
where \(d^{n^{*}}:Hom(P_{n-1},B)\to Hom(P_{n},B)\) is defined as usual by \(d^{n^{*}}:f\longmapsto fd_{n}\).
**Theorem 2.6**.: _Let \(A\) be an \(R\)-module. Then \({}_{e}ext_{R}^{n}(A,B)=0\) for all negative integers \(n\)._
Proof.: Let \(P:\cdots\to P_{n+1}\to P_{n}\rightarrow\cdots\to P_{1}\to P_{0}\to A\to 0\) be an \(e\)-projective resolution of \(A\). Then the deleted complex of \(A\) is \(P_{A}:\cdots\to P_{n+1}\to P_{n}\rightarrow\cdots\to P_{1}\to P_{0}\to 0\). After applying \(Hom(-,B)\) to the deleted complex, we get \(0\to Hom(P_{0},B)\to Hom(P_{1},B)\to Hom(P_{2},B)\rightarrow\ldots\) by [1, Theorem 2.7], and the terms of this complex in negative degrees are zero. Hence \({}_{e}ext_{R}^{n}(A,B)=0\) for all negative integers \(n\).
**Theorem 2.7**.: _Let \(A\) be \(R\)-module. Then \({}_{e}ext_{R}^{0}(A,B)\cong rHom(A,B)\) for some \(0\neq r\in R\)._
Proof.: Let \(\cdots\to P_{n+1}\to P_{n}\rightarrow\cdots\to P_{1}\stackrel{{ d_{1}}}{{\rightarrow}}P_{0}\stackrel{{\epsilon}}{{\rightarrow}}A\to 0\) be an \(e\)-projective resolution of \(A\). By definition \({}_{e}ext_{R}^{0}(A,B)=H^{0}(Hom(P_{A},B))=Ker\,d_{1}^{*}\). But left \(e\)-exactness of \(Hom(-,B)\) gives an \(e\)-exact sequence \(0\to Hom(A,B)\stackrel{{\epsilon^{*}}}{{\rightarrow}}Hom(P_{0},B)\stackrel{{ d_{1}^{*}}}{{\rightarrow}}Hom(P_{1},B)\stackrel{{ d_{2}^{*}}}{{\rightarrow}}Hom(P_{2},B)\rightarrow\ldots\) by [3, Proposition 2.7]. We may regard \(\epsilon^{*}\) as a map \(Hom(A,B)\to Ker\,d_{1}^{*}\); since \(Im\,\epsilon^{*}\leqslant_{e}Ker\,d_{1}^{*}\), this map is well-defined, and since \(Hom(-,B)\) is a left \(e\)-exact functor, \(\epsilon^{*}\) is monic. Now we want to prove that \(\epsilon^{*}\) is epic onto \(rKer\,d_{1}^{*}\). Let \(f\in Ker\,d_{1}^{*}\), where \(f:P_{0}\to B\); then \(d_{1}^{*}(f)(p_{1})=f(d_{1}(p_{1}))=0\). We have \(Im\,\epsilon\leqslant_{e}A\), so there exist \(a^{\prime}\in A\) and \(0\neq r\in R\) such that \(\epsilon(p_{0})=ra^{\prime}\). Now we define \(g:A\to B\) by \(rg(a)=b\) for this fixed \(r\in R\). Let \(a_{1},a_{2}\in A\) with \(a_{1}=a_{2}\). Then \(ra_{1}=ra_{2}\) implies that \(rg\epsilon(p_{1})=rg\epsilon(p_{2})\), so we obtain \(f(p_{1})=f(p_{2})\) and \(b_{1}=b_{2}\); thus \(g\) is well-defined. Now \(rf(p_{0})=rg\epsilon(p_{0})=g\epsilon(rp_{0})=\epsilon^{*}(g)(rp_{0})\). Hence \(\epsilon^{*}\) is an isomorphism onto \(rKer\,d_{1}^{*}\); since \({}_{e}ext_{R}^{0}(A,B)=Ker\,d_{1}^{*}\), it follows that \({}_{e}ext_{R}^{0}(A,B)\) is isomorphic to \(rHom(A,B)\).
**Theorem 2.8**.: _Let \(P\) be an \(e\)-projective \(R\)-module, then \({}_{e}ext_{R}^{n}(P,B)=0\), for all \(n\geq 1\)._
Proof.: Since \(P\) is \(e\)-projective, an \(e\)-projective resolution of \(P\) is \(0\to P\stackrel{{ 1_{P}}}{{\to}}P\to 0\). The corresponding deleted \(e\)-projective resolution \(P_{P}\) is \(0\to P\to 0\). By applying \(Hom(-,B)\) to this deleted complex we obtain \({}_{e}ext_{R}^{n}(P,B)=0\) for all \(n\geq 1\).
**Corollary 2.9**.: _Let \(0\to A^{\prime}\to A\to A^{\prime\prime}\to 0\) be an \(e\)-exact sequence of \(R\)-modules. Then there is a long \(e\)-exact sequence \(0\to r_{1}Hom(A^{\prime\prime},B)\to r_{2}Hom(A,B)\to r_{3}Hom(A^{\prime},B)\to{}_{e}ext_{R}^{1}(A^{\prime\prime},B)\to{}_{e}ext_{R}^{1}(A,B)\to\dots\) for some nonzero elements \(r_{1},r_{2},r_{3}\in R\)._
Proof.: By [2, Theorem 3.7], we have an \(e\)-exact sequence of deleted complexes \(0\to P_{A^{\prime}}\to P_{A}\to P_{A^{\prime\prime}}\to 0\). If \(T=Hom(-,B)\), then \(0\to TP_{A^{\prime\prime}}\to TP_{A}\to TP_{A^{\prime}}\to 0\) is still \(e\)-exact by [3, Proposition 2.7]. Then by [2, Theorem 3.2] we have an \(e\)-exact sequence \(0\to H^{0}(Hom(P_{A^{\prime\prime}},B))\to H^{0}(Hom(P_{A},B))\to H^{0}(Hom(P_{A^{\prime}},B))\to H^{1}(Hom(P_{A^{\prime\prime}},B))\to H^{1}(Hom(P_{A},B))\to\dots\). By using the definition of \({}_{e}ext_{R}^{n}\), Theorem 2.7 and Theorem 2.6 we obtain an \(e\)-exact sequence \(0\to r_{1}Hom(A^{\prime\prime},B)\to r_{2}Hom(A,B)\to r_{3}Hom(A^{\prime},B)\to{}_{e}ext_{R}^{1}(A^{\prime\prime},B)\to{}_{e}ext_{R}^{1}(A,B)\to\dots\) for some nonzero elements \(r_{1},r_{2},r_{3}\in R\).
**Theorem 2.10**.: _Given a commutative diagram of \(R\)-modules having \(e\)-exact rows as the following:_
_Then there is a commutative diagram of \(R\)-modules with \(e\)-exact rows:_
Proof.: By [2, Theorem 3.7], we have an \(e\)-exact sequence of deleted complexes \(0\to P_{A^{\prime}}\to P_{A}\to P_{A^{\prime\prime}}\to 0\). If \(T=Hom(-,B)\), then \(0\to TP_{A^{\prime\prime}}\to TP_{A}\to TP_{A^{\prime}}\to 0\) is still \(e\)-exact by [3, Proposition 2.7]. By [2, Remark 3.3] there is a commutative diagram of \(R\)-modules and \(R\)-morphisms as the following
\[\cdots\to H^{n-1}(Hom(P_{A},B))\stackrel{{ i^{*}}}{{\to}}H^{n-1}(Hom(P_{A^{\prime}},B))\stackrel{{\sigma}}{{\to}}H^{n}(Hom(P_{A^{\prime\prime}},B))\to\cdots\]
together with the corresponding long \(e\)-exact sequence arising from the second row of the given diagram, the vertical maps between the two sequences being induced by the given morphisms,
and our proof will be complete by using the definition of \({}_{e}ext_{R}^{n}(A,B)=H^{n}(Hom(P_{A},B))\).
### Covariant essential derived functor \({}_{e}Ext\)
In this subsection we describe covariant right derived functors \(Ext_{R}^{n}\) on \(e\)-injective resolutions, calling them covariant essential derived functors (briefly \({}_{e}Ext_{R}^{n}\)), discuss some of their properties and prove some results under suitable conditions. In classical homological algebra, \(ext_{R}^{n}\) and \(Ext_{R}^{n}\) are equivalent, but this is not the case for \({}_{e}ext_{R}^{n}\) and \({}_{e}Ext_{R}^{n}\). We begin with the following definition.
**Definition 2.11**.: An e-injective resolution of an \(R\)-module \(M\) is an e-exact sequence \(0\to M\stackrel{{\eta}}{{\rightarrow}}E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1}\stackrel{{ d^{1}}}{{\rightarrow}}\cdots\to E^{n}\stackrel{{ d^{n}}}{{\rightarrow}}E^{n+1}\rightarrow\cdots\) where each \(E^{i}\) is an e-injective \(R\)-module. If \(T\) is a covariant functor, then \((R^{n}T)M=H^{n}(TE^{M})=\frac{Ker\,Td^{n}}{Im\,Td^{n-1}}\), where \(E^{M}:0\to E^{0}\stackrel{{ d^{0}}}{{\rightarrow}}E^{1}\stackrel{{ d^{1}}}{{\rightarrow}}\dots\) is the deleted e-injective resolution of \(M\). In particular, we put \(T=Hom(A,-)\) for an \(R\)-module \(A\), and define \({}_{e}Ext_{R}^{n}(A,-)=R^{n}T\). Then \({}_{e}Ext_{R}^{n}(A,M)=H^{n}(Hom_{R}(A,E^{M}))\), which means \({}_{e}Ext_{R}^{n}(A,M)=\frac{Ker\,d_{*}^{n}}{Im\,d_{*}^{n-1}}\),
where \(d_{*}^{n}:Hom(A,E^{n})\to Hom(A,E^{n+1})\) is defined as usual by \(d_{*}^{n}:f\longmapsto d^{n}f\).
**Theorem 2.12**.: _Let \(E\) be an \(e\)-injective \(R\)-module, then for \(R\)-module \(A\), \({}_{e}Ext_{R}^{n}(A,E)=0\), for all \(n\geq 1\)_
Proof.: Since \(E\) is an \(e\)-injective module, an \(e\)-injective resolution of \(E\) is \(0\to E\stackrel{{ 1_{E}}}{{\rightarrow}}E\to 0\). The corresponding deleted \(e\)-injective resolution \(E^{E}\) is \(0\to E\to 0\). By applying \(Hom(A,-)\) to this deleted complex we obtain \({}_{e}Ext_{R}^{n}(A,E)=0\) for all \(n\geq 1\).
**Corollary 2.13**.: _Let \(0\to A^{\prime\prime}\to A\to A^{\prime}\to 0\) be a short \(e\)-exact sequence of \(R\)-modules and \(P\) be an \(e\)-projective module, then there is a long \(e\)-exact sequence \(0\to Hom(P,A^{\prime\prime})\to Hom(P,A)\to Hom(P,A^{\prime})\to{}_{e}Ext_{R}^ {1}(P,A^{\prime\prime})\rightarrow{}_{e}Ext_{R}^{1}(P,A)\rightarrow\dots\)_
Proof.: By [3, Proposition 2.10], we have an \(e\)-exact sequence of deleted complexes \(0\to E^{A^{\prime\prime}}\to E^{A}\to E^{A^{\prime}}\to 0\). If \(T=Hom(P,-)\), then \(0\to TE^{A^{\prime\prime}}\to TE^{A}\to TE^{A^{\prime}}\to 0\) is still \(e\)-exact by [1, Theorem 3.1]. Then by [2, Theorem 3.2] we have an \(e\)-exact sequence
\(0\to H^{0}(Hom(P,E^{A^{\prime\prime}}))\to H^{0}(Hom(P,E^{A}))\to H^{0}(Hom(P,E^{A^{\prime}}))\to H^{1}(Hom(P,E^{A^{\prime\prime}}))\to H^{1}(Hom(P,E^{A}))\rightarrow\dots\). By using the definition of \({}_{e}Ext\) and [2, Theorem 5.3] we obtain an \(e\)-exact sequence \(0\to Hom(P,A^{\prime\prime})\to Hom(P,A)\to Hom(P,A^{\prime})\rightarrow{}_{e}Ext_{R}^{1}(P,A^{\prime\prime})\rightarrow{}_{e}Ext_{R}^{1}(P,A)\rightarrow\dots\).
**Theorem 2.14**.: _Let \(P\) be an \(e\)-projective module and given a commutative diagram of \(R\)-modules having \(e\)-exact rows as the following:_
_Then there is a commutative diagram of \(R\)-modules with \(e\)-exact rows:_
_Proof._ By [3, Proposition 2.10], we have an \(e\)-exact sequence of deleted complexes \(0\to E^{A^{\prime\prime}}\to E^{A}\to E^{A^{\prime}}\to 0\). If \(T=Hom(P,-)\), then \(0\to TE^{A^{\prime\prime}}\to TE^{A}\to TE^{A^{\prime}}\to 0\) is still \(e\)-exact by [1, Theorem 3.1]. By [2, Remark 3.3] there is a commutative diagram of \(R\)-modules and \(R\)-morphisms as the following
\[\begin{array}{l}\cdots\to H^{n-1}(Hom(P,E^{A}))\stackrel{{ p^{*}}}{{\to}}H^{n-1}(Hom(P,E^{A^{\prime}}))\to H^{n}(Hom(P,E^{A^{\prime\prime}}))\to\cdots\\ \cdots\to H^{n-1}(Hom(P,E^{C}))\to H^{n-1}(Hom(P,E^{C^{\prime}}))\stackrel{{\sigma^{\prime*}}}{{\to}}H^{n}(Hom(P,E^{C^{\prime\prime}}))\to\cdots\end{array}\]
in which the vertical maps between the two rows are induced by the morphisms of the given diagram,
and our proof will be complete by using the definition of \({}_{e}Ext^{n}_{R}(P,A)=H^{n}(Hom(P,E^{A}))\). \(\square\)
## 3. Ideal transforms regarding essential exact sequences
Throughout this section \(R\) will be a hereditary domain. All our work in this section applies to two particular systems of ideals: the system of ideals \(\beta=(a^{n})_{n\in N}\) and the system of ideals \(\beta=(a^{n}+b^{n})_{n\in N}\), by [4, Example 3.12]. The ideal transform with respect to an ideal \(a\) is defined as \(D_{a}(B)=\varinjlim_{n\in N}Hom(a^{n},B)\). It is a covariant \(R\)-linear functor, and it is left \(e\)-exact because \(Hom(a^{n},B)\) is left \(e\)-exact and the direct limit preserves \(e\)-exactness. If a system of ideals \((a^{n})_{n\in N}\) is an inverse family of ideals, then there is a natural equivalence \(\varinjlim_{n\in N}Hom(\frac{R}{a^{n}},B)\cong\Gamma_{a}(B)\), as well as a natural equivalence between \(\varinjlim_{n\in N}ext^{i}_{R}(\frac{R}{a^{n}},B)\) and \({}_{e}H^{i}_{a}(B)\). Let \(0\to A\to B\to C\to 0\) be an \(e\)-exact sequence. If \(C\) is \(e\)-projective, then by [1, Proposition 3.2] the \(e\)-exact sequence \(0\to A\to B\to C\to 0\)
is \(e\)-split, and then by [3, Proposition 2.5] \(rB\cong A\oplus C\). Therefore, there is a submodule \(C\) of \(B\) with \(C\cong\frac{rB}{A}\).
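For orientation, a familiar classical instance (our own illustration, not taken from the paper): take \(R=\mathbb{Z}\), \(a=p\mathbb{Z}\) for a prime \(p\), and \(B=\mathbb{Z}\). Then \(Hom(p^{n}\mathbb{Z},\mathbb{Z})\cong\mathbb{Z}\), the transition maps in the direct system are multiplication by \(p\), and
\[D_{a}(\mathbb{Z})=\varinjlim_{n}Hom(p^{n}\mathbb{Z},\mathbb{Z})\cong\mathbb{Z}[1/p],\qquad\Gamma_{a}(\mathbb{Z})=0,\qquad H_{a}^{1}(\mathbb{Z})\cong\mathbb{Z}[1/p]/\mathbb{Z},\]
so the sequence \(0\to\Gamma_{a}(B)\to B\to D_{a}(B)\to H_{a}^{1}(B)\to 0\) of Theorem 3.1 below specializes in this case to the usual exact sequence \(0\to 0\to\mathbb{Z}\to\mathbb{Z}[1/p]\to\mathbb{Z}[1/p]/\mathbb{Z}\to 0\).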
**Theorem 3.1**.: _For any torsion-free \(R\)-module \(B\). The sequence \(0\to\Gamma_{a}(B)\to B\to D_{a}(B)\to H^{1}_{a}(B)\to 0\) is an \(e\)-exact._
Proof.: The sequence \(0\to a^{n}\to R\to\frac{rR}{a^{n}}\to 0\), where \(0\neq r\in R\), is an \(e\)-exact sequence. From this \(e\)-exact sequence, Corollary 2.9 induces a long \(e\)-exact sequence of \({}_{e}ext^{n}_{R}(-,B)\) modules. Since \(R\) is \(e\)-projective and \(Hom(R,B)\) is naturally isomorphic to \(B\), we therefore obtain a long \(e\)-exact sequence \(0\to Hom(\frac{rR}{a^{n}},B)\to B\to Hom(a^{n},B)\to{}_{e}ext^{1}_{R}(\frac{rR}{a^{n}},B)\to 0\). Now pass to direct limits to get \(0\to\varinjlim_{n\in N}Hom(\frac{rR}{a^{n}},B)\to B\to\varinjlim_{n\in N}Hom(a^{n},B)\to\varinjlim_{n\in N}{}_{e}ext^{1}_{R}(\frac{rR}{a^{n}},B)\to 0\), then use the natural equivalences to obtain an \(e\)-exact sequence of \(R\)-modules and \(R\)-morphisms \(0\to\Gamma_{a}(B)\to B\to D_{a}(B)\to H^{1}_{a}(B)\to 0\).
**Theorem 3.2**.: _Let the ideal \(a^{n}\) be a torsion-free module for all natural numbers \(n\). Then \(Hom(a^{n},B)\) is \(e\)-injective._
Proof.: To prove that \(Hom(a^{n},B)\) is an \(e\)-injective, we show that
\(Hom(-,Hom(a^{n},B))\) is an \(e\)-exact functor. By the adjoint isomorphism, this functor is naturally isomorphic to \(Hom(a^{n}\otimes-,B)\) which is the composite \(Hom(-,B)\circ(a^{n}\otimes-)\). By [3, Proposition 2.7], \(Hom(-,B)\) is an \(e\)-exact functor and by [1, Theorem 2.10], \(a^{n}\otimes-\) is also \(e\)-exact, so their composite is again \(e\)-exact.
**Theorem 3.3**.: _For any torsion-free \(R\)-module \(B\) there exists \(0\neq r\in R\) such that \(\epsilon^{*}:B\to rD_{a}(B)\) is an isomorphism if and only if \(\Gamma_{a}(B)=H^{1}_{a}(B)=0\)._
Proof.: If \(\epsilon^{*}\) is an isomorphism, then by \(e\)-exactness and the definition of essential we get the result. Conversely, suppose that \(\Gamma_{a}(B)=H^{1}_{a}(B)=0\); then by \(e\)-exactness and the definition of essential we obtain \(Ker\epsilon^{*}=0\), which implies that \(\epsilon^{*}\) is monic. It remains to show that \(\epsilon^{*}\) is epic. By Theorem 3.2, \(D_{a}(B)\) is \(e\)-injective, and by [3, Proposition 2.8] there exists a homomorphism \(g:Im\epsilon^{*}\to D_{a}(B)\) such that \(f_{3}\circ g=rI_{D_{a}(B)}\), which means that \(rD_{a}(B)=Im\epsilon^{*}\). Therefore, \(\epsilon^{*}\) is an isomorphism.
**Theorem 3.4**.: _Let \({}_{e}ext_{R}^{n}(a^{n},B)\) be an \(e\)-injective torsion-free module. Then there exists \(0\neq r\in R\) such that \(\eta:{}_{e}ext_{R}^{n}(a^{n},B)\to r_{e}ext_{R}^{n+1}(\frac{rR}{a^{n}},B)\) is an isomorphism._
Proof.: Corollary 2.9 induces a long \(e\)-exact sequence of \({}_{e}ext_{R}^{n}\) modules, and by \(e\)-exactness and the definition of essential we obtain \(Ker\eta=0\), which implies that \(\eta\) is monic. It remains to show that \(\eta\) is epic. By [3, Proposition 2.8] there exists a homomorphism \(g:Im\eta\to{}_{e}ext_{R}^{n+1}(\frac{rR}{a^{n}},B)\) such that \(f_{3}\circ g=rI_{{}_{e}ext_{R}^{n+1}(\frac{rR}{a^{n}},B)}\), which means that \(r_{e}ext_{R}^{n+1}(\frac{rR}{a^{n}},B)=Im\eta\). Therefore, \(\eta\) is an isomorphism.
The Mayer-Vietoris sequence involves two ideals, so \(a\) will denote the first ideal and \(b\) a second ideal. In the following theorem we obtain new \(e\)-exact sequences by generalizing the idea of the Mayer-Vietoris sequence.
**Theorem 3.5**.: _For any torsion-free \(R\)-module \(B\), there is an \(e\)-exact sequence \(0\to D_{r(a+b)}(B)\to D_{a}(B)\bigoplus D_{b}(B)\to D_{a\cap b}(B)\to rH_{r(a+b)} ^{2}(B)\to rH_{a}^{2}(B)\bigoplus rH_{b}^{2}(B)\to rH_{a\cap b}^{2}(B)\to\dots\) for some \(0\neq r\in R.\)_
Proof.: We have an \(e\)-exact sequence \(0\to a\cap b\to a\bigoplus b\to\frac{r(a\bigoplus b)}{a\cap b}\to 0\), where \(0\neq r\in R\). Corollary 2.9 then induces a long \(e\)-exact sequence \(0\to Hom(r(a+b),B)\to Hom(a,B)\bigoplus Hom(b,B)\to Hom(a\cap b,B)\to{}_{e}ext_{R}^{1}(r(a+b),B)\to{}_{e}ext_{R}^{1}(a,B)\bigoplus{}_{e}ext_{R}^{1}(b,B)\to{}_{e}ext_{R}^{1}(a\cap b,B)\to\dots\) Now pass to direct limits to get
\(0\to\varinjlim_{n\in N}Hom(r(a+b),B)\to\varinjlim_{n\in N}Hom(a,B)\bigoplus\varinjlim_{n\in N}Hom(b,B)\to\varinjlim_{n\in N}Hom(a\cap b,B)\to\varinjlim_{n\in N}{}_{e}ext_{R}^{1}(r(a+b),B)\to\varinjlim_{n\in N}{}_{e}ext_{R}^{1}(a,B)\bigoplus\varinjlim_{n\in N}{}_{e}ext_{R}^{1}(b,B)\to\varinjlim_{n\in N}{}_{e}ext_{R}^{1}(a\cap b,B)\to\dots\); then, by using the natural equivalences and Theorem 3.4, this induces the \(e\)-exact sequence \(0\to D_{r(a+b)}(B)\to D_{a}(B)\bigoplus D_{b}(B)\to D_{a\cap b}(B)\to rH_{r(a+b)}^{2}(B)\to rH_{a}^{2}(B)\bigoplus rH_{b}^{2}(B)\to rH_{a\cap b}^{2}(B)\to\dots\).
2309.11748 | Deep Learning Meets Swarm Intelligence for UAV-Assisted IoT Coverage in
Massive MIMO | This study considers a UAV-assisted multi-user massive multiple-input
multiple-output (MU-mMIMO) system, where a decode-and-forward (DF) relay in
the form of an unmanned aerial vehicle (UAV) facilitates the transmission of
multiple data streams from a base station (BS) to multiple Internet-of-Things
(IoT) users. A joint optimization problem of hybrid beamforming (HBF), UAV
relay positioning, and power allocation (PA) to multiple IoT users to maximize
the total achievable rate (AR) is investigated. The study adopts a
geometry-based millimeter-wave (mmWave) channel model for both links and
proposes three different swarm intelligence (SI)-based algorithmic solutions to
optimize: 1) UAV location with equal PA; 2) PA with fixed UAV location; and 3)
joint PA with UAV deployment. The radio frequency (RF) stages are designed to
reduce the number of RF chains based on the slow time-varying angular
information, while the baseband (BB) stages are designed using the
reduced-dimension effective channel matrices. Then, a novel deep learning
(DL)-based low-complexity joint hybrid beamforming, UAV location and power
allocation optimization scheme (J-HBF-DLLPA) is proposed via fully-connected
deep neural network (DNN), consisting of an offline training phase, and an
online prediction of UAV location and optimal power values for maximizing the
AR. The illustrative results show that the proposed algorithmic solutions can
attain higher capacity and reduce average delay for delay-constrained
transmissions in a UAV-assisted MU-mMIMO IoT systems. Additionally, the
proposed J-HBF-DLLPA can closely approach the optimal capacity while
significantly reducing the runtime by 99%, which makes the DL-based solution a
promising implementation for real-time online applications in UAV-assisted
MU-mMIMO IoT systems. | Mobeen Mahmood, MohammadMahdi Ghadaksaz, Asil Koc, Tho Le-Ngoc | 2023-09-21T03:03:06Z | http://arxiv.org/abs/2309.11748v1 | # Deep Learning Meets Swarm Intelligence for UAV-Assisted IoT Coverage in Massive MIMO
###### Abstract
This study considers a UAV-assisted multi-user massive multiple-input multiple-output (MU-mMIMO) system, where a decode-and-forward (DF) relay in the form of an unmanned aerial vehicle (UAV) facilitates the transmission of multiple data streams from a base station (BS) to multiple Internet-of-Things (IoT) users. A joint optimization problem of hybrid beamforming (HBF), UAV relay positioning, and power allocation (PA) to multiple IoT users to maximize the total achievable rate (AR) is investigated. The study adopts a geometry-based millimeter-wave (mmWave) channel model for both links and proposes three different swarm intelligence (SI)-based algorithmic solutions to optimize: 1) UAV location with equal PA; 2) PA with fixed UAV location; and 3) joint PA with UAV deployment. The radio frequency (RF) stages are designed to reduce the number of RF chains based on the slow time-varying angular information, while the baseband (BB) stages are designed using the reduced-dimension effective channel matrices. Then, a novel deep learning (DL)-based low-complexity joint hybrid beamforming, UAV location and power allocation optimization scheme (J-HBF-DLLPA) is proposed via a fully-connected deep neural network (DNN), consisting of an offline training phase, and an online prediction of UAV location and optimal power values for maximizing the AR. The illustrative results show that the proposed algorithmic solutions can attain higher capacity and reduce average delay for delay-constrained transmissions in UAV-assisted MU-mMIMO IoT systems. Additionally, the proposed J-HBF-DLLPA can closely approach the optimal capacity while significantly reducing the runtime by \(99\%\), which makes the DL-based solution a promising implementation for real-time online applications in UAV-assisted MU-mMIMO IoT systems.
Decode-and-forward (DF) relay, deep learning, hybrid beamforming, massive MIMO, millimeter wave communications, power allocation (PA), unmanned aerial vehicles (UAVs).
## I Introduction
The advent of advanced wireless communications and networking technologies has heralded a new age of innovation with the Internet-of-Things (IoT) at its forefront. This pioneering concept has captured the imagination of the industry and promises to transform the way we interact with the world around us. The potential applications of IoT are vast, ranging from healthcare and urban environments to households [2]. However, deploying IoT effectively and extensively still poses significant challenges, including efficient information transfer between wireless nodes and gateways. To address this issue, various routing schemes have been proposed, including direct transmission or relay structures. Nonetheless, when the distance between the IoT end node and the gateway is substantial, direct transmission may not be feasible or may consume excessive power. In such cases, communication through relay can be a more power-efficient alternative. Moreover, deploying cellular stations in urban areas can be a costly and challenging task, which can further complicate the communications coverage issue in the IoT framework [3].
Unmanned aerial vehicles (UAVs), commonly referred to as drones, are viewed as a key component of the next generation of wireless communications networks. UAV as a relay offers several advantages over traditional static relays. Specifically, the ability to deploy on-demand, mobile relaying systems at a relatively low cost and in a timely manner, makes them particularly well-suited for unforeseen or short-term events, such as emergency situations or network offloading [4]. Furthermore, the high mobility of UAVs allows for the dynamic adjustment of their locations to optimize communications conditions, a technique particularly promising for delay-tolerant applications, such as periodic sensing and the transfer of large data [5, 6, 7]. UAVs' capability to reach inaccessible locations makes them a viable option for future IoT applications, as they can fly close to IoT devices, sequentially collect sensing data, address coverage issues, and reduce IoT communications networks' overhead [8].
The incorporation of UAVs as relay nodes in wireless sensor networks (WSNs) has the potential to augment communications capacity by connecting remote sensor gateways and addressing the escalating data-rate demands in applications such as virtual reality, device-to-device communications, and smart cities. UAVs can be deployed at high altitudes to increase the likelihood of line-of-sight (LoS) dominated air-to-ground communications channels, thereby supporting high-rate communications. However, the severely congested sub-6 GHz bands can be inadequate to meet the rising data rate requirements. In contrast, millimeter-wave (mmWave) communications, with their abundant spectrum resources, can potentially support the high-throughput and low-latency requirements of various UAV application scenarios [9]. Nonetheless, mmWave signals suffer from high propagation loss, including free-space path loss, atmospheric and molecular absorption, and rain attenuation. This challenge can be surmounted by leveraging massive multiple-input multiple-output (mMIMO) technology with large array structures to generate high beam gains, which can improve the transmission range and simultaneously suppress interference among IoT nodes by utilizing the advanced capabilities of three-dimensional (3D) beamforming. In the realm of mmWave mMIMO systems, fully-digital beamforming (FDBF) and hybrid beamforming (HBF) are two common approaches for mitigating interference. However, FDBF becomes infeasible in UAV-assisted IoT systems due to the prohibitively high cost, complexity, and limited power supply of UAVs [10]. Conversely, HBF, which involves the design of both the radio frequency (RF)-stage and |
2309.10456 | Improving Speaker Diarization using Semantic Information: Joint Pairwise
Constraints Propagation | Speaker diarization has gained considerable attention within speech
processing research community. Mainstream speaker diarization rely primarily on
speakers' voice characteristics extracted from acoustic signals and often
overlook the potential of semantic information. Considering the fact that
speech signals can efficiently convey the content of a speech, it is of our
interest to fully exploit these semantic cues utilizing language models. In
this work we propose a novel approach to effectively leverage semantic
information in clustering-based speaker diarization systems. Firstly, we
introduce spoken language understanding modules to extract speaker-related
semantic information and utilize these information to construct pairwise
constraints. Secondly, we present a novel framework to integrate these
constraints into the speaker diarization pipeline, enhancing the performance of
the entire system. Extensive experiments conducted on the public dataset
demonstrate the consistent superiority of our proposed approach over
acoustic-only speaker diarization systems. | Luyao Cheng, Siqi Zheng, Qinglin Zhang, Hui Wang, Yafeng Chen, Qian Chen, Shiliang Zhang | 2023-09-19T09:13:30Z | http://arxiv.org/abs/2309.10456v2 | # Improving Speaker Diarization Using Semantic Information: Joint Pairwise Constraints Propagation
###### Abstract
Speaker diarization has gained considerable attention within speech processing research community. Mainstream speaker diarization rely primarily on speakers' voice characteristics extracted from acoustic signals and often overlook the potential of semantic information. Considering the fact that speech signals can efficiently convey the content of a speech, it is of our interest to fully exploit these semantic cues utilizing language models. In this work we propose a novel approach to effectively leverage semantic information in clustering-based speaker diarization systems. Firstly, we introduce spoken language understanding modules to extract speaker-related semantic information and utilize these information to construct pairwise constraints. Secondly, we present a novel framework to integrate these constraints into the speaker diarization pipeline, enhancing the performance of the entire system. Extensive experiments conducted on the public dataset demonstrate the consistent superiority of our proposed approach over acoustic-only speaker diarization systems.
Luyao Cheng, Siqi Zheng, Qinglin Zhang, Hui Wang, Yafeng Chen, Qian Chen, Shiliang Zhang Speech Lab, Alibaba Group
{shuli.cly, zsql74630, tanging.cq, sly.zsl}@alibaba-inc.com
**Index Terms**: speaker diarization, spoken language processing, pairwise constraints propagation.
## 1 Introduction
Speaker Diarization (SD) is the task of answering the question "who spoke when" and assigning speaker labels to the given audio. In most application settings, the speaker labels are integrated with the corresponding words or sentences transcribed by an Automatic Speech Recognition (ASR) system. Despite the rich profusion of transcribed text, mainstream SD systems [1] take only acoustic information into consideration. A traditional SD system usually consists of the following components: (1) a voice activity detection (VAD) component; (2) a speaker embedding extractor, such as x-vector [2], d-vector [3] and ECAPA-TDNN [4]; (3) a speaker clustering component using clustering algorithms such as agglomerative hierarchical clustering (AHC) [5] and spectral clustering (SC) [6]. Mainstream SD systems solely utilize acoustic information, ignoring the potential of content semantics. This limitation often results in obvious performance degradation in adverse acoustic conditions such as noise, reverberation, and far-field recordings.
Some previous works tried to leverage semantic information by learning speaker-role information in specific domain applications such as _air traffic controller[7]_ or _medical consultation_[8]. However, these methods are highly task-oriented and only suitable for two-speaker scenarios. In this work, we focus on open multi-party meeting scenarios where the number of speakers is unknown and the relations among speakers are unspecified.
Recent works such as [9] and [10] tried to implicitly utilize semantic information with a modified ASR system. These systems often require large-scale annotated multi-speaker speech data, which is scarce in reality and extremely expensive to obtain. Furthermore, these works mostly utilize semantic information to precisely determine the turning points between speaker utterances. Semantic information is not explicitly used in speaker clustering and determining the number of speakers. Other multi-modal speaker diarization systems present similar limitations [11][12][13].
To address these limitations, we explicitly incorporate semantic information into speaker embedding normalization and speaker clustering. We introduce additional spoken language processing (SLP) modules to extract speaker-related information from transcribed texts. The main contributions of this paper are as follows: (1) We propose a novel framework to directly incorporate semantic information into speaker clustering, exceeding the performance boundary of acoustic-only speaker clustering.
(2) We introduce methods of pairwise constraints propagation to speaker clustering, and investigate the effectiveness of constraints derived from semantic information.
## 2 Semantic Speaker Constraints
### Semantic Speaker-related Tasks
For the given conversational speech signal features \(S=\{s_{1},s_{2},...,s_{T}\}\), the text content \(Y=\{y_{1},y_{2},...,y_{K}\}\) is decoded from \(S\) by the ASR system. In a traditional speaker diarization system, we extract embeddings \(E=\{e_{1},e_{2},...,e_{N}\}\) from the speech signal \(S\) with a speaker embedding extractor. In
most application settings, a forced-alignment (FA) module is utilized to associate the speech signal features \(S\), the transcribed text \(Y\) and the speaker embeddings \(E\). This cascaded pipeline often results in cumulative errors, as each component is optimized for a different objective.
We define two spoken language processing (SLP) tasks, **Dialogue-Detection** and **Speaker-Turn-Detection**, to extract speaker-related information based on the transcribed text \(Y\). Due to the uncertainty of the actual duration of the meetings, in practical applications the tasks should be defined on subsequences of the whole session.
**Dialogue-Detection** takes a sequence of sentences as input and predicts whether it is transcribed from a multi-speaker dialogue or from a single speaker. Dialogue-detection can be defined as a binary text classification problem.
**Speaker-Turn-Detection** tries to determine, for each sentence in the sequence, the probability of the occurrence of a speaker change. Speaker-turn-detection can be defined as a sequence labeling problem, where the goal is to determine whether a given position represents a point of change in speaker role from a semantic perspective.
In practice, the ASR system introduces insertion, deletion and substitution errors into the transcribed text, which degrades the performance of the SLP tasks. In [14], a simple yet effective hybrid strategy was proposed to mitigate the impact of ASR errors by incorporating both acoustic and semantic information. In this paper, we extend this approach to improve the accuracy of these two SLP models.
### Pairwise Constraints from Semantic Information
Traditional cluster-based SD systems often employ text-independent speaker embedding models. Consequently, extracting speaker information from semantics during the embedding extraction stage becomes challenging. We propose that utilizing semantic information to construct constraints between embeddings offers a suitable strategy for incorporating the results of SLP models into cluster-based SD.
We construct two kinds of constraints from the semantic speaker-related information: must-link \(\mathcal{M}\) and cannot-link \(\mathcal{C}\):
\[\begin{split}\mathcal{M}&=\{(e_{i},e_{j}):l(e_{i})= l(e_{j})\}\\ \mathcal{C}&=\{(e_{i},e_{j}):l(e_{i})\neq l(e_{j})\} \end{split} \tag{1}\]
where \(l(\cdot)\) denotes the speaker role of the embedding.
As shown in Figure 1, the strategy for building \(\mathcal{M}\) and \(\mathcal{C}\) can be summarized as follows: if two embeddings are contained in one non-dialogue segment, a must-link between the two embeddings is constructed; if two embeddings cross a speaker-turn change point, a cannot-link is constructed.
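A minimal sketch of this constraint-building step is given below (our own illustration; the segment/turn-point interfaces are assumptions, not the paper's actual code).

```python
from itertools import combinations

def build_constraints(segments, is_dialogue, turn_points):
    """segments[k]  : list of embedding indices aligned to the k-th text window.
    is_dialogue[k]  : dialogue-detection output for that window (True = multi-speaker).
    turn_points     : embedding indices b such that a semantic speaker change is
                      predicted between embedding b and embedding b + 1."""
    must_link, cannot_link = set(), set()
    for k, emb_ids in enumerate(segments):
        if not is_dialogue[k]:                       # single-speaker window
            must_link.update(combinations(sorted(emb_ids), 2))
    for b in turn_points:                            # embeddings across a turn change
        cannot_link.add((b, b + 1))
    return must_link, cannot_link
```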
## 3 Constrained Speaker Diarization
We propose a novel framework named **Joint Pairwise Constraints Propagation (JPCP)**, as shown in Figure 2. The proposed framework incorporates pairwise constraints into embedding normalization and the affinity function. The following sections provide more details of these components.
### Constrained Embedding Normalization
We introduce the semi-supervised dimension reduction (SSDR) algorithm [15] to integrate pairwise constraints into the speaker embedding normalization module.
We first build a weight matrix \(\mathbf{S}\) from pairwise constraints:
\[\mathbf{S}_{ij}=\begin{cases}\frac{1}{N^{2}}+\frac{\alpha}{|\mathcal{M}|}& \text{if }(e_{i},e_{j})\in\mathcal{M}\\ \frac{1}{N^{2}}-\frac{\beta}{|\mathcal{C}|}&\text{if }(e_{i},e_{j})\in \mathcal{C}\\ \frac{1}{N^{2}}&\text{otherwise}\end{cases} \tag{2}\]
The function of constrained embedding normalization is to find the projective vectors \(\mathbf{W}=[\mathbf{w}_{1},\mathbf{w}_{2},...,\mathbf{w}_{d}]\), such that the normalized (low-dimensional) embeddings \(\{\mathbf{w}^{T}e_{k}\}\) can preserve the manifold of the original embeddings as well as the pairwise constraint sets \(\mathcal{M}\) and \(\mathcal{C}\). The objective function can be defined as \(J(\mathbf{w})\):
\[J(\mathbf{w})=\mathbf{w}^{T}E\mathbf{L}E^{T}\mathbf{w} \tag{3}\]
where \(\alpha\), \(\beta\) are parameters balancing the contribution of each constraint type, and \(\mathbf{L}\) is the Laplacian matrix of the weight matrix \(\mathbf{S}\). The problem expressed by (3) is a typical eigen-problem and can be efficiently solved by computing the eigenvectors of \(E\mathbf{L}E^{T}\) corresponding to the largest eigenvalues.
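A compact NumPy sketch of this normalization step follows (our own illustration, not the authors' implementation; here \(E\) is stored with one embedding per column and \(d\), \(\alpha\), \(\beta\) are free parameters).

```python
import numpy as np

def ssdr_normalize(E, must_link, cannot_link, d=32, alpha=1.0, beta=1.0):
    """E: (D, N) matrix whose columns are the N speaker embeddings.
    must_link / cannot_link: collections of index pairs (i, j).
    Returns the d-dimensional normalized embeddings, one per column."""
    D, N = E.shape
    S = np.full((N, N), 1.0 / N**2)                      # weight matrix of eq. (2)
    for i, j in must_link:
        S[i, j] = S[j, i] = 1.0 / N**2 + alpha / max(len(must_link), 1)
    for i, j in cannot_link:
        S[i, j] = S[j, i] = 1.0 / N**2 - beta / max(len(cannot_link), 1)
    L = np.diag(S.sum(axis=1)) - S                       # graph Laplacian of S
    M = E @ L @ E.T                                      # D x D symmetric matrix
    _, eigvecs = np.linalg.eigh(M)                       # eigenvalues in ascending order
    W = eigvecs[:, -d:]                                  # top-d eigenvectors as projections
    return W.T @ E                                       # (d, N) normalized embeddings
```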
### Constrained Affinity Function
Constructing the affinity matrix \(\mathcal{A}=\{a_{ij}\}_{N\times N}\) is another core part of speaker clustering algorithms, especially in spectral clustering. In [6], a series of refinement operations are defined to refine the affinity matrix, like row-wise thresholding and symmetrization.
The constrained affinity function can be seen as a new refinement operation which integrates \(\mathcal{M}\) and \(\mathcal{C}\). The constraints are collected into a constraint matrix \(\mathcal{Z}\):
\[\mathcal{Z}_{ij}=\begin{cases}+1&\text{if }(e_{i},e_{j})\in\mathcal{M}\\ -1&\text{if }(e_{i},e_{j})\in\mathcal{C}\\ 0&\text{otherwise}\end{cases} \tag{4}\]
Figure 1: A sample of strategy for constructing constraints.
Since the pairwise constraints cannot cover all pairs \((i,j)\) in the affinity matrix, a **constraint propagation algorithm** is introduced to build the global constraint relationships. The propagated pairwise constraint matrix is \(\hat{\mathcal{Z}}=f(\mathcal{Z})\), where \(f(\,\cdot\,)\) is the constraint propagation function. The propagated pairwise constraint matrix \(\hat{\mathcal{Z}}\) and the resulting affinity matrix \(\hat{\mathcal{A}}\in\mathcal{R}^{N\times N}\) should satisfy:
\[\hat{\mathcal{A}}_{ij}=\begin{cases}1-(1-\hat{\mathcal{Z}}_{ij})(1-A_{ij})& \text{if }\hat{\mathcal{Z}}_{ij}\geq 0\\ (1+\hat{\mathcal{Z}}_{ij})A_{ij}&\text{if }\hat{\mathcal{Z}}_{ij}<0\end{cases} \tag{5}\]
In practice, a classical constraint propagation method called E\({}^{2}\)CP [16] is applied:
\[\hat{\mathcal{Z}}=(1-\lambda)^{2}(\mathbf{I}-\lambda\mathbf{L})^{-1}\mathcal{ Z}(\mathbf{I}-\lambda\mathbf{L})^{-1} \tag{6}\]
where \(\mathbf{L}=\mathbf{D}^{-1/2}\mathcal{A}\mathbf{D}^{-1/2}\) is the normalized Laplacian matrix and \(\mathbf{D}\) is the degree matrix of \(\mathcal{A}\). The parameter \(\lambda\in[0,1]\) controls the influence of the constraint matrix.
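Equations (5) and (6) can be implemented directly; a small NumPy sketch (our own illustration, not the authors' code) is shown below.

```python
import numpy as np

def e2cp_refine(A, Z, lam=0.6):
    """Propagate constraints with eq. (6), then adjust the affinities with eq. (5).
    A: (N, N) affinity matrix with entries in [0, 1]; Z: (N, N) matrix of +1/-1/0."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = D_inv_sqrt @ A @ D_inv_sqrt                       # L = D^{-1/2} A D^{-1/2} of eq. (6)
    P = np.linalg.inv(np.eye(len(A)) - lam * L)
    Z_hat = (1 - lam) ** 2 * P @ Z @ P                    # propagated constraint matrix
    A_hat = np.where(Z_hat >= 0,
                     1 - (1 - Z_hat) * (1 - A),           # strengthen must-linked pairs
                     (1 + Z_hat) * A)                      # weaken cannot-linked pairs
    return A_hat
```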
### Improved Constraint Propagation Algorithm
The constraints derived from semantic information have certain limitations: (1) incorrect constraints may be generated when the semantic model makes prediction errors, and (2) the constraints may become too close to the embeddings when there are frequent speaker-turn changes. To address these issues, we propose an enhanced version of the E\({}^{2}\)CP method, referred to as E\({}^{2}\)CPM:
Firstly, we introduce a k-NN strategy for constructing the Laplacian matrix \(\mathbf{L}\) in equation (6). Specifically, we define the affinity matrix \(\mathcal{A}^{\prime}=\{a^{\prime}_{ij}\}_{N\times N}\) as follows: \(a^{\prime}_{ij}=a_{ij}\) if \(e_{j}\) (\(j\neq i\)) is among the k-nearest neighbors of \(e_{i}\), and \(a^{\prime}_{ij}=0\) otherwise. To ensure symmetry, we set \(\mathcal{A}^{\prime}=(\mathcal{A}^{\prime T}+\mathcal{A}^{\prime})/2\).
Secondly, we enhance the existing constraints by incorporating embedding pairs with a high confidence level in affinity similarity. Specifically, we randomly select pairs with affinity scores above a threshold \(\theta_{m}\) and add them as additional must-link constraints \(\mathcal{M}^{\prime}=\mathcal{M}\cup r_{\theta_{m}}(\mathcal{A})\). Similarly, we randomly select pairs with affinity scores below a threshold \(\theta_{c}\) as additional cannot-link constraints \(\mathcal{C}^{\prime}=\mathcal{C}\cup r_{\theta_{c}}(\mathcal{A})\), where \(r_{\theta_{m}}(\cdot)\) and \(r_{\theta_{c}}(\cdot)\) represent thresholding and uniform random selection functions, respectively. A sketch of both modifications is given below.
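The following NumPy sketch (our own illustration; threshold values and the number of sampled pairs are assumptions) outlines the two E\({}^{2}\)CPM modifications.

```python
import numpy as np

def knn_sparsify(A, k=20):
    """Keep each row's k largest affinities (excluding self), then symmetrize."""
    A_p = np.zeros_like(A)
    for i in range(len(A)):
        order = np.argsort(A[i])[::-1]
        neighbors = order[order != i][:k]
        A_p[i, neighbors] = A[i, neighbors]
    return (A_p + A_p.T) / 2

def augment_constraints(A, must, cannot, theta_m=0.9, theta_c=0.1, n_extra=200, seed=0):
    """Randomly add high-confidence affinity pairs as extra must-/cannot-links."""
    rng = np.random.default_rng(seed)
    iu, ju = np.triu_indices_from(A, k=1)
    high = [(int(i), int(j)) for i, j in zip(iu, ju) if A[i, j] > theta_m]
    low = [(int(i), int(j)) for i, j in zip(iu, ju) if A[i, j] < theta_c]
    rng.shuffle(high)
    rng.shuffle(low)
    return set(must) | set(high[:n_extra]), set(cannot) | set(low[:n_extra])
```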
## 4 Experimental Setup
### Dataset and Metrics
Our experiments are conducted on AISHELL-4 [17], which focuses on multi-party meeting scenarios and provides manual annotations of all speech content.
We report the following clustering metrics: Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI). As the transcribed text and the forced-alignment module are used in the pipeline, we directly report the Concatenated Minimum-permutation Word Error Rate (cp-WER). Additionally, we use the Text Diarization Error Rate (TextDER) metric to evaluate the amount of text assigned to wrong speakers [14].
### Acoustic and Semantic Modules Configuration
The overall system pipeline used in this paper is similar to the pipelines in [14], with improvements to some of the acoustic models. The speaker embedding extractor is based on CAM++ [18] trained on a large Mandarin corpus\({}^{1}\). The ASR model is based on Paraformer [19] trained with the FunASR [20] toolkit\({}^{2}\). These models are open-sourced and kept fixed in all our experiments.
Footnote 1: The speaker embedding extractor we used can be found in [https://github.com/alibaba-damo-academy/3D-Speaker](https://github.com/alibaba-damo-academy/3D-Speaker)
Footnote 2: The ASR models and punctuation prediction models we used can be found in [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR)
The semantic models trained for the Dialogue-Detection and Speaker-Turn-Detection tasks are based on the pre-trained BERT language model. The training samples are generated by a sliding-window method with a window length of 64 and a shift of 16, and the training labels for these two semantic tasks are obtained from the manually annotated speaker labels of the speech content.
Figure 2: The pipeline is a traditional speaker diarization backend with acoustic information. The additional pairwise constraints constructed from semantic information, including Must-Link and Cannot-Link, are used in two parts: Embedding Normalization and Affinity Function.
The construction of pairwise constraints using semantic information is explained in Section 2.2. To explore the efficacy of our proposed method, we also generated simulated pairwise constraints. These simulated constraints were created from the ground truth speaker labels assigned to each embedding. We randomly selected a subset of pairs from all the possible embedding pairs.
## 5 Results and Discussions
### Experiments Results
The experimental results are shown in Table 1. The baseline is the acoustic speaker diarization system that combines VAD, CAM++, and SC. The "Semantic Turn-Cut" setting, proposed in [14], optimizes the strategy for segmenting embeddings by integrating the timestamps of semantic boundaries into the VAD results. The following JPCP experiments also utilize this approach to extract embeddings.
We report the results of incorporating simulated constraints (JPCP-S) and inference constraints (JPCP-I). JPCP-S utilizes ground truth ASR results and directly simulates constraints for speaker clustering, indicating the potential upper bound for our proposed system. The simulated constraints demonstrate a significant improvement in speaker diarization, particularly in determining the number of speakers.
In terms of inference constraints, the improvement achieved by SSDR is smaller than that achieved by E\({}^{2}\)CPM. Hence, we infer that the constrained affinity function has a more direct impact on the overall clustering. Our proposed JPCP approach shows improvements over the baseline, with a \(19\%\) relative reduction in TextDER and some improvement in SpkDiff. However, both methods are sensitive to the quality of the constraints, so the improvement achieved by JPCP-I remains relatively marginal.
### Constraints Analysis
Given the rapid development of large-scale language models, the constraints constructed from semantic information are becoming increasingly reliable. Therefore, in this paper, we simulate higher-quality constraints to explore the performance upper bound of our proposed methods.
Figure 3 shows that as the number of constraints increases, both the clustering performance and the effectiveness of speaker diarization improve significantly. With around \(6\%\) of constraints, the results already approach the system's upper bound, which indicates the high potential of our proposed method.
## 6 Conclusion
We propose a novel architecture that integrates semantic modeling into a clustering-based speaker diarization system, enhancing its overall performance. Speaker-related information is extracted from ASR transcriptions and represented as pairwise constraints. We investigate the integration of these constraints in the process of speaker embedding normalization and the speaker affinity function. Experimental results show that incorporating semantic constraints improves performance compared to acoustic-only models. Moreover, our system architecture is designed to be compatible with other modules and shows promising results in simulated experiments. As the current work has demonstrated the potential of our framework, our future work will focus on further enhancing the quantity and quality of pairwise constraints to achieve superior results.
| Diarization System | Constraints | Methods | ARI | NMI | SpkDiff # | CpWER (%) | TextDER (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Acoustic Only | No Constraints | SC | - | - | - | 26.1816 | 3.7723 |
| Semantic Turn-Cut | No Constraints | SC | 0.8901 | 0.8616 | 11 | 25.6421 | 3.4636 |
| JPCP-I | Inference Constraints | SSDR + SC | 0.9010 | 0.8863 | 11 | 25.9185 | 3.8122 |
| JPCP-I | Inference Constraints | E\({}^{2}\)CP | 0.9006 | 0.8857 | 11 | 25.9174 | 3.8161 |
| JPCP-I | Inference Constraints | E\({}^{2}\)CPM | 0.9162 | 0.8863 | 10 | 25.2774 | 3.0967 |
| JPCP-I | Inference Constraints | SSDR + E\({}^{2}\)CPM | **0.9171** | **0.8871** | **9** | **25.3168** | **3.0379** |
| JPCP-S | Simulation 6% | SSDR + E\({}^{2}\)CPM | 0.9939 | 0.9879 | 4 | 24.5919 | 1.9810 |
| JPCP-S | Simulation 12% | SSDR + E\({}^{2}\)CPM | **0.9961** | **0.9927** | **3** | **24.4809** | **1.9028** |

Table 1: Performance evaluation of cluster metrics (ARI, NMI, SpkDiff #) and speaker diarization metrics (CpWER, TextDER). TextDER refers to the amount of text assigned to wrong speakers. SpkDiff # refers to the difference in the number of speakers between inference and ground truth. JPCP-S utilizes ground truth ASR results, indicating the potential upper bound for our proposed system.
Figure 3: The impact of pairwise constraints rate on both clustering metrics and the effectiveness of the overall speaker diarization system. |
2309.11990 | On $σ$-compact Hattori spaces | We present several characterizations of $\sigma$-compact Hattori spaces, and
reject some possible characterization candidates of the spaces. | Vitalij Chatyrko | 2023-09-21T12:02:34Z | http://arxiv.org/abs/2309.11990v1 | # On \(\sigma\)-compact Hattori spaces.
###### Abstract
We present several characterizations of \(\sigma\)-compact Hattori spaces, and reject some possible characterization candidates of the spaces.
_Keywords and Phrases: Hattori spaces, \(\sigma\)-compact spaces_
_2000 AMS (MOS) Subj. Class.:_ Primary 54A10
## 1 Introduction
Let \(\mathbb{R}\) be the set of real numbers and \(A\) be a subset of \(\mathbb{R}\).
In [H] Hattori introduced a topology \(\tau(A)\) on \(\mathbb{R}\) defined as follows:
1. if \(x\in A\) then \(\{(x-\epsilon,x+\epsilon):\epsilon>0\}\) is a nbd open basis at \(x\),
2. if \(x\in\mathbb{R}\setminus A\) then \(\{[x,x+\epsilon):\epsilon>0\}\) is a nbd open basis at \(x\).
Note that \(\tau(\emptyset)\) (respectively, \(\tau(\mathbb{R})\)) is the Sorgenfrey topology \(\tau_{S}\) (respectively, the Euclidean topology \(\tau_{E}\)) on the reals.
The topological spaces \((\mathbb{R},\tau(A)),A\subseteq\mathbb{R}\), are called _Hattori spaces_ and denoted by \(H(A)\) or \(H\) (if \(A\) is unimportant for a discussion). It is easy to see that the identity mapping of reals is a continuous bijection of any \(H\)-space onto the real line.
Let us recall ([CH]) that every \(H\)-space is \(T_{1}\), regular, hereditary Lindelof and hereditary separable. However, there are topological properties, such as metrizability or Cech-completeness, which some \(H\)-spaces possess and other \(H\)-spaces do not. Conditions under which \(H\)-spaces possess these properties can be found in [K] and [BS].
Recall ([EJ]) that each compact subset of the Sorgenfrey line \(H(\emptyset)\) is countable. So the space \(H(\emptyset)\) cannot be \(\sigma\)-compact, unlike the space \(H(\mathbb{R})\) (the real line), which is evidently \(\sigma\)-compact.
The following natural question was posed by F. Lin and J. Li.
**Question 1.1**: ([LL, Question 3.7]) For what subsets \(A\) of \(\mathbb{R}\) are the spaces \(H(A)\)\(\sigma\)-compact?
F. Lin and J. Li also noted
**Proposition 1.1**: _([LL, Theorem 3.13]) For an arbitrary subset \(A\) of \(\mathbb{R}\), if \(H(A)\) is \(\sigma\)-compact, then \(\mathbb{R}\setminus A\) is countable and nowhere dense in \(H(A)\). \(\square\)_
**Proposition 1.2**: _([LL, Theorem 3.14]) For an arbitrary subset \(A\) of \(\mathbb{R}\), if \(\mathbb{R}\setminus A\) is countable and scattered in \(H(A)\), then \(H(A)\) is \(\sigma\)-compact. \(\square\)_
In this note I present several characterizations of \(\sigma\)-compact Hattori spaces, and show that the implications of Propositions 1.1 and 1.2 are not invertible. Moreover, Proposition 1.2 (formulated as above) does not hold; its corrected version is presented in Corollary 2.1.
For standard notions we refer to [E].
## 2 Main results
First of all let us recall the following fact.
**Lemma 2.1**: _([CH, Lemma 2.1]) Let \(A\subseteq\mathbb{R}\) and \(B\subseteq A\) and \(C\subseteq\mathbb{R}\setminus A\). Then_
* \(\tau(A)|_{B}=\tau_{E}|_{B}\)_, where_ \(\tau_{E}\) _is the Euclidean topology on_ \(\mathbb{R}\)_, and_
* \(\tau(A)|_{C}=\tau_{S}|_{C}\)_, where_ \(\tau_{S}\) _is the Sorgenfrey topology on_ \(\mathbb{R}\)_._ \(\square\)__
**Proposition 2.1**: _For an arbitrary subset \(A\) of \(\mathbb{R}\), if \(B=\mathbb{R}\setminus A\) is countable and it is a \(G_{\delta}\)-subset of the real line (in particular, if \(B\) is countable and closed in the real line), then \(H(A)\) is \(\sigma\)-compact._
Proof. Let us note that on the real line our set \(A\) is an \(F_{\sigma}\)-set and hence it is \(\sigma\)-compact there. So by Lemma 2.1, \(A\) is \(\sigma\)-compact in \(H(A)\) too. Since \(B\) is countable, we get that \(H(A)\) is \(\sigma\)-compact. \(\square\)
Since every scattered subset of the real line is a \(G_{\delta}\) (see [KR, Corollary 4]) we get the following.
**Corollary 2.1**: _For any subset \(A\) of \(\mathbb{R}\), if \(\mathbb{R}\setminus A\) is countable and scattered in the real line, then \(H(A)\) is \(\sigma\)-compact. \(\square\)_
We continue with several characterizations of when \(H\)-spaces are \(\sigma\)-compact.
**Theorem 2.1**: _Let \(A\subseteq\mathbb{R}\) and \(B=\mathbb{R}\setminus A\). Then the following conditions are equivalent._
* _There exist a_ \(\sigma\)_-compact subset_ \(D\) _and a closed subset_ \(C\) _of the space_ \(H(A)\) _such that_ \(B\subseteq C\subseteq D\)_._
* _There exists a closed_ \(\sigma\)_-compact subset_ \(C\) _of the space_ \(H(A)\) _such that_ \(B\subseteq C\)_._
* _The closure_ \(\operatorname{Cl}_{H(\mathbb{R})}(B)\) _of_ \(B\) _in the real line is_ \(\sigma\)_-compact in_ \(H(A)\)_._
* _The closure_ \(\operatorname{Cl}_{H(A)}(B)\) _of_ \(B\) _in the space_ \(H(A)\) _is_ \(\sigma\)_-compact in_ \(H(A)\)_._
* _the space_ \(H(A)\) _is_ \(\sigma\)_-compact._
Proof. The following implications are obvious: \((e)=>(a)\), \((a)=>(b)\), \((c)=>(b)\), \((b)=>(d)\), \((e)=>(c)\).
Let us show \((d)=>(e)\). Since \(B\subseteq\operatorname{Cl}_{H(A)}(B)\), each point \(x\in H(A)\setminus\operatorname{Cl}_{H(A)}(B)\) has, inside the set \(H(A)\setminus\operatorname{Cl}_{H(A)}(B)\), an open nbd which is an open interval of the real line. Since the space \(H(A)\) is hereditarily Lindelof, the set \(H(A)\setminus\operatorname{Cl}_{H(A)}(B)\) is a countable union of such intervals and hence a \(\sigma\)-compact subset of \(H(A)\) (see Lemma 2.1). Thus \(H(A)\) itself is \(\sigma\)-compact. \(\square\)
**Remark 2.1**: Note that the set \(\operatorname{Cl}_{H(\mathbb{R})}(B)\) does not need to be \(\sigma\)-compact in the space \(H(A)\) (it is of course closed there) even if it is compact in the real line, see Proposition 2.3.
Let us consider in the set of reals the standard Cantor set \(\mathbb{C}\) on the closed interval \([0,1]\) which can be defined as follows.
For any closed bounded interval \([a,b]\) of \(\mathbb{R}\) put
\[F([a,b])=\{[a,\frac{2}{3}a+\frac{1}{3}b],[\frac{1}{3}a+\frac{2}{3}b,b]\}.\]
Then for each \(n\geq 0\) by induction define a family \(\mathcal{C}_{n}\) of closed intervals:
\[\mathcal{C}_{0}=\{[0,1]\},\ \mathcal{C}_{n}=\{F([a,b]):[a,b]\in\mathcal{C}_{n-1}\}.\]
The standard Cantor set \(\mathbb{C}\) of the closed interval \([0,1]\) is the intersection \(\cap_{n=0}^{\infty}(\cup\mathcal{C}_{n})\), where \(\cup\mathcal{C}_{n}\) is the union of all closed intervals from the family \(\mathcal{C}_{n}\).
Put now \(B_{1}=\{a:[a,b]\in\mathcal{C}_{n},n\geq 0\}\), \(B_{2}=\{b:[a,b]\in\mathcal{C}_{n},n\geq 0\}\) and \(A_{1}=\mathbb{R}\setminus B_{1}\), \(A_{2}=\mathbb{R}\setminus B_{2}\). We will use the notations below.
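The construction is easy to make concrete. The following purely illustrative Python sketch (ours, not part of the original note, using exact fractions to avoid rounding issues) generates the families \(\mathcal{C}_{n}\) and the endpoint sets \(B_{1}\) and \(B_{2}\) up to a chosen depth:

```python
from fractions import Fraction

def cantor_families(depth):
    """Families C_0, ..., C_depth of closed intervals, each interval stored as a pair (a, b)."""
    families = [[(Fraction(0), Fraction(1))]]
    for _ in range(depth):
        next_family = []
        for a, b in families[-1]:
            third = (b - a) / 3
            next_family.append((a, a + third))       # [a, 2a/3 + b/3]
            next_family.append((b - third, b))       # [a/3 + 2b/3, b]
        families.append(next_family)
    return families

families = cantor_families(4)
B1 = {a for family in families for (a, b) in family}   # left endpoints
B2 = {b for family in families for (a, b) in family}   # right endpoints
```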
**Remark 2.2**: Let us note that on the real line (i.e. on the reals with the Euclidean topology) the set \(\mathbb{C}\) is compact, the sets \(B_{1}\) and \(B_{2}\) (which are subsets of \(\mathbb{C}\)) are homeomorphic to the space of rational numbers \(\mathbb{Q}\), the sets \(\mathbb{C}\setminus B_{1}\) and \(\mathbb{C}\setminus B_{2}\) are homeomorphic to the space of irrational numbers \(\mathbb{P}\). Moreover, \(B_{1}\) and \(B_{2}\) are nowhere dense in the real line.
**Remark 2.3**: Let us note that a set \(Y\subset\mathbb{R}\) is nowhere dense in the real line iff \(Y\) is nowhere dense in any \(H\)-space (see for example, [CN, Lemma 3.3]).
**Proposition 2.2**: _For the space \(H(A_{1})\) the following is valid._
* _The subspace_ \(B_{1}\) _of_ \(H(A_{1})\) _is nowhere dense in_ \(H(A_{1})\) _and it is homeomorphic to the space of rational numbers_ \(\mathbb{Q}\)_._
* _The subspace_ \(\operatorname{Cl}_{H(A_{1})}(B_{1})\) _of_ \(H(A_{1})\) _is homeomorphic to the standard Cantor set_ \(\mathbb{C}\) _on the real line, and the subspace_ \(\operatorname{Cl}_{H(A_{1})}(B_{1})\setminus B_{1}\) _of_ \(H(A_{1})\) _is homeomorphic to the space of irrational numbers_ \(\mathbb{P}\)_._
* _The space_ \(H(A_{1})\) _is_ \(\sigma\)_-compact._
Proof. (a) and (b) are obvious. Theorem 2.1 and (b) prove (c). \(\square\)
**Corollary 2.2**: _Proposition 1.2 is not invertible. \(\square\)_
Proof. Let us note that \(H(A_{1})\) is \(\sigma\)-compact but the subspace \(B_{1}\) of \(H(A_{1})\) is not scattered. \(\square\)
**Corollary 2.3**: _Proposition 2.1 is not invertible._
Proof. Let us note that \(H(A_{1})\) is \(\sigma\)-compact but \(B_{1}\) is not a \(G_{\delta}\)-subset of the Cantor set \(\mathbb{C}\) in the real line and hence it is not a \(G_{\delta}\) in the real line. \(\square\)
**Proposition 2.3**: _For the space \(H(A_{2})\) the following is valid._
* _The subspace_ \(B_{2}\) _of_ \(H(A_{2})\) _is nowhere dense in_ \(H(A_{2})\) _and it is homeomorphic to the space of natural numbers_ \(\mathbb{N}\)_._
* _The subspace_ \(\mathrm{Cl}_{H(A_{2})}(B_{2})\) _of_ \(H(A_{2})\) _is equal to the standard Cantor set_ \(\mathbb{C}\) _of_ \(\mathbb{R}\)_, and it is not_ \(\sigma\)_-compact. The subspace_ \(\mathrm{Cl}_{H(A_{2})}(B_{2})\setminus B_{2}\) _of_ \(H(A_{2})\) _is homeomorphic to the space of irrational numbers_ \(\mathbb{P}\)_._
* _The space_ \(H(A_{2})\) _is not_ \(\sigma\)_-compact._
Proof. (a) is obvious.
In (b) let us show that the subspace \(\mathrm{Cl}_{H(A_{2})}(B_{2})\) of \(H(A_{2})\) is not \(\sigma\)-compact. Assume that the subspace \(\mathrm{Cl}_{H(A_{2})}(B_{2})\) of \(H(A_{2})\) is \(\sigma\)-compact, i.e. \(\mathrm{Cl}_{H(A_{2})}(B_{2})=\cup_{i=1}^{\infty}K_{i}\), where each \(K_{i}\) is compact in \(H(A_{2})\). Note that for each \(i\) the set \(K_{i}\) is compact in the real line, and the Cantor set \(\mathbb{C}\) with the topology from the real line is the union \(\cup_{i=1}^{\infty}K_{i}\). Hence, by the Baire category theorem, there is an open interval \((c,d)\) of the reals and some \(i\) such that \(\emptyset\neq(c,d)\cap\mathbb{C}\subseteq K_{i}\). Moreover, there exist points \(b_{0},b_{1},\dots\) of \(B_{2}\cap(c,d)\) such that \(b_{1}<b_{2}<\dots<b_{0}\) and the sequence \(\{b_{j}\}_{j=1}^{\infty}\) tends to \(b_{0}\) in the real line. Since at the points of \(B_{2}\) the topology of \(H(A_{2})\) is the Sorgenfrey topology, we get a contradiction with the compactness of \(K_{i}\) in the space \(H(A_{2})\).
Theorem 2.1 and (b) prove (c). \(\square\)
**Corollary 2.4**: _Proposition 1.1 is not invertible._
Proof. Let us note that \(B_{2}\) is nowhere dense in \(H(A_{2})\) (see Remark 2.3) but the space \(H(A_{2})\) is not \(\sigma\)-compact. \(\Box\)
**Corollary 2.5**: _Proposition 1.2 does not hold. \(\Box\)_
_Proof. Let us note that \(B_{2}\) is scattered in \(H(A_{2})\) and the space \(H(A_{2})\) is not \(\sigma\)-compact. \(\Box\)_
## 3 Additional questions
The following is obvious.
* If a space \(X\) is \(\sigma\)-compact then a subset \(Y\) of \(X\) is \(\sigma\)-compact iff it is an \(F_{\sigma}\)-subset of \(X\). In particular, a subset of the real line is \(\sigma\)-compact iff it is an \(F_{\sigma}\)-set.
* A subset of the Sorgenfrey line is \(\sigma\)-compact iff it is countable.
* A subset of the space \(\mathbb{P}\) of irrational numbers is \(\sigma\)-compact iff it is homeomorphic to an \(F_{\sigma}\)-subset of the standard Cantor set \(\mathbb{C}\) on the real line.
One can pose the following problem.
**Problem 3.1**: Let \(A\subseteq\mathbb{R}\). Describe the \(\sigma\)-compact subsets of \(H(A)\).
Let us note in advance that, according to (a), if \(H(A)\) is \(\sigma\)-compact then a subset of \(H(A)\) is \(\sigma\)-compact iff it is an \(F_{\sigma}\)-subset of \(H(A)\).
Below we present some other answers to Problem 3.1 by the use of observations (b) and (c) and some known facts.
**Proposition 3.1**: _([K, Theorem 6] and [BS, Theorem 2.8]) \(H(A)\) is homeomorphic to the Sorgenfrey line iff \(A\) is scattered. \(\Box\)_
**Corollary 3.1**: _If \(A\) is scattered then a subset of \(H(A)\) is \(\sigma\)-compact iff it is countable. \(\Box\)_
**Proposition 3.2**: _([BS, Proposition 3.6]) \(H(A)\) is homeomorphic to the space \(\mathbb{P}\) of irrational numbers iff \(\mathbb{R}\setminus A\) is dense in the real line and countable. \(\Box\)_
**Corollary 3.2**: _If \(\mathbb{R}\setminus A\) is dense in the real line and countable then a subset of \(H(A)\) is \(\sigma\)-compact iff it is homeomorphic to an \(F_{\sigma}\)-subset of the standard Cantor set \(\mathbb{C}\) on the real line. \(\square\)_
Since the space \(H(A_{2})\) from Proposition 2.3 is not \(\sigma\)-compact (as well as any subset of \(H(A_{2})\) containing some \([a,b]\) from \(\mathcal{C}_{n},n=0,1,2,\dots\)) one can pose the following question.
**Question 3.1**: What subsets of \(H(A_{2})\) are \(\sigma\)-compact?
|
2309.17339 | Scaling Experiments in Self-Supervised Cross-Table Representation
Learning | To analyze the scaling potential of deep tabular representation learning
models, we introduce a novel Transformer-based architecture specifically
tailored to tabular data and cross-table representation learning by utilizing
table-specific tokenizers and a shared Transformer backbone. Our training
approach encompasses both single-table and cross-table models, trained via
missing value imputation through a self-supervised masked cell recovery
objective. To understand the scaling behavior of our method, we train models of
varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These
models are trained on a carefully curated pretraining dataset, consisting of
135M training tokens sourced from 76 diverse datasets. We assess the scaling of
our architecture in both single-table and cross-table pretraining setups by
evaluating the pretrained models using linear probing on a curated set of
benchmark datasets and comparing the results with conventional baselines. | Maximilian Schambach, Dominique Paul, Johannes S. Otterbach | 2023-09-29T15:48:38Z | http://arxiv.org/abs/2309.17339v1 | # Scaling Experiments in Self-Supervised Cross-Table Representation Learning
###### Abstract
To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning by utilizing table-specific tokenizers and a shared Transformer backbone. Our training approach encompasses both single-table and cross-table models, trained via missing value imputation through a self-supervised masked cell recovery objective. To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately \(10^{4}\) to \(10^{7}\) parameters. These models are trained on a carefully curated pretraining dataset, consisting of \(135\,\mathrm{M}\) training tokens sourced from \(76\) diverse datasets. We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines.
## 1 Introduction
Tabular data is abundant in many real-world applications across industries as well as research domains and has been argued to be the data type with the highest potential for AI impact [8]. Nevertheless, on tabular data, deep learning approaches fail to consistently outperform established boosting implementations such as XGBoost, LightGBM, and CatBoost [7; 23; 11; 15]. Even so, the success of the Transformer architecture [40] and self-supervised learning applied to large datasets in natural language and computer vision has motivated similar methods in the tabular domain. However, the scaling behavior of these approaches has not been investigated. This is mostly due to the fact that tabular benchmark data is often small and separate models are trained for each table, requiring that the models remain small for fast training and to avoid over-parametrization. This limits the scaling potential of the underlying architecture as both the model size as well as the training data would need to be scaled for a consistent increase in performance as shown in the language and vision domain [22; 16]. For most tables, however, accessing or creating more data is not possible.
In order to scale tabular deep learning approaches, the architecture needs to be able to generalize across multiple tables so that a large heterogeneous training corpus can be used. Furthermore, cross-table generalization amortizes the increased costs of training a large versatile model as opposed to training table-specific ones. Besides a potential performance gain from increased scale, a tabular general-purpose model that generalizes across multiple tables is of practical importance. For example, pretrained tabular backbones lend themselves as feature extractors and could be of interest in the zero- and few-shot regime, with no or only limited training data, as well as in joint representation learning with language models or for incorporating inter-table dependencies in relational databases.
While a variety of cross-table approaches based on Large Language Models (LLMs) have been proposed in the past [18; 47; 46], table-specific architectures are extremely scarce. Despite showing first promising results, we believe the potential of LLMs in the context of tabular data to be limited mainly due to the technical challenges around tokenization which we will discuss in detail. On the other hand, simple and straightforward Transformer-based architectures in the tabular domain are the exception [14; 51] while the field is scattered with, in our opinion, complex and sometimes convoluted architectures. We believe a solid understanding of a Transformer-based tabular architecture, and, in particular, the preceding table tokenization, to be the core of future developments and a successful scaling of architectures towards a new state of the art. In our opinion, the full potential of existing approaches, namely self-supervised Transformer-based architectures, has yet to be understood.
To address these limitations, we propose a clean and simple Transformer-based architecture, similar to the FT-Transformer [14], and generalize the architecture for cross-table self-supervised pretraining via masked cell recovery. Overall, our main contributions are as follows:
* We propose a novel architecture and training pipeline for cross-table pretraining based on self-supervised masked cell recovery. This loss can be naturally interpreted as multi-variate value imputation, a formidable problem in real-world applications.
* We investigate the scaling behavior of the proposed approach both in a single- as well as a cross-table pretraining setup. We do so by training four model configurations with backbone sizes ranging from roughly \(10^{4}\) to \(10^{7}\) parameters using a large curated heterogeneous pretraining corpus of 76 datasets and evaluating the pretrained models via linear probing using a small curated collection of benchmark datasets.
## 2 Cross-Table Representation Learning
While a wide range of approaches has been proposed in the context of learning representations for single tables, covering both supervised [20; 14] as well as self-supervised methods [38; 1; 35; 3; 34], how to best design architectures for learning representations across multiple tables is still an open question in the community. Following the tremendous success of deep learning in natural language and computer vision, Transformer-based architectures trained via self-supervision at scale are most promising to push the state of the art in tabular representation learning and perhaps finally surpass the strong conventional baselines. However, as opposed to natural language or computer vision, where tokenization and embedding methods naturally generalize across a wide range of datasets, the characteristics of tabular data are table-specific. Notably, different tables usually have different numbers of columns with numerical and categorical features, as well as column-specific statistics. That is, even if column names have a similar label indicating a shared semantic, the corresponding (joint and marginalized) statistics may be extremely diverse. Moreover, unlike language or images, tabular data does not possess a natural ordering and is invariant against column and row permutations.
**Cross-table tokenization** Tokenization transforms tables (or individual rows) into a sequence of tokens, which are subsequently embedded in a shared embedding space and processed by the model backbone. In the single-table case, tokenization can be achieved by a combination of conventional tabular encoding and subsequent embedding [14; 13]. Numerical features can be tokenized via standardization or quantile transformation while categoricals can be tokenized via integer or one-hot encoding [21]. Linear projections or lookup embeddings map the tokens into the embedding space. However, a cross-table generalization of these approaches is not straightforward and has only recently been proposed within the XTab framework [51]. Here, table-specific tokenizers are used to extend the FT-Transformer approach [14], whereas the shared backbone contextualizes the embeddings.
A currently popular approach to cross-table tokenization and representation learning is to serialize a table's row into a string, e.g. "[Column A] is [Value 1], [Column B] is ...", and then use a pretrained LLM to generate the row's embeddings. Many works exist in this area, notably utilizing pretrained BERT models [18; 47; 37; 43], as well as GPT-style generative architectures [6; 50]. Badaro et al. [2] and Hegselmann et al. [17] discuss and compare multiple forms of table serialization. This seemingly straightforward concept of table serialization and text-based tokenization comes with a few challenges and pitfalls. (i) Text tokenizers struggle with numerical features, which are typically broken down into multiple tokens by splitting at the decimal point and other subwords in the vocabulary. Recent research has shown that this likely leads to subpar performance on numerical tasks such as arithmetic and financial reasoning [31; 49; 45; 25]. While some workarounds, like
character-level tokenization for numeric features, have been used [50], they don't fully address the core issue and introduce additional complexity by requiring a separate decoder architecture. (ii) The coding scheme is not token-efficient, resulting in an excessive amount of tokens per cell. As Transformers scale quadratically with the input's length, the excessive representation length of a row requires more computational power than we believe is necessary. Hence, the number of columns that can be encoded is limited by the context length of the backbone model. (iii) When using causal language modeling, we need to artificially introduce a column order, despite the table's natural column permutation invariance. To break this artificial order, any-order learning needs to be enforced, leading to an exponential overhead in column orders that need to be trained, e.g. via permutation augmentation. On the other hand, in masked language modeling, the masking of individual tokens is not the same as blanking an entire table cell, requiring special treatment of the masking function.
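To make the serialization idea concrete, here is a minimal sketch of the text-template approach discussed above; the exact template varies between the cited works, and even this short string expands into many subword tokens once passed through a text tokenizer:

```python
def serialize_row(row: dict) -> str:
    """Turn one table row into a natural-language string for an LLM-based tabular model."""
    return ", ".join(f"{column} is {value}" for column, value in row.items())

example = {"age": 42, "income": 55321.5, "occupation": "teacher"}
print(serialize_row(example))
# -> age is 42, income is 55321.5, occupation is teacher
```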
Drawing parallels with text tokenization in natural language processing, we recognize that tokenization is a nuanced, domain-specific problem. Tokenizer developments in natural language have significantly enhanced Transformer-based language models by addressing linguistic and engineering challenges [28; 52]. In the same way, tokenization for tabular data demands specialized efforts and meticulous experimentation to optimize its utility and compatibility with Transformer architectures.
**Permutation invariance and imputation loss** While the embeddings contextualized by a Transformer are inherently permutation invariant, this invariance is typically explicitly broken by introducing positional encodings [40; 12]. Nevertheless, in particular, LLM-based tabular learning architectures use positional encoding and address the problem, if at all, via permutation augmentation [32; 5; 50]. Positional encodings are not helpful for tabular data due to their invariance against column permutations. Instead, semantic column encodings, e.g. via additive column-specific bias embeddings, can be a useful inductive bias to distinguish between different columns [14; 51].
A possible solution is offered by bidirectional models, such as BERT [10], based on masked token recovery losses, akin to a denoising objective. Note that this is not a natural loss for language, which is typically constructed in a sequential manner. However, this objective is most natural for table representation learning. As table columns have no natural order, and often suffer from missing values, one can interpret masked cell recovery as an imputation of missing values. In fact, this allows for a natural generalization to a table-generative model using Markov Random Field sampling [41].
**Cross-table pretraining** In the supervised case, early works treat tables as images and utilize general-purpose vision backbones [36], whereas recent approaches such as TransTab [44] are limited to tables from similar domains. In a different line of research, prior-fitted networks were introduced, recasting the problem to approximate Bayesian inference learned over a large synthetic prior, dubbed TabPFN [19]. While useful for practitioners and conceptually interesting, TabPFN is limited to small datasets and classification tasks based on purely numeric features and cannot be scaled naively.
Most self-supervised tabular learning approaches are explored in the single-table domain, ranging from autoencoders [48], contrastive approaches [38; 9; 35; 3], to more recent masked autoencoding objectives [1; 27]. In the cross-table setup, some works deal with self-supervised representation learning for tables with partially (or largely) overlapping columns [24; 30]. We are aware of only one non-LLM-based architecture for unconstrained tabular representation learning, namely the recently proposed XTab framework [51]. XTab generalizes the FT-Transformer to multiple tables via table-specific tokenizers and otherwise uses its exact hyperparameter configuration. Notably, XTab's Transformer backbone has less than 1 M trainable parameters.
## 3 Proposed Approach
We propose a simple Transformer-based architecture and training pipeline for cross-table pretraining that minimizes inductive biases as shown in Figure 1. This way, the proposed approach can be used as a baseline for further experimentation, for example around cross-table tokenization techniques. Our approach builds on the FT-Transformer [14] and is similar to the recently proposed XTab framework [51], with a few important distinctions which we outline in the following.
**Tokenization** We employ table-specific tokenizers and use quantile encoding of numerical features combined with look-up embeddings as opposed to quantile transformation with subsequent linear projection embeddings used in FT-Transformer, XTab, and other approaches [13]. That is, instead of transforming the features in order to normalize the column distributions, we encode each value using
its quantile index. Encoding numericals as quasi-categorical values makes the further treatment of all columns uniform. It simplifies the overall setup and makes the implementation easier to optimize, e.g. via vectorization. As all values are treated equally, there is no need to distinguish between numericals and categoricals at inference. Hence, balancing classification and regression losses is not necessary. However, the gained flexibility and robustness come at the cost of a quantization error and an increased number of learnable embedding parameters, depending on the number of quantiles chosen. Combining a low-dimensional embedding space with a linear up-projection can counter this problem, which we plan to address in future work. Furthermore, the ordinal character of the encodings is lost without explicit additional treatment. For categoricals, we use standard integer encoding and embedding via learnable look-up embeddings. Numerical features with fewer than 20 unique values are treated directly as categoricals. Finally, missing values are encoded as an additional NAN category for both numerical and categorical features. Sample statistics needed for the encoding, such as the quantiles, are estimated separately for each dataset using a fixed number of 10,000 samples before training.
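A simplified sketch of such a table-specific tokenizer is given below: numerical columns are encoded by their quantile index, categoricals by integer codes, missing values by a dedicated NAN slot, and per-column offsets place every column in a disjoint region of a shared look-up vocabulary. This is our own minimal rendering under stated assumptions (e.g., the fallback for unseen categories), not the exact implementation used for the experiments:

```python
import numpy as np
import pandas as pd

class TableTokenizer:
    """Per-table tokenizer: quantile-binned numericals, integer-coded categoricals, NAN slot per column."""

    def __init__(self, df: pd.DataFrame, n_quantiles: int = 100):
        self.columns = list(df.columns)
        self.edges, self.cats, self.offsets, self.sizes = {}, {}, {}, {}
        offset = 0
        for col in self.columns:
            values = df[col].dropna()
            if values.dtype.kind in "if" and values.nunique() >= 20:
                self.edges[col] = np.quantile(values, np.linspace(0, 1, n_quantiles + 1))
                size = n_quantiles + 1                      # quantile bins + NAN slot
            else:
                self.cats[col] = {v: i for i, v in enumerate(pd.unique(values))}
                size = len(self.cats[col]) + 1              # categories + NAN slot
            self.offsets[col], self.sizes[col] = offset, size
            offset += size
        self.vocab_size = offset                            # total number of look-up embeddings

    def encode_row(self, row) -> list[int]:
        tokens = []
        for col in self.columns:
            value, nan_slot = row[col], self.sizes[col] - 1
            if pd.isna(value):
                local = nan_slot
            elif col in self.edges:
                local = int(np.searchsorted(self.edges[col], value, side="right")) - 1
                local = min(max(local, 0), nan_slot - 1)    # clip to valid quantile indices
            else:
                local = self.cats[col].get(value, nan_slot) # unseen category -> NAN slot
            tokens.append(self.offsets[col] + local)
        return tokens
```

Accumulating the per-column slots over all pretraining tables yields the shared look-up vocabulary whose overall size is reported in Section 4.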
Finally, we did not add any further additive encodings such as positional encodings and table- or column-specific bias terms, to minimize inductive biases and to retain the permutation invariance of the architecture. The column- and table-specific characteristics have to be learned by each embedding individually. While our current work only uses the minimal architectural requirements we see different types of additive encodings as an interesting prospective research question.
**Data interleaving** To obtain rows from multiple tables, we sample from a large heterogeneous corpus of tables, which we describe in detail in Section 4. We choose to perform stratified sampling, that is, in every batch the occurrence of each dataset is equally likely, regardless of the dataset size. This way, we sample uniformly from tasks and domains instead of sampling uniformly from the union of datasets. As a consequence, smaller datasets are iterated over more often than large ones. To process these samples in a single batch, we add a learnable padding token to each sequence up to the maximum number of tokens per batch. This is vastly different from XTab, which utilizes a federated learning approach, deploying the table-specific tokenizers on individual GPUs. By processing intertable samples natively, we are able to scale the required hardware independently of the number of tables contained in the pretraining dataset. In fact, we perform all experiments on a single GPU. Our approach can easily be further parallelized using standard techniques from distributed training.
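A sketch of this stratified interleaving, assuming the per-table tokenizers from above and a reserved padding id (both our own assumptions):

```python
import random

def interleaved_batches(datasets, tokenizers, batch_size, pad_id):
    """Yield batches of token rows drawn uniformly across datasets and padded to equal length."""
    names = list(datasets)
    while True:
        rows = []
        for _ in range(batch_size):
            name = random.choice(names)                 # every dataset equally likely per slot
            table = datasets[name]
            row = table.iloc[random.randrange(len(table))]
            rows.append(tokenizers[name].encode_row(row))
        width = max(len(r) for r in rows)
        yield [r + [pad_id] * (width - len(r)) for r in rows]
```

In the full pipeline, masking with the learnable Mask token is applied before padding, as described below, so that the masking rate stays uniform across tables.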
**Contextualization and learning objective** The interleaved batch of tokens is contextualized by a single Transformer backbone. In line with FT-Transformer and XTab, we use the pre-norm variant due to better performance and stability in the natural language context [42]. For self-supervised pretraining, we use the masked cell recovery objective, the tabular analog of masked language modelling (MLM). A random subset of tokens per cell is masked with a learnable Mask token and the training objective is to reconstruct the masked values from the contextualized embedding of the corresponding masked token. We note that this is a natural loss for the case of tabular data, as opposed to MLM. Masked tokens can be interpreted as missing values, a common occurrence in practical table modeling problems, and the recovery objective is simply the imputation of their values. Compared to traditional imputation methods such as univariate mean, median, or mode estimation, the imputation loss is multi-variate in nature. Hence, it can capture richer dependencies between columns and other missing values that cannot be captured with standard methods. In the cross-table regime, this loss has been shown to perform better than contrastive pretraining while being more lightweight [51]. As opposed to XTab, we fully replace masked tokens with a single learnable mask embedding instead of random values drawn from the marginalized distribution. We believe this to yield a stronger training signal, but a comparison is left for future work. Note that, in order to obtain a uniform masking rate for all tasks, masking is performed before padding of the tokens.
Figure 1: Overview of the proposed cross-table pretraining architecture. Individual tables are tokenized using table-specific tokenizers, including numerical as well as categorical features, and processed by a shared Transformer backbone.
For the cell recovery, the contextualized masked tokens are projected by a linear layer into the corresponding target probability space. As all values have been effectively encoded into categoricals, we optimize for classification via minimization of the cross-entropy loss. Unlike XTab, we do not use table-specific target heads but perform the target projection into the union of the individual column's target probability spaces. More precisely, given the individual column-specific target probability spaces \(\mathcal{C}_{ij}\) for column \(j\) of dataset \(i\), the full target probability space is modeled as their direct product, \(\mathcal{C}=\prod_{i=1}^{M}\prod_{j=1}^{N_{i}}\mathcal{C}_{ij}\). However, the calculation of the cross-entropy for each token is restricted to its individual subspace via binary masking corresponding to an orthogonal projection onto \(\mathcal{C}_{ij}\).
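A PyTorch-style sketch of this restricted cross-entropy: logits are produced over the union of all column target spaces, and entries outside the masked cell's own column range are suppressed before the softmax. Shapes and names are ours:

```python
import torch
import torch.nn.functional as F

def column_restricted_cross_entropy(logits, targets, col_starts, col_sizes):
    """Cross-entropy over the union target space, restricted to each cell's own column subspace.

    logits     : (num_masked, total_targets) projections of the contextualized Mask tokens
    targets    : (num_masked,) global indices of the true cell values
    col_starts : (num_masked,) start of each cell's column subspace within the union
    col_sizes  : (num_masked,) size of each cell's column subspace
    """
    positions = torch.arange(logits.size(1), device=logits.device).unsqueeze(0)
    in_column = (positions >= col_starts.unsqueeze(1)) & (positions < (col_starts + col_sizes).unsqueeze(1))
    restricted = logits.masked_fill(~in_column, float("-inf"))   # projection onto C_ij
    return F.cross_entropy(restricted, targets)
```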
## 4 Datasets
**Pretraining corpus** In order to perform meaningful scaling experiments, sufficient training data is required. As of now, heterogeneous high-quality tabular training data is not widely available. Instead, we chose to create a large heterogeneous training corpus by utilizing several tabular benchmark datasets of different tasks and sizes. For benchmarking, we then restrict ourselves to a small set of curated datasets as discussed next.
We gather datasets from multiple OpenML collections [39], and only kept datasets with more than 1000 rows and 10 to 50 columns. We discarded categorical columns that have more than 64 unique values. Finally, we manually deduplicated the remaining datasets.
In total, we obtain a corpus containing 76 tables including 30 binary and 26 multiclass classification tasks as well as 20 regression datasets of different widths and sizes. Overall, the pretraining corpus contains ca. 135 M tokens in total. Using the previously discussed table-specific tokenization approach, we obtain a token vocabulary size, i.e. the number of unique numerical quantities and categories to be embedded via look-up, of roughly 66 k. As a comparison, the BERT language model was trained using a vocabulary size of about 30 k, whereas GPT-2 used ca. 50 k. The feature and sample statistics of our pretraining corpus are shown in Figure 2. More detailed information on the datasets and statistics are presented in Appendix C.
**Benchmark datasets** Instead of evaluating on a similarly large corpus of datasets or curating a larger set of datasets and splitting it into two folds similar to XTab, we believe a small curated set to be more suitable for investigating these early scaling experiments as opposed to average rank performance across a large benchmark suite. This way, we anticipate gaining more nuanced insights into the performance behavior. For these reasons, we followed the work by Borisov et al. [5] and use five tabular datasets for our evaluation, namely HELOC, California Housing, Adult Income, Cover Type, and Higgs, details of which are shown in Appendix C. These datasets cover a range of tasks (binary and multi-class classification, as well as regression), different numbers and types of columns (from 9 to 55 features), as well as sizes, ranging from roughly 10 k to 10 M samples per dataset. Even in the single-table case, we expect a Transformer-based model to perform severely differently across these five datasets. We split each dataset into 60 % used for pretraining and 40 % evaluated via a 5-fold cross-validation. We describe the pretraining and evaluation procedure in detail next.
## 5 Experiment Description
We perform scaling experiments for the proposed architecture using self-supervised pretraining in the single-table as well as the cross-table setup. In total, we investigate four different model configurations, covering four orders of magnitude in terms of the backbone model's parameter count, ranging from 13 k to 16 M. Due to limitations with respect to the dataset sizes, we evaluate models S, M, and L in the single-table case, whereas M, L, and XL are considered in the cross-table case.
Figure 2: Column and row statistics of the individual datasets contained in our curated pretraining corpus.
**Single-table evaluation** Serving as a baseline, we investigate the scaling behavior of our approach in the single-table case. That is, for each table in our benchmark suite, we train a separate model via the imputation loss using the mentioned 60 % pretraining set. We then evaluate the task-specific performance of the pretrained model via linear probing using 5-fold cross-validation on the remaining 40 % of each benchmark dataset. Linear probing is a well-established method to assess the quality of embeddings obtained via self-supervised pretraining and effectively corresponds to learning a linear projection layer supervised on the table-specific task. Hence, linear probing investigates the linear separability of the table representations with respect to a specific downstream task which the model was not explicitly trained on. Note that we evaluate the pretraining performance and do not perform any supervised fine-tuning of the tokenizers or backbone, which we leave for future investigations.
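For reference, the probing protocol can be sketched as follows, assuming a frozen `backbone` that maps padded token rows to contextualized embeddings (naming and shapes are ours); the probe itself is an ordinary cross-validated linear model on mean-pooled features:

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def extract_features(backbone, token_batches, pad_id):
    """Mean-pool contextualized embeddings over non-Pad positions."""
    features = []
    for tokens in token_batches:                    # tokens: (batch, seq_len) LongTensor
        embeddings = backbone(tokens)               # (batch, seq_len, dim)
        mask = (tokens != pad_id).unsqueeze(-1).float()
        pooled = (embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        features.append(pooled.cpu().numpy())
    return np.concatenate(features)

def linear_probe(X, y, task="classification"):
    """5-fold cross-validated linear probe on frozen features."""
    model = LogisticRegression(max_iter=1000) if task == "classification" else Ridge()
    return cross_val_score(model, X, y, cv=5)
```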
**Cross-table evaluation** Secondly, we investigate the cross-table case. Here, each model configuration is pretrained using the imputation loss on the large pretraining corpus. As our architecture uses table-specific tokenizers, the cross-table pretrained models cannot directly be investigated on the benchmark datasets. To this end, we again use the table-specific pretraining portion and train the corresponding tokenizers for the pretrained model. To observe the transferability during training, we checkpoint the pretraining models every 250 M training tokens and evaluate all checkpointed models via linear probing on all benchmark datasets. Importantly, for a direct and fair comparison, we also use the same self-supervised learning objective here as in the single-table case to be able to assess the impact of cross-table pretraining. In this evaluation, we perform two variations: one where the pretrained backbone is frozen and only the tokenizer is trained, and one where the tokenizers and backbone weights are trained jointly. The obtained models are then evaluated via linear probing in full analogy to the single-table case using 5-fold cross-validation on the remaining portion of each dataset. Again, we do not perform any supervised fine-tuning.
We want to point out that in both cases a comparison to baselines is challenging, as existing methods, such as boosted trees, are trained in a supervised fashion on a single table. This is in stark contrast to this work, which uses self-supervised training without labeled targets and simply uses the representation features to train a linear model on top to predict the target. Furthermore, a comparison to other cross-table architectures is difficult, as the only existing approach, XTab, is trained in a federated setup requiring a training cluster of, in our case, 76 GPUs, which is outside our computational budget.
**Hyperparameters and training** Trainings are performed via mini-batch stochastic gradient descent using the AdamW optimizer [26] with its default parameters. In the single-table experiments, we choose a batch size of 2048, which we reduce to 512 for the cross-table pretraining due to memory constraints. In total, we use 5 M, 10 M, and 25 M samples for pretraining the S, M, and L models in the single-table case, respectively. In the cross-table case, we train all model configurations using 75 M samples, i.e. rows. The total number of training _tokens_ is calculated by summing the number of cells for all samples excluding Padding tokens. For the learning rate, we choose a warmup phase for the first 10 % of training samples, linearly increasing the learning rate from 5 \(\times\) 10\({}^{-5}\) to 10\({}^{-3}\), and a cosine decay to 0 for the remaining 90 % of training samples. We employ a global weight decay, i.e. an \(L_{2}\)-norm regularization, of 10\({}^{-2}\). Throughout, we use a dropout rate of 10 % during training. For all experiments, we use a masking fraction of 25 %. More details on the used hyperparameters are given in Appendix A. All experiments are conducted using compute nodes with 8 CPU cores, 32 GB of RAM, and a single Nvidia L4 GPU.
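The learning-rate schedule described above (linear warmup from \(5\times 10^{-5}\) to \(10^{-3}\) over the first 10 % of samples, followed by cosine decay to 0) can be expressed as a standard multiplicative schedule; this sketch assumes the optimizer's base learning rate is set to \(10^{-3}\):

```python
import math
import torch

def warmup_cosine_schedule(optimizer, total_steps, warmup_frac=0.1,
                           base_lr=1e-3, start_lr=5e-5):
    """Linear warmup from start_lr to base_lr, then cosine decay to zero."""
    warmup_steps = max(1, int(warmup_frac * total_steps))

    def lr_factor(step):                            # multiplier applied to base_lr
        if step < warmup_steps:
            frac = step / warmup_steps
            return (start_lr + frac * (base_lr - start_lr)) / base_lr
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_factor)
```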
**Baseline methods** For comparison, we evaluate two baseline approaches. We investigate per-table performance using XGBoost [7], as well as a simple linear model using the raw features as predictors. Naturally, these methods are fitted on each benchmark dataset separately and do not allow for cross-table generalization. In all cases, including our proposed approach, we do not perform any hyperparameter optimization. As we use a different split of the benchmark data, due to the necessity of setting aside a portion for self-supervised pretraining, we cannot directly compare with the many baselines presented in the paper by Borisov et al. [5]. However, we do not expect the results to be fundamentally different on the splits used here as we follow the identical evaluation protocol via five-fold cross-validation.
## 6 Results
Our main results investigate the scaling behavior of the different models in terms of their linear probing performance on the benchmark datasets and are shown in Figure 3.
**Single-table performance** Investigating the single-table case, we make the following observations: First, the imputation objective of recovering masked cell values is indeed informative on the dataset-specific downstream task. Recall that we do not perform any supervised fine-tuning. It indicates that the models are indeed learning multi-variate dependencies to efficiently recover missing values. That is, despite the model not being trained on the task specifics, the obtained contextualized features show good linear separability with respect to the downstream tasks. In most cases, in particular for HELOC and HIGGS, the contextualized features have more predictive power than the unprocessed ones as shown by the comparison with the linear model. Generally, the results are sub-par compared to a non-optimized XGBoost, which, however, is trained in a supervised fashion. With respect to the backbone size, we see slight improvements with scale: The linear probing performance increases with the amount of backbone parameters, as expected. We do, however, observe that in most cases, in particular with smaller datasets, this increase is saturated already with the Medium model configuration while larger datasets, such as Cover Type and HIGGS, do not show this saturation. This is, to some extent, expected as the amount of training data has to be scaled with increasing backbone parameter counts. This supports our claimed need for cross-table approaches in order to be able to scale tabular models towards a much larger scale. The training loss and imputation accuracies for all trained models are provided in Appendix B.
**Cross-table performance** Generally, we observe a slight increase in performance when using cross-table pretraining, in particular notable in the HELOC and Adult Income datasets. Typically, the updating of the backbone parameters jointly with the training of the tokenizer, again in a self-supervised fashion, tends to perform better than the frozen weights obtained during pretraining, with the exception of the HELOC dataset. Overall, we do not see a strong increase in performance with scale, which indicates that we might be far from optimal dataset sizes to saturate the models and learn meaningful cross-table contextualization patterns within the backbone. On the other hand, we also observe that scaling does not hurt performance, which could indicate that increasing the dataset sizes can lead to improvements. Slight increases can be observed in the HELOC dataset, whereas increased scale actually leads to worse performance in some instances such as the California Housing dataset. Moreover, we see an interestingly steep increase in imputation accuracy during transfer learning on the benchmark datasets, as shown in Figure 4 in the case of the Adult Income dataset and in Appendix B for the remaining ones. This encourages the usage of the proposed cross-table pretrained model as a multivariate imputation system.
Figure 3: Mean 5-fold cross-validation linear probing results for the considered benchmark datasets in the case of single-table as well as cross-table pretraining with frozen and updated backbones (BB). The (supervised) performance of a linear model as well as XGBoost are shown for comparison.
Further, looking more closely at the linear probing performance at several stages during pretraining, which are shown in Appendix B, we do not see systematic improvements with longer pretraining. This is surprising and suggests that the backbone feature processing does not increase in performance with increased pretraining performance. That is, while we see an increase in pretraining imputation accuracy, this does not directly transfer to improvements with respect to the linear separability of the benchmark tasks, unlike our observations in the single-table case. This is an interesting observation that could be caused by a number of reasons, opening several future research directions. First, we note that the cross-table pretraining was limited by our compute budget and that all models, in particular the L and XL variants, show further potential in training as shown in Figure 5. Here, the training loss of the XL model is hardly saturated and we expect further gains with longer training. This is less the case in the single-table training, for which we present the loss curves in Appendix B, which are limited by the individual dataset sizes and saturate much earlier. Second, the approach of using table-specific tokenizers comes at the cost of a comparably large parameter overhead. As previously mentioned, our cross-table pretraining vocabulary contains 66 k tokens and look-up embeddings, resulting in a large number of additional training parameters as detailed in Appendix A. For comparison, GPT-2 uses a ca. 50 k subword vocabulary at a size of 1.5 B parameters and 40 B training tokens, which is orders of magnitude larger than the ones used here. This imbalance of tokenization and backbone parameters could be a reason for the observed behavior. Continued scaling experimentation is required, while keeping the vocabulary size constant, e.g. by using larger pretraining datasets or improving the tokenization efficiency by using a lower-dimensional embedding space combined with a shared upsampling layer. Finally, we do not investigate supervised fine-tuning here. For one, it would be interesting to observe whether pretraining boosts supervised fine-tuning, similar to the results obtained in the XTab framework [51]. Furthermore, using a supervised objective, either in addition to the self-supervised pretraining or for the benchmark dataset transfer, would allow for introducing a learnable CLS token to aggregate the contextualized embeddings in an adaptive way. Currently, our evaluation protocol uses mean pooling across the contextualized row tokens, excluding Pad tokens, for linear probing. This aggregation might smooth out representations with higher predictive performance and is not task-adaptive. However, in the fully self-supervised case, it is not directly possible to introduce global contextualized representations, e.g. via a learnable CLS token.
**Limitations and future work** Our current approach offers several limitations, the most technical of which we previously discussed. In addition, our current evaluation protocol is limited in scope. A comparison across more benchmark datasets as well as supervised and unsupervised baselines, such as boosting or LLM-based approaches, is of interest and we plan to address this in the future. Also, performing a hyperparameter optimization should yield better results for both the considered baselines and the proposed approach, e.g. investigating the dropout and masking ratios in detail. Furthermore, we plan to investigate the cross-table tokenization in detail in future works, for example, the impact of row and table encodings as well as the explicit use of the individual table schemas, for example by using a separate learnable schema embedder. Finally, we argue there is a great need for more elaborate tabular training data in order to scale tabular models towards model sizes comparable to, e.g., GPT-2 as a first step. Similarly, benchmarks tailored to the usage of deep learning models need to be further developed and refined.
## 7 Conclusion
We have presented a novel architecture and training pipeline for cross-table pretraining and conducted scaling experiments that showed first interesting results. Generally, we see an increase in the linear probing accuracy across several benchmark datasets with larger model scales in both the single- and the cross-table case. Whereas models trained in a single-table fashion saturated, we saw slight improvements using cross-table pretraining, which was however limited likely due to a lack of training data or compute resources. We have discussed multiple possible reasons for the observed behavior and interesting further research directions.
|
2310.01438 | Building Flexible, Scalable, and Machine Learning-ready Multimodal
Oncology Datasets | The advancements in data acquisition, storage, and processing techniques have
resulted in the rapid growth of heterogeneous medical data. Integrating
radiological scans, histopathology images, and molecular information with
clinical data is essential for developing a holistic understanding of the
disease and optimizing treatment. The need for integrating data from multiple
sources is further pronounced in complex diseases such as cancer for enabling
precision medicine and personalized treatments. This work proposes Multimodal
Integration of Oncology Data System (MINDS) - a flexible, scalable, and
cost-effective metadata framework for efficiently fusing disparate data from
public sources such as the Cancer Research Data Commons (CRDC) into an
interconnected, patient-centric framework. MINDS offers an interface for
exploring relationships across data types and building cohorts for developing
large-scale multimodal machine learning models. By harmonizing multimodal data,
MINDS aims to potentially empower researchers with greater analytical ability
to uncover diagnostic and prognostic insights and enable evidence-based
personalized care. MINDS tracks granular end-to-end data provenance, ensuring
reproducibility and transparency. The cloud-native architecture of MINDS can
handle exponential data growth in a secure, cost-optimized manner while
ensuring substantial storage optimization, replication avoidance, and dynamic
access capabilities. Auto-scaling, access controls, and other mechanisms
guarantee pipelines' scalability and security. MINDS overcomes the limitations
of existing biomedical data silos via an interoperable metadata-driven approach
that represents a pivotal step toward the future of oncology data integration. | Aakash Tripathi, Asim Waqas, Kavya Venkatesan, Yasin Yilmaz, Ghulam Rasool | 2023-09-30T15:44:39Z | http://arxiv.org/abs/2310.01438v2 | # Building Flexible, Scalable, and Machine Learning-ready Multimodal Oncology Datasets
###### Abstract
The advancements in data acquisition, storage, and processing techniques have resulted in the rapid growth of heterogeneous medical data. Integrating radiological scans, histopathology images, and molecular information with clinical data is essential for developing a holistic understanding of the disease and optimizing treatment. The need for integrating data from multiple sources is further pronounced in complex diseases such as cancer for enabling precision medicine and personalized treatments. This work proposes Multimodal Integration of Oncology Data System (MINDS) - a flexible, scalable, and cost-effective metadata framework for efficiently fusing disparate data from public sources such as the Cancer Research Data Commons (CRDC) into an interconnected, patient-centric framework. MINDS offers an interface for exploring relationships across data types and building cohorts for developing large-scale multimodal machine learning models. By harmonizing multimodal data, MINDS aims to potentially empower researchers with greater analytical ability to uncover diagnostic and prognostic insights and enable evidence-based personalized care. MINDS tracks granular end-to-end data provenance, ensuring reproducibility and transparency. The cloud-native architecture of MINDS can handle exponential data growth in a secure, cost-optimized manner while ensuring substantial storage optimization, replication avoidance, and dynamic access capabilities. Auto-scaling, access controls, and other mechanisms guarantee pipelines' scalability and security. MINDS overcomes the limitations of existing biomedical data silos via an interoperable metadata-driven approach that represents a pivotal step toward the future of oncology data integration.
## 1 Introduction
To gain a deeper insight into patients' health and provide tailored medical care, clinicians routinely gather data from multiple sources, including radiological scans, histopathology studies, laboratory tests, body vitals, and other clinical information. The reliance on multiple data sources for clinical decision-making makes medicine inherently multimodal, where the data modality refers to the form of data, e.g., X-ray is one modality, hematoxylin and eosin (H&E)-stained histopathology image is another, and patient's demographic information is yet another modality. Each modality in such multimodal data may have a different resolution and scale due to its own data collection, recording, or generation process. The data modalities may include (i) -omics information from genome, proteome, transcriptome, epigenome, and microbiome, (ii) radiological images from computed tomography (CT), positron emission tomography (PET), magnetic resonance imaging (MRI), ultrasound scanners or X-ray machines, (iii) digitized histopathology, immunohistochemistry, and immunofluorescence slides created using tissue samples and stored as gigapixel whole slide images (WSI), and (iv) electronic health record (EHR) that houses structured information consisting of demographic data, age, ethnicity, sex, race, smoking history, etc. and unstructured data such as discharge notes or medical reports.
Integrating data from multiple heterogeneous modalities can create a unified, richer view of cancer, potentially more informative and complete than the individual modalities [1]. Multimodal medical data holds great potential to advance our understanding of complex diseases and help develop effective and tailored treatments [2, 3]. The recent growth in machine learning models capable of learning from multimodal data further underlines the importance of collecting, organizing, and harmonizing multimodal data in cancer care [4, 3].
The advent of high-throughput multi-omics technologies like next-generation sequencing (NGS), high-resolution radiological and histopathology imaging, and the rapid digitization of medical records has led to an explosion of diverse, multimodal data [5]. This data deluge has been a boon for machine learning, where abundant training data has directly enabled significant breakthroughs. For example, the rise of large general-purpose datasets like Common Crawl [6] for natural language processing (NLP) has fueled advances in language models and Artificial Intelligence (AI) assistants. One may hope that extensive, standardized, and representative multimodal datasets in the medical domain would provide a fertile ground for developing advanced translational machine learning models. Machine learning thrives on massive, high-quality datasets; however, assembling such resources in healthcare poses unique challenges. First, multimodal medical data is inherently heterogeneous and noisy, spanning structured (demographics, medications, billing codes), semi-structured (physician notes), and unstructured data (medical images). Aggregating such heterogeneous data requires extensive harmonization and manual processing. Second, reliability, robustness, and accuracy are critical for all medical applications. However, real-world clinical data is often incomplete, sparse, and contains errors, which makes building robust and reliable models more challenging. Meticulous quality control and manual curation are essential before using these datasets to train machine learning models. Finally, strict data privacy and security considerations arise in healthcare. The data may contain protected health information (PHI) that must be redacted. Rigorous data de-identification and access control processes are required per the Health Insurance Portability and Accountability Act (HIPAA) [7].
Traditionally, vast amounts of multimodal data are generated during clinical trials and research studies where the raw data undergoes initial processing and quality control by the study's researchers. The data is then transmitted to standardization pipelines such as the National Cancer Institute's (NCI) Center for Cancer Genomics (CCG) Genome Characterization pipeline [8], where the data is systematically annotated, formatted, and quality-controlled before being deposited into centralized biobanks. For example, NGS data from cancer genomic studies is standardized by CCG and deposited into the NCI's Genomic Data Commons (GDC) [9]. However, medical imaging data from the same studies, consisting of CT, MRI, and PET scans, follows a different path and may end up in an imaging archive like The Cancer Imaging Archive (TCIA) [10]. This leads to fragmentation of data across multiple disconnected databases. To address this, integrated data commons like the NCI Cancer Research Data Commons (CRDC) have been proposed [11]. The CRDC aims to link datasets from diverse sources using Findable, Accessible, Interoperable, and Reusable (FAIR) principles to enhance interoperability [12].
However, significant challenges remain in unifying multimodal data dispersed across different repositories with heterogeneous interfaces, formats, and query systems. For example, a researcher studying lung cancer requires integrating clinical, imaging, and genomic data for their cohort across the GDC, TCIA, and other databases. But each has different application programming interfaces (APIs), schemas, and querying methods. Piecing together data manually across these silos is painstakingly difficult. There is a lack of unified interfaces and analytical tools that can work seamlessly across multiple cancer data repositories. This leads to isolated data silos and hampers easy access and integrated multimodal data analysis. To address the limitations and fragmentation of current oncology data systems, we propose a novel solution called the "Multimodal Integration of Oncology Data System", abbreviated as _MINDS_. MINDS is a scalable, cost-effective data lakehouse architecture that can consolidate dispersed multimodal datasets into a unified platform for streamlined analysis. The key objectives of MINDS are fourfold:
1. To integrate siloed data from diverse public sources into a single access point.
2. To implement robust data security and access control while supporting reproducibility.
3. To develop an automated system to accommodate new data continually.
4. To enable efficient, scalable multimodal machine learning.
At the core, MINDS combines the advantages of data lakes and data warehouses to ingest, structure, and analyze large volumes of heterogeneous oncology datasets. The flexible schema of the data lake provides scalable storage for varied data types, including imaging, -omics, and electronic health records. Meanwhile, the warehouse's performance, governance, and extract-transform-load (ETL) capabilities facilitate structured access and analysis. By bringing together disconnected datasets, applying state-of-the-art data integration techniques, and leveraging cloud-native technologies, MINDS aims to overcome key pain points of fragmentation, interoperability, and inefficient analytics workflows. This will ultimately enable translational researchers to leverage multimodal data better for deriving new insights and advancing precision oncology.
The paper is organized as follows. In Section 2, we provide the necessary background information regarding the existing landscape of the multimodal heterogeneous datasets in oncology, from collection and processing to distribution. In Section 3, we delve into the methodology used to build the proposed data lakehouse architecture and discuss the project's technical aspects in detail. In Section 4, we discuss the project implementation results and the study's potential implications on cancer research and clinics. Finally, we conclude in Section 5 with recommendations for future research.
## 2 Background and Literature Review
The rapid growth of biomedical data has created immense opportunities for translational research and significant data management challenges. This section reviews key aspects of the complex landscape of multimodal oncology data, from collection pipelines to traditional biobanks and modern data commons approaches. This background motivates the need for new solutions to effectively consolidate, integrate, and analyze heterogeneous datasets.
### Data Characterization Pipeline
Standardized data characterization pipelines are vital in transforming raw biological samples into usable multimodal datasets. A sample data pipeline for gathering the genomic modality, from the CCG to the GDC [9], is illustrated in Figure 1. The presented pipeline involves several stages, including tissue collection and processing, genome characterization, genomic data analysis, and data sharing and discovery. The NCI has adopted similar pipelines for medical images and proteomics data, which feed the Imaging Data Commons (IDC) [13] and the Proteomics Data Commons (PDC) [14], respectively.
* **Tissue Collection and Processing:** Tissue source sites, which include clinical trials and community oncology groups, collect tumor tissue samples and normal tissue from participating patients. These samples are either formalin-fixed paraffin-embedded (FFPE) tissues or frozen tissue. In CCG, Biospecimen Core Resource (BCR) is responsible for collecting and processing these samples and collecting, harmonizing, and curating clinical data [8].
Figure 1: Genome Characterization Pipeline is illustrated as an example of data characterization. Data source sites collect tumor tissue samples and normal tissue from participating patients. Biospecimen Core Resource (BCR) collects and processes the tissue samples and collects, harmonizes, and curates clinical data. Genome Characterization Centers (GCCs) generate data such as whole genome sequencing, total RNA and microRNA sequencing, methylation arrays, and single-cell sequencing from the tissue samples received from the BCR. At the Genomic Data Analysis stage, the raw data from the previous stage is transformed into meaningful biological information. Data generated by the Genome Characterization Pipeline are made available to the public via the GDC for use by researchers worldwide. The figure is adapted from [8]
* **Genome Characterization:** This stage involves generating data from the collected samples. At CCG, the Genome Characterization Centers (GCCs) generate data from the samples received from the BCR. Each GCC supports distinct genomic or epigenomic pipelines, including whole genome sequencing, total RNA and microRNA sequencing, methylation arrays, and single-cell sequencing [8].
* **Genomic Data Analysis:** The raw data from the previous stage is then transformed into meaningful biological information at this stage. In CCG, the Genomic Data Analysis Network (GDAN) transforms the raw data output from the GCCs into biological insights. The GDAN has a wide range of expertise, from identifying genomic abnormalities to integrating and visualizing multi-omics data [8].
* **Data Sharing and Discovery:** At this stage, the insightful genomic data is processed, shared, and unified at a central location. The NCI's Genomic Data Commons (GDC) harmonizes genomic data by applying a standardized set of data processing protocols and bioinformatic pipelines. The data generated by the Genome Characterization Pipeline are made available to the public via the GDC [8, 9].
### Traditional Data Management - BioBanks
Traditionally, medical data modalities are stored and managed separately in biobanks. These biobanks are the repositories that store biological samples for use in research and by clinicians for reference. Today, such biobanks have become an essential resource in medical and oncological facilities and are frequently used by researchers and clinicians [15]. They provide researchers access to various medical samples and associated clinical and demographic data, which are used to study disease progression, identify biomarkers, and develop new, personalized treatments. However, traditional data management using biobanks has several limitations, enumerated below:
* **Fragmented Data:** One of the main issues is that data from different sources are often stored in separate biobanks, leading to fragmentation of information [16]. This makes integrating and analyzing data across different modalities difficult, limiting the potential for comprehensive, multi-dimensional analysis of patient data [15].
* **Incoherent Data Management:** How data is stored, formatted, and organized often varies significantly across biobanks, even for the same patient. For example, clinical data may be encoded differently, imaging data may use proprietary formats, and terminology can differ across systems. This heterogeneity and lack of unified standards make aggregating and analyzing data across multiple biobanks challenging [15].
* **Data Synchronization:** Patient data stored in separate biobanks tends to go out of sync over time. As patients undergo new tests and treatments, new data is collected and added to different biobank silos in an uncoordinated manner [15]. Piecing together a patient's history timeline requires extensive manual effort to sync disparate records across systems [15].
* **Data Governance:** The increasing prevalence of bio-banking has sparked a profound and extensive discussion regarding the ethical, legal, and social implications (ELSI) of utilizing vast quantities of human biological samples and their associated personal data [17]. Ensuring and safeguarding the fundamental ethical and legal principles concerning research involving human data in Biobanks becomes significantly more intricate and challenging than conducting ethical reviews for specific research projects [17].
### Data Commons
The concept of data commons has emerged to address the challenges faced by biobanks. A data commons is a shared virtual space where researchers can work with and use data from multiple sources. The NCI has developed the CRDC, which integrates different data types, including genomic, proteomic, imaging, and clinical data, into a unified, accessible platform [11]. The CRDC provides researchers access to various data repositories, including the GDC, PDC, and IDC. Each of these repositories hosts a specific data type, and together, they form a comprehensive platform for multimodal data analysis. While the CRDC has made significant strides in integrating diverse data types, it still faces challenges. One of the main issues is the difficulty in harmonizing data from different sources. Due to the differences in data formats, standards, and quality control measures across data sites and modalities, it takes significant effort by the researchers to conform the data to uniform quality standards. The Cancer Data Aggregator (CDA) was developed to address this issue and facilitate data integration and analysis across different data commons. CDA provides an aggregated search interface across major NCI repositories, including the Proteomic, Genomic, and Imaging Data Commons. It allows unified querying of core entities like subjects, research participants, specimens, files, mutations, diagnoses, and treatments. This facilitates access to integrated records across different data types [18].
The CDA has its own limitations, such as static, outdated mappings and the inability to incorporate external repositories. This motivates the need for more robust integrative platforms. The proposed MINDS system aims to overcome these challenges in several key ways:
1. CDA's mapping of the CRDC data is not real-time. For example, as of September 2023, a query for patients whose primary diagnosis site is the lung returns only 4,870 cases, even though 12,267 such cases are present in the GDC data portal. MINDS pulls source data directly from repositories like GDC to ensure real-time, up-to-date mapping of all cases.
2. MINDS is designed as an end-to-end platform for users to build integrated multimodal datasets themselves rather than a fixed service. The open methodology enables full replication of huge multi-source datasets. To this end, anyone can replicate our method to generate an exact copy of the data for over 40,000 public cases on their own infrastructure.
3. MINDS is flexible and incorporates diverse repositories and data sources, not just CRDC resources. Our proposed architecture can integrate new repositories as needed, unlike CDA, which is constrained to CRDC-managed data. For example, the cBioPortal for Cancer Genomics, a widely used platform for exploring, visualizing, and analyzing cancer genomics data, has its own data management and storage system separate from the CDA [19, 20]. This means that data stored in the cBioPortal cannot be directly queried or accessed through the CDA, limiting the potential for integrated data analysis across different platforms.
## 3 Methodology
Managing and analyzing the vast amount of healthcare data through big data processing has become crucial. Big data processing involves various techniques, such as data mining, data management, machine learning, high-performance computing, statistics, and pattern recognition, to extract knowledge from extensive datasets. These datasets possess distinctive characteristics, often called the seven \(V\)s of big data, as explained below [22].
* **Volume** relates to the data size. Handling large volumes of complex data is a significant challenge and holds vast potential. With more data, the models can learn more and perform better.
* **Variety** refers to the data types we deal with. As previously discussed, oncology data vary from structured to semi-structured to unstructured. Each data type presents unique challenges and opportunities.
* **Velocity** considers the speed at which the data is accumulated. Rapid data accumulation poses storage and processing challenges, but it also keeps the learning models current and improves their adaptability.
* **Veracity** concerns the quality and integrity of the data. Ensuring the data is reliable and accurate is crucial to developing effective models. It is not just about collecting a lot of data; it must be credible and high-quality.
* **Value** focuses on the utility and benefits of the data. The ultimate goal of collecting and processing this data is to create user value, improving oncology decision-making and clinical outcomes.
* **Variability** pertains to data volatility, i.e., how the data changes across temporal and spatial domains. Variability in data modalities, views, and resolutions poses a significant challenge to storage, processing, and management.
* **Visualization** depicts insights through visual representations and illustrations. Knowing the data is important for a meaningful, contextual understanding of what the data represents.
The Big Data approach guides data handling strategies. By considering each of these aspects, we can effectively manage oncology data and, in turn, build better, effective models. We use two primary data management systems to facilitate our big data approach: Data Warehouses and Data Lakes.
#### 3.1.1 Data Warehouse
Data warehouses represent a foundational pillar of the big data paradigm that MINDS leverages. These repositories provide a highly structured environment explicitly optimized for analytics, reporting, and deriving data-driven insights across vast information [22]. A data warehouse integrates heterogeneous data from diverse sources into a centralized, well-organized repository to enable proper analysis. By fulfilling this role, data warehouses deliver immense value in informing better decision-making. The process of assembling data into warehouses is called data warehousing. A core concept employed is "schema-on-write", where the warehouse schema is predefined to meet specific analytical needs before data is loaded. This upfront structural optimization makes warehouses ideal for handling structured data. Supervised machine learning workloads thrive in warehouses, as structured, consistent data facilitates training algorithmic models. Moreover, the innate high degree of organization enables fast, efficient querying to uncover trends and patterns through predictive analytics [22]. Overall, by structuring varied data sources into a unified environment purpose-built for analytics, data warehouses provide the backbone for deriving value from big data across many domains.
#### 3.1.2 Data Lake
Complementing warehouses, data lakes provide centralized but low-structure storage to accumulate expansive, heterogeneous data in raw form until needed. In contrast to "schema-on-write," data lakes employ "schema-on-read," only defining structure when data is queried. This provides flexibility to modify analytics on-demand [22]. With their innate tolerance for storing original, unprocessed data, lakes accommodate structured, semi-structured, and unstructured data types. This diversity makes lakes uniquely suited for advanced analytics like machine learning, AI, and natural language processing that leverage raw data complexity. The lack of enforced structure enables rapid scaling to meet growing analytics demands. The dual architectures of data warehouses and data lakes provide structured refinement and raw accommodating capabilities to put big data into action. Lakes aggregate heterogeneous datasets, while warehouses prepare refined data for analysis. This symbiotic combination ultimately enables MINDS to derive maximal value from oncology's multidimensional data landscape.
### Requirements of a Flexible and Scalable Data Management System
To handle the complexities, scales, and heterogeneity in the structure and function of oncology data, the data management system design has to be comprehensive, scalable, and interoperable. The primary goal of this system is to cater to the needs of machine learning engineering, which requires a robust and efficient data management infrastructure to build
accurate and reliable models. We set off with the aim to design and build a data management system with the following requirements in mind:
* **Requirement 1:** Minimize large-scale unstructured data storage whenever possible. This requirement ensures the efficient use of storage resources and allows the user to access the data directly from the data provider.
* **Requirement 2:** The system should be horizontally and vertically scalable. Satisfying this requirement is crucial to handle the increasing volume of oncology data and ensure the system can accommodate data size and complexity growth.
* **Requirement 3:** The system should be interoperable, allowing for the easy integration of new data sources. This is important in oncology, where data is often distributed across various databases and systems.
* **Requirement 4:** The system should track data from the point of ingestion to the point of training. This ensures reproducibility, a key requirement in scientific research and machine learning.
* **Requirement 5:** Incorporate audit checkpoints in the data collection, pre-processing, storage, processing, and analysis stages of the data pipeline. This ensures data integrity, the prime consideration in delivering reliable machine learning outcomes.
### MINDS Architecture
Considering the above-mentioned requirements, we have built a Multimodal Integration of the Oncology Data System (MINDS) using the cloud-based technology of Amazon Web Services (AWS). The cloud-based architecture allows us to scale up or down easily based on the data volume requirements and the required computational resources. It also provides a wide range of tools and services that can be leveraged to build, deploy, and manage a data management system.
MINDS adopts a common two-tier data architecture, a data lake, and a data warehouse [22] to process data and derive meaningful insights efficiently. Figure 2 illustrates the architecture of MINDS, which is divided into three primary stages: (1) Data Acquisition, (2) Data Processing, and (3) Data Serving. By segmenting the process into these three stages, we ensure the multimodal oncology data is efficiently handled while accruing its maximum value.
Figure 3 provides a detailed layout of technical components at each stage using AWS cloud infrastructure and the tools utilized to actualize the system. Definitions of these technical components are summarized in Box 1.
Figure 2: MINDS architecture implements a 3-stage pipeline designed to optimize data aggregation, data preparation, and data serving of multimodal datasets. Stage 1 comprises _data acquisition_ and involves acquiring structured and semi-structured data from sources like GDC, including clinical records and biospecimen metadata. These are gathered, normalized, and securely stored in cloud object storage. Stage 2 consists of _data processing_. The raw data is processed by extract, transform, load (ETL) tools: it is cataloged into the data lake, transformed into structured relational formats, and loaded into optimized data warehouses, generating analysis-ready clinical data. Stage 3 consists of _data serving_. The clinical data is served directly to researchers for preliminary exploration and visualization. They can also build patient cohorts by specifying selection criteria in queries, and MINDS will pull corresponding unstructured data like images from connected repositories, e.g., IDC.
#### 3.3.1 Stage-1: Data Acquisition
**Data sources:** Data acquisition is the first and crucial step in building the MINDS platform. This process involves gathering all publicly available structured and semi-structured data from the data sources. As mentioned earlier, the CRDC and other oncology data management initiatives host vast amounts of patient information, and we use them as the primary data sources for our system. These sources primarily include the three data commons portals, GDC, IDC, and PDC. Additionally, we use the CRDC's Cancer Data Aggregator (CDA) tool to map all the patient information across the commons into one cohesive database. This database then expands to accommodate the patient data stored across other portals, such as the cBioPortal, Xena, and other relevant data sources [19, 20, 23]. It is pertinent to mention that we do not store any unstructured data in MINDS. MINDS pulls the unstructured data from the respective data commons based on the cohort the users want to build and the modalities they require for processing through the portal APIs. Hence, we are not required to store large unstructured data such as gigabyte pathology images in our database.
For the initial version of MINDS, we leverage the GDC as the primary data source due to its comprehensive collection of up-to-date, publicly available oncology data. The GDC portal contains clinical, biospecimen, and molecular data across diverse cancer studies, representing over 86,000 cases spanning 78 projects. The GDC has the most extensive public data holdings out of the three NCI data commons. As of 2023, it hosts over 3 petabytes of genomic and clinical data from NCI programs like The Cancer Genome Atlas (TCGA) and Therapeutically Applicable Research to Generate Effective Treatments (TARGET). The GDC also has a well-designed and detailed data model that structures and connects the clinical, biospecimen, and molecular data domains. The availability of this robust data dictionary and schema metadata makes the ingestion and integration of new GDC datasets simpler and more consistent. Leveraging thousands of richly annotated multi-omic cancer profiles, we can develop integrative and predictive models by utilizing all the public cases in the GDC for MINDS' initial deployment. The breadth of tumor types enables the building of generalized models applicable across different cancers. As the MINDS data repository expands to incorporate more primary sources beyond GDC, the experience of integrating the GDC data provides a solid foundation to build upon. The tooling and ETL workflows developed to ingest and harmonize GDC data can be extended to transform and connect new oncology datasets into the MINDS knowledge system.
**Data Acquisition Process:** We pull all semi-structured and structured data from the GDC data portal for all public cases, including TSV and JSON files containing clinical information (clinical, exposure, family history, follow-up, and pathology detail) and biospecimen metadata (aliquot, analyte, portion, sample, and slide). This data is then uploaded into an Amazon S3 Ingest Bucket [24]. This bucket acts as the staging storage for the data before it is uploaded to the data lake. To orchestrate the full data lake setup, we utilize the AWS Data Lake Formation tool [25], which automates the transformation of the semi-structured data stored in the S3 bucket into a queryable data lake using AWS Glue crawlers to catalog the data and store it in data tables [26]. This process is discussed in further detail in Stage 2 of the system.
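As a rough illustration of this acquisition step, the sketch below pages through the public GDC cases endpoint and stages the returned records in an S3 ingest bucket. It relies on the publicly documented GDC REST API and boto3; the bucket name, field selection, and page size are illustrative assumptions rather than the exact MINDS configuration.

```python
import json
import boto3      # AWS SDK for Python
import requests

GDC_CASES_ENDPOINT = "https://api.gdc.cancer.gov/cases"  # public GDC REST API
INGEST_BUCKET = "minds-ingest-bucket"                     # hypothetical staging bucket

def fetch_open_access_cases(page_size=500):
    """Page through public GDC case records and return them as a list of dicts."""
    params = {
        "fields": "case_id,submitter_id,disease_type,primary_site,demographic.gender",
        "format": "JSON",
        "size": page_size,
        "from": 0,
    }
    cases = []
    while True:
        resp = requests.get(GDC_CASES_ENDPOINT, params=params, timeout=60)
        resp.raise_for_status()
        data = resp.json()["data"]
        cases.extend(data["hits"])
        pagination = data["pagination"]
        if pagination["from"] + pagination["count"] >= pagination["total"]:
            break
        params["from"] += page_size
    return cases

def stage_to_s3(cases, key="gdc/cases.json"):
    """Upload the collected case records to the S3 ingest bucket (Stage 1 staging area)."""
    boto3.client("s3").put_object(
        Bucket=INGEST_BUCKET,
        Key=key,
        Body=json.dumps(cases).encode("utf-8"),
    )

if __name__ == "__main__":
    stage_to_s3(fetch_open_access_cases())
```

In practice the staged objects would be partitioned by data category (clinical, biospecimen, etc.) rather than written to a single key, but the overall pull-then-stage pattern is the same.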
**Seamless Data Updating:** The data acquisition is not a one-time event but a continuous process that must be updated regularly to ensure the data lake is always up-to-date with the latest data. To achieve this, we use AWS Lambda serverless compute [27] to trigger Glue crawlers automatically whenever new data lands in the S3 bucket. This ensures our data lake is always up-to-date with the latest data without explicit manual synchronization. This also helps reduce the data transfer rates because the system updates the data lake only with the delta between the bucket and the data lake. The data acquisition process is designed to be robust and scalable, capable of handling the increasing volume of oncology data. It also ensures the safety and integrity of the data by establishing secure connections to the databases from which data needs to be extracted.
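A minimal sketch of such a trigger is shown below: an AWS Lambda handler that starts the Glue crawler whenever the ingest bucket emits an object-created event. The crawler name is a hypothetical placeholder, and production code would typically add logging and retries.

```python
import boto3

glue = boto3.client("glue")
CRAWLER_NAME = "minds-ingest-crawler"  # hypothetical crawler name

def lambda_handler(event, context):
    """Invoked by S3 ObjectCreated notifications on the ingest bucket;
    starts the Glue crawler so the data lake catalog stays in sync."""
    try:
        glue.start_crawler(Name=CRAWLER_NAME)
    except glue.exceptions.CrawlerRunningException:
        # The crawler is already running for a previous batch; the new
        # objects will be picked up on its next pass.
        pass
    return {"records_received": len(event.get("Records", []))}
```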
#### 3.3.2 Stage 2: Data Processing
**Data Extraction and Transformation to Structured Format:** Once the data is acquired, the next step is to clean, process, and aggregate this data. At this stage, the data is extracted from the data lake, transformed into a more structured format, and loaded into the data warehouse. This is done using Amazon AWS Glue [28], which ensures consistency and compatibility across data types and sources. AWS Glue performs the ETL actions using the AWS Glue crawler [26]. The crawler works in a series of steps to ensure the data is appropriately cataloged and ready for analysis. Figure 4 shows the internal workings of the AWS crawler that ensure the data is properly processed and ready for analysis, making it easier for users to extract valuable insights from the data. The steps involved in the AWS crawler workflow are as follows:
1. **Establish access-controlled database connections:** The crawler first establishes secure connections to the databases from which data needs to be extracted. This ensures the safety and integrity of the data in transit.
2. **Use custom classifiers:** If any custom classifiers are defined, they catalog the data lake and generate the necessary metadata. These classifiers help in identifying the type and structure of the data.
3. **Use built-in classifiers for ETL:** AWS's built-in classifiers perform ETL tasks for the rest of the data. This process involves extracting data from the source, transforming it into a more suitable format, and loading it into the data warehouse.
4. **Merge catalog tables into a database:** The catalog tables created from the previous steps are merged into a single database. During this process, any conflicts in the data are resolved to ensure consistency and deduplication.
5. **Upload catalog to a data store:** Finally, the catalog is uploaded to a data store to be accessed and utilized for analytics. This data store is a central repository where all the processed and cataloged data is stored.
**Data Dictionary, Schema, and Entity Relationships:** GDC provides a robust data dictionary and schema that structures clinical, biospecimen, and molecular data relationships. The GDC data model represents entities as nodes and relationships as edges in a graph database. When ingesting GDC data, the AWS Glue crawler leverages the data model and dictionaries to understand the semantics and properties of each data element. The crawler uses the GDC node schema definitions in YAML files to parse the JSON documents and infer the schema. The GDC case entity is defined with properties like case_id, disease_type, demographic, diagnoses, etc. When the crawler processes a case JSON document from the GDC portal, it maps the JSON properties to columns in a Glue table definition based on the GDC data model. This way, the GDC model's underlying graph structure transforms relationships into a relational view. The Glue crawler output is a table definition in the AWS Glue Data Catalog. Users can directly query and join with other clinical, biospecimen, and genomic tables ingested from GDC. The dictionaries also provide metadata like each property's data types and code lists. When creating data definition language (DDL) for the tables, the crawler leverages
Figure 3: Overview of the MINDS architecture implemented on AWS. **(A)** Data from multiple oncology sources is acquired. The pipeline for structured data is currently configured with GDC, with the ability to integrate other platforms, such as the University of California Santa Cruz Xena and cBIO portals. **(B)** The structured data from the source is acquired in an AWS Lake where multiple components such as S3 Bucket, Glue, and Lambda catalog and process the data. **(C)** Next, the Data Warehouse uses RDS and Redshift for structured data warehousing in the form of relational schema. The cataloged data is available to Athena and Quicksight for analytics and visualization. **(D)** The users can directly query the structured data for visualization. All unstructured data download pipelines using the Data Commons APIs from Cancer Research Data Commons (CRDC) are also shown. Using SQL queries, users can request all modalities data associated with the cohort. Resultantly, all the data from PDC, GDC, and IDC are pulled together, harmonized, formatted, and presented to the user ready to use for machine learning pre-processing.
this to assign appropriate column types, formats, and validations. This helps maintain data integrity and consistency during the transformation process.
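The mapping from the nested GDC document model to a relational view can be illustrated with a small, self-contained sketch. The document below and the flattening logic are simplified, made-up examples and do not reproduce the full GDC schema or the crawler's internal behavior.

```python
def flatten_case(case: dict) -> dict:
    """Flatten a simplified GDC case document into a single relational row.
    Nested entities (demographic, diagnoses) become prefixed columns."""
    first_diagnosis = (case.get("diagnoses") or [{}])[0]
    return {
        "case_id": case["case_id"],
        "disease_type": case.get("disease_type"),
        "demographic_gender": case.get("demographic", {}).get("gender"),
        "diagnosis_primary": first_diagnosis.get("primary_diagnosis"),
    }

# Illustrative (made-up) case document following the GDC structure.
example_case = {
    "case_id": "0000-example-case-id",
    "disease_type": "Adenomas and Adenocarcinomas",
    "demographic": {"gender": "female"},
    "diagnoses": [{"primary_diagnosis": "Adenocarcinoma, NOS"}],
}

row = flatten_case(example_case)  # -> one row in the cataloged table view
```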
**Uploading Data to Warehouse:** The data cataloged by the AWS Glue crawler is loaded into both Amazon RDS and Amazon Redshift for structured data warehousing. Loading the clinical and biospecimen data into RDS MySQL tables allows low-latency queries and efficient updates as new data arrives. However, for analytical and reporting queries scanning large swaths of historical data, Redshift is better suited, as it is a petabyte-scale data warehouse service for high-performance analytics and complex queries [29]. Redshift also enables scaling storage and computing independently. The catalog tables are incrementally loaded into Redshift using copy commands for fast bulk loading. Redshift Spectrum, a feature of Redshift, creates external tables that reference dataset locations in S3. This allows direct SQL querying of exabytes of unstructured data in S3 without loading or transforming the data into tables. Redshift Spectrum enables high-performance analytics directly on raw structured and semi-structured data. The AWS Glue Data Catalog serves as a unified metadata store for tools like Amazon QuickSight and Athena. Athena is a serverless, interactive query service that connects to the underlying data sources and lets users perform complex analyses and gain insights from the diverse data using standard SQL [30].
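As a hedged example of this query path, the snippet below submits a standard SQL statement to Athena against the Glue Data Catalog using boto3; the database, table, and result-bucket names are hypothetical.

```python
import boto3

athena = boto3.client("athena")

# Illustrative aggregate query over a cataloged clinical table.
response = athena.start_query_execution(
    QueryString="SELECT primary_site, COUNT(*) AS n_cases FROM clinical GROUP BY primary_site",
    QueryExecutionContext={"Database": "minds_catalog"},
    ResultConfiguration={"OutputLocation": "s3://minds-athena-results/"},
)
query_execution_id = response["QueryExecutionId"]  # poll get_query_execution / get_query_results with this ID
```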
#### 3.3.3 Stage 3: Data Serving
**Dashboard:** At the data consumption stage, the structured data in the data warehouse is utilized for various purposes. The data consumption process is designed to provide users with an interactive and intuitive interface for exploring, visualizing, and analyzing the data. This is achieved through a dashboard built on Amazon QuickSight [31], a fully managed business intelligence service that enables data visualization and interactive analysis. Users can interact with the dashboard to explore various aspects of the data and identify trends, patterns, or correlations using QuickSight's machine learning-powered insights. Figure 5, shows some visualizations generated from the QuickSight dashboard using data from the warehouse.
**Unstructured Data Download Tools:** MINDS enables users to build focused, multimodal datasets for targeted analysis by combining warehouse-driven cohort queries with automated unstructured data collection. Patient cohorts are defined by querying the RDS database directly or using SQL through the Athena query editor. The case IDs can be extracted from the cohort, and the resulting list of case IDs is used to retrieve all related unstructured data from the GDC, IDC, and PDC portals using their respective API interfaces. As part of the MINDS toolkit, we provide a Python utility that accepts the RDS case ID list as input and programmatically calls the APIs to bulk download images, pathology, -omics, and other files for those specific cases. The downloaded data is organized into a folder structure with a top-level "/raw"
Figure 4: The AWS Glue crawler automates ETL in MINDS through a 5-step workflow. **(1)** Establish secure database connections. **(2)** Apply custom classifiers to catalog raw data. **(3)** Transform data using built-in classifiers. **(4)** Merge classifier outputs into unified databases. **(5)** Upload the final catalog to processed data stores. The proposed workflow extracts, standardizes, and structures heterogeneous multimodal data from diverse sources to enable advanced analytics applications.
Figure 5: QuickSight analytics and visualizations generated using clinical data from MINDS, filtered based on the condition mentioned in each sub-figure. Powered by AWS QuickSight, MINDS can generate visualizations that help researchers understand data attributes and distributions.
folder containing subfolders for each case ID. Each case folder contains the unstructured data objects from GDC, IDC, and PDC for that case. JSON manifest files are also generated to capture metadata like file IDs, types, and sources. This enables easy indexing and querying of the unstructured data extracts.
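The sketch below outlines how such a utility might look, using the public GDC files and data endpoints to retrieve open-access files for a list of case IDs and to write per-case manifests into the "/raw" layout described above. It is a simplified stand-in for the actual MINDS toolkit; retrieval from IDC and PDC, which use different APIs, is omitted.

```python
import json
from pathlib import Path
import requests

GDC_FILES_ENDPOINT = "https://api.gdc.cancer.gov/files"
GDC_DATA_ENDPOINT = "https://api.gdc.cancer.gov/data"

def download_gdc_files_for_cases(case_ids, out_root="raw"):
    """Download open-access GDC files for the given cases into raw/<case_id>/
    and write a JSON manifest per case capturing file IDs, names, types, and source."""
    filters = {"op": "in", "content": {"field": "cases.case_id", "value": list(case_ids)}}
    params = {
        "filters": json.dumps(filters),
        "fields": "file_id,file_name,data_type,cases.case_id",
        "format": "JSON",
        "size": 10000,
    }
    hits = requests.get(GDC_FILES_ENDPOINT, params=params, timeout=120).json()["data"]["hits"]

    manifests = {}
    for record in hits:
        case_id = record["cases"][0]["case_id"]
        case_dir = Path(out_root) / case_id
        case_dir.mkdir(parents=True, exist_ok=True)
        payload = requests.get(f"{GDC_DATA_ENDPOINT}/{record['file_id']}", timeout=600)
        payload.raise_for_status()
        (case_dir / record["file_name"]).write_bytes(payload.content)
        manifests.setdefault(case_id, []).append(
            {"file_id": record["file_id"], "file_name": record["file_name"],
             "data_type": record.get("data_type"), "source": "GDC"}
        )

    for case_id, files in manifests.items():
        (Path(out_root) / case_id / "manifest.json").write_text(json.dumps(files, indent=2))
```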
### Security and Management
**Security in S3 and Data Lake:** Security and management are critical aspects of any data management system. This aspect assumes greater importance when dealing with medical data that must be protected and controlled to ensure privacy. To ensure the security and privacy of the data, we employ several AWS security services and best practices in MINDS. Amazon S3, where our data lake resides, provides robust security capabilities, including bucket policies, access control lists (ACLs), and Identity and Access Management (IAM) policies to manage access to the data. All data is encrypted at rest using AWS Key Management Service (KMS) and in transit using a secure sockets layer (SSL).
**Security in Data Warehouse:** Amazon Redshift, our data warehouse, also provides many security features [29]. It is integrated with AWS IAM, allowing us to manage user resource access. It also includes support for SSL connections to ensure data is securely transported. Redshift also supports data encryption at rest using Key Management Service (KMS) and provides features like a virtual private cloud (VPC), audit logging, and compliance certification [32].
**Security in ETL and Dashboard:** For data processing and ETL tasks, AWS Glue provides several security features [33]. It is integrated with AWS Lake Formation, providing fine-grained, column-level access control. AWS Glue ETL jobs run in a secure, isolated environment, with AWS Glue providing all the necessary resources. In the data consumption stage, Amazon QuickSight uses AWS IAM and AWS Lake Formation for access control, allowing us to define who can access the data and what actions they can perform. QuickSight also supports encryption at rest with AWS KMS and in transit with SSL.
**Monitoring and Audit Logging:** In addition to the above-mentioned security measures, we also employ monitoring and logging using AWS CloudTrail and Amazon CloudWatch [34]. These services provide visibility into user activity and API usage, allowing us to detect unusual or unauthorized activities. This helps build audit trails and trigger security events in case of an undesired action. We also use Amazon RDS Multi-AZ deployments for redundancy, high availability, and failover support for database instances. Multi-AZ creates a primary RDS instance with a synchronous secondary standby instance in another Availability Zone (AZ) for enhanced redundancy and faster failover.
### Backups and Recovery Mechanisms
MINDS leverages AWS services' robust backup, redundancy, and disaster recovery capabilities to maximize system availability and protect against data loss. Amazon S3 buckets are versioned, with all object modifications saved as new versions. This allows restoring to any previous version. Cross-region replication sends object replicas to geographically distant regions to mitigate region-level failures. S3 object lock prevents accidental deletions during a specified retention period. RDS clusters run as Multi-AZ deployments with a standby replica in a secondary AZ for high availability, automatic failover, and fast recovery. Point-in-time restore rolls back to previous database states using retained backups. Database snapshots are stored in S3 for long-term durability. Redshift distributes replicas across nodes for local redundancy. It replicates snapshots and transaction logs to S3 to protect against node failures. Snapshots can restore clusters to any point in time. Combining versioning, redundancy, failover capabilities, and recovery automation, MINDS provides resilience against failures and minimizes disruption. Robust security protects against data loss from malicious events.
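For illustration, enabling the S3 versioning described above is a single boto3 call; the bucket name is hypothetical, and cross-region replication and object lock require additional configuration not shown here.

```python
import boto3

s3 = boto3.client("s3")

# Retain every object modification as a new, restorable version.
s3.put_bucket_versioning(
    Bucket="minds-ingest-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```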
## 4 Results and Discussion
This section presents the results of implementing the proposed MINDS architecture for integrated multimodal oncology data management. We demonstrate MINDS' capabilities in cohort building and data tracking, and present its advantages over current solutions.
### Multimodal Data Consolidation
A fundamental challenge in developing integrated multimodal learning models is assembling the highly heterogeneous and fragmented data from myriad sources into unified datasets at sufficient scale. As shown in Table 1, MINDS directly addresses this by consolidating over 41,000 open-access cancer case profiles spanning diverse research programs into a structured 25.85 MB extract. This aggregated dataset encompasses clinical, molecular, and pathological data elements, providing a multifaceted view of each patient. Compared to petabyte-scale source systems, the extreme compression enables single-node processing and complex SQL analytics that are infeasible on individual repositories. The storage sizes reported for the GDC, PDC, and IDC refer to the total data contained in each repository. However, only a subset of cases in these repositories are open-access and available for research without access restrictions. For example, the GDC contains over 3 petabytes of genomic, imaging, and clinical data overall, but only 17.68 terabytes are associated with open-access cases that can be freely downloaded and analyzed. The 41,499 cases consolidated in MINDS are derived from these open repositories for unencumbered research use.
As shown in Table 2, the consolidated cases further encompass a wide spectrum of research initiatives, enhancing the generalizability of downstream analytical models. For example, the 11,315 cases from The Cancer Genome Atlas provide unmatched high-throughput molecular profiling, while the 18,004 cases from Foundation Medicine offer contemporary genomic assays. Spanning historical and modern cohorts guards against batch effects and chronological biases. This integrated consolidation of multimodal data is indispensable when training machine learning models to uncover hidden patterns. Access to aggregated clinical variables, multiple assay types, and outcomes across diverse patients prevents statistical biases and spurious correlations that arise from learning on isolated datasets. It also provides the large sample sizes needed for deep learning.
By harmonizing dispersed data silos into a unified resource, MINDS effectively addresses the primary bottleneck in large-scale multimodal healthcare machine learning model development: assembling a sufficiently large, heterogeneous, and representative dataset for model training and validation.
| Data Source | Storage Size | # of Cases |
| --- | --- | --- |
| MINDS | 25.85 MB | 41,499 |
| PDC | 36 TB | 3,081 |
| GDC | 3.78 PB (17.68 TB open) | 86,962 |
| IDC | 40.96 TB | 63,788 |

Table 1: Comparison of storage size between MINDS and data commons.
| Program | # of Cases |
| --- | --- |
| Foundation Medicine (FM) | 18,004 |
| The Cancer Genome Atlas (TCGA) | 11,315 |
| Therapeutically Applicable Research to Generate Effective Treatments (TARGET) | 6,542 |
| Clinical Proteomic Tumor Analysis Consortium (CPTAC) | 1,526 |
| Multiple Myeloma Research Foundation (MMRF) | 995 |
| BEATAML1.0 | 756 |
| NCI Center for Cancer Research (NCICCR) | 481 |
| REBC | 440 |
| Cancer Genome Characterization Initiatives (CGCI) | 371 |
| Count Me In (CMI) | 296 |
| Human Cancer Model Initiative (HCMI) | 228 |
| West Coast Prostate Cancer Dream Team (WCDT) | 99 |
| Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) | 87 |
| EXCEPTIONAL RESPONDERS | 84 |
| Oregon Health and Science University (OHSU) | 80 |
| The Molecular Profiling to Predict Response to Treatment (MP2PRT) | 52 |
| Environment And Genetics in Lung Cancer Etiology (EAGLE) | 50 |
| ORGANOID | 49 |
| Clinical Trials Sequencing Project (CTSP) | 44 |

Table 2: Distribution of cases by programs from GDC open cases present in MINDS.
### Cohort Building
Once aggregated data has been consolidated, tailored cohort extraction is needed to develop optimal machine learning training and test sets. Simple random sampling often fails to provide adequate cohort stratification along key variables. MINDS enables researchers to flexibly construct customized cohorts by querying the unified clinical data using performant SQL.
MINDS implements a flexible end-to-end workflow that allows users to submit analytical cohort queries and receive customized structured or unstructured data extracts. Figure 6 provides an overview of the MINDS system and all the data and query interactions with the user. The process begins with users formulating SQL-based queries that specify criteria to define a cohort of interest. These parameterized queries filter over patient attributes and allow the inclusion of any desired clinical, molecular, or demographic factors. For structured data, the submitted SQL query executes against MINDS' consolidated electronic health record database containing harmonized patient profiles. This filtered extraction returns a Pandas data frame containing detailed clinical records for all patients matching the cohort criteria. Alternatively, users can request unstructured data for their defined cohort. In this case, MINDS first extracts a list of unique patient case IDs for those meeting the criteria based on the SQL query parameters. These case IDs are then used to retrieve all associated unstructured medical objects related to those patients from connected repositories. This includes digital pathology slides, medical images like CT/MRI scans, -omics assay files, and other multimodal data assets. This flexible yet automated workflow allows researchers to obtain structured medical records from the electronic health record (EHR) or full multimodal datasets matching customized cohorts simply by submitting analytical SQL queries. The tight integration between cohort definition and data extraction enables the on-demand assembly of tailored data corpora for various biomedical applications.
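A minimal sketch of the structured-data path is shown below: a cohort query expressed in standard SQL and executed against the consolidated RDS database, returning a Pandas data frame. The connection string, table names, and column names are illustrative assumptions, not the exact MINDS schema.

```python
import pandas as pd
import sqlalchemy

# Hypothetical connection to the MINDS RDS (MySQL) instance.
engine = sqlalchemy.create_engine("mysql+pymysql://user:password@minds-rds-host:3306/minds")

# Cohort criteria expressed over the harmonized clinical tables.
cohort_query = """
SELECT c.case_id, c.primary_site, d.gender, dx.primary_diagnosis
FROM clinical AS c
JOIN demographic AS d ON d.case_id = c.case_id
JOIN diagnosis AS dx ON dx.case_id = c.case_id
WHERE c.primary_site = 'Bronchus and lung'
  AND d.gender = 'female'
"""

cohort_df = pd.read_sql(cohort_query, engine)      # structured records for matching patients
case_ids = cohort_df["case_id"].unique().tolist()  # IDs used to pull unstructured data (Figure 6)
```

Adjusting the Boolean logic in the WHERE clause is all that is needed to refine the cohort between iterations.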
Preliminary experiments demonstrate interactive cohort construction, with simple queries on a single clinical factor completed on average in 3-5 seconds. Even multidimensional queries joining clinical, molecular, and outcome data across tables are completed within 15 seconds. This enables rapid, iterative refinement of cohort criteria during model development. Researchers have full flexibility to extract customized sets for training algorithms by simply adjusting Boolean logic combining clinical, molecular, or biospecimen factors in the SQL queries. No system constraints are
Figure 6: Overview of the workflow in MINDS, starting from user query generation through returning the cohort data, structured and unstructured. The system starts with a user submitting an analytical query specifying cohort criteria. If the user requests structured data, the query is sent to a function that executes it against the consolidated electronic health record and clinical databases, returning a Pandas data frame containing matching patient records. Alternatively, if the user requests unstructured data for the cohort, the query is sent to another function that extracts a list of unique case IDs for patients meeting the criteria. This case list is then used to retrieve all associated unstructured data objects like medical images, genomic sequences, and pathology slides for those patients from connected repositories, including GDC, PDC, and IDC. The cohort-specific unstructured data extract is returned to the user for further analysis.
imposed. The ability to interactively construct bespoke cohorts by piping SQL queries directly on consolidated records has several key advantages for multimodal machine learning:
* MINDS allows researchers to build cohorts tailored to the problem. This prevents sampling biases linked to the availability of pre-defined cohorts.
* SQL combines and consolidates disparate clinical, molecular, and outcomes data from the entire period of medical treatment. This provides a complete view of each patient.
* Version IDs uniquely label dataset variants to enable precise tracking of changes during iterative model development. Researchers can pinpoint the exact dataset used to generate each model version.
* JSON manifests comprehensively log the dataset composition, including the originating queries, data sources, and extraction workflows. This provides full documentation of the data provenance.
### Data Tracking and Reproducibility
MINDS further simplifies multimodal analysis by automating the rebuild of full datasets tailored to each cohort. APIs and utilities extract images, -omics, and other unstructured data linked to cohort cases from connected repositories like GDC. Consistent folder organization and JSON manifests document the datasets, leaving them ready for consumption by machine learning models.
To ensure reproducibility, MINDS assigns unique version IDs to cohort datasets. Any changes trigger new versions, enabling precise data tracking to develop different model variants. Comprehensive data provenance from EHR queries to unstructured set regeneration enhances reproducibility in machine learning pipelines.
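The following sketch shows one way such versioning and provenance information could be recorded: a content hash of the originating query serves as the version ID, and a JSON manifest logs the query, data sources, and case list. The exact MINDS versioning scheme is not specified here, so these details are illustrative.

```python
import hashlib
import json
import time
from pathlib import Path

def write_cohort_manifest(cohort_sql, case_ids, out_dir="raw"):
    """Write a JSON manifest documenting a cohort extract for reproducibility."""
    version_id = hashlib.sha256(cohort_sql.encode("utf-8")).hexdigest()[:12]
    manifest = {
        "version_id": version_id,
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "originating_query": cohort_sql,
        "data_sources": ["GDC", "IDC", "PDC"],
        "case_ids": list(case_ids),
    }
    out_path = Path(out_dir) / f"manifest_{version_id}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(manifest, indent=2))
    return out_path
```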
### Integrated Analytics
Once unified datasets have been constructed, interactive analytics and visualizations are needed to explore cohort characteristics, correlations, and model outputs. MINDS delivers rapid analysis over aggregated multimodal data through integrated dashboards powered by Amazon QuickSight. Optimized cloud data warehousing components like Amazon Redshift enable ad-hoc exploration across thousands of variables without performance lags. QuickSight's advanced machine learning-driven insights uncover subtle trends and patterns. User-defined charts visualize model performance metrics across various cohorts. Key advantages of integrated analytics include:
* Rapid hypothesis testing during exploratory analysis to refine cohorts and features.
* Understanding model performance across cohorts reveals generalization capabilities.
* Uncovering correlations between clinical factors, assays, and predictions guides feature engineering.
* Visualizations build trust by providing direct views into model behaviors.
### Limitations and Future Improvements
While MINDS has demonstrated significant benefits, there are several areas where the system could be improved. Including controlled data, a local deployment option, and enhanced analytics and visualization capabilities represent exciting directions for future work on MINDS. These improvements would increase the amount of data available in MINDS and enhance its utility for oncology research.
## 5 Conclusion
MINDS was designed to address the challenges of integrating and managing large volumes of oncology data from diverse sources. MINDS provides a cost-effective and scalable solution for storing and managing oncology data through its innovative cloud technologies and data mapping techniques. It leverages public datasets to ensure reproducibility and enhance machine learning capabilities while providing a clear pathway for including controlled data in the future. Our results demonstrate that MINDS significantly reduces storage size and associated costs compared to traditional data storage methods. MINDS' compatibility with public datasets ensures that no controlled data is leaked while allowing for reproducibility of results. The system also enhances machine learning capabilities by updating patient information as new data is released from clinical trials, providing transparency and reproducibility. |
2309.04423 | Vis-SPLIT: Interactive Hierarchical Modeling for mRNA Expression
Classification | We propose an interactive visual analytics tool, Vis-SPLIT, for partitioning
a population of individuals into groups with similar gene signatures. Vis-SPLIT
allows users to interactively explore a dataset and exploit visual separations
to build a classification model for specific cancers. The visualization
components reveal gene expression and correlation to assist specific
partitioning decisions, while also providing overviews for the decision model
and clustered genetic signatures. We demonstrate the effectiveness of our
framework through a case study and evaluate its usability with domain experts.
Our results show that Vis-SPLIT can classify patients based on their genetic
signatures to effectively gain insights into RNA sequencing data, as compared
to an existing classification system. | Braden Roper, James C. Mathews, Saad Nadeem, Ji Hwan Park | 2023-09-08T16:34:07Z | http://arxiv.org/abs/2309.04423v1 | # Vis-SPLIT: Interactive Hierarchical Modeling for mRNA Expression Classification
###### Abstract
We propose an interactive visual analytics tool, Vis-SPLIT, for partitioning a population of individuals into groups with similar gene signatures. Vis-SPLIT allows users to interactively explore a dataset and exploit visual separations to build a classification model for specific cancers. The visualization components reveal gene expression and correlation to assist specific partitioning decisions, while also providing overviews for the decision model and clustered genetic signatures. We demonstrate the effectiveness of our framework through a case study and evaluate its usability with domain experts. Our results show that Vis-SPLIT can classify patients based on their genetic signatures to effectively gain insights into RNA sequencing data, as compared to an existing classification system.
Human-centered computing - Visualization - Visualization application domains - Visual analytics
## 1 Introduction
RNA-Sequencing (RNA-Seq) generates data about the abundance/expression of RNA molecules. This technique allows the identification of expression patterns that represent different cell states, which can have special diagnostic or prognostic value in cancer research. However, the success of this approach depends on the specificity of the context under consideration, such as normal tissue biology, immunogenic mutational burden, genetic etiology, or specific treatments. The earlier RNA studies required laborious manual assessment of the importance of individual genes in such context, aided by traditional readily-available hierarchical clustering techniques.
To cluster or analyze high-dimensional gene data, dimensionality reduction techniques have been commonly used. There are two
types of dimensionality reduction techniques: linear methods such as Principal Component Analysis (PCA) [20] and non-linear methods such as Multi-Dimensional Scaling (MDS) [16] and t-Stochastic Neighbor Embedding (t-SNE) [33]. While non-linear methods have been used successfully for capturing local distances between data points [2, 10], they may require tools to explain them and rate their trustworthiness to generate better results [3].
Typical instances of RNA-Seq clustering tend to be static, so even relatively minor improvements such as adding or removing a feature in a cluster are prohibitively costly for subsequent reviewers or analysts, significantly limiting the pace of potential improvements via unplanned contributions. For example, the breast tumor RNA expression clusters were discovered using one-shot (model-free) hierarchical clustering and disseminated in static dendrograms and heatmaps [21, 28]. While the clusters have proven to carry a high degree of diagnostic and prognostic value in the ensuing years, progress on refinement of these results was slow. It was 8 years until an actual classifier derived from this discovery was developed, the PAM50 classifier [19], and the corresponding clusters it classifies are essentially unimproved after its proposal. Some machine learning techniques have found some success in improving classifications by training data-driven models [4, 5, 17]. However, these unsupervised, or "black box", techniques often lack transparency in their decisions and do not allow the incorporation of domain knowledge [15].
To address these issues, we propose Vis-SPLIT (Visually Separable Plots formed from Linear, Iterated Technique), an interactive clustering framework that utilizes PCA to present users with easily explainable projections. For our approach, we choose to use PCA due to its computational efficiency, interpretability, and the ability to visualize and interact with its inner workings [26], including the eigenvectors and eigenvalues of each Principal Component (PC). In the proposed method, an analyst applies PCA to a working dataset iteratively to identify increasingly subtle clusters [24]. Obvious partitions are made first, reducing the sizes of working clusters and ultimately revealing underlying patterns.
Vis-SPLIT provides four linked views that are designed to build a classification scheme for RNA-Seq data. Due to its iterative nature, only one projection is given at a time, along with other coordinated views to manipulate the projection into visually separable clusters. The framework enables domain experts to incorporate domain knowledge by exploring and selecting high-dimensional features/genes. We demonstrate the usability of Vis-SPLIT through a case study on an RNA-Seq dataset of breast cancer patients and evaluate the effectiveness of Vis-SPLIT through domain experts' feedback.
## 2 Related Work
Several clustering methods, including hierarchical methods [6] and the \(k\)-means algorithm [29] have been used to classify gene expression, but require tuning parameters and using appropriate methods for measuring similarity.
To address this issue, different interactive clustering methods have been successful when working with genetic data [18, 25, 34]. Van Long et al. [34] presented an interactive, density-based hierarchical clustering method, which can deal with noise in microarray experiments. Mukhopadhyay et al. [18] proposed an interactive multiobjective clustering (IMOC) algorithm, learning from user decisions to refine its results. Seo and Shneiderman [25] developed the Hierarchical Clustering Explorer for interactively exploring and visualizing large microarray data based on a hierarchical clustering method. However, these methods often lack explanation of the clustering process.
Some interactive tools utilize non-linear techniques to provide powerful data exploration features [11, 23, 27]. Somarakis et al. [27] used t-SNE for their two-dimensional embeddings while Hollt et al. [11] used a hierarchical variant, HSNE [22], to organize clusters into a radial hierarchy for exploration. Sabando et al. [23] trained a parametric feed-forward neural network to recreate the effects of t-SNE to be more fitting for their purpose. While t-SNE based methods easily identify neighborhoods and provide separable projections, they do not explain projections in terms of the input features.
For interpretability and efficient clustering, many methods utilize linear dimensional reduction methods such as PCA when working with high-dimensional data. PCA has been used successfully in the analysis of genetic data [12, 31, 32], but often lacks interactive configuration or is implemented and configured manually, targeted to a specific dataset. This limits the flexibility of an approach to be used with different datasets.
There are more generally applicable PCA systems [8, 13] that provide additional insight into a dataset by exposing certain features of the algorithm. iPCA [13] allows the user to modify dimensional contribution and visualize the eigenvectors within the PCs. DimLift [8] utilizes an iterative approach based on Factor Analysis of Mixed Data (FAMD), identifying obvious feature correlations first so that hidden patterns can be uncovered. While useful tools, these methods focus on data and algorithm exploration or hypothesis formation, while Vis-SPLIT allows users to quickly build classification models for gene expression data.
## 3 Design Requirements
RNA-Seq is a biological tissue measurement procedure and raw-data processing technique that ascertains the abundance (or "expression") of RNA transcripts in the sample for each gene. RNA-Seq is now routinely performed on tumor and adjacent tissue samples from cancer patients. The technique of homing in on expression patterns - representing cell states - with special diagnostic or prognostic value in research cohorts has been very successful, provided that the context under consideration is specific enough, e.g. with respect to tumor origin site, metastatic status, immunogenic mutational burden, specific genetic etiology, or specific treatments. However, discovering/classifying expression patterns is challenging due to the presence of high dimensionality [35] and noise [7]. Based on discussions with two biomedical researchers, we identified several design requirements (DRs) of Vis-SPLIT for classifying types of cancer based on genetic expression.
**DR1**. _Provide an overview of the distribution and features of individuals based on clusters:_ Domain experts are interested in the characteristics of each cluster so that they can identify significant features/genes of each cluster/group, explore how common/rare each cluster is, and understand an overall structure of the user-defined classification.
**DR2**. _Identify patterns of activation or inactivation for different genes or gene groups:_ In order to define meaningful clusters, an analyst needs to be able to identify genetic patterns that distinguish some individuals from the others. These patterns must be apparent within the working set of individuals so that the analyst can define classification rules to exploit the pattern's presence.
**DR3**. _Compare the similarities of gene group activation across clusters:_ It is common to analyze genetic distinction in terms of gene groups, or sets of genes whose collective activation is significant for the purposes of classifying and understanding an individual. These gene groups must be apparent and comparable across all clusters to confirm the results of a classification.
**DR4**. _Understand the link between clusters and diseases:_ Specific sub-types or classes of disease can be associated with each cluster, often defined by its highly activated gene groups and/or prior domain knowledge. Domain experts are interested in the survival rates and prognosis of these diseases.
## 4 Framework
As illustrated in Fig. 1, Vis-SPLIT has four main views: (A) the Hierarchical Overview, (B) the Heatmap Overview, (C) the Survival Analysis View, and (D) the PCA View.
### Hierarchical Overview
The Hierarchical Overview (Fig. 1A) shows the entire dataset being partitioned into individual clusters. In the Hierarchical Overview, we can discover the distribution of patients with similar genetic signatures across clusters (**DR1**). This view serves as a visual representation of the classification model being built.
Visually, the plot resembles a top-down Sankey Diagram, with each rectangular node representing a separation in the data defined by user interactions with the PCA view. In other words, the top of the diagram represents the whole dataset, with movement down the tree corresponding to iterative partitions in the data, eventually resulting in final clusters in the leaf nodes.
The width of nodes and bands corresponds to the number of individuals present in that portion of the classification model. Colors are assigned to each cluster as partitions are made, and can be traced down to the leaf node (representing a final cluster) or can be used to see the relative size difference in any parenting super-clusters or the dataset as a whole. A list of features (genes) can be found on each partitioning node, representing the features that were determined to be most different between the resulting clusters of a partition. A given feature \(i\) will be shown here if the following threshold is met:
\[|\mu_{i}^{a}-\mu_{i}^{b}|\geq\sigma_{avg}\]
where \(\mu_{i}^{a}\) and \(\mu_{i}^{b}\) represent the mean values of a feature \(i\) among the two resulting clusters, \(a\) and \(b\), and \(\sigma_{avg}\) represents 1 standard deviation from the mean of all differences in feature values:
\[\sigma_{avg}=\sqrt{\frac{\Sigma(d_{i}-\mu_{d})^{2}}{N}}\]
where \(d_{i}\) is the difference between the average value of feature \(i\) across cluster \(a\) and the average value of feature \(i\) across cluster \(b\), \(\mu_{d}\) is the average of differences \(d_{i}\) for all \(i\) features, and \(N\) is the number of features.
### Heatmap Overview
The Heatmap Overview (Fig. 1B) shows an overview of all individuals and their genetic signatures. This allows comparison of activation in different gene groups across all clusters (**DR3**).
Every individual is shown as a column, and every feature is represented as a row. The color of each bar encodes a gene expression value, blue (negative values) to yellow (zero) to red (positive values). As partitions are made in the data, this heatmap will reorganize individuals into separable vertical bands, corresponding to the cluster bands of the Hierarchical Overview. The corresponding cluster color is displayed at the top of each cluster in the Heatmap Overview. Additionally, any features that are identified as "important" for a given partition in the Hierarchical Overview will be grouped into a horizontal band, ordered internally by the highest expression.
### Survival Analysis View
The Survival Analysis view (Fig. 1C) depicts the relationship between formed clusters and diseases (**DR4**). This view provides a summary of probabilities that individuals for each cluster will survive up to a specific time. Kaplan-Meier analysis [14] is applied to the current clusters, and a curve is shown for each, colored to match that cluster's band in the Hierarchical Overview. A baseline curve is also shown as a dotted gray line, representing the dataset as a whole.
### PCA View
The PCA View (Fig. 1D) is designed to display activation patterns for different groups of genes (**DR2**). In the PCA View, an analyst can identify feature trends and make meaningful partitions in the data. We use the same color encoding as the Heatmap Overview.
#### 4.4.1 Projection
The Projection (Fig. 1-D.1) shows the current PCA projection. The projection will only include individuals from the selected node in the Hierarchical Overview and will utilize all features (genes) by default, though a limited feature space will be used if any features have been specifically selected. An analyst can choose any two Principal Components (PCs) for the Projection's axes, and can explore the configurations that utilize the most distinguishing features. Each individual is represented as a point and is placed relative to the selected PCs. In the Projection, an analyst can draw a divider line to partition the data based on an observed visual separation. Color can also be encoded on points based on selected features. This is done by averaging selected feature values for each individual and coloring them using the same blue-yellow-red color scale used in all of Vis-SPLIT's heatmaps. Additionally, if there is any existing classification for the dataset, it can be viewed for comparison by toggling an overlay showing categorical colors and a legend.
#### 4.4.2 Heatmaps
For both selected PCs in the Projection, a heatmap is aligned, seen in Fig. 1-D.2 and Fig. 1-D.3. Each heatmap is divided into bins that encapsulate the aligned points in the Projection. For the bottom heatmap, each row indicates a feature/gene and each column represents a bin containing all individuals stretching upwards into the scatterplot. Features are sorted based on values within the PC's eigenvector, prioritizing the most contributing genes for a given projection. For the right heatmap, the rows and columns are swapped.
Each cell is colored based on the average value of the given feature for the contained individuals within the given heatmap bin. Heatmap bins that do not hold any individuals will instead show gray outlined boxes for each feature.
Feature names are listed to the sides of the heatmap along with their respective values in the heatmap's PC. Features can be selected by clicking these labels, which updates the global feature selection across all the visual components in the PCA View. In the case that the user has selected any features, the selected features' labels remain black while any others are displayed in gray.
#### 4.4.3 Feature Loadings
The Feature Loadings plot (Fig. 1-D.4) shows the influence of each feature along the selected PCs in the Projection [9]. Each vector represents a feature with its length and direction indicating its influence in the Projection. A similar vector direction of features can indicate a correlation between them. Features' labels are placed along an outside circle to reduce visual clutter. Additionally, spacing forces are applied to the labels to reduce overlap. An analyst can select features by clicking their circle or label or by brushing. Unselected features and their vectors are grayed out and, like with the aligned heatmaps, any selection of features is global to the PCA View.
## 5 Case Study
We demonstrate a classification case of our workflow to group patients with breast cancer. In this case, we used the PanCancer Atlas breast cancer dataset [30]. This dataset tracks 50 genetic markers across 1082 individuals, which have been previously classified through the PAM50 test [36] as BRCA_LumA, BRCA_LumB, BRCA_Basal, BRCA_Her2, BRCA_Normal, or none.
To classify the dataset better than the PAM50, first, an analyst wants to divide the entire dataset into two parts through the Projection (Fig. 2A). To examine the features/genes of each group in detail, the analyst explores a heatmap for the second PC, where a few genes seem to be much more positive towards the top of the scatterplot: ERBB2, GRB7 and CDC6 (**DR2**). Additionally, the Feature Loadings plot (Fig. 2B) shows these features have a much stronger influence than the other features.
The analyst selects these seemingly correlated features from the Feature Loadings plot and encodes color to the Projection, confirming that individuals in the top part of the Projection have high expressions for these genes (Fig. 1A). Using their domain knowledge, the analyst knows these features to correlate to HER2 positive breast cancers, having higher levels of HER2 protein. Based on this finding, the analyst divides the dataset into two clusters.
The width of vertical bands in the Hierarchical and Heatmap Overviews reveals that about 10% of individuals fall within the new HER2 group (**DR1**). The analyst can also see that the three significant genes they identified earlier have been separated and raised to the top of the Heatmap Overview, highlighting the major difference between these two clusters' genetic signatures (**DR3**). The analyst also notes that the HER2 group has a flatter survival curve in the Survival Analysis View, from which hypotheses may be formed about the prognosis of disease found within these individuals (**DR4**).
Next, to further classify individuals who don't belong to the HER2 group, the analyst chooses the larger leaf node in the Hierarchy Overview. Again a separation is noticeable in the Projection, with one cluster near the bottom-left and another near the top-right (Fig. 1D). The analyst selects all features pointing towards these directions, and runs PCA again. The new projection is made using only about half of the total features, making it easier to focus on and linearly divide the clusters previously observed.
The analyst continues to look for feature patterns and splits the data until they can no longer make any meaningful partitions. Once the analyst has completed their interactions, they review the resulting model in the Overviews and Survival Analysis View. The produced Vis-SPLIT classification can then be compared with that of the PAM50 test (Fig. 3). While both heatmaps reveal cohesive genetic signatures across four distinct clusters, Vis-SPLIT classifies all examples, whereas the PAM50 test leaves some individuals within the less descriptive classifications of "BRCA_Normal" and "none". Though some of these previously unclassified individuals may be important to recognize as outliers, many have genetic signatures which strongly align with a formed cluster. Additionally, some genetic patterns are more clearly represented within Vis-SPLIT clusters. One example of these patterns is the top feature group within the HER2 positive clusters, which contains more overwhelmingly positive values for the three identified genes (Fig. 2A).
### Expert Feedback
Our Vis-SPLIT was reviewed by two biomedical researchers to evaluate the usefulness of the proposed framework. After domain experts were given time to use Vis-SPLIT, along with some guidance on its use, they provided overall positive feedback, claiming that the tool was intuitive and produced an easily interpretable model. They appreciated how fast it was to discover clusters and build a model, noting on first look that it "truly felt most sensible to split first [based on the HER2 group], because it was the most salient [within the Projection]." They liked using the Projection to extract clusters such as this one and one resembling PAM50's Basal group, and enjoyed the explanations provided by the aligned heatmaps and Loadings plot. For less significant distinctions, such as that between the orange and blue clusters from Fig. 3, the domain experts favored patterns seen in the aligned heatmaps and Loadings plot, which revealed a group of genes that spanned continuously from low expression to high expression across the cluster.
The experts also liked the ability to overlay existing classifications directly on their projections. When viewing PAM50 classes through this option, one expert noticed that some individuals seemed misclassified as HER2 by the PAM50 system. They reached this conclusion by encoding the genes known to correlate with this cancer onto the Projection, observing low representation on specific individuals and then overlaying the PAM50 classification. They felt it was useful to be able to identify these inconsistencies in the genetic expressions within PAM50 classes and were more confident that the classifications made through Vis-SPLIT better grouped similar genetic signatures.
The experts did note that the system did not provide any statistical analysis enabling a more in-depth comparison of survival curves. Additionally, they mentioned a desire for more robust navigation in the Hierarchical Overview to return to previous states of the application, essentially being able to "undo" and/or review partitioning decisions.
## 6 Conclusion
We presented Vis-SPLIT, a novel framework for interactively clustering RNA-Seq datasets. We provide several techniques to cluster similar genes and identify groups of individuals linked to the subtypes of a disease. Finally, a case study and accompanying expert feedback is presented to demonstrate the tool's use.
There are several limitations of our current framework, which will be addressed in the future. First, although Vis-SPLIT has analyzed and visualized up to \(2,000\) individuals, larger datasets cause the Heatmap Overview to become increasingly crowded and lose the ability to identify individuals, especially once less than a pixel is dedicated to each. The addition of other summarization methods [1] and a zoom feature can alleviate this issue. With more features, another limitation is seen in the Loadings Plot, where feature labels may stray further from the direction of their corresponding vector. Finally, the approach could be strengthened with measurements of certainty for an example's classification within the model.
Vis-SPLIT can assist cancer researchers in the exploration of data to build new models and compare them with existing baselines. Outputted models can be used directly or can be summarized into simpler decision trees based on significant gene expressions.
Figure 3: A comparison of classification from the resulting Heatmap Overview of the PAM50 (top) and Vis-SPLIT (bottom), and (A) some regions where the difference is visible.
Figure 2: An example of (A) identifying a gene partition based on (B) selected features in the Feature Loadings plot. |
2310.00040 | Recovering the gravitational potential in a rotating frame: Deep
Potential applied to a simulated barred galaxy | Stellar kinematics provide a window into the gravitational field, and
therefore into the distribution of all mass, including dark matter. Deep
Potential is a method for determining the gravitational potential from a
snapshot of stellar positions in phase space, using mathematical tools borrowed
from deep learning to model the distribution function and solve the
Collisionless Boltzmann Equation. In this work, we extend the Deep Potential
method to rotating systems, and then demonstrate that it can accurately recover
the gravitational potential, density distribution and pattern speed of a
simulated barred disc galaxy, using only a frozen snapshot of the stellar
velocities. We demonstrate that we are able to recover the bar pattern speed to
within 15% in our simulated galaxy using stars in a 4 kpc sub-volume centered
on a Solar-like position, and to within 20% in a 2 kpc sub-volume. In addition,
by subtracting the mock "observed" stellar density from the recovered total
density, we are able to infer the radial profile of the dark matter density in
our simulated galaxy. This extension of Deep Potential is an important step in
allowing its application to the Milky Way, which has rotating features, such as
a central bar and spiral arms, and may moreover provide a new method of
determining the pattern speed of the Milky Way bar. | Taavet Kalda, Gregory M. Green, Soumavo Ghosh | 2023-09-29T18:00:00Z | http://arxiv.org/abs/2310.00040v1 | Recovering the gravitational potential in a rotating frame: _Deep Potential_ applied to a simulated barred galaxy
###### Abstract
Stellar kinematics provide a window into the gravitational field, and therefore into the distribution of all mass, including dark matter. _Deep Potential_ is a method for determining the gravitational potential from a snapshot of stellar positions in phase space, using mathematical tools borrowed from deep learning to model the distribution function and solve the Collisionless Boltzmann Equation. In this work, we extend the _Deep Potential_ method to rotating systems, and then demonstrate that it can accurately recover the gravitational potential, density distribution and pattern speed of a simulated barred disc galaxy, using only a frozen snapshot of the stellar velocities. We demonstrate that we are able to recover the bar pattern speed to within 15 % in our simulated galaxy using stars in a 4 kpc sub-volume centered on a Solar-like position, and to within 20 % in a 2 kpc sub-volume. In addition, by subtracting the mock "observed" stellar density from the recovered total density, we are able to infer the radial profile of the dark matter density in our simulated galaxy. This extension of _Deep Potential_ is an important step in allowing its application to the Milky Way, which has rotating features, such as a central bar and spiral arms, and may moreover provide a new method of determining the pattern speed of the Milky Way bar.
keywords: Galaxy: disc - Galaxy: kinematics and dynamics - Galaxy: structure - galaxies: spiral - galaxies: kinematics and dynamics
## 1 Introduction
One of the major goals of Milky Way dynamics is the recovery of the gravitational potential. This is because the gravitational potential is sourced by all forms of matter, both baryonic and dark. As far as is known at present, the dark component can only be mapped by its gravitational effects (direct or indirect). Recovering the gravitational potential is thus important for mapping the dark matter distribution in the Milky Way. Motivated by the ongoing _Gaia_ mission, which is providing six-dimensional phase-space information for tens of millions of stars (Gaia Collaboration et al., 2016, 2023), we tackle this problem using a data-driven method based on mathematical tools from deep learning, which we term "_Deep Potential_" (Green & Ting, 2020; Green et al., 2023).
The dynamics of the trajectories of the stars in the Milky Way are dominated by gravitational forces. While we are able to measure the positions and velocities of the stars, stellar accelerations due to the Milky Way potential - typically on the order of \(1\,\mathrm{cm\,s^{-1}\,yr^{-1}}-\) are exceedingly difficult to directly measure with present-day instruments (Silverwood & Easther, 2019). Some systems do lend themselves to acceleration measurements, but they are either in the presence of strong accelerations or exotic systems such as pulsars or eclipsing binaries (Ghez et al., 2000; Chakrabarti et al., 2021; Phillips et al., 2021).
This means we are effectively limited to a single snapshot of the positions and velocities of the stars in the Milky Way. We describe this snapshot via its distribution function, \(f(\vec{x},\vec{v})\), which gives the density of stars in phase space (position and velocity). However, unless additional assumptions are made about the nature of a gravitational system, any gravitational potential is consistent with any snapshot of the distribution function, as the gravitational potential only determines the evolution of the system. To connect the distribution function to the potential, one typically assumes the system to be in a steady state, in which the distribution function does not vary with time. In this work, for the first time, we weaken this assumption and require only that stationarity hold in some arbitrarily rotating frame. We achieve this by concurrently finding both the potential and the rotation parameters which render the distribution function most stationary. This is important because many physical systems of interest, such as the Milky Way, contain rotating features. The Milky Way harbors a central bar (e.g., Liszt & Burton, 1980; Binney et al., 1991; Weinberg, 1992; Binney et al., 1997; Blitz & Spergel, 1991; Hammersley et al., 2000; Wegg & Gerhard, 2013; Shen & Zheng, 2020) and spiral arms (e.g., Oort et al., 1958; Georgelin & Georgelin, 1976; Gerhard, 2002; Churchwell et al., 2009; Reid et al., 2014), both of which break the standard, non-rotating stationarity assumption. The properties of the Milky Way bar are still not fully determined, with the two leading models being a slowly rotating long bar or a fast-rotating short bar (e.g., Clarke & Gerhard, 2022). The method we present in this paper should be sensitive to the pattern speed of the Milky Way bar or local spiral features, and could help determine whether the bar has a slow or fast pattern speed.
Existing methods for modeling the dynamics typically rely on taking velocity moments of the Collisionless Boltzmann Equation via
Jeans modeling (Binney and Tremaine, 2008) or rely on simplified models for the distribution function (e.g., Schwarzschild, 1979; Syer and Tremaine, 1996; McMillan and Binney, 2008; Bovy and Rix, 2013; Magorrian, 2014). This has been historically motivated by either the lack of or small quantities of available full six-dimensional phase-space data, but also by the fact that the Milky Way is, by and large, axisymmetric or has other symmetries that lend themselves to the aforementioned modeling approaches. However, with the large quantitative and qualitative improvements brought by _Gaia_, capturing the full complexity of the stellar kinematic data - including non-axisymmetric structures - has become more feasible. _Deep Potential_ goes beyond simple parametric models and borrows several techniques from the realm of deep learning. With the methodological improvements developed in this work, _Deep Potential_ makes the following minimal assumptions about the underlying physics of the kinematic system:
1. The motions of stars are guided by a gravitational potential \(\Phi(\vec{x})\).
2. We observe the phase-space coordinates of stars (the "kinematic tracers") that are statistically stationary in a rotating frame.
3. The overall matter density is non-negative everywhere: \(\rho(\vec{x})\geq 0\). We can express this via gravitational potential using the Poisson equation as \(\rho(\vec{x})=\nabla^{2}\Phi/(4\pi G)\geq 0\).
Notably, we do not assume that the gravitational potential is sourced by the observed kinematic tracers, as other matter components (_e.g._, dark matter) can contribute to the potential.
Our previous work on this method outlined its theoretical motivations and demonstrated its effectiveness on simpler toy models with observational errors and non-stationarities (Green and Ting, 2020; Green et al., 2023). These toy models involved drawing particles from analytic models of stationary systems or evolving a set of tracer particles in a fixed background potential. In this work, we move further and demonstrate the method on a self-consistent \(N\)-body simulation of a disc galaxy with a prominent central bar, and we further relax the assumption that the distribution function be stationary in the observed, "laboratory" frame of reference. Furthermore, recent work by Ghosh et al. (2023) demonstrated that the presence of a prominent bar produces systematic biases in recovering the underlying distribution function and gravitational potential when using action-based dynamical modeling.
Similar methods to _Deep Potential_, using normalizing flows to represent the stellar distribution function and then determining the gravitational potential by assuming stationarity, have been developed by An et al. (2021), Naik et al. (2022), and Buckley et al. (2023, 20). Lim et al. (2023) recently applied normalizing-flow-based modeling to a sample of Milky Way stars in order to estimate the local dark matter density. The major qualitative addition made by our present work over previous methods is the relaxation of the stationarity assumption, to hold in an arbitrarily rotating frame. This is an important advance in order to accurately model the Milky Way, which harbors rotating features such as a central bar.
In this paper, we describe and expand on the methodology of _Deep Potential_ (Section 2), outline the \(N\)-body simulation and the selection of the dataset (Section 3), test the performance of the method (Sections 4 and 5) and discuss future prospects (Section 6).
## 2 Method
This work builds on the "_Deep Potential_" method, which is explained in Green and Ting (2020) and Green et al. (2023). In this work, we extend _Deep Potential_ to allow for concurrent fitting of both the gravitational potential and the rotating frame in which the system appears most stationary. In a barred galaxy, for example, this could correspond to a frame rotating at the pattern speed of the bar. Here, we briefly review the key components of the _Deep Potential_ method and then derive a generalized stationarity assumption in a rotating frame.
The first assumption of _Deep Potential_ is that stars orbit in a background gravitational potential, \(\Phi(\vec{x})\). The density of an ensemble of stars in six-dimensional phase space (position \(\vec{x}\) and velocity \(\vec{v}\)) is referred to as the distribution function, \(f(\vec{x},\vec{v})\). The evolution of the distribution function is described by the Collisionless Boltzmann Equation:
\[\frac{\mathrm{d}f}{\mathrm{d}t}=\frac{\partial f}{\partial t}+\sum_{i}\left(v_ {i}\frac{\partial f}{\partial x_{i}}-\frac{\partial\Phi}{\partial x_{i}}\frac{ \partial f}{\partial v_{i}}\right)=0. \tag{1}\]
Our second assumption is that the distribution is stationary. In previous work, we assumed that stationarity holds in the laboratory frame (_i.e._, the frame in which the positions and velocities of the stars are measured, such as the barycenter of the Solar System): \(\partial f/\partial t=0\). However, disc galaxies typically have rotating features, such as bars and spiral arms. In a barred galaxy, for example, it would be reasonable to assume that the galaxy would appear more stationary in a frame that co-rotates with the bar, instead of in an inertial frame. We therefore generalize the stationarity condition to a frame that is rotating with angular speed \(\vec{\Omega}\) around an axis passing through a point \(\vec{x}_{0}\) in space. We additionally allow the stationary frame to move with constant velocity \(\vec{v}_{0}\) relative to the laboratory frame. In the Milky Way, \(\vec{x}_{0}\) and \(\vec{v}_{0}\) could represent the location and velocity of the Galactic Center relative to the Solar System, and \(\vec{\Omega}\) (directed along the rotation axis of the Galaxy) could represent the pattern speed of either the central bar or of the spiral arms. Here, we work out the stationarity condition for the general case when \(\vec{x}_{0}\) and \(\vec{v}_{0}\) are nonzero (for real observations, the location of the rotation axis and the velocity of the center of the system are not always zero). In general, the parameters describing the stationary frame can either be fixed, or can be determined concurrently with the gravitational potential. Though we derive the stationarity condition in full generality, in the numerical experiments in this work, we will later fix \(\vec{x}_{0}\) and \(\vec{v}_{0}\) to zero.
In the following, we denote the partial derivative of the distribution function w.r.t. time _in the rotating frame_ as \((\partial f/\partial t)_{\Omega}\). The generalized stationarity condition states that this partial derivative should equal zero. By translating the rotating-frame partial derivative to partial derivatives in the laboratory frame, we obtain our generalized stationarity condition in terms of laboratory-frame quantities:
\[\left(\frac{\partial f}{\partial t}\right)_{\Omega}=\frac{\partial f}{ \partial t}+\sum_{i}\left(u_{i}(\vec{x})\frac{\partial f}{\partial x_{i}}+w_{ i}(\vec{v})\frac{\partial f}{\partial v_{i}}\right)=0, \tag{2}\]
\[\vec{u}(\vec{x})=\vec{\Omega}\times(\vec{x}-\vec{x}_{0})+\vec{v}_{0}, \tag{3}\]
\[\vec{w}(\vec{v})=\vec{\Omega}\times(\vec{v}-\vec{v}_{0}). \tag{4}\]
For a full derivation of this transformation, see Appendix A. Combining this with the Collisionless Boltzmann Equation, we arrive at
\[\left(\frac{\partial f}{\partial t}\right)_{\Omega}=\sum_{i}\left[(u_{i}-v_{i}) \frac{\partial f}{\partial x_{i}}+\left(\frac{\partial\Phi}{\partial x_{i}}+ w_{i}\right)\frac{\partial f}{\partial v_{i}}\right]=0. \tag{5}\]
If the distribution function is truly stationary, the gravitational potential can be uniquely determined by solving Eq. (5). Realistic physical systems will, however, not be completely stationary, and as such, there may not exist any potential which would render the system
stationary (See An et al., 2021 and Green et al., 2023 for discussion). In general, therefore, _Deep Potential_ recovers the potential which minimizes some measure (to be discussed below) of the total non-stationarity in the system. Note that we do not assume that the gravitational potential is sourced by the observed stellar population alone. Accordingly, we do not impose the condition
\[\nabla^{2}\Phi(\vec{x})=4\pi G\int f(\vec{x},\vec{v})\;\mathrm{d}^{3}\vec{v}\;. \tag{6}\]
### Modeling the distribution function
In practice, when we observe stellar populations, we obtain a discrete sample of points in phase space, rather than a smooth distribution function \(f(\vec{x},\vec{v})\). In order to obtain gradients of the underlying distribution function, we require a continuous, differentiable object representing \(f(\vec{x},\vec{v})\). For this purpose, we use normalizing flows, which are a class of algorithms used for density estimation in unsupervised machine learning (for a review, see Kobyzev et al., 2019). A normalizing flow works by learning a set of invertible coordinate transformations that turn a simple distribution, usually a normal distribution, into a more complex one that fits the observed data. The complexity of the distributions it can capture is limited by the number of parameters describing the coordinate transformations. There are many approaches for constructing normalizing flows, and the field is in constant development. In this work, we opt to use FFJORD (Grathwohl et al., 2018), though the particular choice of normalizing flow method is not critical to the working of _Deep Potential_. The main drawback of normalizing flows is that most implementations assume the training data to be continuous everywhere. This, however, is not a significant problem, as most stellar systems exhibit the same property.
Given a sample of \(n\) stars with positions \(\vec{x}_{i}\) and velocities \(\vec{v}_{i}\), we train a normalizing flow \(f(\vec{x},\vec{v})\) using stochastic gradient descent to obtain the parameters of the flow that maximize the log-likelihood
\[L_{f}=\sum_{i=1}^{n}\ln f\left(\vec{x}_{i},\vec{v}_{i}\right)\;. \tag{7}\]
The loss is supplemented with Jacobian and kinetic regularization (Finlay et al., 2020), as detailed in Section 2.3. When doing subsequent analysis, we also multiply the output of the normalizing flow by \(n\). This is because normalizing flows are normalized to one, but we are interested in the output being the number density of the training data. The great advantage of using a normalizing flow is that our representation is both highly flexible and auto-differentiable. When implemented in a standard deep-learning framework, such as TensorFlow (Abadi et al., 2015), PyTorch (Paszke et al., 2019) or JAX (Bradbury et al., 2018), it is possible to automatically differentiate the distribution function at arbitrary points in phase space, in order to obtain the terms \(\partial f/\partial\vec{x}\) and \(\partial f/\partial\vec{v}\) in the Collisionless Boltzmann Equation.
### Modeling the gravitational potential
After learning the distribution function, we find the gravitational potential \(\Phi(\vec{x})\) and angular rotation speed \(\Omega\) that best satisfy the Collisionless Boltzmann Equation and generalized stationarity assumption given in Eq. (5). To parameterize the gravitational potential, we use a feed-forward neural network which takes as input a three-dimensional vector \(\vec{x}\) and outputs a scalar, \(\Phi\). We concurrently train the parameters of the potential and \(\Omega\) to minimize
\[L_{\Phi}=\int\mathcal{L}\left[\left(\frac{\partial f(\vec{x},\vec{v})}{\partial t}\right)_{\Omega},\nabla^{2}\Phi(\vec{x})\right]f(\vec{x},\vec{v})\;\mathrm{d}^{3}\vec{x}\;\mathrm{d}^{3}\vec{v}\;, \tag{8}\]
where \(\mathcal{L}\) is the differential contribution to the loss of an individual point in phase space, given by
\[\mathcal{L}=\sinh^{-1}\left[\alpha\left|\left(\frac{\partial f}{\partial t} \right)_{\Omega}\right|\right]+\lambda\sinh^{-1}\left(\beta\max\left\{-\nabla ^{2}\Phi,0\right\}\right)\;. \tag{9}\]
The first term penalizes non-stationarity in a frame rotating with angular speed \(\Omega\) while the second term penalizes negative mass densities. We first take the absolute value of \((\partial f/\partial t)_{\Omega}\), in order to penalize positive and negative changes in the phase-space density equally. The inverse hyperbolic sine function down-weights large values, while the constant \(\alpha\) sets the level of non-stationarity at which our penalty transitions from being approximately linear to being approximately logarithmic. The loss is supplemented with \(\ell_{2}\) regularization based on the neural-network weights that describe \(\Phi(\vec{x})\). The integral in Eq. (8) is computationally expensive to evaluate directly, but can be approximated by averaging \(\mathcal{L}\) over \(m\) samples drawn from the distribution function, where \(m\) is a sufficiently large number:
\[L_{\Phi}\approx\frac{n}{m}\sum_{i=1}^{m}\mathcal{L}\left[\left(\frac{ \partial f(\vec{x}_{i},\vec{v}_{i})}{\partial t}\right)_{\Omega},\nabla^{2} \Phi(\vec{x}_{i})\right]\;. \tag{10}\]
The constant \(n/m\) comes from the normalization of the distribution function, and can be omitted when implementing the loss function.
### Implementation
In this paper, we implement _Deep Potential_ in TensorFlow 2 (Abadi et al., 2015) and using TensorFlow Distributions (Dillon et al., 2017). All of our code is publicly available under a permissive license that allows reuse and modification with attribution, both in archived form at [https://doi.org/10.5281/zenodo.8390759](https://doi.org/10.5281/zenodo.8390759) and in active development at [https://github.com/gregreen/deep-potential](https://github.com/gregreen/deep-potential).
To represent the distribution function, we use a chain of three FFJORD normalizing flows, each with eight or twenty densely connected hidden layers (depending on the system) of hidden_size neurons (in this paper, we use hidden_size = 128 or 256 for different systems) and a tanh activation function. For our base distribution, we use a multivariate Gaussian distribution with mean and variance along each dimension set to match the training data set. During training, we impose Jacobian and kinetic regularization with strength \(5\times 10^{-4}\) (Finlay et al., 2020), which penalizes overly complex flow models and tends to reduce training time. We train our flows using the rectified Adam optimizer (Liu et al., 2019), with a batch size of \(2^{13}\) (8192). We find that this relatively large batch size leads to faster convergence (in wall time) than more typical, smaller batch sizes. We begin the training with a "warm-up" phase that lasts 2048 steps, in which the learning rate linearly increases from 0 to 0.001. Thereafter, we use a constant learning rate. We decrease the learning rate by a factor of two whenever the training loss fails to decrease 0.01 below its previous minimum for 512 consecutive steps (this period is termed the "patience"). We terminate the training when the loss drops below \(10^{-6}\).
After training our normalizing flow, we draw \(m=2^{21}\) (\(\sim 2\) million) phase-space coordinates, and calculate the gradients \(\partial f/\partial\vec{x}\) and \(\partial f/\partial\vec{v}\) at each point (using auto-differentiation), for use in learning the gravitational potential.
We represent the gravitational potential using a feed-forward neural network with four densely connected hidden layers, each with 512 neurons and a tanh activation function (we eschew more commonly used activation functions, such as ReLU, which have discontinuous derivatives, as these may lead to unphysical potentials). The network takes a three-dimensional input (the position \(\vec{x}\) in space), and produces a scalar output (the potential). No activation function is applied to the final scalar output. We add in an \(\ell_{2}\) loss on the potential network weights with strength \(0.1/n\_weights\), where \(n\_weights\) is the total number of weights in the network. We train the network using the rectified Adam optimizer, with batches of \(2^{15}\) (\(32\,768\)) phase-space coordinates. We use a similar learning-rate scheme as before, with a warm-up phase lasting 2048 steps, an initial learning rate of 0.001, and a patience of 2048 steps. In the potential loss function (Eq. 9), we set \(\alpha=1\times 10^{5}\), \(\beta=1\), and \(\lambda=1\), which affect the penalties on non-stationarity and negative gravitational mass densities.
When fitting both the distribution function and the gravitational potential, we reserve \(25\,\%\) of our input data as a validation set. After each training step, we calculate the loss on a batch of validation data, in order to identify possible overfitting to the training data. Such overfitting would manifest itself as a significantly lower training loss than validation loss. In the experiments in this paper, no significant overfitting is observed--the difference in the likelihoods of the training and validation sets is typically less than 1%.
The choices made here are by no means set in stone, and can be altered without changing the overall _Deep Potential_ framework. In particular, rapid advances have been made in research into normalizing flows over the preceding years. As more accurate and/or computationally efficient flows are developed, they can be used by _Deep Potential_.
## 3 Fiducial \(N\)-body Bar Model
To demonstrate _Deep Potential_ in a rotating frame, we make use of an \(N\)-body simulation of a collisionless stellar disc that subsequently develops a strong bar in the central region. In Section 3.1, we describe the initial equilibrium set-up, and the basic structural parameters pertaining to the fiducial bar model. In Section 3.2, we explain some of the key bar properties which will be used later in this work. Finally, in Section 3.3, we specify the tracers that we use for training our normalizing flows.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(R_{\mathrm{d}}\) & \(h_{x}\) & \(M_{\mathrm{disc}}\) & \(R_{\mathrm{H}}\) & \(M_{\mathrm{dm}}\) & \(n_{\mathrm{disc}}\) & \(n_{\mathrm{dm}}\) \\ (kpc) & (kpc) & (\(\times 10^{11}M_{\odot}\)) & (kpc) & (\(\times 10^{11}M_{\odot}\)) & (\(\times 10^{5}\)) & (\(\times 10^{5}\)) \\ \hline
4.7 & 0.3 & 1 & 10 & 1.6 & 10 & 5 \\ \hline \end{tabular}
\end{table}
Table 1: Key structural parameters for the initial, equilibrium model of our simulated galaxy. This model is subsequently integrated in a self-consistent manner for 9 Gyr, during which time it develops a central bar. See Section 3 for details of the \(N\)-body simulation.
Figure 1: Face-on distribution of all stellar particles, calculated at \(t=2,4.25\), and 9 Gyr for our fiducial bar model. The dashed black curves denote the constant density contours. The blue circle in each panel denotes the extent of the bar (\(R_{\mathrm{bar}}\)) whereas the magenta circle denotes the location of the corotation radius (\(R_{\mathrm{CR}}\)). The galaxy harbors a prominent bar, which grows in radial extent from the earliest to the latest time step.
Figure 2: A measure of the radial extent of the bar in our simulated galaxy at three time steps. At each time step, we measure the radial profile of the \(m=2\) Fourier coefficient of stellar density in cylindrical annuli (normalized by the \(m=0\) coefficient) using Eq. 13. A prominent peak in the radial profile of the \(m=2\) Fourier coefficient clearly demonstrates the presence of a central bar at all three time steps. The central bar extends radially over time, becoming increasingly prominent in the stellar mass distribution.
### Simulation set-up and equilibrium configuration
The details of the initial equilibrium model were previously explained in Ghosh et al. (2023b) (see model **rthick0.0** there). Here, for the sake of completeness, we briefly mention the equilibrium set-up for our fiducial bar model.
The equilibrium model consists of an axisymmetric stellar disc which is embedded in a live dark matter halo. The stellar disc is modeled with a Miyamoto-Nagai profile (Miyamoto & Nagai, 1975) with a potential of the form
\[\Phi_{\rm disc}=-\frac{GM_{\rm disc}}{\sqrt{R^{2}+\left(R_{\rm d}+\sqrt{z^{2} +h_{\rm z}^{2}}\right)^{2}}}\,, \tag{11}\]
where \(R_{\rm d}\) and \(h_{\rm z}\) are the characteristic disc scale length and scale height, respectively, and \(M_{\rm disc}\) is the total mass of the stellar disc. The dark matter halo is modeled with a Plummer sphere (Plummer, 1911), with potential of the form
\[\Phi_{\rm dm}(r)=-\frac{GM_{\rm dm}}{\sqrt{r^{2}+R_{\rm H}^{2}}}\,, \tag{12}\]
where \(R_{\rm H}\) is the characteristic scale length, and \(M_{\rm dm}\) is the total halo mass. Here, \(r\) and \(R\) are the radius in the spherical and the cylindrical coordinates, respectively. In Table 1, we list the key values of the structural parameters for stellar disc as well as the dark matter halo. The total number of particles used to model each of these structural components is also listed in Table 1.
The initial conditions of the stellar disc are obtained using an iterative method (See Rodionov et al., 2009). For this model, we constrain the density profile of the stellar disc while allowing the velocity dispersions (both vertical and radial components) to adjust such that the system converges to an equilibrium solution (for details, see Fragkoudi et al., 2017; Ghosh et al., 2023b). The simulation is run using a TreeSPH code by Semelin & Combes (2002). A hierarchical tree method (Barnes & Hut, 1986) with opening angle \(\theta=0.7\) is used for calculating the gravitational force, which includes terms up to quadrupole order in the multipole expansion. In addition, we use a Plummer potential for softening the gravitational forces with a softening length \(\epsilon=150\) pc. The equations of motion are integrated using a leapfrog algorithm (Press et al., 1986) with a fixed time step of \(\Delta t=0.25\) Myr. The model is evolved for a total time of 9 Gyr.
### Properties of the stellar bar
To study the robustness of the _Deep Potential_ in a rotating frame when applied to a barred galaxy, we choose three snapshots, namely at \(t=2\) Gyr, \(4.25\) Gyr, and \(9\) Gyr from the fiducial bar model. Fig. 1 shows the corresponding face-on density distribution of stellar particles at these three time steps. A visual inspection reveals that at all three time steps the model harbors a prominent stellar bar in the central region.
We quantify the strength of the bar by taking the Fourier decomposition (in azimuthal angle \(\phi\)) of the density in annuli (of cylindrical radius \(R\)). We then calculate the strength of the normalized \(m=2\) Fourier coefficient as a function of \(R\):
\[\frac{A_{m}}{A_{0}}\bigg{|}_{R}=\left|\frac{\sum_{j}M_{j}e^{im\phi_{j}}}{\sum _{j}M_{j}}\right|\,, \tag{13}\]
where \(A_{m}\) denotes the \(m\)-th coefficient of the Fourier moment of the stellar density distribution, \(M_{j}\) is the mass of the \(j^{\rm th}\) particle (not to be confused with the Fourier coefficient), and \(\phi_{j}\) is the corresponding azimuthal angle. For each radius \(R\), the summation runs over all particles within the radial annulus \([R,R+\Delta R]\), with \(\Delta R=0.5\) kpc. Fig. 2 shows the corresponding radial profiles of the \(m=2\) Fourier coefficient at the three selected time steps. We see that at \(t=2\) Gyr, the bar is rapidly forming, while at \(t=4.25\) Gyr the bar reaches its maximum strength (i.e., the highest peak value of the \(m=2\) Fourier coefficient). By the end of the simulation run at \(t=9\) Gyr, the bar remains strong. For a detailed exposition of the temporal growth of the bar, the reader is referred to Ghosh et al. (2023b).
The remaining two properties of the bar that are relevant to this work are the extent of the bar, \(R_{\rm bar}\), and the pattern speed of the bar, \(\Omega_{\rm bar}\). Following Ghosh & Di Matteo (2023), at time \(t\), we define \(R_{\rm bar}\) as the location where \(A_{2}/A_{0}\) drops to \(70\%\) of its peak value. The corresponding extent of \(R_{\rm bar}\) is indicated in Fig. 1 by a blue circle. We measure the bar pattern speed by fitting a straight line to the temporal variation of the phase-angle of the \(m=2\) Fourier mode. This method assumes that the bar rotates rigidly with a single pattern speed in that time-interval. The bar slows with time, with bar pattern speeds of \(20.7\) km s\({}^{-1}\) kpc\({}^{-1}\), \(12.15\) km s\({}^{-1}\) kpc\({}^{-1}\), and \(8.1\) km s\({}^{-1}\) kpc\({}^{-1}\) at \(2\), \(4.25\) and \(9\) Gyr, respectively. A rotating bar induces a number of resonances, namely, corotation (CR), the Inner Lindblad Resonance (ILR) and the Outer Lindblad Resonance (OLR). To determine the locations of these resonances, we first need to compute the radial variation of the circular velocity (equivalently, the rotation curve). At time \(t\), the circular velocity \(v_{\rm c}\) is calculated with the asymmetric drift correction (?):
\[v_{\rm c}^{2}=v_{\phi}^{2}+\sigma_{\phi}^{2}-\sigma_{R}^{2}\left(1+\frac{{\rm d }\ln\rho}{{\rm d}\ln R}+\frac{{\rm d}\ln\sigma_{R}^{2}}{{\rm d}\ln R}\right)\,. \tag{14}\]
Here, \(v_{\phi}\) is the azimuthal velocity, whereas \(\sigma_{R}\) and \(\sigma_{\phi}\) denote the radial and the azimuthal velocity dispersion, respectively. Using the rotation curve, we calculate the radial location of the CR, defined by \(\Omega(R=R_{\rm CR})=\Omega_{\rm bar}\). The corresponding \(R_{\rm CR}\) values are indicated in Fig. 1. We also provide the numerical values of \(R_{\rm bar}\) and \(R_{\rm CR}\) in Table 3. As the bar in our fiducial model slows down with time, the location of the CR is pushed farther out in the disc.
### Dataset selection
In order to create samples of kinematic tracers for _Deep Potential_, we randomly select \(n=2^{19}\) (\(524\,288\)) stellar particles at each time step, from inside a cylindrical volume of radius \(R=16\) kpc (centered at the origin of the system) and half-height (in \(z\)) of \(H=2\) kpc.
At each time step, we orient the coordinate system such that the \(z\)-axis is parallel to the angular momentum of the stellar particles and
\begin{table}
\begin{tabular}{l|l l l l} \hline shell name & \(R_{\rm in}\) & \(R_{\rm out}\) & \(n\) & \(n_{\rm with\, padding}\) \\ \hline
1 & 0 kpc & 2 kpc & 123997 & 168647 \\
2 & 2 kpc & 5 kpc & 124433 & 228354 \\
3 & 5 kpc & 10 kpc & 146839 & 315165 \\
4 & 10 kpc & 16 kpc & 129019 & 327434 \\ \hline combined & 0 kpc & 16 kpc & 524288 & \\ \hline \end{tabular}
\end{table}
Table 2: At each time step, the simulated galaxy is divided into cylindrical shells, which are separately modeled using normalizing flows and then stitched together. Above, we describe the shells used for partitioning the training volume at time step \(t=2\) Gyr. The radii \(R_{\rm in}\) and \(R_{\rm out}\) of the inner and outer cylindrical surfaces of each of the shells are given, along with the number of tracer particles \(n\) inside each of them. \(n_{\rm with\, padding}\) refers to the number of particles after padding the shells with additional “virtual particles.” These virtual particles are added in order to transform the sharp inner and outer boundaries of the shells into a smooth roll-off. This eases the normalizing flow training.
the origin corresponds to the peak density of the bar. Choosing the peak density as the origin is important, because there is a displacement of the order of \(100\,\mathrm{pc}\) between the peak density and the center of mass of the stellar particles of the system. This is most likely caused by the system being in a transient state and having differential rotation between the central and outer regions.
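A minimal sketch of this selection and re-orientation step is given below. The function name and the construction of the rotation matrix are our own illustrative choices; it assumes the bar density peak `x0` has already been located and that the disc is not already exactly aligned with the \(z\)-axis.

```python
import numpy as np

def select_and_align(pos, vel, mass, x0, R_max=16.0, H=2.0, n_sample=2**19, seed=0):
    """Center on x0, align z with the stellar angular momentum, draw tracers from the cylinder."""
    rng = np.random.default_rng(seed)
    xyz = pos - x0
    L = np.sum(mass[:, None] * np.cross(xyz, vel), axis=0)
    z_hat = L / np.linalg.norm(L)
    x_hat = np.cross([0.0, 0.0, 1.0], z_hat)   # any reference vector not parallel to z_hat works
    x_hat /= np.linalg.norm(x_hat)
    y_hat = np.cross(z_hat, x_hat)
    Rmat = np.vstack([x_hat, y_hat, z_hat])    # rows form the new orthonormal basis
    xyz_new, vel_new = xyz @ Rmat.T, vel @ Rmat.T
    R = np.hypot(xyz_new[:, 0], xyz_new[:, 1])
    inside = np.flatnonzero((R < R_max) & (np.abs(xyz_new[:, 2]) < H))
    pick = rng.choice(inside, size=n_sample, replace=False)
    return xyz_new[pick], vel_new[pick]
```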
Each stellar particle has a mass of \(92\,000\,\mathrm{M}_{\odot}\). To represent the composite particles as a collection of individual stars, we would need to upsample each composite particle by assuming that its density follows a Plummer sphere with \(\varepsilon=150\,\mathrm{pc}\) (corresponding to the softening length). This can, however, introduce artificial clumping that can manifest itself as spurious densities or accelerations inferred from the gravitational potential. With this in mind, we instead treat the stellar particles as individual stars, and do not perform any up-sampling.
We also note that as the method currently stands, the training data in the chosen volume is assumed to have uniform completeness (_i.e._, each star has the same probability of being selected). If the completeness were not uniform, the spatially varying selection function would introduce spurious gradients into the distribution function and hence invalidate the stationarity assumption. Uniform completeness is easy to guarantee in the mock dataset, but with real observations, it is influenced by multiple factors such as crowding, dust, color and scanning patterns. Hence, for real Milky Way datasets, we expect modeling of the selection function to be critical.
## 4 Normalizing flow
### Constructing the normalizing flow
When training the flows for the different time steps, three limitations of our FFJORD implementation must be addressed. The first challenge arises from the fact that the densities of the tracer particles vary by up to three orders of magnitude between the central and outer regions of the training volume. These large variations in density cause systematic biases in volumes of high density. To address this, we split the training volume into four concentric cylindrical shells, such that
Figure 3: A demonstration of the performance of our normalizing flow model of the stellar phase-space distribution function in two projections. At time \(t=2\,\mathrm{Gyr}\), we plot 2D histograms of selected stellar particles (left column) and of \(2^{21}=2097152\) synthesised particles sampled from the trained normalizing flow (middle column), and a comparison between the two (right column). The top row shows a face-on view of the galaxy, while the lower panel shows one velocity-space projection (\(v_{\Phi}\) vs. \(v_{\mathrm{R}}\)) of the galaxy. The density of the synthesised particles has been renormalized by an overall constant to match the density of the stellar particles. For each bin, we define the Poisson significance as being \((n_{\mathrm{NF}}-n_{\mathrm{data}})/\sqrt{n_{\mathrm{data}}}\), where \(n_{\mathrm{NF}}\) is the renormalized number of samples in the bin drawn from the normalizing flow and \(n_{\mathrm{data}}\) is the number of stellar particles in the same bin. As can be seen above, our normalizing flow captures all prominent features in the galactic stellar distribution, including the central bar and spiral arms. We thus obtain a smooth, differentiable representation of the galactic stellar population.
the variation in the density of the tracer particles in each shell is lessened. This means training a separate normalizing flow for each shell. Each concentric shell is bounded by an inner surface with cylindrical radius \(R=R_{\rm in}\), an outer surface with cylindrical radius \(R=R_{\rm out}\), and two flat surfaces at \(z=\pm H=\pm 2\,\)kpc. For details on the number of particles in each shell for the first time step, see Table 2 (the other two time steps are treated in a similar fashion).
The second limitation comes from the difficulty that FFJORD has in capturing discontinuities in the training data. When crossing the boundary of the training volume, the density of the tracer particles discontinuously drops from a finite value (inside the volume) to zero (outside the volume), causing the first and higher derivatives of the distribution function to be undefined. FFJORD outputs a continuous normalizing flow that attempts to model the discontinuity, but ends up introducing systematic biases in the boundary region. The third limitation comes from the known difficulty of FFJORD in capturing volumes that are not topologically equivalent to spheres (in this case, a cylindrical shell is topologically equivalent to a donut). These latter two limitations are addressed by adding additional "virtual" particles outside the boundary, such that the volume inside \(R<R_{\rm in}\), \(|z|<H\) is filled with a roughly uniform density of particles and the distribution function drops smoothly to zero outside the cylinder defined by radius \(R_{\rm out}\) and half-height \(H\). We train a normalizing flow with the virtual particles included. Subsequently, to draw a sample from one of the shells, we reject the samples that fall outside the chosen volume. For more technical details about the virtual particles, see Appendix B.
In the end, we retrieve a sample of \(m=2^{21}=2\,097\,152\) points from the desired distribution function by drawing a proportionate number of samples from each of the four cylindrical shells separately and concatenating them together. The resulting sample is then used for training the potential.
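The stitching step can be summarized by the following sketch, in which each shell's flow is sampled with rejection of points that fall in the padded (virtual-particle) region, and the per-shell samples are concatenated in proportion to the number of real tracers. The `flow.sample` interface and the oversampling factor are assumptions made for illustration, not the actual implementation.

```python
import numpy as np

def sample_from_shells(flows, shells, m_total=2**21, H=2.0):
    """Stitch per-shell normalizing-flow samples into one phase-space sample.

    flows  : list of objects with a .sample(n) method returning (n, 6) arrays (x, y, z, vx, vy, vz)
    shells : list of (R_in, R_out, n_real) describing each cylindrical shell
    """
    n_real = np.array([n for _, _, n in shells], dtype=float)
    weights = n_real / n_real.sum()
    pieces = []
    for flow, (R_in, R_out, _), w in zip(flows, shells, weights):
        target = int(round(w * m_total))
        kept, n_kept = [], 0
        while n_kept < target:
            batch = flow.sample(4 * target)            # oversample, then reject
            R = np.hypot(batch[:, 0], batch[:, 1])
            ok = (R >= R_in) & (R < R_out) & (np.abs(batch[:, 2]) < H)
            kept.append(batch[ok])
            n_kept += int(ok.sum())
        pieces.append(np.concatenate(kept)[:target])
    return np.concatenate(pieces)
```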
### Validation
The most straightforward diagnostic of the trained normalizing flows is a comparison of their predicted phase space density with the training data. In Fig. 3, we compare two-dimensional projections of our normalizing flow and the true stellar distribution function for one time step (\(t=2\,\)Gyr). As can be seen, there are no visible systematic biases in the recovered phase space density, even in regions of low and high density. We observe similar behavior for the other two time steps.
Since the N-body simulation consists of discrete particles, rather than a smooth phase-space density (even though an individual particle is spatially a Plummer sphere, it is a delta function in velocity space), it is not possible to directly compare the gradients \(\partial f/\partial\vec{x}\), \(\partial f/\partial\vec{v}\) of the trained flow with their corresponding true values. However, the stitching procedure provides a surprisingly useful diagnostic. At the boundaries between two neighboring shells, both the densities and the gradients of the flows should match, as their combination must represent a smooth distribution function. We can use this as a test of how well the normalizing flow performs at the boundaries. This effect is clearest when plotting a face-on projection of \(\partial f/\partial R\) in the mock galaxy, as in Fig. 4. We note that this diagnostic serves to supplement the two-dimensional projections of the phase space density, not to replace them. Even when the phase space densities do not show obvious systematics, we find signs of discontinuities in \(\partial f/\partial R\). There are also cases in which the reverse is true. With our final normalizing flow implementation and stitching procedure, such discontinuities are not readily apparent.
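The boundary diagnostic of Fig. 4 amounts to binning the flow's radial density gradient on a face-on grid. A possible implementation, assuming the spatial gradients \(\partial f/\partial(x,y)\) have already been evaluated at a set of sample points, is sketched below (the bin count and extent are illustrative):

```python
import numpy as np
from scipy.stats import binned_statistic_2d

def df_dR(xy, grad_f_xy):
    """Chain rule: df/dR = (x df/dx + y df/dy) / R."""
    R = np.hypot(xy[:, 0], xy[:, 1])
    return (xy[:, 0] * grad_f_xy[:, 0] + xy[:, 1] * grad_f_xy[:, 1]) / R

def radial_gradient_map(xy, dfdR, extent=16.0, nbins=128):
    """Median df/dR in face-on bins; discontinuities show up as seams between shells."""
    stat, xe, ye, _ = binned_statistic_2d(
        xy[:, 0], xy[:, 1], dfdR, statistic="median",
        bins=nbins, range=[[-extent, extent], [-extent, extent]])
    return np.arcsinh(stat), xe, ye   # arcsinh compresses the color scale, as in Fig. 4
```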
## 5 Gravitational potential
After obtaining a total of \(m=2^{21}\) samples for each time step, including the gradients \(\partial f/\partial\vec{x}\) and \(\partial f/\partial\vec{v}\), we train a potential neural network \(\Phi(\vec{x})\) and rotation speed \(\Omega\) (for each time step) that best minimize the aforementioned loss function in Eq. 10. In general, we find training the potential to be more robust and straightforward than training the normalizing flows. In particular, given a sample of distribution function gradients, different choices for the structure and hyperparameters of the potential neural network yield very similar results.
Given the gravitational potential, it is possible to compare the predicted accelerations and densities (from the gradients and Laplacian of the potential, respectively) with the ground truth obtained from the simulation itself. Importantly, \(\Phi(\vec{x})\) represents the contribution from all forms of matter, including dark matter, so the comparisons with the simulation must be performed accordingly.
### Modeled rotation speed
The rotation speeds that _Deep Potential_ captures at different time steps are listed in Table 3. These can be directly compared with the rotation speed of the bar (as measured from the \(m=2\) Fourier mode of stellar density in the galactic midplane; see Section 3.2), as the bar is the most significant rotating feature present in the system. Despite the large non-stationarities and secular evolution of the system, we capture the rotation speed to within \(\sim\)3 km s\({}^{-1}\) kpc\({}^{-1}\), or \(\sim\)20 % at all time steps. _Deep Potential_ thus recovers the bar rotation speed using only a frozen snapshot of the stellar kinematics.
Figure 4: A 2D projection of the radial gradients in stellar density at time \(t=2\,\)Gyr. We draw a sample of \(2^{21}\) particles, and take the median \(\partial f/\partial R\) in \(x-y\) bins. We compress the color scale by applying the arcsinh function. As evidenced by the smoothness of the radial gradients, our stitching process, in which we fit the distribution function of cylindrical annuli separately and then join them together, does not introduce prominent boundary effects (_i.e._, discontinuities).
### Comparison of the modeled accelerations
We obtain the accelerations predicted by the gravitational model by calculating the gradients of the model via auto-differentiation: \(\vec{a}=-\vec{\nabla}\Phi(\vec{x})\). We can compare this with the ground truth from the simulation by summing over the contributions of the particles in the \(N\)-body simulation, both stellar and dark matter (and accounting for the softening length). Fig. 5 compares the prediction with the ground truth at time step \(t=2\,\mathrm{Gyr}\). In most of the galaxy, we recover accelerations to within \(\sim\)20 %. The largest discrepancies occur near the center of the galaxy, where the bar is strongest. However, the major axis of the bar is almost exactly reproduced, and can be immediately identified in the spatial pattern of \(a_{x}\) and \(a_{y}\).
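In code, obtaining the model accelerations is a one-line application of automatic differentiation to the potential network. The sketch below uses TensorFlow's `GradientTape`; the interface of `Phi` (an (N, 3) → (N,) callable) is an assumption made for illustration.

```python
import tensorflow as tf

def model_accelerations(Phi, x):
    """a = -grad Phi(x), with x an (N, 3) array of positions."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        phi = Phi(x)            # (N,) potential values
    return -tape.gradient(phi, x)
```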
### Comparison of the modeled density
We obtain model densities by taking the Laplacian of the modeled gravitational potential, once again by auto-differentiation: \(\rho_{\mathrm{model}}=\nabla^{2}\Phi/(4\pi G)\). Because \(\Phi(\vec{x})\) represents the overall gravitational potential, sourced by both stellar and dark matter particles, the density estimate also represents the total density. We can compare this with the ground truth from the simulation, by summing over the contributions of all the particles. Due to the softening length of the simulation, each particle represents a Plummer sphere density with scale length \(\varepsilon=150\,\mathrm{pc}\).
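The corresponding density estimate requires second derivatives, which can be obtained with nested gradient tapes. The following sketch assumes the same `Phi` interface as above, and that the potential is expressed in \((\mathrm{km\,s^{-1}})^{2}\) with positions in kpc, so that \(G\simeq 4.3\times 10^{-6}\,\mathrm{kpc\,M_{\odot}^{-1}\,(km\,s^{-1})^{2}}\) yields densities in \(\mathrm{M_{\odot}\,kpc^{-3}}\); these unit conventions are our assumption for illustration.

```python
import numpy as np
import tensorflow as tf

def model_density(Phi, x, G=4.30091e-6):
    """rho = laplacian(Phi) / (4 pi G), evaluated at an (N, 3) array of positions."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            phi = Phi(x)
        grad = inner.gradient(phi, x)        # (N, 3) gradient of the potential
    hess = outer.batch_jacobian(grad, x)      # (N, 3, 3) Hessian
    lap = tf.linalg.trace(hess)               # (N,) Laplacian
    return lap / (4.0 * np.pi * G)
```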
The model vs. ground truth comparison for all time steps is shown in Fig. 6. We see that the major features of the simulated galaxy are reproduced, including the central bar and spiral arms. In general, the model performs worse in the presence of low densities (see the outer regions at \(t=9\,\mathrm{Gyr}\)) and high density gradients, such as when moving along the semi-minor axis of the bar near the galactic center. We also observe ubiquitous small-scale fluctuations in the predicted density. Identifying the origin of these fluctuations is difficult, as the effects of the possible causes are difficult to disentangle. One possible cause could be the inherent non-stationarity of the data, stemming from the system being in a state of evolution. This effect could be compounded
Figure 5: A comparison of the gravitational accelerations inferred by _Deep Potential_ in the midplane of our simulated galaxy with the true accelerations. For the snapshot at \(t=2\,\mathrm{Gyr}\), we plot ground-truth accelerations in the \(z=0\) plane obtained from the simulation (left column), recovered accelerations inferred from the gravitational model (middle column), and a comparison between the two (right column). At each point in the plane, we normalize the accelerations by the modulus of the true acceleration vector, so that the plotted values are always of order unity. We only plot the \(x\) and \(y\) components of acceleration, as the \(z\)-component at \(z=0\) is very close to zero, and the behaviour out of the plane is similar to that of the density estimates from the gravitational potential. In the midplane, we recover the overall smooth pattern of accelerations over the disc, with small-scale fluctuations at the level of five percent.
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \(t\) & \(R_{\mathrm{bar}}\) & \(R_{\mathrm{CR}}\) & \(\Omega_{\mathrm{bar}}\) & \(\Omega\) \\ (Gyr) & (kpc) & (kpc) & (\(\mathrm{km\,s^{-1}\,kpc^{-1}}\)) & (\(\mathrm{km\,s^{-1}\,kpc^{-1}}\)) \\ \hline
2 & 5 & 10.5 & 20.7 & 17.5 \\
4.25 & 8 & 17.5 & 12.15 & 14.2 \\
9 & 11.5 & 23.8 & 8.1 & 10.4 \\ \hline \end{tabular}
\end{table}
Table 3: For all three time steps of our simulated galaxy, we compare the rotation speed of the bar (\(\Omega_{\mathrm{bar}}\), calculated by considering the time-evolution of the \(m=2\) Fourier mode of stellar density, as outlined in Section 3.2) with the best-fit value for the rotation speed \(\Omega\) of the rotating frame which renders the system most stationary, according to _Deep Potential_. We also provide the values for the radius (\(R_{\mathrm{bar}}\)) and corotation radius (\(R_{\mathrm{CR}}\)) of the bar. In all three time steps, we recover the pattern speed of the bar to within \(\sim 3\,\mathrm{km\,s^{-1}\,kpc^{-1}}\), corresponding to a maximum fractional error of \(\sim\)20 %.
by the smoothing length of each individual particle manifesting itself as spurious densities over a characteristic scale that is comparable to the smoothing length (\(\varepsilon=150\,\mathrm{pc}\)). Finally, discrepancies in the accurate representation of the underlying distribution function by the normalizing flow could also contribute. Particularly, in the last two time steps, the hyperparameters for the normalizing flow are not tuned as thoroughly as for the first time step.
The effects from non-stationarity can be lessened by smoothing the density maps with some suitable kernel, as done in Buckley et al. (2023b), for example. This is useful when we are interested in the
Figure 6: Comparison of the true total mass density in the plane of our simulated galaxy with the density recovered by _Deep Potential_. In the left column, we plot the total density at the three time steps (\(t=2\), \(4.25\), and \(9\,\mathrm{Gyr}\)), calculated by summing the densities from the stellar and dark matter particles. The middle column shows the density implied by the gravitational potential that _Deep Potential_ recovers from the stellar kinematics. In the right column, we plot the fractional difference between the recovered and the true density. At each time step, we recover the major features of the galaxy, including the central bar and spiral arms. Though the true density varies over several orders of magnitude over the plotted region of the galaxy, we generally recover the density to a few tens of percent, with the residuals taking the form of spatially oscillating (with scale \(\sim\!1\,\mathrm{kpc}\)) fluctuations about zero. We note that the density is most accurately recovered when the bar is weakest (at \(t=2\) and \(4.25\,\mathrm{Gyr}\)), and that the largest density residuals tend to occur at small galactic radii, along the bar minor axis.
average density in a neighborhood of some particular point. However, we do not focus on smoothed densities in this work, as it does not provide significant additional insight into the performance of our rotating-frame framework.
By considering the modeled density and subtracting the ground-truth baryonic density, we can build an estimate for the dark matter density in the system. This approach is motivated by the fact that stellar density can be estimated from direct observations, while dark matter is not directly observable. For example, in systems such as the Milky Way, there are analytic models of different baryonic components (McKee et al., 2015).
We provide a model estimate of the radial profile of the dark matter halo at time step \(t=2\,\)Gyr in Fig. 7. The modeled density deviates from the ground truth in volumes where baryonic matter dominates over dark matter, and where the overall density is small. For \(r>2\,\)kpc, the dark matter profile is reconstructed to within a factor of two.
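Operationally, the dark-matter estimate of Fig. 7 is a subtraction of two radial profiles evaluated on the same grid. A minimal sketch, assuming the model total density and the true stellar density have already been evaluated at a common set of points `pos`, is:

```python
import numpy as np

def radial_profile(pos, values, r_edges, z_cut=1.0):
    """Average a sampled field in spherical-radius bins, restricted to |z| < z_cut [kpc]."""
    r = np.linalg.norm(pos, axis=1)
    sel = np.abs(pos[:, 2]) < z_cut
    idx = np.digitize(r[sel], r_edges) - 1
    prof = np.full(r_edges.size - 1, np.nan)
    for i in range(prof.size):
        m = idx == i
        if m.any():
            prof[i] = np.mean(values[sel][m])
    return 0.5 * (r_edges[:-1] + r_edges[1:]), prof

# Usage sketch:
# r, rho_tot_model = radial_profile(pos, rho_model, np.linspace(0.0, 16.0, 33))
# _, rho_star_true = radial_profile(pos, rho_star, np.linspace(0.0, 16.0, 33))
# rho_dm_model = rho_tot_model - rho_star_true
```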
### Quantifying non-stationarities
Real galaxies are never perfectly stationary. Even though we train the gravitational potential and \(\Omega\) to minimize non-stationarities in the rotating frame, we do not expect them to be able to render \(\left(\partial f/\partial t\right)|_{\Omega}\) perfectly zero everywhere in phase space. This is also because the system is overconstrained: we are searching for a three-dimensional gravitational potential that renders the distribution function stationary at every point in six-dimensional phase space.
We can quantify non-stationarities by plotting the average \(\left(\partial\ln f/\partial t\right)|_{\Omega}\) in different regions of phase space. \(\left(\partial\ln f/\partial t\right)|_{\Omega}\) serves as a more useful measure than \(\left(\partial f/\partial t\right)|_{\Omega}\) because it is more interpretable, corresponding to the inverse of the characteristic timescale over which the distribution function undergoes significant changes. Fig. 8 shows a comparison between the non-stationarities of a gravitational potential trained in a rotating frame and one that is trained in the laboratory frame. We can see that in this system, the rotating frame yields significantly better results. Notably, the nonrotating potential has clear imprints from the central bar that are not visible in the rotating potential.
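For reference, in the non-rotating laboratory frame the non-stationarity measure reduces to the collisionless Boltzmann residual divided by \(f\). The sketch below evaluates this lab-frame version from precomputed distribution-function values, gradients, and model accelerations; the additional \(\Omega\)-dependent terms of the rotating-frame expression (Eq. 10) are not reproduced here.

```python
import numpy as np

def log_nonstationarity_lab(v, grad_x_f, grad_v_f, acc, f):
    """(1/f) df/dt from the collisionless Boltzmann equation in the lab frame:
       df/dt = -(v . df/dx + a . df/dv), with a = -grad Phi.

    v, acc             : (N, 3) velocities and model accelerations
    grad_x_f, grad_v_f : (N, 3) spatial and velocity gradients of f
    f                  : (N,) distribution-function values
    """
    dfdt = -(np.sum(v * grad_x_f, axis=1) + np.sum(acc * grad_v_f, axis=1))
    return dfdt / f
```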
### Rotation speed recovery in a sub-volume imitating the Solar neighborhood
This work so far has focused on the validation of _Deep Potential_ on the simulated galaxy as a whole. While this is important for the overall validation of the method within a rotating frame, we can also test how well the method recovers the bar rotation speed in a sub-volume which more closely resembles what we would observe from our position within the Milky Way. To this end, we select two spherical volumes of radius \(2\,\)kpc and \(4\,\)kpc, centered around a point at a distance of \(8\,\)kpc and an angle of \(\sim 20^{\circ}\) with respect to the galactic bar, akin to how the Sun is positioned in the Milky Way (Gaia Collaboration et al., 2023). Because of the two-fold symmetry of the galactic bar, we also produce mirrored datasets by reflecting the selected volumes with respect to the galactic center. We follow the same procedure for all time steps. One of the selected sub-volumes in the first time step is visualized in Fig. 9.
At each time step, we train a normalizing flow and a gravitational potential, along with the rotation speed, in the same fashion as outlined in the previous sections. _Deep Potential_ recovers the rotation speed to within \(20\,\%\) and \(15\,\%\) for the \(2\,\)kpc and \(4\,\)kpc datasets, respectively, for all time steps, which is, notably, slightly better than its performance when trained on the entire volume of the galaxy. We hypothesize that this may be due to the fact that our sub-volume avoids the galactic center, where _Deep Potential_ has the greatest difficulty imposing stationarity (see the left panel of Fig. 8). In general, one can remain optimistic about the ability of _Deep Potential_ to recover the Milky Way bar rotation speed from stars observed within a few kiloparsecs of the Sun.
## 6 Conclusions
In this paper, we have demonstrated the _Deep Potential_ method in a self-consistent \(N\)-body simulation of a barred galaxy at three differ
\begin{table}
\begin{tabular}{l|l|l|l} \hline \(t\) & \(\Omega_{\rm bar}\) & \(\Omega_{\rm 2kpc}\) (mirrored) & \(\Omega_{\rm 4kpc}\) (mirrored) \\ (Gyr) & \((\,{\rm km\,s^{-1}\,kpc^{-1}})\) & \((\,{\rm km\,s^{-1}\,kpc^{-1}})\) & \((\,{\rm km\,s^{-1}\,kpc^{-1}})\) \\ \hline \(2\) & \(20.7\) & \(23.7\) (\(23.7\)) & \(20.9\) (\(19.6\)) \\ \(4.25\) & \(12.15\) & \(14.1\) (\(14.3\)) & \(13.9\) (\(13.1\)) \\ \(9\) & \(8.1\) & \(9.81\) (\(8.84\)) & \(8.53\) (\(7.96\)) \\ \hline \end{tabular}
\end{table}
Table 4: For all three time steps of our simulated galaxy, a comparison between the true rotation speed of the bar (\(\Omega_{\rm bar}\)) and the rotation speed that renders the system most stationary, according to _Deep Potential_, in various sub-volumes imitating the Solar neighborhood. The sub-volumes are centered around a point at a distance of \(8\,\)kpc and an angle of \(\sim 20^{\circ}\) with respect to the galactic bar. We provide values for populations with radius \(2\,\)kpc and \(4\,\)kpc, and for their mirrored versions on the opposite side of the galaxy (values in parentheses). The bar rotation speed is captured to within \(\sim 20\%\) and \(\sim 15\%\) for the \(2\,\)kpc and \(4\,\)kpc datasets respectively.
Figure 7: A comparison between the radial profiles for dark matter \(\rho_{\rm dm}\) (orange lines) and overall matter \(\rho_{\rm tot}\) (blue lines) predicted by the model (dashed lines) and the ground truth (solid lines, labeled “data”) at time \(t=2\,\)Gyr. The model estimate for the dark matter density is obtained by subtracting the true stellar density from the overall model density (See Section 5.3 for discussion). The profiles are obtained by calculating the average densities in a volume \(|z|<1\,{\rm kpc},r<16\,\)kpc along \(r\), where \(r\) refers to the spherical radial distance. We accurately recover the radial profile of total (stellar plus dark-matter) density across the entire studied volume, which extends to \(16\,\)kpc. In the central, heavily baryon-dominated region of the galaxy, even small fractional errors in the recovered total density are sufficient to significantly alter the recovered dark-matter density. However, for \(r>2\,\)kpc, where the baryons are less dominant, we recover the dark-matter density profile to within a factor of two.
ent time steps over the course of the evolution of the system. We have extended the _Deep Potential_ method to impose stationarity in a rotating frame. We achieve this by selecting a population of stellar particles and simultaneously fitting a gravitational potential and a rotation speed that best renders the population stationary in the rotating frame. For our \(N\)-body simulation, in a 16 kpc dataset encompassing the majority of the simulated galaxy, _Deep Potential_ recovers the rotation speed of the galactic bar to within 20 %, accelerations in the system to within 20 % and densities to within 50 %. We also recover the radial dark matter profile in outer regions of the galaxy (\(r>2\) kpc). The main features of the galaxy, such as the spiral arms and the central bar, are successfully recovered, even in the presence of strong intrinsic non-stationarities in the galactic stellar population. The modeled densities, however, exhibit small scale fluctuations, which could be caused by the intrinsic non-stationarity of the data, the smoothing length of the particles, or discrepancies in the modeling of the underlying distribution function with a normalizing flow. We have additionally demonstrated that _Deep Potential_ is capable of recovering the rotation speed in smaller sub-volumes imitating the Solar neighborhood: the rotation speed of the bar is captured to within 20 % and 15 % for sub-volumes of radius 2 kpc and 4 kpc respectively.
Working in an arbitrarily rotating frame is important for modeling real-life galaxies, which often have rotating features such as spiral arms or a central bar. The Milky Way serves as a good candidate for the next application of _Deep Potential_, with the availability of six-dimensional phase-space information for tens of millions of stars from Gaia Data Release 3 (Gaia Collaboration et al., 2023). In particular, _Deep Potential_ may be able to determine the pattern speed of the Milky Way bar, as well as the radial dark-matter density profile. Indeed, there has been recent work on applying normalizing-flow-based modeling to the Milky Way: Lim et al. (2023) presented the first application to the Milky Way, inferring local dark matter densities.
There are also several other avenues for future development. One could extend the _Deep Potential_ formalism to account for non-uniform observational completeness in the kinematic tracers and errors on measured phase-space locations. Further, if there is data available for additional dimensions that are relevant for selecting populations which share the same dynamical history, such as metallicity or alpha abundance, one could train normalizing flows incorporating those dimensions, and then enforce stationarity on the distribution function while conditioning on the extra dimensions. Finally, one can model systems where full six-dimensional phase-space data is not available by imposing symmetries on the system, such as ax
Figure 8: The level of inferred non-stationarity in the stellar population across the disc of our simulated galaxy, when applying _Deep Potential_ either in a rotating frame (left panel) or in the non-rotating “laboratory” frame (right panel). For time step \(t=2\) Gyr, we calculate the non-stationarity, \(\partial\ln f/\partial t\), calculated from our modeled distribution function and recovered potential. In the left panel, we work in the rotating frame inferred by _Deep Potential_ (with \(\Omega=17.5\) km s\({}^{-1}\) kpc\({}^{-1}\)) and use the corresponding recovered gravitational potential. In the right panel, we work in the laboratory frame, and use the gravitational potential inferred by _Deep Potential_ for that frame. Both plots show the \(x-y\) plane with \(z=0\). For each bin in \(x-y\), the median value of \(\partial\ln f/\partial t\) is calculated by averaging over the velocity dimensions, weighted by the distribution function. \(\partial\ln f/\partial t\) can be interpreted as the inverse timescale during which the distribution function undergoes significant changes. We overlay stellar isodensity contours, in order to indicate the location and extent of the galactic bar and spiral arms. When allowed to work in a rotating frame (left panel), _Deep Potential_ finds a gravitational potential that renders the system nearly stationary. When constrained to work in a non-rotating frame (right panel), _Deep Potential_ is unable to find a steady-state solution, and significant non-stationarities – particularly along the galactic bar – are present. This comparison demonstrates the improvement gained by allowing _Deep Potential_ to work in rotating frames, particularly in systems such as barred galaxies.
isymmetry. While this is not relevant for the Milky Way, it can be important for external galaxies or compact systems where some of the dimensions (such as distance) are missing.
## Acknowledgements
This work was supported by funding from the Alexander von Humboldt Foundation, through Gregory M. Green's Sofja Kovalevskaja Award, and made use of the HPC system Raven at the Max Planck Computing and Data Facility. The authors thank Paola Di Matteo for providing the simulation data, Hans-Walter Rix for useful discussions about adapting _Deep Potential_ to a rotating frame, and Tristan Cantat-Gaudin for suggesting that we test our method in a Solar-like galactic sub-volume. This work has made use of the computational resources obtained through the DARI grant A0120410154 (P.I. : P. Di Matteo).
## Data Availability
All of our code, trained models, training data, and simulation snapshots, as well as Python notebooks to generate paper plots, are publicly available under a permissive license that allows reuse and modification with attribution, both in archived form at [https://doi.org/10.5281/zenodo.8390759](https://doi.org/10.5281/zenodo.8390759) and in active development at [https://github.com/gregreen/deep-potential](https://github.com/gregreen/deep-potential).
|
2309.12527 | Lorentzian wormhole in the framework of loop quantum cosmology | In this paper, we construct a traversable static Lorentzian wormhole in the
effective scenario of Loop Quantum Cosmology (LQC), where the field equations
are modified due to the ultraviolet (UV) corrections introduced at large
space-time curvatures. A stable wormhole can be constructed in the effective
scenario without the violation of Null energy condition (NEC) by physical
matter at the throat. The NEC is effectively violated due to the corrections in
the field equations from LQC, resolving the Weyl curvature singularity at the
throat. However, the physical matter does violate the Strong energy condition
(SEC), suggesting the interesting possibility that dark energy can be harnessed
into a wormhole. A possible explanation for this is the presence of inherent
pressure isotropy in the UV-corrected field equations (discussed and compared
to braneworld wormholes in the discussion). No additional exotic ingredient
(violating NEC) is required, avoiding quantum instabilities. The tidal forces
at the throat do not diverge and also the throat is found to be stable. The
wormhole features an attractive geometry. LQC can resolve both types of
curvature singularities appearing at the black hole center and wormhole throat,
without exotic matter. | Rikpratik Sengupta, Shounak Ghosh, Mehedi Kalam | 2023-09-21T22:56:14Z | http://arxiv.org/abs/2309.12527v1 | # Lorentzian wormhole in the framework of loop quantum cosmology
###### Abstract
In this paper, we construct a traversable static Lorentzian wormhole in the effective scenario of Loop Quantum Cosmology (LQC), where the field equations are modified due to the ultraviolet (UV) corrections introduced at large space-time curvatures. A stable wormhole can be constructed in the effective scenario without the violation of Null energy condition (NEC) by physical matter at the throat. The NEC is effectively violated due to the corrections in the field equations from LQC, resolving the Weyl curvature singularity at the throat. However, the physical matter does violate the Strong energy condition (SEC), suggesting the interesting possibility that dark energy can be harnessed into a wormhole. A possible explanation for this is the presence of inherent pressure isotropy in the UV-corrected field equations (discussed and compared to braneworld wormholes in the discussion). No additional exotic ingredient (violating NEC) is required, avoiding quantum instabilities. The tidal forces at the throat do not diverge and also the throat is found to be stable. The wormhole features an attractive geometry. LQC can resolve both types of curvature singularities appearing at the black hole centre and wormhole throat, without exotic matter.
## 1 Introduction
Wormholes are geometrical structures which appear as a solution to the field equations of Einstein's General Relativity (GR). Although one of the most attractive predictions of GR, they have not yet been detected directly by observations. Einstein himself along with Rosen [1] visualized wormholes as space-time bridges connecting two different space-time points across the universe, acting as shortcut paths of space-time travel between them. The first rigorous mathematical study of wormholes was performed by Fuller and Wheeler in the early 1960s [2], a decade marking many of the formal developments of modern GR that changed the outlook towards the subject. However, the result obtained by them left wormholes to be a subject of academic interest only. In fact, the path-breaking paper of Morris and Thorne [3] in 1988 that caused a revolution in wormhole physics was originally conceived as an academic tool for a better understanding of GR, but the profound implications of the results obtained made them write a second follow-up paper on the subject in the same year [4].
Fuller and Wheeler had found that although wormhole geometries described by tubular shaped objects with two openings (mouths) spreading out to be asymptotically flat at infinitely large radial distances from the throat (the narrow region connecting the two mouths) did exist as static, spherically symmetric solutions to the Einstein field equations (EFE), realistic Schwarzschild wormholes were unstable at the throat due to the development of infinitely large gravitational tidal forces resulting in a Weyl curvature singularity at the throat (diverging Weyl tensor). The generation of the large tidal forces can be understood physically from the fact that the matter at the throat is attracted gravitationally by the two mouths in opposite directions. The idea of Morris-Thorne (MT) to avert this Weyl singularity was simple and elegant. If the matter at the throat be replaced by a form of exotic gravitationally repulsive matter, then the tidal forces developing at the throat can be prevented from diverging. However, one has to pay the price that, even though the energy density of such matter always remains positive, the Null Energy Condition (NEC) \(\rho+p\geq 0\) has to be violated by the matter at the throat. Although a violation of the Strong energy condition (SEC) \(\rho+3p\geq 0\) is an essential condition to obtain an accelerating universe in a cosmological context and has been
realized in inflationary and dark energy models involving scalar fields or non-linear equation of states (EoS), violation of the NEC is an even bigger ask and may lead to quantum instabilities of the vacuum. This issue shall be taken up in more details later in this paper.
Another very important factor in wormhole physics is the radial metric potential of the static, spherically symmetric metric known as the shape function, as it determines the shape of the wormhole. As per the MT prescription, in order to build a traversable wormhole that can possibly allow any form of human traversability with limited tidal forces preventing the traveller from getting ripped apart at the throat, the shape function \(b(r)\) must satisfy a number of criteria, given as follows: (i) the shape function at the throat radius \(r_{0}\) must be equal to the throat radius itself (\(b(r_{0})=r_{0}\)). (ii) For radial distances \(r>r_{0}\), the ratio of the shape function at any given radial distance \(r\) to that radial distance must be less than unity (\(\frac{b(r)}{r}<1\)). (iii) The first derivative of the shape function with respect to the radial distance \(r\) at the throat must be less than unity (\(\frac{db(r)}{dr}\big{|}_{r=r_{0}}<1\)). The final condition implies a minimal throat size, thereby minimizing the amount of exotic matter required at the throat to violate the NEC.
In order to violate any of the energy conditions, either the matter sector or the geometry sector of the EFE has to be modified via a modification in the matter or gravitational Lagrangian. Such modifications can alter the relativistic behaviour either at the ultraviolet (UV) or infrared (IR) scales through correction terms in the EFE. The presently observed acceleration of the universe [5, 6] at low energy (IR scale) requires a violation of at least the SEC (some models violate the NEC also). This can be sourced by modifications in the matter sector via minimally coupled scalar fields dubbed quintessence [7, 8] with suitable steep potentials, by a fluid known as Chaplygin gas [9, 10] that is described by a non-linear EoS and finds its origin in extra dimensional theories, or a phantom fluid that is described by a supernegative EoS with an EoS parameter \(<-1\)[11, 12, 13]. Alternatively, late time acceleration can also be achieved by modifying the geometry sector [14, 15, 16]. At the UV scale it is more useful to modify the geometry sector due to the high energy density and large space-time curvature. The two most acceptable effective modified scenarios in this context are the Loop Quantum Cosmology (LQC) [17, 18] and the braneworld scenario [19, 20].
Traversable wormholes have been constructed in the literature with both approaches, modifying the matter [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] as well as the geometry [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] sectors. A possibility of existence of wormholes in certain regions of our galaxy has been explored recently [45]. Both the UV corrected effective scenarios are known to resolve the strong Ricci curvature singularity at the centre of the black hole [46, 47]. Also, the initial big bang singularity is found to be resolved in the LQC scenario [17, 18]. This is a key motivation behind the attempt to construct traversable wormholes using these UV corrected frameworks, where the Weyl curvature singularity may be resolved at the wormhole throat to make them traversable. We have successfully constructed a traversable wormhole in the Randall-Sundrum II (RSII) braneworld scenario [41]. The RSII model has an inherent \(Z_{2}\) symmetry which is absent in LQC. It has been shown by Konoplya and Zhidenko [48] that a fully consistent traversable wormhole can be constructed from normal matter with coupled Maxwell and Dirac fields in the absence of \(Z_{2}\) symmetry at the throat in a relativistic context. The LQC scenario can be realized in (\(3+1\))-dimensions and one need not be sceptical about the existence of extra dimensions. In this paper, we attempt to construct a traversable wormhole in the framework of LQC, which is an effective scenario that avoids the conceptual problems arising from the quantum mechanical interpretations of the gravitational system and helps to provide a much better understanding of the classical singularity. We solve the modified EFE in the LQC scenario for a spherically symmetric matter distribution to obtain the wormhole shape function and also check the validity of the NEC in the effective matter description. The unknown model parameters are estimated by applying the junction conditions at the wormhole surface. The components of the tidal acceleration at the wormhole throat have been computed and, confining them to physically justifiable values, an upper limit on the velocity of the traveller traversing the wormhole is obtained. Also, a linearized stability analysis is performed to ensure stability of the traversable wormhole, and the nature of the wormhole geometry can be inferred from obtaining the radial acceleration. We conclude with a discussion on the physical consequences of the results obtained.
## 2 Mathematical model of the wormhole
In this section a static, spherically symmetric and traversable wormhole model is constructed that is stable under the linearized stability analysis. The validity of the NEC is checked along with the traversability criteria computing the tidal forces at the throat. The junction conditions have been made use of to determine the unknown model parameters which have been used to make the plots. The surface density and surface pressure have also been computed. The stability analysis has been performed successfully.
### Solution for the wormhole shape function
A static, spherically symmetric wormhole is described by the line element
\[ds^{2}=e^{v(r)}dt^{2}-\frac{dr^{2}}{1-\frac{b(r)}{r}}-r^{2}(d\theta^{2}+\sin^{2}\theta\, d\phi^{2}). \tag{1}\]
Here the radial metric potential \(b(r)\) denotes the shape function as it represents the shape of the wormhole and the temporal metric potential \(v(r)\) is the redshift function of the wormhole, which basically gives a measure of the redshift due to the loss in energy when a particle escapes the strong gravitational field of the wormhole due to emission from it.
The modified field equations in LQC scenario for the wormhole metric (1) turn out to have the form
\[\frac{b^{\prime}}{r^{2}}=8\pi\rho\left(1-\frac{\rho}{\rho_{c}}\right), \tag{2}\] \[\left(1-\frac{b}{r}\right)\left(\frac{v^{\prime}}{r}+\frac{1}{r^{2}}\right)-\frac{1}{r^{2}}=8\pi\left(p-\frac{\rho\left(2p+\rho\right)}{\rho_{c}}\right), \tag{3}\] \[\left(1-\frac{b}{r}\right)\left(v^{\prime\prime}+v^{\prime 2}+\frac{v^{\prime}}{r}\right)-\frac{b^{\prime}r-b}{2r^{2}}\left(v^{\prime}+\frac{1}{r}\right)=8\pi\left(p-\frac{\rho\left(2p+\rho\right)}{\rho_{c}}\right). \tag{4}\]
Here, the matter source is a perfect fluid having stress-energy tensor of the form \(T_{\rm v}^{\mu}=diag\left(\rho,-p,-p,-p\right)\). The fluid obeys a linear EoS \(p(r)=\mu\rho(r)\). The parameter \(\rho_{c}\) is extremely important in LQC as it denotes the maximum or critical density, beyond which the energy density cannot rise further, thus preventing the formation of a curvature singularity due to diverging energy densities. However, the curvature singularity at the throat of the wormhole is not due to diverging energy densities but due to diverging tidal forces in the radial and tangential directions, and hence it remains to be seen whether such a curvature singularity can be resolved in the framework of LQC, giving rise to a stable traversable wormhole. As we see on the RHS of Eqs. (2)-(4), the additional terms quadratic in stress energy arise due to the effective UV corrections to the space-time geometry in the classical picture. These are accounted for in the matter sector to provide an effective matter description. It is worth noting that the effective pressures in the radial and tangential directions are identical, resulting in an inherent pressure isotropy, as contrasted to models of braneworld gravity where the anisotropy is generated from the extra dimensional contribution [41].
We have applied the equations for homogeneous LQC to the spherically symmetric spacetime. There is an issue with covariance in this class of models when local physical degrees of freedom are considered, and it was first shown by Bojowald and Brahma [49] that the covariance breaks on extending such models beyond a background treatment. The possible reason behind this lies in the non-Riemannian nature of the spacetime structures involved in such a treatment. On considering possible generalizations of the spacetime structures, covariance may be considered in the sense of realizing an identical count of gauge transformations compared to the classical theory, with the exception of slicing independence prevalent in Riemannian geometry [50; 51]. The spacetime structure of quantum corrected black hole geometries was studied [52], but due to certain misinterpretations of the quantum corrected phase space [53; 54] and asymptotic [55] behaviour, some inconsistencies were found with the treatment [56]. A possible solution to this may involve field redefined metric components arising from certain generators of modified hypersurface deformations, leading to the applicability of line elements in specific spacetime regions [57; 58]. The most general covariant theory considering spherical symmetry has been derived at a canonical level [59]. The modified gravitational behaviour of symmetry reduced LQC models lacks a covariant modified spherically symmetric solution [60]. A generalized form of covariance described by non-Riemannian geometry could be helpful. Our wormhole model constructed in the LQC setup is an elementary one, and this is one of the main limitations that we hope to address in the near future.
The temporal metric potential is assumed to be given by the Kuchowicz function [61]
\[e^{v(r)}=e^{Br^{2}+2\ln C}, \tag{5}\]
where the constant parameter \(B\) has dimension of inverse length squared while the parameter \(C\) is a dimensionless constant. The reason behind the choice of the Kuchowicz potential as the redshift function has been stated in the discussion section.
The energy density of the matter inside the wormhole can be found making use of the redshift function and the considered linear EoS of the constituent matter
\[\rho(r)=C_{1}e^{-\frac{\left(\mu+1\right)Br^{2}}{2\mu}}, \tag{6}\]
where \(C_{1}\) denotes a constant of integration.
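Equation (6) can be reproduced symbolically. Assuming the physical fluid obeys the standard conservation law \(p^{\prime}=-(\rho+p)\,v^{\prime}/2\) together with the linear EoS and the Kuchowicz potential (this route is our assumption for illustration; the paper does not spell out the intermediate step), a short sympy check returns the quoted profile:

```python
import sympy as sp

r, B, mu = sp.symbols('r B mu', real=True)
nu = B*r**2                      # Kuchowicz exponent, up to the additive constant 2*ln(C)
rho = sp.Function('rho')

# conservation law p' = -(rho + p) nu'/2 with p = mu*rho
ode = sp.Eq(mu*sp.diff(rho(r), r), -(1 + mu)*rho(r)*sp.diff(nu, r)/2)
sol = sp.dsolve(ode, rho(r))
print(sol)   # rho(r) = C1*exp(-B*r**2*(mu + 1)/(2*mu)), i.e. Eq. (6)
```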
Making use of the expressions for the quantities available at hand in the modified EFE, the shape function can be obtained and is found to be given by
\[b(r) = \frac{6re^{-2Br^{2}}}{B\rho_{c}(\mu-1)(3\mu-1)\left(2Br^{2}+1\right)}\left(-\frac{8\pi C_{1}\mu^{2}\rho_{c}\left(\mu-1\right)}{3}e^{\frac{Br^{2}\left(\mu-1\right)}{2\mu}}+\left(4\mu C_{1}^{2}\pi(2\mu+1)e^{\frac{Br^{2}\left(\mu-1\right)}{\mu}}+\rho_{c}\left(\left(Br^{2}+\frac{1}{2}\right)e^{2Br^{2}}+\frac{C_{2}}{2}\right)B(\mu-1)\right)\left(\mu-\frac{1}{3}\right)\right). \tag{7}\]
\(C_{2}\) is another integration constant that may be determined from the junction conditions.
The shape function is plotted along the radial distance in Fig. 1 and it turns out to represent the shape of the wormhole quite well. The desired properties of the shape function to successfully describe a wormhole are also satisfied. At the throat radius \(r_{0}=0.5\,\mathrm{km}\), the shape function has an identical value and the ratio \(\frac{b(r)}{r}\) is well maintained to be less than unity
for all radial distances within the wormhole surface greater than the throat radius.
### Validity of NEC
The geometrical modifications arising due to LQC can be effectively expressed as modifications in the matter sector, replacing the energy-momentum tensor of the perfect fluid matter source by an effective energy-momentum tensor of the form \(T_{\nu}^{\mu(eff)}=diag(\rho^{eff},-p^{eff},-p^{eff},-p^{eff})\), the time component of which is the effective energy density expressed as
\[\rho^{eff}=8\pi\left(\rho\left(1-\frac{\rho}{\rho_{c}}\right)\right), \tag{8}\]
and the isotropic spatial components turn out to have the form
\[p^{eff}=8\pi\left(p-\frac{\rho\left(2p+\rho\right)}{\rho_{c}}\right). \tag{9}\]
Summing up the effective energy density and effective pressure
\[\rho_{eff}+p_{eff} = -\frac{16C_{1}\left(\mu+1\right)\pi}{\rho_{c}}\left(C_{1}\mathrm{e}^{-\frac{(\mu+1)Br^{2}}{2\mu}}-\frac{\rho_{c}}{2}\right)\mathrm{e}^{-\frac{(\mu+1)Br^{2}}{2\mu}}. \tag{10}\]
We represent a plot of the variation of the summed up effective energy density and effective pressure along the radial expanse of the wormhole. As we see from Fig. 2, the sum is always a negative quantity within the wormhole and hence the NEC is effectively violated, although we find \(\mu>-1\) from the junction conditions as we shall see in the following subsection. So, we can say that it is the effective EoS parameter \(\mu_{eff}\) arising from UV corrections that violates NEC as \(\mu_{eff}<-1\).
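The step from Eqs. (8) and (9) to Eq. (10) can be checked symbolically; the sum factorizes as \(8\pi(1+\mu)\rho\left(1-2\rho/\rho_{c}\right)\), so for \(\mu>-1\) it turns negative precisely where \(\rho>\rho_{c}/2\), i.e. where the quadratic LQC correction dominates. A short sympy sketch of this check:

```python
import sympy as sp

r, B, mu, C1, rho_c = sp.symbols('r B mu C_1 rho_c', real=True)
rho = C1*sp.exp(-(mu + 1)*B*r**2/(2*mu))          # Eq. (6)
p = mu*rho                                         # linear EoS

rho_eff = 8*sp.pi*rho*(1 - rho/rho_c)                    # Eq. (8)
p_eff   = 8*sp.pi*(p - rho*(2*p + rho)/rho_c)            # Eq. (9)

target = 8*sp.pi*(1 + mu)*rho*(1 - 2*rho/rho_c)          # factorized form of Eq. (10)
print(sp.simplify(rho_eff + p_eff - target))             # -> 0
```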
### The junction conditions
The spacetime exterior to the wormhole surface is a vacuum and can be described by the Schwarzschild line element which has the well known form
\[ds^{2} = \left(1-\frac{2M}{r}\right)dt^{2}-\left(1-\frac{2M}{r}\right)^{- 1}dr^{2} \tag{11}\] \[-r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),\]
where \(M\) is the total mass of the wormhole.
The presence of matter at the wormhole surface leads to an extrinsic discontinuity resulting in a non-zero surface energy density and surface pressure. The wormhole surface behaves like a junction between the interior of the wormhole and the exterior Schwarzschild space-time in order to make the wormhole space-time geodesically complete. Thus, the junction conditions due to Israel and Darmois [62; 63] are applicable at the wormhole surface, resulting in continuity of the metric potential across it. However, this does not always guarantee continuity of the derivative of the metric potential across the surface. We compute the surface density and surface pressure using the junction conditions.
The intrinsic surface stress energy tensor is found to have the form \(S_{ij}=diag\left(\Sigma,-P,-P,-P\right)\) that can be derived from the Lanczos equation [64; 65; 66; 67].
Figure 1: Variation of the shape function with respect to \(r\)
Figure 2: Variation of the NEC with respect to \(r\)
In the most general form, \(S_{ij}\) is defined using the Lanczos equation
\[S_{j}^{i}=-\frac{1}{8\pi}(\kappa_{j}^{i}-\delta_{j}^{i}\kappa_{k}^{k}), \tag{12}\]
where discontinuity in the extrinsic curvature across the surface is given by
\[\kappa_{ij}=\kappa_{ij}^{+}-\kappa_{ij}^{-}, \tag{13}\]
such that \(+\) and \(-\) implies the space-times interior and exterior to the wormhole surface. The second fundamental form can be obtained from the relation
\[\kappa_{ij}^{\pm}=-n_{v}^{\pm}\bigg{[}\frac{\partial^{2}X_{v}}{\partial\xi^{i} \partial\xi^{j}}+\Gamma_{\alpha\beta}^{\nu}\frac{\partial X^{\alpha}}{\partial \xi^{i}}\frac{\partial X^{\beta}}{\partial\xi^{j}}\bigg{]}\big{|}_{S}, \tag{14}\]
where \(n_{v}^{\pm}\) denotes the normal vectors of unit magnitude defined as
\[n_{v}^{\pm}=\pm\left|g^{\alpha\beta}\frac{\partial f}{\partial X^{\alpha}} \frac{\partial f}{\partial X^{\beta}}\right|^{-\frac{1}{2}}\frac{\partial f}{ \partial X^{\nu}}. \tag{15}\]
We consider \(n^{v}n_{v}=1\), while the intrinsic coordinate at the surface of the wormhole is represented by \(\xi^{i}\) and satisfies the parametric equation \(f(x^{\alpha}(\xi^{i}))=0\).
The surface energy density can be computed to have the form
\[\Sigma = -\frac{1}{2\pi R}\bigg{[}\sqrt{e^{\lambda}}\bigg{]}^{+}_{-}= \frac{1}{2\pi R}\left(\sqrt{1-\frac{2M}{R}}-\sqrt{1-\frac{6e^{-2BR^{2}}}{B\rho_{c}HFG}\left(-\frac{8\pi\,C_{1}\mu^{2}\rho_{c}He^{\frac{BR^{2}H}{2\mu}}}{3}+\left(4\mu{C_{1}}^{2}\pi(2\mu+1)e^{\frac{BR^{2}H}{\mu}}+\rho_{c}\left(\frac{G}{2}e^{2BR^{2}}+\frac{C_{2}}{2}\right)BH\right)\frac{F}{3}\right)}\right). \tag{16}\]
The surface pressure turns out to be given by
\[\mathcal{P} = \frac{1}{16\pi\,R}\bigg{[}\bigg{(}\frac{2f+f^{\prime}R}{\sqrt{f} }\bigg{)}\bigg{]}^{+}_{-}=\frac{6e^{-2BR^{2}}}{\pi\,G^{2}FB\,R^{3}H\rho_{c}} \bigg{(}\frac{HG^{2}e^{2BR^{2}}}{4}\bigg{(}MR-\frac{R^{2}}{2}-\frac{M}{2} \bigg{)}\frac{B\rho_{c}F}{3} \tag{17}\] \[+\frac{R\,R(3\mu+1)}{2}-\frac{\mu}{2}\bigg{)}{C_{1}}^{2}\bigg{(} \mu+\frac{2\mu}{2}\bigg{)}\pi e^{\frac{BR^{2}\mu}{\mu}}+HC_{2}B\rho_{c}\bigg{(} B^{2}R^{3}-\frac{BR^{2}}{2}+BR-\frac{1}{4}\bigg{)}\bigg{)}\bigg{)}\bigg{)}\] \[\times\frac{1}{\sqrt{\frac{1}{e^{2BR^{2}}B\rho_{c}HFG}\bigg{(}16 e^{\frac{3BR^{2}}{2}}\pi\mu^{2}C_{1}\rho_{c}He^{\frac{BR^{2}}{2\mu}}-F\bigg{(}BC_{2} \rho_{c}He^{\frac{BR^{2}}{2\mu}}+16\bigg{(}\mu+\frac{1}{2}\bigg{)}\pi\,C_{1} ^{2}e^{BR^{2}}\mu\bigg{)}\bigg{)}e^{-\frac{BR^{2}}{\mu}}}}\frac{1}{M^{\prime}}\]
where \(G=(2BR^{2}+1)\), \(F=(3\mu-1)\), \(H=(\mu-1)\), \(M^{\prime}=\sqrt{1-\frac{2M}{R}}\).
The wormhole space-time being static, the surface density and pressure shall vanish at the surface [31; 41], the vanishing surface density giving the boundary condition
\[b(r)|_{r=R}=2M. \tag{18}\]
The matching conditions to obtain the other unknown model parameters also appear from the junction conditions, where \(g_{tt}|_{int}=g_{tt}|_{ext}\) and \(\frac{\partial g_{rr}}{\partial r}|_{int}=\frac{\partial g_{rr}}{\partial r}|_{ext}\) at the surface of the wormhole \(r=R\). So, we have three conditions in all.
We choose physically relevant values of the model parameters \(B=0.006\) km\({}^{-2}\), \(\rho_{c}=0.41\) m\({}^{4}\) and \(M=2.496\)\(M_{\odot}\) and, making use of the boundary and matching conditions, we obtain the unknown model parameters as \(\mu=-0.9\), \(C_{1}=0.4756683923\) and \(C_{2}=0.150492837\). These values have been used to construct all the plots in this paper. As we obtain \(\mu>-1\), the SEC is violated by physical matter but not the NEC. The NEC is however violated by the effective matter, as the quadratic corrections make \(\rho_{eff}+p_{eff}<0\), implying \(\mu_{eff}<-1\).
### Tidal acceleration
The tidal acceleration experienced by the traveller at the throat of the wormhole must have both its radial and tangential components restricted to a reasonable value, which is usually considered to be the acceleration due to gravity on the earth. This shall ensure that the traveller crosses the
wormhole throat safely and also we can obtain an upper limit on the velocity of the traveller while traversing the throat from the tangential acceleration.
The tidal acceleration along the radial direction is expressed by computing the \(|R_{rtrt}|\) component of the Riemann tensor, which for the wormhole metric turns out to have the form
\[|R_{rtrt}|=\left|\left(1-\frac{b}{r}\right)\left[\frac{v^{\prime\prime}}{2}+\frac{v^{\prime 2}}{4}-\frac{b^{\prime}r-b}{2r(r-b)}\cdot\frac{v^{\prime}}{2}\right]\right|\leq g_{earth}. \tag{19}\]
The condition is satisfied by our wormhole model.
The tidal acceleration along the tangential direction is found by computing two of the Riemann tensor components \(|R_{\theta\theta\theta t}|\) and \(|R_{\theta r\theta r}|\) and has the form
\[\gamma^{2}|R_{\theta t\theta t}|+\gamma^{2}v^{2}|R_{\theta r\theta r}| = \left|\frac{\gamma^{2}}{2r^{2}}\left[v^{2}\left(b^{\prime}-\frac{b}{r}\right)+\left(r-b\right)v^{\prime}\right]\right|\leq g_{earth}, \tag{20}\]
where \(\gamma=\frac{1}{\sqrt{1-v^{2}}}\) is the Lorentz factor, \(v\) being the velocity with which the traveller traverses the wormhole throat. It seems reasonable to assume that the velocity of the traveller at the throat is much smaller than the velocity of light (\(v\ll 1\)), implying a Lorentz factor \(\gamma\simeq 1\). Making use of the assumed redshift function and the obtained shape function for the wormhole, the velocity of the traveller at the throat can be limited as
\[v\leq 0.099218371\sqrt{g_{earth}}, \tag{21}\]
which is a realistic limit that we obtain. The traversability of the wormhole can thus be ensured.
### Linearized stability analysis
A linearized stability analysis is performed around the throat to ensure that our wormhole model is stable at the throat and remains traversable. For doing so, we consider the throat radius to be a function of proper time, \(r_{0}=x(\tau)\). This consideration gives the surface density and surface pressure, having the form
\[\Sigma=-\frac{1}{2\pi x}\sqrt{f(x)+\dot{x}^{2}}, \tag{22}\]
and
\[\mathcal{P}=\frac{1}{8\pi}\frac{f^{\prime}(x)}{\sqrt{f(x)}}-\frac{\Sigma}{2}, \tag{23}\]
where the function \(f(x)=1-\frac{2M}{x}\), the parameter \(M\) representing the wormhole mass.
The equation of motion can be obtained making use of the energy-momentum conservation as
\[\dot{x}^{2}+V(x)=0, \tag{24}\]
where, the effective potential \(V(x)\) is constructed from the surface energy density, having the form
\[V(x)=f(x)-[2\pi x\,\Sigma(x)]^{2}. \tag{25}\]
A linearization is considered around the static solution \(x_{0}\) which we assume for the equation of motion given by Eq. (24).
On expanding the constructed potential up to second order around the assumed solution \(x_{0}\) using Taylor series, one can get
\[V(x) = V(x_{0})+V^{\prime}(x_{0})(x-x_{0})+\frac{1}{2}V^{\prime\prime}(x_{0})(x-x_{0})^{2}+O[(x-x_{0})^{3}], \tag{26}\]
For stability at the throat, the constructed effective potential must have a minimum at the throat, which demands \(V^{\prime}(x_{0})=0\) and \(V^{\prime\prime}(x_{0})>0\). The parameter \(\beta=\frac{\delta\mathcal{P}}{\delta\Sigma}\) is introduced, in terms of which we shall express the condition for a minimum of the potential, involving its second derivative, as an inequality. The second derivative of the potential with respect to \(x\) can be expressed in terms of the newly introduced parameter \(\beta\) as
\[V^{\prime\prime}(x) = f^{\prime\prime}(x)-8\pi^{2}\left[(\Sigma+2\mathcal{P})^{2}+2\,\Sigma(\Sigma+\mathcal{P})(1+2\beta)\right]. \tag{27}\]
This provides us with the stability condition at the throat in terms of \(\beta\) as
\[\beta<\frac{\frac{f^{\prime\prime}(x_{0})}{8\pi^{2}}-(\Sigma+2 \mathcal{P})^{2}-2\,\Sigma(\Sigma+\mathcal{P})}{4\,\Sigma(\Sigma+\mathcal{P})}. \tag{28}\]
Figure 3: Plot of \(\beta\) vs \(x_{0}\)
Using the relations for \(\Sigma\) and \(\mathcal{P}\), Eq. (28) can be written in the simplified form as
\[\beta<\frac{x_{0}^{2}(f_{0}^{\prime})^{2}-2x_{0}^{2}f_{0}^{\prime\prime}f_{0}}{4 f_{0}(x_{0}f_{0}^{\prime}-2f_{0})}-\frac{1}{2}. \tag{29}\]
For the wormhole we have constructed in the LQC scenario, the parameter \(\beta\) turns out to have the value
\[\beta=\frac{10(2\pi-1)m^{2}+3(-4\pi+3)mr-2r^{2}}{8\pi r\,(-r+2m)(-r+3m)}. \tag{30}\]
We have plotted the variation of the parameter \(\beta\) along \(x_{0}\) in Fig. 3. From the stability condition obtained using the minima of the constructed potential, the stable regions of the wormhole have been marked as regions 1-4 in the figure.
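To illustrate how such stability regions can be read off numerically, the following minimal sketch (not the authors' code) evaluates the model value of \(\beta\) from Eq. (30) and the bound of Eq. (29) on a grid of throat radii; the mass parameters below are assumptions chosen only to exercise the formulas, not the values fixed by the junction conditions.

```python
import numpy as np

# Assumed illustrative parameters (not the paper's fitted values).
M = 1.0          # wormhole mass entering f(x) = 1 - 2M/x
m = M            # parameter m appearing in Eq. (30), taken equal to M here

# Sample throat radii away from x0 = 2m and x0 = 3m, where Eq. (30) is singular.
x0 = np.linspace(3.2 * M, 10.0 * M, 400)

# f(x) = 1 - 2M/x and its derivatives, as defined below Eq. (23)
f   = 1.0 - 2.0 * M / x0
fp  = 2.0 * M / x0**2
fpp = -4.0 * M / x0**3

# Right-hand side of the simplified stability condition, Eq. (29)
beta_bound = (x0**2 * fp**2 - 2.0 * x0**2 * fpp * f) / (4.0 * f * (x0 * fp - 2.0 * f)) - 0.5

# Model value of beta for the LQC wormhole, Eq. (30), with r -> x0
beta_model = (10.0 * (2.0 * np.pi - 1.0) * m**2
              + 3.0 * (3.0 - 4.0 * np.pi) * m * x0
              - 2.0 * x0**2) / (8.0 * np.pi * x0 * (2.0 * m - x0) * (3.0 * m - x0))

stable = beta_model < beta_bound          # mask of throat radii satisfying Eq. (29)
print("fraction of sampled x0 that is stable:", stable.mean())
```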
### Acceleration and nature of the wormhole
It is interesting to compute the radial component of the four-acceleration for a static observer just outside the wormhole. If this component is positive, the wormhole features an attractive geometry: an outward directed radial acceleration is required to keep the static observer from being pulled into the wormhole. Likewise, if the radial component of the four-acceleration is negative, the wormhole geometry is repulsive, implying the necessity of an inward directed radial acceleration on the static observer to prevent being pushed away from the wormhole.
A test particle initially at rest has the geodesic equation in the radial direction given by
\[\frac{d^{2}r}{d\tau^{2}}=-\Gamma^{r}_{tt}\!\left(\frac{dt}{d\tau}\right)^{2}=-a^{r}, \tag{31}\]
where \(a^{r}\) is the radial 4-acceleration.
The four-velocity of a static observer near the wormhole is
\[U^{\mu}=\frac{dx^{\mu}}{d\tau}=(e^{-\frac{\nu(r)}{2}},0,0,0), \tag{32}\]
where \(\tau\) denotes proper time as in the previous subsection.
Alternatively, the four-acceleration \(a^{\mu}\) can be computed from the four-velocity as \(a^{\mu}=U^{\mu}_{;\nu}U^{\nu}\), where the radial component of the 4-acceleration is expressed in terms of the metric potentials as
\[a^{r}=\frac{\nu^{\prime}}{2}\!\left(1-\frac{b(r)}{r}\right). \tag{33}\]
The radial component of the 4-acceleration for a static observer for the LQC constructed wormhole is given as
\[a^{r} = Br\left(1-\frac{2e^{-2Br^{2}}}{B\rho_{c}HF\,G}\left(2\mu{C_{1}}^ {2}\pi(2\mu+1)e^{\frac{Br^{2}H}{\mu}}\rho_{c}\right.\right.\] \[\times\left.\left.\left(Ge^{2Br^{2}}+C_{2}\right)BHF-8\pi\,C_{1} \mu^{2}\rho_{c}He^{\frac{Br^{2}H}{2\mu}}\right)\right)v^{2}.\]
The variation of the radial 4-acceleration across the wormhole has been plotted in Fig. 4. The radial acceleration turns out to be positive for all values of \(r\), implying that the wormhole constructed in the LQC scenario features an attractive geometry, requiring an outward directed radial acceleration on the static observer to prevent being pulled into the wormhole.
## 3 Discussions and conclusion
In this paper we have attempted to construct a traversable wormhole in the UV corrected framework of LQC. The classical EFE are modified by effective quadratic corrections in stress energy in the LQC scenario in an attempt to apply the central effects of Loop quantum gravity. The effective scenario is also more fundamental in understanding the spacetime geometry, and the strong curvature singularities appearing in GR can be averted without ambiguities arising from quantum mechanical interpretations. We solve the modified EFE for a static, spherically symmetric metric describing a wormhole spacetime, where the redshift function is assumed to be given by the Kuchowicz metric function, which is well behaved in the vicinity of the wormhole and has been used to construct traversable wormholes on the RSII braneworld [41]. It is also found to work well in the case of regular objects involving
Figure 4: Plot of radial component of acceleration with respect to \(r\)
large space-time curvatures [68; 69]. In relativistic wormhole solutions with modifications to the source sector involving exotic matter components, it often becomes impossible to obtain analytical solutions unless a constant redshift function is assumed [29; 31]. This problem does not arise in the UV corrected scenario, where we consider a radially varying redshift function. The matter distribution at the wormhole throat is assumed to obey a linear EoS given by \(p=\mu\rho\). From the modified field equations, the shape function of the wormhole is obtained. As expected, there is a dependence on the EoS parameter and the critical density, besides the other model parameters. On plotting the variation of the shape function with the radial distance, the plot turns out to represent the shape of the wormhole quite well, where the values of the model parameters are used as obtained from the junction conditions for generating the plot.
A model-independent kinematical constraint on black hole bounce, implying shell bounce in an untrapped region (either inside the inner horizon or outside the outer horizon), has been developed [70]. Extensions of the Oppenheimer-Snyder collapse in the form of black-to-white hole bounce have also been studied by matching the exterior static geometry with a spatially closed FRW interior characterized by a bounce. A consistent model of black-to-white hole bounce applying the techniques of LQC has been constructed. So, it is established that LQC corrections can be used to model black-to-white hole bounces consistently. However, it may still be of some interest to study the possibility of existence of static traversable Lorentzian wormholes in the LQC framework.
Causality should not be violated in a consistent wormhole even if the wormhole harbors traversability [71; 72]. A key inconsistency or instability in models of traversable wormholes may appear from the apparent violation of causality owing to the possibilities of faster-than-light travel or travelling backwards in time. This may depend on what type of matter finds relevance in opening up the wormhole throat and the stability of the wormhole throat. However, the type of static Lorentzian wormhole that we have obtained in the LQC setup is a "long" wormhole, which implies a shorter travelling time in the ambient space surrounding the wormhole structure than through it, and does not facilitate the formation of closed timelike curves. Moreover, the upper limit obtained from our analysis on the velocity of a traveller traversing the wormhole while keeping the tangential tidal forces within the desired limit is considerably smaller than the velocity of light. Possible causality violation may lead to instabilities both at the classical and quantum level. A linearized stability analysis performed on the wormhole throat with the effective potential formalism indicates the stability of the throat. Also, most importantly, we have not required any exotic matter violating the Null Energy condition to sustain an open wormhole throat due to the effective quantum corrections appearing from LQC. So, there is no real possibility of faster-than-light travel or backward time travel, thus ensuring that causality is not violated. However, a better understanding of the spacetime structure in the spherically symmetric setup in LQC may help us settle this issue more comprehensively in the near future.
The validity of the NEC is checked for the effective matter distribution. Although the physical matter at the wormhole throat does not violate the NEC, _the NEC is effectively violated due to the UV correction terms arising in the modified EFE which are quadratic in stress energy. So, we do not require exotic matter to construct the traversable wormhole._ The obtained value of the EoS parameter from the junction conditions implies that the physical matter at the throat must violate the SEC in order to make the wormhole traversable. As discussed earlier, such matter can cause the universe to accelerate. This leads to the interesting result that _dark energy can be harnessed into a wormhole in the framework of LQC, such that the EoS parameter inside the throat of the wormhole \(\mu>-1\) but \(\mu_{eff}<-1\) due to the LQC corrections._ No additional ingredient is required in the energy budget of the universe to construct a traversable wormhole. Moreover, as dark energy is the dominant component of the energy budget at present, any wormhole that is formed in the present epoch does not require its mass to be minimized as in the standard relativistic context. More importantly, the quantum instabilities [76] arising from physical matter violating the NEC can be avoided. For the observed acceleration of the universe, it is indeed required that \(-1.61<\mu<-0.78\)[73; 74; 75], but the pathologies associated with such violation of the NEC using the matter sector [76] are difficult to tackle. So, a geometric modification is also of interest at the IR scale where \(\mu_{eff}<-1\) without violation of the NEC by physical matter, but the quadratic correction term will not remain significant at these scales due to low energy densities, and some alternative mechanism must prevail.
The tidal acceleration obtained at the throat is within desirable limits both in the radial and tangential directions and the consequent upper limit obtained on the velocity of the traveller traversing the wormhole throat is a realistic one. So, any traveller trying to use the wormhole as a shortcut for space-time travel does not get ripped apart at the throat of the wormhole due to infinitely large tidal forces in either the radial or tangential direction. One can think that despite the matter at the throat not violating the NEC (as \(\mu>-1\)), _the tidal forces and hence the Weyl curvature tensor do not diverge due to the effect of LQC which prevents the tidal force from increasing beyond a certain limit, thus resolving the singularity._ This ensures the traversability of the wormhole. However, we can say that LQC is more effective in resolving the strong curvature singularities that arise from diverging energy densities, as it can do so without violating any of the energy conditions. The reason behind this may be that for a diverging spacetime curvature due to infinitely
large energy densities like the initial singularity of the universe or one at the center of a Schwarzschild black hole, the Weyl curvature vanishes rather than diverging, contrary to the behaviour at the wormhole throat. So, these singularities can be completely resolved in a LQC context (which has an inherent pressure isotropy in the UV corrected EFE) without matter violating any of the energy conditions. However, to resolve the diverging Weyl curvature at the wormhole throat without violating any of the energy conditions, the presence of an inherent anisotropy in the UV-corrected EFE may play a significant role as indicated by braneworld (which has an inherent pressure anisotropy due to contribution of the bulk Weyl tensor projected on the brane) wormholes, which can be constructed from matter obeying all the energy conditions [41].
Performing a linearized stability check, we can say that the wormhole would not collapse at the throat due to the development of any instability and remains traversable. In order to perform the check, an effective potential formalism is applied, where the radius of the wormhole throat is assumed to be a function of the proper time and an effective potential is constructed from the surface density of the wormhole, which is in turn obtained from the junction conditions at the wormhole surface. The equation of motion in terms of this effective potential can be obtained from the conservation of stress-energy. A parameter \(\beta\) is introduced involving the surface pressure and surface density. The stability condition essentially represents a minimum of the potential, obtained from the vanishing of its first derivative and the second derivative being positive at the throat. The condition for this minimum is expressed as an inequality in terms of the parameter \(\beta\) that is plotted to obtain the regions of stability as denoted. Since the radial acceleration remains positive for radial distances within the wormhole, it can be said to feature an attractive geometry.
Although a wormhole has not been detected to date, there is the possibility of detecting one in the near future. A number of suggestions have been proposed in the literature to detect one [77; 78; 79; 80; 81; 82]. This can be done by studying any unexplained effect on the orbital motion of stars near black holes that can harbor wormholes [83]. Also, micro-lensing effects of wormholes have been suggested to resemble gamma ray bursts [84]. Emission of radiation pulses remains another interesting possibility [85]. Wormholes that are constructed from phantom matter violating the NEC with a particular EoS can be distinguished from black holes via the process of quasinormal ringing [86]. If a wormhole is indeed detected in the near future, then detailed observational studies can also throw light on the actual nature of the UV corrected gravity due to the large space-time curvatures involved. For the time being we can conclude by stating that _LQC not only resolves the curvature singularities due to diverging energy densities, leading to non-singular bouncing black holes_[87] _but also due to diverging tidal forces, leading to traversable wormholes without any NEC violating exotic matter which may result in quantum instabilities_.
###### Acknowledgements.
MK is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing the Visiting Associateship under which a part of this work was carried out. RS is thankful to the Govt. of West Bengal for financial support through SVMCM scheme. SG is thankful to the Directorate of Legal Metrology under the Department of Consumer Affairs, West Bengal for their support.
## Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: This is a theoretical work and no data has been used to arrive at any of the results.]
## Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit [http://creativecommons.org/licenses/by4.0/](http://creativecommons.org/licenses/by4.0/).
Funded by SCOAP\({}^{3}\). SCOAP\({}^{3}\) supports the goals of the International Year of Basic Sciences for Sustainable Development.
|
2309.04345 | General Relativistic Polarized Proca Stars | Massive vector fields can form spatially localized, non-relativistic,
stationary field configurations supported by gravitational interactions. The
ground state configurations (p-solitons/vector solitons/dark photon
stars/polarized Proca stars) have a time-dependent vector field pointing in the
same spatial direction throughout the configuration at any instant of time, can
carry macroscopic amounts of spin angular momentum, and are spherically
symmetric and monotonic in the energy density. In this paper, we include
general relativistic effects, and numerically investigate the stability of
compact polarized Proca stars (linear and circularly polarized) and compare
them to hedgehog-like field configurations (with radially pointing field
directions). Starting with approximate field profiles of such stars, we evolve
the system numerically using 3+1 dimensional numerical simulations in general
relativity. We find that these initial conditions lead to stable
configurations. However, at sufficiently large initial compactness, they can
collapse to black holes. We find that the initial compactness that leads to
black hole formation is higher for circularly polarized stars (which carry
macroscopic spin angular momentum), compared to linearly polarized ones, which
in turn is higher than that for hedgehog configurations. | Zipeng Wang, Thomas Helfer, Mustafa A. Amin | 2023-09-08T14:16:28Z | http://arxiv.org/abs/2309.04345v1 | # General Relativistic Polarized Proca Stars
###### Abstract
Massive vector fields can form spatially localized, non-relativistic, stationary field configurations supported by gravitational interactions. The ground state configurations (p-solitons/vector solitons/dark photon stars/polarized Proca stars) have a time-dependent vector field pointing in the same spatial direction throughout the configuration at any instant of time, can carry macroscopic amounts of spin angular momentum, and are spherically symmetric and monotonic in the energy density. In this paper, we include general relativistic effects, and numerically investigate the stability of compact polarized Proca stars (linear and circularly polarized) and compare them to hedgehog-like field configurations (with radially pointing field directions). Starting with approximate field profiles of such stars, we evolve the system numerically using 3+1 dimensional numerical simulations in general relativity. We find that these initial conditions lead to stable configurations. However, at sufficiently large initial compactness, they can collapse to black holes. We find that the initial compactness that leads to black hole formation is higher for circularly polarized stars (which carry macroscopic spin angular momentum), compared to linearly polarized ones, which in turn is higher than that for hedgehog configurations.
## I Introduction
Massive vector fields (or dark photons) can constitute all or part of dark matter. If their mass is \(\lesssim 10\,\)eV, the occupation numbers in typical astrophysical/cosmological settings are large enough to admit a classical field description. Their early universe production (for example, [1; 2; 3; 4; 5; 6; 7; 8]), astrophysical/cosmological phenomenology, as well as direct and indirect detection strategies are being extensively explored (see [9] for a recent review). Numerical simulations investigating the nonlinear (and non-relativistic) gravitational dynamics of such fields in an astrophysical setting have been initiated recently [10; 11; 12; 13].
Similar to the case of scalar fields, in the non-relativistic limit, one expects massive vector fields to form spatially localized, non-relativistic, stationary field configurations (solitons or Boson stars) supported by gravitational interactions. At any instant of time, such polarized Proca stars (also referred to as \(p\)-solitons, vector solitons, dark photon stars) have a spatially constant orientation of the field polarization throughout the configuration [14; 15]. Depending on the polarization, they can carry macroscopic amounts of spin angular momentum [10; 15]. They are spherically symmetric in energy density, but not in the field configuration (but are node-free). Non-relativistic (fractionally) polarized solitons have been shown to form generically from cosmological, as well as astrophysical, initial conditions [10; 11; 12; 13].
General relativistic effects become necessary to consider if the vector field configurations become sufficiently compact.1 Such an analysis is relevant for understanding the detailed nature of the compact configurations, including their maximal compactness, intrinsic spin, stability and deformability. These properties can be critical when considering (mergers of) such compact objects as gravitational wave sources [16; 17]. 2 In this paper, we study such polarized Proca stars within full numerical relativity.
Footnote 1: Current simulations show that solitons forming from generic initial conditions are indeed fractionally polarized (with macroscopic spin) [12], and are typically too diffuse to warrant studying relativistic corrections. However, as such solitons accrete fields from their surrounding, they can become increasingly more compact.
Footnote 2: The merger of scalar boson stars and their gravitational wave emission has been studied extensively in the literature, see for example [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
We note that the polarized Proca stars that we focus on here are unlike hedgehog-like field configurations. Hedgehog-like configurations have spherically symmetric field configurations with spatially varying field polarization, and have a node in their field profiles at the origin. Hedgehog configurations have been studied in detail including general relativistic corrections, assisted by the spherical symmetry of the energy density and their field configuration [31; 32]. However, such hedgehogs are higher energy states of the field for a given mass compared to the polarized Proca stars/p-solitons mentioned above [14; 15]. Unlike the polarized Proca stars, hedgehogs have been shown to form only under a special set of spherically symmetric (field) initial conditions and evolution [33].
As far as general relativistic polarized Proca stars are concerned, the \(m=1\) case was studied in [16; 34; 31] (for the complex vector field).3 This case is similar to
circularly polarized solitons/Proca stars with macroscopic angular momentum in [15]. The analysis in [15] (which includes circular, linear and fractionally polarized cases) is a non-relativistic analysis, where the underlying relativistic vector field can be real valued. To the best of our knowledge, general relativistic linearly polarized Proca stars (with negligible angular momentum) have not been studied in the literature before.
In this study, we explore the behavior of complex-valued polarized Proca stars as their compactness is increased, which allows us to estimate a rough lower bound on the maximum compactness of these stars. We numerically investigate the stability of both the linear and the circular polarization states, and compare them to hedgehog-like field configurations when general relativistic effects are included.4
Footnote 4: The maximal compactness for the hedgehog configurations and the circularly polarized ones was already provided in [31]. The maximal compactness for the linearly polarized one, however, has not been provided.
We work in units where \(\hbar=c=G_{N}=1\). In the captions we use the Planck mass \(M_{\rm pl}\equiv 1/\sqrt{G_{N}}\), and occasionally even \(\hbar\), to make the units explicit for clarity. We will use Greek letters (\(\mu,\,\nu,\ldots\)) to represent four-dimensional indices, and Latin letters (\(i\),\(j\),\(\ldots\)) to represent three-dimensional spatial indices. We work with the \(-+++\) convention for the metric.
The rest of the paper is organized as follows. In section II, we provide the underlying model for complex Proca fields in general relativity. In this section we also describe the numerical relativity framework, the construction of initial data, and provide some details of the numerical set-up. In section III, we present and discuss the results of our simulations. We summarize our results and provide a taste of their implications in section IV. Convergence tests and the level of constraint violations are discussed in an appendix.
## II Setup and numerical methods
### Proca field in general relativity
We consider a vector field \(X^{\alpha}\) in general relativity, with an action [31]
\[\mathcal{S}\!=\!\int d^{4}x\sqrt{-g}\left(\frac{R}{16\pi}-\frac{1}{2}m^{2}X_{ \alpha}\bar{X}^{\alpha}-\frac{1}{4}F^{\mu\nu}\bar{F}_{\mu\nu}\right). \tag{1}\]
Here, \(R\) is the Ricci scalar, \(g=\det(g_{\mu\nu})\), \(m\) is the mass of the vector field, \(\bar{X}^{\alpha}\) is the complex conjugate of the vector field \(X^{\alpha}\), \(F_{\mu\nu}=\partial_{\mu}X_{\nu}-\partial_{\nu}X_{\mu}\) is the field strength tensor, and \(\bar{F}_{\mu\nu}\) is its complex conjugate.
Extremizing the action Eq. (1) with respect to variations in \(g_{\mu\nu}\) yields the Einstein's field equation
\[G_{\mu\nu}=8\pi T_{\mu\nu}\,. \tag{2}\]
Here, \(G_{\mu\nu}\) is the Einstein tensor, and
\[T_{\mu\nu}=\frac{1}{2}\left(F_{\mu\rho}\bar{F}_{\nu}{}^{\rho}+ \bar{F}_{\mu\rho}{F}_{\nu}{}^{\rho}\right)-\frac{1}{4}F_{\rho\gamma}\bar{F}^ {\rho\gamma}g_{\mu\nu}\\ +\frac{m^{2}}{2}\left(X_{\mu}\bar{X}_{\nu}+\bar{X}_{\mu}X_{\nu}-g _{\mu\nu}\bar{X}^{\rho}X_{\rho}\right) \tag{3}\]
is the stress-energy tensor associated with the Proca field. Similarly, extremizing the action Eq. (1) with respect to \(X_{\mu}\) leads to the Proca field equation
\[\nabla_{\mu}F^{\mu\nu}=m^{2}X^{\nu}\,. \tag{4}\]
The above equation with \(m\neq 0\), along with the antisymmetry of \(F^{\mu\nu}\), implies that the field \(X^{\nu}\) must satisfy the Proca constraint equation
\[\nabla_{\nu}X^{\nu}=0\,. \tag{5}\]
Eq. (2) and Eq. (4) govern the evolution of the Proca field and spacetime.
### 3+1 decomposition
Following [35], we foliate spacetime with spatial slices \(\Sigma\) (with metric \(\gamma_{ij}\)), and connect the slices with each other with a lapse function \(\alpha\) and shift vector field \(\beta^{i}\). The spacetime metric can then be written as
\[ds^{2}=-(\alpha^{2}-\beta^{i}\beta_{i})dt^{2}+2\beta_{i}dx^{i}dt+\gamma_{ij}dx ^{i}dx^{j}\,, \tag{6}\]
where \(\beta_{i}\equiv\gamma_{ij}\beta^{j}\).
The unit future-directed normal vector of \(\Sigma\) is \(n_{\mu}=(-\alpha,0,0,0)\), and \(P_{\mu}{}^{\nu}\equiv\delta_{\mu}{}^{\nu}+n_{\mu}n^{\nu}\) is the projection tensor that projects onto \(\Sigma\) [36]. The extrinsic curvature of \(\Sigma\) is \(K_{ij}=-\mathcal{L}_{n}\gamma_{ij}/2\) with \(\mathcal{L}_{n}\) being the Lie-derivative along the normal vector \(n_{\mu}\). A decomposition of the stress-energy tensor, Eq. (3), adapted to this foliation of spacetime is
\[\rho\equiv n_{\alpha}n_{\beta}T^{\alpha\beta},\ S_{i}\equiv-\gamma_{i\alpha}n _{\beta}T^{\alpha\beta},\ S_{ij}\equiv\!\gamma_{i\alpha}\gamma_{j\beta}T^{ \alpha\beta}. \tag{7}\]
The evolution equation for \(\gamma_{ij}\), \(K_{ij}\), \(\alpha\), and \(\beta^{i}\) can be obtained from the Einstein equation (2); see [37] for their explicit form.
We also decompose the vector field as [38]:
\[X_{\mu}=A_{\mu}+n_{\mu}\varphi\,, \tag{8}\]
where \(\varphi=-n^{\mu}X_{\mu}\) is the component of the vector field normal to the spatial slice [38], and \(A_{\mu}=P_{\mu}{}^{\nu}X_{\nu}\) are the components along spatial slice. The electric field associated with the Proca field is defined as:
\[E_{i}\equiv P_{i}{}^{\mu}n^{\nu}F_{\mu\nu}\,. \tag{9}\]
Under this decomposition, the Proca constraint, Eq. (5), becomes
\[\mathcal{C}_{\mathcal{E}}=D_{i}E^{i}-m^{2}\varphi=0\,. \tag{10}\]
Figure 1: Simulations of the three types of Proca stars. Top row: a hedgehog configuration generated with initial compactness \(\mathcal{C}\approx 0.04\) (\(\mu=0.04m\)) shown on the \(x\)-\(y\) plane at three different times. Middle row: a linearly polarized Proca star with \(\mathcal{C}\approx 0.06\) (\(\mu=0.06m\)) shown on the \(x\)-\(z\) plane. The Proca field vectors are polarized in the \(z\)-direction. Bottom row: a circularly polarized star with initial compactness \(\mathcal{C}\approx 0.08\) (\(\mu=0.10m\)) shown on the \(x\)-\(y\) plane. The energy density profiles, \(\rho\times(M_{\rm pl}m)^{-2}\), are shown as color plots. The black arrows show the direction and relative magnitude of the real part of the spatial Proca vector field \(\mathrm{Re}(A_{i})\). The black bars on top of each panel show the length scale of the plots in units of \(m^{-1}\). The time shown is in units of \(m^{-1}\). The (real part of) the vector field oscillates along the arrows in the top and middle panel, whereas it rotates in the bottom one (with period \(T=2\pi m^{-1}\)). Note that the time interval between snapshots is much longer than \(T\); the changes in density profiles due to perturbations happen on these longer timescales.
where \(D_{i}\) is the covariant derivative corresponding to \(\gamma_{ij}\). The Proca evolution equation, Eq. (4), yields [38]
\[\partial_{t}\varphi =-A^{i}D_{i}\alpha+\alpha(K\varphi-D_{i}A^{i}-Z)+\mathcal{L}_{ \beta}\varphi\,,\] \[\partial_{t}A_{i} =-\alpha(E_{i}+D_{i}\varphi)-\varphi D_{i}\alpha+\mathcal{L}_{ \beta}A_{i}\,,\] \[\partial_{t}E^{i} =\alpha(KE^{i}+D^{i}Z+m^{2}A^{i}+D^{k}D^{i}A_{k}-D^{k}D_{k}A^{i})\] \[+D^{j}\alpha(D^{i}A_{j}-D_{j}A^{i})+\mathcal{L}_{\beta}E^{i}\,, \tag{11}\] \[\partial_{t}Z =\alpha(D_{i}E^{i}+m^{2}\varphi-\kappa Z)+\mathcal{L}_{\beta}Z\,.\]
In the above equation, \(\mathcal{L}_{\beta}\) is the Lie derivative with respect to the shift vector, \(K=\gamma^{ij}K_{ij}\) is the mean curvature, and \(Z\) is an auxiliary variable introduced to keep \(\mathcal{C}_{\mathcal{E}}\) minimized during evolution [38; 39; 40].
### Initial data
In this section, we describe the construction of the initial data, which approximates the full-GR solutions for three types of Proca stars.
#### ii.3.1 Proca field
As initial data, we use the field profiles for stationary Proca stars in the non-relativistic regime, \(|\partial_{i}/m|\ll 1\). For details of this construction, see [15]. The three types of Proca stars under consideration have a spatial vector field
\[\mathbf{A}(\tilde{t},\mathbf{\tilde{r}})=e^{i\tilde{t}}\frac{\mu}{m}\begin{cases}f^{ \text{lin}}(\tilde{r})\hat{\mathbf{z}}&\text{linearly polarized},\\ f^{\text{cir}}(\tilde{r})\frac{\hat{\mathbf{x}}+i\hat{\mathbf{y}}}{\sqrt{2}}&\text{ circularly polarized},\\ f^{\text{hh}}(\tilde{r})\hat{\mathbf{r}}&\text{hedgehog}.\end{cases} \tag{12}\]
Here, \(\mu\) is the effective chemical potential with \(\mu/m\sim|\partial_{i}^{2}/m^{2}|\ll 1\) in the non-relativistic limit, and the rescaled coordinates \(\tilde{r}\equiv\sqrt{m\mu}r\), \(\tilde{t}\equiv(1-\mu/m)mt\). The profiles \(f^{\text{lin}}\), \(f^{\text{cir}}\) and \(f^{\text{hh}}\) are approximately given by
\[f^{\text{lin}}(\tilde{r}) =f^{\text{cir}}(\tilde{r})\approx\frac{1.94}{(1+0.073\tilde{r}^{2})^{4}}, \tag{13}\] \[f^{\text{hh}}(\tilde{r}) \approx\frac{0.76\tilde{r}}{(1+0.0096\tilde{r}^{2})^{16}}. \tag{14}\]
These are fitting formulae for the profiles. More accurate profiles can be obtained by numerically solving the corresponding profile equations [15]. Our fits deviate from these numerically obtained profiles by \(\sim 5\%\).
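As an illustration of how these profiles translate into initial field data, the following minimal sketch (not the production code; the value \(\mu/m=0.04\), the grid extent and the resolution are assumptions) samples Eqs. (12)-(14) at \(\tilde{t}=0\) on a Cartesian grid, in units with \(m=1\):

```python
import numpy as np

mu, m = 0.04, 1.0          # assumed effective chemical potential, in units m = 1

def f_lin(r_t):            # Eq. (13); the same fit is used for the circular polarization
    return 1.94 / (1.0 + 0.073 * r_t**2)**4

def f_hh(r_t):             # Eq. (14); hedgehog profile, with a node at the origin
    return 0.76 * r_t / (1.0 + 0.0096 * r_t**2)**16

# Cartesian grid (assumed extent and resolution) and rescaled radius
L, N = 300.0, 96
x1d = np.linspace(-L / 2, L / 2, N)
X, Y, Z = np.meshgrid(x1d, x1d, x1d, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
r_t = np.sqrt(m * mu) * r                 # tilde{r} = sqrt(m*mu) * r

amp = (mu / m) * f_lin(r_t)
# Linearly polarized star at tilde{t} = 0: field along z, Eq. (12)
A_lin = np.stack([np.zeros_like(amp), np.zeros_like(amp), amp])
# Circularly polarized star at tilde{t} = 0: (x_hat + i y_hat)/sqrt(2) polarization
A_cir = np.stack([amp / np.sqrt(2), 1j * amp / np.sqrt(2), np.zeros_like(amp)])
# Hedgehog star: radially pointing field; small offset avoids division by zero at the origin
A_hh = (mu / m) * f_hh(r_t) * np.stack([X, Y, Z]) / (r + 1e-12)
```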
To specify the electric field, Eq. (9) can be written as
\[E_{i}=\gamma^{-\frac{1}{3}}\left(\partial_{i}\varphi-\partial_{0}A_{i}\right)\,, \tag{15}\]
where \(\gamma\equiv\det(\gamma_{ij})\). On the initial slice, we use Eq. (12) to obtain \(\partial_{0}A_{i}\). We ignore the \(\partial_{i}\varphi\) term since it is suppressed by \(|\partial_{i}|/m\). This \(E^{i}=\gamma^{ij}E_{j}\) can then be used in Eq. (10) to obtain \(\varphi\).5
Footnote 5: We need not have ignored \(\varphi\), and could have solved for it using Eq. (10), together with the Hamiltonian and momentum constraint equations. We found that this procedure was not numerically stable.
#### ii.3.2 Spacetime
On the initial slice, the gauge functions are assumed to be trivial, \(\alpha=1\) and \(\beta^{i}=0\). The spatial metric is assumed to be conformally flat: \(\gamma_{ij}=\psi^{4}\delta_{ij}\), where \(\psi=[\det(\gamma_{ij})]^{\frac{1}{12}}\) is the conformal factor of the metric.
The fields from the previous subsection and the metric, must satisfy the Hamiltonian and momentum constraint
\[\mathcal{H}\equiv R-K_{ij}K^{ij}+K^{2}-16\pi\rho=0\,, \tag{16}\] \[\mathcal{M}^{i}\equiv D_{j}\left(K^{ij}-\gamma^{ij}K\right)-8\pi S ^{i}=0\,, \tag{17}\]
where \(R\) is the three-dimensional Ricci scalar of \(\gamma_{ij}\).
To solve the constraint equations, we follow the conformal-transverse-traceless (CTT) formalism (see [41] chap. 3 and appendix B). We decompose the extrinsic curvature as \(K_{ij}=\psi^{-2}\bar{A}_{ij}+\gamma_{ij}K/3\), where \(\bar{A}_{ij}\) is the trace-free part of the extrinsic curvature. We further assume zero mean curvature, \(K=0\), on the initial slice. We decompose Eqns. (16) and (17) as
\[\Delta\psi+\frac{1}{8}\psi^{-7}\bar{A}_{ij}\bar{A}^{ij} =-2\pi\psi^{5}\rho\,, \tag{18}\] \[(\Delta_{L}W)^{i} =8\pi\psi^{10}S^{i}\,, \tag{19}\]
where \(\Delta\psi=\partial^{k}\partial_{k}\psi\) is the flat Laplacian of the conformal factor, \(W^{i}\) is the vector potential of \(\bar{A}_{ij}\), and \((\Delta_{L}W)^{i}=\partial^{j}\partial_{j}W^{i}+\frac{1}{3}\partial^{i} \partial_{j}W^{j}\) is the flat vector Laplacian of \(W^{i}\).
We utilize a GRChombo-based elliptical solver that has adaptive mesh refinement support to solve Eqns. (18) and (19) with \(\rho\) and \(S_{i}\) given by Eq. (7) based on the vector field profiles. Additionally, the solver requires an initial guess for the conformal factor \(\psi\), which was set to be the conformal factor of the full-GR stationary hedgehog stars in [31]. This solver improves this initial guess iteratively, updating \(\psi\) and \(W^{i}\) each time to reduce the Hamiltonian and momentum constraints. At each iterative step, we update the \(E^{i}\) components of the field according to Eq. (15) (without \(\partial_{i}\varphi\)), and then update the \(\varphi\) component of the Proca field according to Eq. (10), to ensure that the Proca constraint is still satisfied under the updated conformal factor \(\psi\). Note that since the Proca field distribution is compact, we use the boundary conditions \(\psi=1\) and \(W^{i}=0\) and put the boundaries far from the Proca star.
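For orientation, a minimal sketch of such an iteration (in Python; this is not the GRChombo-based solver used here) is given below for the Hamiltonian constraint, Eq. (18), under the simplifying assumption \(\bar{A}_{ij}=0\); the grid, source \(\rho\), damping factor and iteration count are all illustrative assumptions.

```python
import numpy as np

def solve_psi(rho, h, n_iter=5000, damping=0.8):
    """Damped Jacobi iteration for the flat Laplacian equation
    Delta psi = -2*pi*psi**5*rho (Eq. (18) with bar{A}_ij = 0),
    with the outer boundary condition psi -> 1."""
    psi = np.ones_like(rho)
    for _ in range(n_iter):
        # average of the six nearest neighbours (flat Laplacian stencil)
        nb = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
              + np.roll(psi, 1, 1) + np.roll(psi, -1, 1)
              + np.roll(psi, 1, 2) + np.roll(psi, -1, 2)) / 6.0
        update = nb + (h**2 / 6.0) * 2.0 * np.pi * psi**5 * rho
        psi = (1.0 - damping) * psi + damping * update
        # re-impose the Dirichlet condition psi = 1 on the outer faces
        for axis in range(3):
            lo = [slice(None)] * 3; lo[axis] = 0
            hi = [slice(None)] * 3; hi[axis] = -1
            psi[tuple(lo)] = 1.0
            psi[tuple(hi)] = 1.0
    return psi
```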
For the densest simulated Proca star generated with \(\mu=0.10m\), we solve the equations with side length \(L=300m^{-1}\) with the number of points \(N=96\) on
the coarsest level. We add three additional refinement levels enclosing the star, with the finest resolution as \(\Delta=0.2m^{-1}\). Under these conditions, the procedure detailed above provides good convergence rates, with \(\mathcal{H}\) sufficiently small. In the Appendix, we demonstrate convergence for \(\mathcal{H}\) with different resolutions as an example. The momentum constraint \(\mathcal{M}^{i}\) and the Proca constraint \(\mathcal{C}_{\mathcal{E}}\) converge in a similar fashion.6
Footnote 6: However, we observed that, for denser initial vector profiles with parameter \(\mu>0.10m\), this procedure fails to converge.
For convenience, we call the Proca profiles detailed in II.3.1, along with the spacetime metric solved with Eq. (18) and Eq. (19), "constructed Proca stars".
A few comments are in order regarding our constructed Proca stars. We expect our constructed Proca stars to be non-stationary. This is because the non-relativistic profiles we use will deviate from the true relativistic solutions as the compactness is increased. This is currently unavoidable for us because, unlike for hedgehog-like Proca stars [31], constructing a stationary solution at high compactness for polarized stars is difficult due to the lack of spherical symmetry in the field configuration (and a likely deviation from spherical symmetry in the energy density). Furthermore, in our procedure, we used \(K=0\) and conformal flatness, ignored the transverse traceless part of the \(K_{ij}\) and chose trivial functions for \(\alpha\) and \(\beta^{i}\) (a numerical convenience), which might make the initially constructed Proca stars deviate even more from their Newtonian counterparts at low compactness. These shortcomings can be thought of as adding initial perturbations to the possible stationary solution for each of the three stars.
### Extraction of mass and angular momentum
Following [42, 36], we define the conserved mass (\(M\)) and the \(z\) component of the angular momentum (\(J_{3}\)) as
\[Q=Q_{0}+\int_{0}^{t}\mathcal{S}dt\,, \tag{20}\]
where \(Q=M,J_{3}\), and
\[Q_{0} \equiv\int_{\Sigma}d^{3}x\sqrt{\gamma}\,n_{\nu}\zeta^{\mu}T_{\mu} ^{\ \nu}, \tag{21}\] \[\mathcal{S} \equiv\int_{\Sigma}d^{3}x\sqrt{\gamma}\,\alpha T^{\mu}_{\ \nu}\nabla_{\mu}\zeta^{\nu}\,. \tag{22}\]
The above quantities differ for \(Q=M,J_{3}\) in the choice of \(\zeta^{\mu}\):
\[\zeta^{\mu}=\begin{cases}(1,0,0,0)&\text{for}\quad Q=M\,,\\ (0,-y,x,0)&\text{for}\quad Q=J_{3}\,.\end{cases} \tag{23}\]
The explicit expressions for \(M_{0}\) and \((J_{3})_{0}\) are given by
\[M_{0} =\int_{\Sigma}d^{3}x\sqrt{\gamma}\left(\alpha\rho-\beta_{j}S^{j}\right) \tag{24}\] \[(J_{3})_{0} =\int_{\Sigma}d^{3}x\sqrt{\gamma}\left(yS_{x}-xS_{y}\right) \tag{25}\]
where \(\rho\) and \(S_{i}\) are defined in Eq. (7).
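As a schematic illustration (not the extraction code actually used), Eqs. (24) and (25) can be evaluated as discrete sums over a uniform Cartesian grid; the function and array names below, and the assumption that the caller supplies the components with the appropriate index placement, are ours.

```python
import numpy as np

def conserved_charges(rho, S, alpha, beta_low, sqrt_gamma, X, Y, dx):
    """Discrete versions of Eqs. (24)-(25).

    rho, alpha, sqrt_gamma : 3D arrays of energy density, lapse and sqrt(det gamma)
    S        : tuple of the three momentum density components (Eq. (7))
    beta_low : tuple of the three shift components with lowered index
    X, Y     : coordinate arrays on the same grid; dx is the grid spacing
    """
    dV = sqrt_gamma * dx**3
    beta_dot_S = sum(b * s for b, s in zip(beta_low, S))
    M0 = np.sum((alpha * rho - beta_dot_S) * dV)       # Eq. (24)
    J3_0 = np.sum((Y * S[0] - X * S[1]) * dV)          # Eq. (25)
    return M0, J3_0
```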
In Fig. 2 we summarize the mass-radius relationship of the constructed Proca stars solved using the CTT procedure in Sec. II.3.2, using \(Q\) in Eq. (20) as the measure of mass. We show that the initial data we obtain for all three kinds of stars agree approximately with the mass-radius curve in the non-relativistic limit [15]. The compactness
\[\mathcal{C}\equiv\frac{M}{R_{95}}, \tag{26}\]
of these stars ranges from \(\approx 0.01\) to \(0.1\). Here, \(R_{95}\) is defined as the radius containing \(95\%\) of the mass. We note that the above measure for "mass" is really a measure of the total energy, including the rest mass. It will agree with the rest mass (defined in the Newtonian solutions) at low compactness, but can show deviations at larger ones.
Figure 2: Initial mass and radius of Proca stars generated from non-relativistic profiles. The dots show the Proca stars with Proca field generated from Sec. II.3.1 and the spacetime metric solved from the CTT procedure. The orange and blue dashed lines show the mass-radius curve as predicted in the non-relativistic limit and under Newtonian gravity.
## III Results & Discussion
We simulated the evolution of Proca stars (both polarized stars, as well as hedgehog-like configurations for comparison), using constraint-fulfilling initial Proca field profiles that are stationary under Newtonian gravity. The evolution times were approximately 140 cycles of the Proca field (\(t_{\rm sim}\approx 900m^{-1}\)).
A sample evolution of the three different stars is shown in Fig. 1. During their time evolution, all three stars exhibit radial oscillations in their density, but do not disperse away for the duration of the simulation. The period of this radial density oscillation is roughly \(30-100\) times longer than the period of the vector field cycle (\(T=2\pi m^{-1}\)). The radial oscillations are likely excited due to the choice of initial data, see the last paragraph of section II.3.
A summary of the mass-radius relationship of the three types of Proca stars during the evolution (with varying initial compactness) is shown in Fig. 4. For \(\mathcal{C}\lesssim 0.1\), we see that all the stars show stable radial oscillations with the amplitude of the oscillations being smallest at the lowest compactness. We take the survival of these stars for the duration of the simulation, with perturbations introduced by imperfect initial data, as evidence of the existence of long-lived, compact, polarized Proca stars within full GR.
In Fig. 4, we can interpret the central values within the radial variations (horizontal "error bars") for each \(M\) as defining the radius of a true stationary solution at that mass. This should provide guidance in the construction of the stationary solution at large compactness for the linearly polarized stars, which is not known away from the Newtonian limit. However, we urge caution here, especially as the compactness gets high. For the hedgehog and circularly polarized cases, the mass-radius relation is known at all compactness [31]. Using this known mass-radius relationship for hedgehog configurations, the radial variation does not include the expected radius for \(\mathcal{C}\approx 0.04\). We leave a more detailed comparison with known stationary mass-radius relationships, as well as the derivation of the stationary mass-radius curve for the linearly polarized case in the high compactness regime, for future work.
As the compactness approaches \(\mathcal{C}=0.1\), we start seeing qualitatively different behavior for the three types of stars. We start seeing collapse to black holes in some types of stars. At \(\mathcal{C}\approx 0.08\), the circularly polarized star shows large radial oscillations, but does not collapse to a black hole (leftmost point in the right panel of Fig. 4). The linearly polarized one (middle panel), however, does collapse at this same compactness. The hedgehog star (left panel) collapses at an even smaller compactness of \(\approx 0.06\) (where both linearly and circularly polarized stars are stable). That is, the initial compactness range \((0,\mathcal{C})\) where hedgehogs are stable is smaller compared with that of the polarized stars. Between the two polarized stars, the stable region of the linearly polarized stars is smaller than that of the circularly polarized ones. We were unable to simulate any type of Proca stars with \(\mathcal{C}>0.08\) because the solver for the constraint equations, Eqns. (18) and (19), failed to converge for these configurations.
**Limitations**: We cannot control the magnitude of the perturbations around the stationary solution induced by the imperfect initial conditions. Therefore, it is possible that the perturbation is larger in the case of hedgehogs, which might be a confounding factor leading to collapse to black holes at smaller compactness. For this reason, we cannot "prove" that polarized stars are more stable than the hedgehog configuration. A controlled quantitative analysis will be possible after the stationary solutions (like the \(m=1\) case in [31]) of polarized Proca stars are found at high compactness.
At large field amplitudes corresponding to the highest compactness explored here (\(|A_{i}|\sim 0.1M_{\rm pl}\)), the self-interactions of the vector field might not be ignorable. While polarized Proca stars with self-interactions for relatively small compactness have been explored in the literature [43; 44; 45; 46], the large amplitude here might bring additional complications, see [47; 48; 49; 50].
Figure 3: Total angular momentum against mass of the simulated Proca stars. For both the hedgehog stars and the linearly polarized stars, the extracted angular momentum from their respective simulations is consistent with zero. For the circularly polarized stars, the extracted angular momentum satisfies the relationship \(J=\hbar M/m\), consistent with it being the spin angular momentum discussed in [15].
## IV Summary & Implications
We simulated two types of polarized Proca stars (linear and circularly polarized), along with hedgehog-like Proca stars for comparison, using general relativistic field equations. The initial conditions were based on field profiles of related Proca star solutions in Newtonian gravity [15](see our Fig. 2), scaled to a higher compactness.
Our key results are as follows (see Fig. 4):
* We provided evidence that high-compactness polarized stars can be stable for \(\mathcal{C}\lesssim 0.1\).
* As we increase the initial compactness from approximately \(0.01\) to \(0.1\), the linearly polarized, circularly polarized, and hedgehog stars evolve away from their initial configurations and towards new, and slightly different fixed points.
* At sufficiently high compactness, some types of stars collapse to black holes. We found that circularly polarized stars avoid collapse to black holes up to higher initial compactness than linearly polarized ones, which in turn avoid collapse up to a higher initial compactness than hedgehog-like stars. The large intrinsic spin angular momentum of circularly polarized stars (see Fig. 3) might be playing a role in their relative robustness to collapse.
For circularly polarized stars, we did not observe collapse to a black hole up to \(\mathcal{C}=0.08\). We were unable to simulate stars with initial compactness \(\gtrsim 0.08\) due to numerical limitations. An improved procedure for constructing the initial data which allows for control of perturbations away from the stationary solution is needed. This can be done using an improved initial data formulation such as the one in [31].
We hope our findings provide new phenomenology that can be incorporated in the search for "exotic" compact objects [51] through gravitational and electromagnetic radiation. Polarized Proca stars can form in dark photon/vector dark matter fields [10; 11; 12; 13], potentially providing access to the nature of the dark sector.
For the purpose of gravitational wave physics, both the increased compactness, and the polarization of the stars, can have important implications. The increased maximal compactness of polarized stars in this paper (compared to, for example, hedgehog stars), suggests that they will get closer before they merge, resulting in the emitted gravitational radiation being different from hedgehog stars. The polarization of the star can also impact the dynamics of the binary merger of such stars through finite size effects such as tidal deformability \(\Lambda\propto\mathcal{C}^{-5}\)[52], before and during merger.7 In addition, circularly polarized stars with maximal intrinsic spin can lead to spin-orbit and spin-spin effects before they merge. During the final phase of the merger, the generated gravitational waves can also be directly impacted by the polarization state
Figure 4: The mass-radius relationship of the simulated Proca stars. In all panels the bars show ranges of the radial changes observed in simulations (after an initial “settling in” period of \(400m^{-1}\)). For some stars, the bars are replaced by arrows, indicating that the Proca star collapses into a black hole. The grey dashed lines show the lines of constant compactness, and the dark grey line shows the compactness of a black hole, with its photosphere as its radius, in isotropic coordinates. Left, Middle and Right panels show the results for four hedgehog Proca stars, six linearly polarized Proca stars, and five circularly polarized stars respectively. For \(\mathcal{C}\lesssim 0.1\), the middle and right panels demonstrate the stability of compact, gravitationally supported polarized stars. Near the upper bound of this range, hedgehogs collapse at the lowest initial compactness (\(\mathcal{C}\approx 0.06\)), followed by linearly polarized (\(\mathcal{C}\approx 0.08\)), and then (likely) circularly polarized stars (\(\mathcal{C}>0.08\), although we were unable to simulate collapse in circularly polarized stars). For non-collapsing polarized stars, the mean of the radial variations provides insight into the mass-radius relationship at these compactnesses.
of the star. Analysis of mergers of compact scalar boson stars with angular momentum leads to rich dynamics (see, for example [57, 28]). A similar analysis is warranted for polarized Proca stars; for related recent work see [16, 17].
We have focused on stars constructed out of complex valued Proca fields for convenience. Similar constructions can be carried out for real valued fields (which might have a different lifetime). As in the case of axion stars [58, 59, 60, 61, 62, 63], such polarized Proca stars can also emit electromagnetic radiation, with the novelty that the properties of the radiation now depend on the polarization state of the Proca star [64] (for effects on gravitational radiation, see [65]). In particular, the polarization patterns in the outgoing radiation could provide a new handle on the nature of the underlying dark fields. It could be interesting to construct multimessenger signals (gravitational and electromagnetic waves) from merging polarized Proca stars.
###### Acknowledgements.
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC, visualization, database, or grid resources that have contributed to the research results reported within this paper [66]. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu). MA acknowledges early discussions of related work with Peter Adshead, Mudit Jain, Kaloian Lozanov, Helvi Witek and Hong-Yi Zhang. We also acknowledge many fruitful conversations with Emanuele Berti, Robin Croft, Liina M. Chung-Jukko, Tamara Evstafyeva, Eugene Lim and Ulrich Sperhake. Z.W and T.H. are supported by NSF Grants No. AST-2006538, PHY-2207502, PHY-090003 and PHY20043, and NASA Grants No. 19-ATP19-0051, 20-LPS20-0011 and 21-ATP21-0010. MA is supported by a DOE grant DE-SC0021619. This research project was conducted using computational resources at the Maryland Advanced Research Computing Center (MARCC).
## Appendix A Convergence tests
On the initial spatial slice, the Hamiltonian constraint Eq. (16), the momentum constraint Eq. (17), and the Proca constraint Eq. (10) must be satisfied. We use the conformal-transverse-traceless (CTT) formalism to reduce Eq. (16) and Eq. (17) into elliptical equations of the conformal factor \(\psi\) and the extrinsic curvature \(K_{ij}\) (see [41] chap. 3 and appendix B).
To show the validity of the solutions obtained from the CTT equations, we performed convergence tests on the initial data of the densest circularly polarized Proca star (generated with \(\mu=0.10m\)). In Fig. 5, we show the Hamiltonian constraint \(\mathcal{H}\) with two different resolutions. We see that \(\mathcal{H}\) from the high-resolution run is smaller than that of the low-resolution run, and is consistent with second-order convergence towards zero. The momentum constraint \(\mathcal{M}\) and the Proca constraint \(\mathcal{C}_{\mathcal{E}}\) both behave similarly to \(\mathcal{H}\), and are consistent with second-order convergence.
Aside from the initial data test, we also performed a convergence test for the Proca field evolution scheme. Using the same circularly polarized Proca star, we performed three runs with resolutions \(\Delta_{1}=0.146m^{-1}\), \(\Delta_{2}=0.117m^{-1}\), and \(\Delta_{3}=0.098m^{-1}\) at the position of the star. We then plot the energy density \(\rho\) at the center of
Figure 6: Convergence test for the densest circularly polarized Proca star with three different resolutions. The top panel shows \(\rho\) at the center of the star (averaged over a sphere with radius \(0.2m^{-1}\)). The bottom panel shows the difference between simulations of medium and low (M-L) resolution (black). The dashed line shows the predicted difference, assuming fourth-order convergence, based on the difference between the high-resolution run and the medium-resolution run.
Figure 5: Hamiltonian constraint violation for the densest (\(\mathcal{C}=0.08\)) circularly polarized Proca star with two different resolutions \(\Delta_{1}=0.40m^{-1}\) and \(\Delta_{2}=0.20m^{-1}\). Here \(r\) is the radial distance (in code units) from the center of the star. The dashed lines show the predicted Hamiltonian constraint of the high-resolution run for second-order and fourth-order convergence.
the star (averaged over several cells) in Fig. 6. On the top panel, the radial oscillations of the star are visible in the form of changes in its central density. Here, the differences between simulations with different resolutions are negligible compared to the physical density variations.
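For reference, the expected ratio between the medium-minus-low and high-minus-medium differences for these three resolutions follows from a standard Richardson-type estimate; the short sketch below (an illustration, not the analysis script) prints this factor for assumed convergence orders of two and four.

```python
# Expected ratio |M - L| / |H - M| for p-th order convergence,
# using the three grid spacings quoted in the text.
d1, d2, d3 = 0.146, 0.117, 0.098

for p in (2, 4):
    factor = (d1**p - d2**p) / (d2**p - d3**p)
    print(f"order {p}: expected |M-L| / |H-M| ratio = {factor:.2f}")
```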
## Appendix B Numerical Details
We performed a series of runs, as shown in Figure 4. For the hedgehog Proca stars we selected profiles with \(\mu/m=0.01,0.02,0.04,\) and \(0.06\); for the linearly polarized Proca stars we used \(\mu/m=0.01,0.02,0.04,0.06,0.087,\) and \(0.10\); and for the circularly polarized Proca stars we used \(\mu/m=0.01,0.02,0.04,0.06,\) and \(0.10\).
To determine the \(R_{95}\) radius, we integrated the energy density \(\rho\) over the volume. We extracted the energy density along the \(x\), \(y\), and \(z\) axes and assumed that each of these profiles is spherically symmetric. The mass \(M(R)\) is given by the integral:
\[M(R)=4\pi\int_{0}^{R}\rho_{(x)}(r)r^{2}dr. \tag{20}\]
Next, we determined the radius \(R_{95\%}\) that contains \(95\%\) of the total mass: \(M(R_{95\%})=0.95M(\infty)\). We calculated the mass based on all three directions \(x\), \(y\) and \(z\); in most cases the three radii agreed and did not vary significantly, and only when approaching highly relativistic regimes before collapse did we observe noticeable deviations from sphericity. Lastly, this measure is dependent on gauge, so it should be interpreted with care.
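A minimal sketch of this extraction (illustrative only; the function and variable names are assumptions) is:

```python
import numpy as np

def r95_and_compactness(r, rho_line):
    """Given rho sampled along one coordinate axis (assumed spherically
    symmetric), cumulatively integrate the mass profile above and return
    the radius enclosing 95% of the total mass together with C = M / R95."""
    integrand = 4.0 * np.pi * rho_line * r**2
    # cumulative trapezoidal integration of M(R)
    M_of_R = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    M_tot = M_of_R[-1]
    R95 = np.interp(0.95 * M_tot, M_of_R, r)
    return R95, M_tot / R95
```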
|
2309.13639 | A deletion-contraction formula and monotonicity properties for the
polymatroid Tutte polynomial | The Tutte polynomial is a crucial invariant of matroids. The polymatroid
Tutte polynomial $\mathscr{T}_{P}(x,y)$, introduced by Bernardi et al., is an
extension of the classical Tutte polynomial from matroids to polymatroids $P$.
In this paper, we first obtain a deletion-contraction formula for
$\mathscr{T}_{P}(x,y)$. Then we prove two natural monotonicity properties, for
containment and for minors of the interior polynomial
$x^{n}\mathscr{T}_{P}(x^{-1},1)$ and the exterior polynomial
$y^{n}\mathscr{T}_{P}(1,y^{-1})$, for polymatroids $P$ over $[n]$. We show by a
counter-example that these monotonicity properties do not extend to
$\mathscr{T}_{P}(x,y)$. Using deletion-contraction, we obtain formulas for the
coefficients of terms of degree $n-1$ in $\mathscr{T}_{P}(x,y)$. Finally, for
all $k\geq 0$, we characterize hypergraphs $\mathcal{H}=(V,E)$ so that the
coefficient of $y^{k}$ in the exterior polynomial of the associated polymatroid
$P_{\mathcal{H}}$ attains its maximal value $\binom{|V|+k-2}{k}$. | Xiaxia Guan, Xian'an Jin, Tamás Kálmán | 2023-09-24T13:43:24Z | http://arxiv.org/abs/2309.13639v1 | # A deletion-contraction formula and monotonicity properties for the polymatroid Tutte polynomial
###### Abstract
The Tutte polynomial is a crucial invariant of matroids. The polymatroid Tutte polynomial \(\mathscr{T}_{P}(x,y)\), introduced by Bernardi et al., is an extension of the classical Tutte polynomial from matroids to polymatroids \(P\). In this paper, we first obtain a deletion-contraction formula for \(\mathscr{T}_{P}(x,y)\). Then we prove two coefficientwise natural monotonicity properties, for containment and for minors of the interior polynomial \(x^{n}\mathscr{T}_{P}(x^{-1},1)\) and the exterior polynomial \(y^{n}\mathscr{T}_{P}(1,y^{-1})\), where \(P\) is a polymatroid over \([n]\). We show by a counter-example that these monotonicity properties do not extend to \(\mathscr{T}_{P}(x,y)\). Using deletion-contraction, we obtain formulas for the coefficients of terms of degree \(n-1\) in \(\mathscr{T}_{P}(x,y)\). Finally, for all \(k\geq 0\), we characterize hypergraphs \(\mathcal{H}=(V,E)\) so that the coefficient of \(y^{k}\) in the exterior polynomial of the associated polymatroid \(P_{\mathcal{H}}\) attains its maximal value \(\binom{|V|+k-2}{k}\).
keywords: Tutte polynomial, Polymatroid, Deletion-contraction formula, Monotonicity. Msc: 05C31, 05B35, 05C65 +
Footnote †: journal:
## 1 Introduction
The Tutte polynomial [16] is an important and well-studied topic in graph and matroid theory, having wide applications in statistical physics, knot theory and so on. As a generalization of the one-variable evaluations \(T_{G}(x,1)\)
and \(T_{G}(1,y)\) of the Tutte polynomial \(T_{G}(x,y)\) of graphs \(G\) to hypergraphs, Kalman [9] introduced the interior polynomial \(I_{\mathcal{H}}(x)\) and the exterior polynomial \(X_{\mathcal{H}}(y)\) for hypergraphs \(\mathcal{H}\) via internal and external activities of hypertrees. (Hypertrees were first described as 'left or right degree vectors' in [13].) Later, Kalman, Murakami, and Postnikov [10, 11] established that certain leading terms of the HOMFLY polynomial [8], which is a generalization of the celebrated Jones polynomial [8] in knot theory, of any special alternating link coincide with the common interior polynomial of the pair of hypergraphs derived from the Seifert graph (which is a bipartite graph) of the link.
Integer polymatroids are a generalization of matroids and an abstraction of hypergraphs. Throughout this paper, only the set of bases of integer polymatroids will be considered, and simply called polymatroids. In [1], Bernardi, Kalman, and Postnikov proposed the polymatroid Tutte polynomial \(\mathscr{T}_{P}(x,y)\) for polymatroids \(P\), which can be reduced to the classical Tutte polynomial \(T_{M}(x,y)\) of any matroid \(M\). More precisely, if \(M\subset 2^{[n]}=\{I|I\subset[n]\}\) is a matroid of rank \(d\) over \([n]=\{1,\ldots,n\}\), and \(P=P(M)\subset\{0,1\}^{n}\subset\mathbb{Z}^{n}\) is its corresponding polymatroid, then
\[T_{M}(x,y)=\frac{(x+y-xy)^{n}}{x^{n-d}y^{d}}\mathscr{T}_{P}\left(\frac{x}{x+y- xy},\frac{y}{x+y-xy}\right). \tag{1}\]
Moreover, for a polymatroid \(P\) over \([n]\), the polynomial \(\mathscr{T}_{P}(x,y)\) contains both the interior polynomial \(I_{P}(x)\) and the exterior polynomial \(X_{P}(y)\) as special cases in the sense that
\[I_{P}(x)=x^{n}\mathscr{T}_{P}(x^{-1},1)\quad\text{and}\quad X_{P}(y)=y^{n} \mathscr{T}_{P}(1,y^{-1}), \tag{2}\]
respectively. It is also known that \(\mathscr{T}_{P}(x,y)\) is translation-invariant and satisfies a duality relation. We remark that \(\mathscr{T}_{P}(x,y)\) is equivalent to another polynomial introduced by Cameron and Fink [3].
One of the most basic properties of the classical Tutte polynomial is its deletion-contraction formula. Let \(P\) be a polymatroid on \([n]\) and \(f:2^{[n]}\to\mathbb{Z}\) be its rank function. For any \(t\in[n]\), let \(r_{t}=f(\{t\})+f([n]\setminus\{t\})-f([n])\). In Proposition 4.11 (f) of [1], Bernardi, Kalman, and Postnikov obtained a deletion-contraction relation of \(\mathscr{T}_{P}(x,y)\), for the cases \(r_{t}=0\) and \(r_{t}=1\), that generalizes the deletion-contraction formula of the classical Tutte polynomial. They also posed the following question.
**Question 1.1**.: Does there exist a deletion-contraction relation in the case \(r_{t}>1\)?
We answer this question in the affirmative by proving Theorem 3.5. It is based on the observation that certain natural "slices" of polymatroids are again polymatroids. We note that even if a polymatroid is hypergraphical, not all of its slices will be so. Let us also mention that in [4], the authors introduced a polynomial invariant similar (but not equivalent) to \(\mathscr{T}_{P}\) and established (in a limited sense) a deletion-contraction relation for it.
In 1972, Brylawski [2] proved that if a matroid \(M_{1}\) is a minor of a connected matroid \(M_{2}\), then \(T_{M_{1}}(x,y)\leq T_{M_{2}}(x,y)\), that is, each coefficient of \(T_{M_{1}}(x,y)\) is less than or equal to the corresponding coefficient of \(T_{M_{2}}(x,y)\). Unfortunately, we show by a counter-example in Remark 4.9 that this monotonicity property does not hold for the polymatroid Tutte polynomial, not even if we make the substitution in it suggested by (1).
In [15], Stanley showed that if \(\Delta\) and \(\Delta^{\prime}\) are lattice polytopes in \(\mathbb{R}^{m}\) with \(\Delta^{\prime}\subset\Delta\), then \(h^{*}_{\Delta^{\prime}}\leq h^{*}_{\Delta}\), where \(h^{*}\) denotes the \(h^{*}\)-polynomial. In [12], Kato observed, based on results of Kalman and Postnikov [11], that the interior polynomial of any connected hypergraph is equal to the \(h^{*}\)-polynomial of the so-called root polytope of the associated bipartite graph. The two theorems together imply a natural monotonicity property of the interior polynomial of hypergraphs, that is, for two connected hypergraphs \(\mathcal{H}=(V,E)\) and \(\mathcal{H}^{\prime}=(V^{\prime},E^{\prime})\), if the associated bipartite graph \(\mathrm{Bip}\mathcal{H}^{\prime}\) of \(\mathcal{H}^{\prime}\) is a subgraph of \(\mathrm{Bip}\mathcal{H}\), then \(I_{\mathcal{H}^{\prime}}\leq I_{\mathcal{H}}\).
In this paper, we extend this monotonicity property to the interior polynomial and the exterior polynomial of arbitrary polymatroids in Theorems 4.2 and 4.7. Namely, we show that for two polymatroids \(P\) and \(P^{\prime}\), if \(P^{\prime}\subset P\) or \(P^{\prime}\) is a minor of \(P\), then \(I_{P^{\prime}}\leq I_{P}\) and \(X_{P^{\prime}}\leq X_{P}\). In hypergraphical cases, we obtain that if the associated bipartite graph of the hypergraph \(\mathcal{H}^{\prime}\) is a subgraph of the associated bipartite graph of the hypergraph \(\mathcal{H}\) (even if \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) are not necessarily connected), then \(I_{\mathcal{H}^{\prime}}\leq I_{\mathcal{H}}\) and \(X_{\mathcal{H}^{\prime}}\leq X_{\mathcal{H}}\) (see Corollary 4.8).
We also examine certain individual coefficients of the polymatroid Tutte polynomial. Let \(P\) be a polymatroid over \([n]\). From Proposition 4.11 (d) of [1], one knows that for any \(k\in[n]\cup\{0\}\),
\[x^{k}y^{n-k}[\mathscr{T}_{P}(x,y)]=\binom{n}{k}.\]
In other words, the top degree terms of \(\mathscr{T}_{P}\) form a standard binomial expansion. Let \(f\) be the rank function of the polymatroid \(P\). In [6], the authors computed the coefficients \(x^{n-1}[\mathscr{T}_{P}(x,1)]=\sum_{i\in[n]}f(\{i\})-f([n])\) and \(y^{n-1}[\mathscr{T}_{P}(1,y)]=\sum_{i\in[n]}f([n]\setminus\{i\})-(n-1)f([n])\). From there it is easy to get formulas for the coefficients \(x^{n-1}[\mathscr{T}_{P}(x,y)]\) and \(y^{n-1}[\mathscr{T}_{P}(x,y)]\). As a generalization, using our deletion-contraction relation of \(\mathscr{T}_{P}(x,y)\), we obtain
\[x^{n-k}y^{k-1}[\mathscr{T}_{P}(x,y)]=\sum_{\stackrel{{ S\subset[n]}}{{|S|=k-1}}}f(S)+\sum_{ \stackrel{{ S^{\prime}\subset[n]}}{{|S^{\prime}|=k}}}f(S^{\prime} )-\binom{n}{k-1}f([n])-k\binom{n}{k}\]
for any \(k\in[n]\) in Theorem 5.3. Likewise, formulas for the coefficients \(x^{n-2}[\mathscr{T}_{P}(x,y)]\) and \(y^{n-2}[\mathscr{T}_{P}(x,y)]\) can be obtained as well (see Theorem 5.4). They imply formulas for the coefficients \(x^{n-2}[\mathscr{T}_{P}(x,1)]\) and \(y^{n-2}[\mathscr{T}_{P}(1,y)]\) (see Corollary 5.5).
Connectivity plays a critical role in graphs and hypergraphs. In Theorem 5.9 of this paper, using our deletion-contraction formula and monotonicity property of the exterior polynomial, we prove that for all \(k\geq 0\) and for any bipartite graph \(G=(E\cup V,\mathcal{E})\), the subgraph \(G-E^{\prime}\) is connected for all \(E^{\prime}\subset E\) with \(|E^{\prime}|=k\), if and only if \(y^{i}[X_{\mathcal{H}}(y)]=\binom{|V|+i-2}{i}\) for any \(i\leq k\), where \(\mathcal{H}\) is the hypergraph induced by \(G\) so that \(E\) is the set of hyperedges. We remark that the coefficient \(y^{i}[X_{\mathcal{H}}(y)]\) can never be higher than \(\binom{|V|+i-2}{i}\).
The paper is organized as follows. In Section 2, we make some necessary preparations. Section 3 is devoted to the deletion-contraction relation of the polymatroid Tutte polynomial. Our monotonicity properties of the interior polynomial and the exterior polynomial for polymatroids are proven in Section 4. In Section 5, we apply the deletion-contraction formula of Section 3 to derive formulas for the coefficients \(x^{n-k}y^{k-1}[\mathscr{T}_{P}(x,y)]\) for any \(k\in[n]\), \(x^{n-2}[\mathscr{T}_{P}(x,y)]\), and \(y^{n-2}[\mathscr{T}_{P}(x,y)]\) for polymatroids \(P\) and to characterize the connectivity of hypergraphs using the exterior polynomial.
## 2 Preliminaries
In this section, we will give some definitions and summarize some known results used in the next sections. Throughout the paper, let \([n]=\{1,2,\ldots,n\}\), \(2^{[n]}=\{I|I\subset[n]\}\), and let \(\mathbf{e}_{1},\mathbf{e}_{2},\ldots,\mathbf{e}_{n}\) denote the canonical basis of \(\mathbb{R}^{n}\). We first recall the definition of polymatroids.
**Definition 2.1**.: A _polymatroid_\(P=P_{f}\subset\mathbb{Z}^{n}\) (in other words, over the ground set \([n]\)) with the rank function \(f\) is given as
\[P=\left\{(a_{1},\cdots,a_{n})\in\mathbb{Z}^{n}\bigg{|}\sum_{i\in I}a_{i}\leq f(I )\text{ for any }I\subset[n]\text{ and }\sum_{i\in[n]}a_{i}=f([n])\right\},\]
where \(f:2^{[n]}\to\mathbb{Z}\) satisfies
1. \(f(\emptyset)=0\);
2. \(f(I)+f(J)\geq f(I\cup J)+f(I\cap J)\) for any \(I,J\subset[n]\) (submodularity).
Polymatroids are non-empty. It is also easy to see that the base polytope of any matroid (more precisely, the set of its vertices) is a polymatroid. We remark that in other papers on the subject, cf. [5], polymatroids are often defined as slightly larger sets, and the set \(P\) of Definition 2.1 is referred to as the set of integer bases of a polymatroid.
Conversely, if \(P\) is a polymatroid on \([n]\), then its rank function \(f=f_{P}\colon 2^{[n]}\to\mathbb{Z}\) can be recovered as
\[f_{P}(I)=\max_{\mathbf{a}\in P}\sum_{i\in I}a_{i},\text{ for any subset }I\subset[n]. \tag{3}\]
We refer the readers to [5] and chapter 44 of [14] for more details.
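For instance, the function \(f\) on \(2^{[2]}\) with \(f(\emptyset)=0\) and \(f(\{1\})=f(\{2\})=f(\{1,2\})=2\) is submodular, and the corresponding polymatroid is \(P_{f}=\{(2,0),(1,1),(0,2)\}\subset\mathbb{Z}^{2}\); conversely, (3) recovers \(f\) from this set, e.g. \(f_{P}(\{1\})=\max\{2,1,0\}=2\). We will return to this small polymatroid several times below for illustration.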
A _hypergraph_ is a pair \(\mathcal{H}=(V,E)\), where \(V\) is a finite set and \(E\) is a finite multiset of non-empty subsets of \(V\). Elements of \(V\) are called _vertices_ and elements of \(E\) are called _hyperedges_, respectively, of the hypergraph. For a hypergraph \(\mathcal{H}=(V,E)\), its _associated bipartite graph_ \(\operatorname{Bip}\mathcal{H}\) is defined as follows. The sets \(V\) and \(E\) are the color classes of \(\operatorname{Bip}\mathcal{H}\), and an element \(v\) of \(V\) is connected to an element \(e\) of \(E\) in \(\operatorname{Bip}\mathcal{H}\) if and only if \(v\in e\). For a subset \(E^{\prime}\subset E\), let \(\operatorname{Bip}\mathcal{H}|_{E^{\prime}}\) denote the bipartite graph formed by \(E^{\prime}\), all edges of \(\operatorname{Bip}\mathcal{H}\) incident with elements of \(E^{\prime}\), and their endpoints in \(V\). Define \(\mu(E^{\prime})=0\) for \(E^{\prime}=\emptyset\), and \(\mu(E^{\prime})=|\bigcup E^{\prime}|-c(E^{\prime})\) for \(E^{\prime}\neq\emptyset\), where \(\bigcup E^{\prime}\) denotes the union of the hyperedges in \(E^{\prime}\), i.e., the set of vertices of \(V\) lying in \(\operatorname{Bip}\mathcal{H}|_{E^{\prime}}\), and \(c(E^{\prime})\) is the number of connected components of \(\operatorname{Bip}\mathcal{H}|_{E^{\prime}}\). Kalman [9] proved that \(\mu\) is submodular. In that sense, polymatroids are an abstraction of hypergraphs. The elements of the polymatroid induced by \(\mu\) (in the sense of Definition 2.1) will be referred to as hypertrees because they are essentially the degree distributions of spanning forests of \(\operatorname{Bip}\mathcal{H}\), cf. [9]. A polymatroid is called a _hypergraphical polymatroid_, denoted by \(P_{\mathcal{H}}\), if it is the set of all hypertrees of some hypergraph \(\mathcal{H}\).
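For instance, let \(\mathcal{H}=(V,E)\) with \(V=\{v_{1},v_{2},v_{3}\}\) and \(E=\{e_{1},e_{2}\}\), where \(e_{1}=\{v_{1},v_{2},v_{3}\}\) and \(e_{2}=\{v_{2},v_{3}\}\). Then \(\mu(\{e_{1}\})=2\), \(\mu(\{e_{2}\})=1\) and \(\mu(E)=2\), and the spanning trees of \(\operatorname{Bip}\mathcal{H}\) yield exactly the hypertrees \((a_{e_{1}},a_{e_{2}})\in\{(2,0),(1,1)\}\), so that \(P_{\mathcal{H}}=\{(2,0),(1,1)\}\).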
A nonempty finite subset \(B\subset\mathbb{Z}^{n}\) is a polymatroid on \([n]\) if and only if \(B\) satisfies the following two conditions [7].
1. For any \((a_{1},a_{2},\cdots,a_{n})\in B\) and \((b_{1},b_{2},\cdots,b_{n})\in B\), we have \(\sum_{i\in[n]}a_{i}=\sum_{i\in[n]}b_{i}\).
2. For any \(\textbf{a}=(a_{1},a_{2},\cdots,a_{n})\in B\), \(\textbf{b}=(b_{1},b_{2},\cdots,b_{n})\in B\), and any \(i\in[n]\) with \(a_{i}>b_{i}\), there exists some \(j\in[n]\) such that \(a_{j}<b_{j}\) and \(\textbf{a}+\textbf{e}_{j}-\textbf{e}_{i}\in B\), as well as \(\textbf{b}+\textbf{e}_{i}-\textbf{e}_{j}\in B\).
Condition (2) is called the _Exchange Axiom_. This description easily implies that adding the same vector of \(\mathbb{Z}^{n}\) to each element of a polymatroid yields another polymatroid.
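For instance, in the polymatroid \(P=\{(2,0),(1,1),(0,2)\}\) from the example above, take \(\textbf{a}=(2,0)\), \(\textbf{b}=(0,2)\) and \(i=1\); then \(j=2\) satisfies \(a_{j}<b_{j}\), and indeed \(\textbf{a}+\textbf{e}_{2}-\textbf{e}_{1}=(1,1)\in P\) and \(\textbf{b}+\textbf{e}_{1}-\textbf{e}_{2}=(1,1)\in P\), as required by the Exchange Axiom.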
We next recall (internal and external) activity of a basis of a polymatroid. Note that the following definition relies on the natural order of the set \([n]\).
**Definition 2.2**.: Let \(P\) be a polymatroid over \([n]\). For a basis \(\textbf{a}\in P\), an index \(i\in[n]\) is _internally active_ if \(\textbf{a}-\textbf{e}_{i}+\textbf{e}_{j}\notin P\) for any \(j<i\). Let \(\operatorname{Int}(\textbf{a})=\operatorname{Int}_{P}(\textbf{a})\subset[n]\) denote the set of all internally active indices with respect to **a**. Furthermore, let \(\iota(\textbf{a})=|\operatorname{Int}(\textbf{a})|\) and \(\overline{\iota}(\textbf{a})=n-|\operatorname{Int}(\textbf{a})|\).
For a basis \(\textbf{a}\in P\), we call \(i\in[n]\)_externally active_ if \(\textbf{a}+\textbf{e}_{i}-\textbf{e}_{j}\notin P\) for any \(j<i\). Let \(\operatorname{Ext}(\textbf{a})=\operatorname{Ext}_{P}(\textbf{a})\subset[n]\) denote the set of all externally active indices with respect to **a**, and let \(\epsilon(\textbf{a})=|\operatorname{Ext}(\textbf{a})|\) and \(\overline{\epsilon}(\textbf{a})=n-|\operatorname{Ext}(\textbf{a})|\).
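For instance, in the polymatroid \(P=\{(2,0),(1,1),(0,2)\}\), one checks directly that \(\operatorname{Int}((2,0))=\{1,2\}\) and \(\operatorname{Ext}((2,0))=\{1\}\) (since \((2,0)-\textbf{e}_{2}+\textbf{e}_{1}=(3,-1)\notin P\) while \((2,0)+\textbf{e}_{2}-\textbf{e}_{1}=(1,1)\in P\)), that \(\operatorname{Int}((1,1))=\operatorname{Ext}((1,1))=\{1\}\), and that \(\operatorname{Int}((0,2))=\{1\}\) and \(\operatorname{Ext}((0,2))=\{1,2\}\).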
Let \(P\) be a polymatroid over \([n]\) and \(f\) be its rank function. For a basis \(\textbf{a}=(a_{1},a_{2},\cdots,a_{n})\in P\), let
\[\mathcal{I}(\textbf{a}):=\mathcal{I}_{P}(\textbf{a})=\left\{I\subset[n] \bigg{|}\sum_{i\in I}a_{i}=f(I)\right\}.\]
We call this the set of _tight_ sets for **a**.
The next result holds due to [14, Theorem 44.2], since \(f\) is submodular.
**Theorem 2.3**.: _[_14_]_ _Let \(P\) be a polymatroid. For any basis \(\textbf{a}\in P\), if \(I,J\in\mathcal{I}(\textbf{a})\), then \(I\cup J,I\cap J\in\mathcal{I}(\textbf{a})\)._
We now state a conclusion obtained in Lemma 4.2 in [1].
**Theorem 2.4**.: _[_1_]_ _Let \(P\) be a polymatroid over \([n]\). For any \(\textbf{a}\in P\),_
1. _an index_ \(i\in[n]\) _is internally active with respect to_ **a** _if and only if there exists a subset_ \(I\subset[n]\) _such that_ \(i=\min(I)\) _and_ \([n]\setminus I\in\mathcal{I}(\textbf{a})\)_;_
2. _an index_ \(i\in[n]\) _is externally active with respect to_ \(\mathbf{a}\) _if and only if there exists a subset_ \(I^{\prime}\subset[n]\) _such that_ \(i=\min(I^{\prime})\) _and_ \(I^{\prime}\in\mathcal{I}(\mathbf{a})\)_._
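For instance, for the basis \(\textbf{a}=(2,0)\) of \(P=\{(2,0),(1,1),(0,2)\}\), the tight sets are \(\emptyset\), \(\{1\}\) and \(\{1,2\}\). Taking \(I=\{2\}\), we have \(2=\min(I)\) and \([2]\setminus I=\{1\}\in\mathcal{I}(\textbf{a})\), so the index \(2\) is internally active, while no tight set has minimum \(2\), so \(2\) is not externally active; this agrees with the activities computed above.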
To close this section, we will recall the polymatroid Tutte polynomial.
**Definition 2.5**.: [1] Let \(P\) be a polymatroid over \([n]\). The _polymatroid Tutte polynomial_\(\mathscr{T}_{P}(x,y)\) is defined as
\[\mathscr{T}_{P}(x,y):=\sum_{\mathbf{a}\in P}x^{oi(\mathbf{a})}y^{oe(\mathbf{a })}(x+y-1)^{ie(\mathbf{a})},\]
where
\[\begin{array}{l}oi(\mathbf{a}):=|\mathrm{Int}(\mathbf{a})\setminus\mathrm{ Ext}(\mathbf{a})|,\\ oe(\mathbf{a}):=|\mathrm{Ext}(\mathbf{a})\setminus\mathrm{Int}(\mathbf{a})|, \\ ie(\mathbf{a}):=|\mathrm{Int}(\mathbf{a})\cap\mathrm{Ext}(\mathbf{a})|.\end{array}\]
Note that the smallest index \(i=1\) must be simultaneously internally and externally active for any basis of any polymatroid. Thus the polynomial \(\mathscr{T}_{P}(x,y)\) is always divisible by \(x+y-1\).
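For instance, for \(P=\{(2,0),(1,1),(0,2)\}\) with the activities computed above, the bases \((2,0)\), \((1,1)\) and \((0,2)\) contribute \(x(x+y-1)\), \(x+y-1\) and \(y(x+y-1)\), respectively, so that
\[\mathscr{T}_{P}(x,y)=(x+y+1)(x+y-1)=x^{2}+2xy+y^{2}-1,\]
which is indeed divisible by \(x+y-1\).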
For a polymatroid \(P\) over \([n]\), Kalman [9] defined _the interior polynomial_
\[I_{P}(x):=\sum_{\mathbf{a}\in P}x^{\overline{\iota}(\mathbf{a})}\]
and _the exterior polynomial_
\[X_{P}(y):=\sum_{\mathbf{a}\in P}y^{\overline{\epsilon}(\mathbf{a})}.\]
It is easy to verify that if \(P\) is a polymatroid over \([n]\), then we have \(I_{P}(x)=x^{n}\mathscr{T}_{P}(x^{-1},1)\) and \(X_{P}(y)=y^{n}\mathscr{T}_{P}(1,y^{-1})\). Moreover, the coefficients of \(I_{P}\) and \(X_{P}\) are non-negative integers.
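For instance, for \(P=\{(2,0),(1,1),(0,2)\}\) as above, \(I_{P}(x)=x^{2}\mathscr{T}_{P}(x^{-1},1)=1+2x\) and \(X_{P}(y)=y^{2}\mathscr{T}_{P}(1,y^{-1})=1+2y\).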
Let \(P\) be a polymatroid over \([n]\). Let the symmetric group \(S_{n}\) act on \(\mathbb{Z}^{n}\) by permutations of coordinates. I.e., for a permutation \(w\in S_{n}\) and a point \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}\), define \(w(\mathbf{a}):=(a_{w(1)},\ldots,a_{w(n)})\), and
\[w(P):=\{w(\mathbf{a})|\mathbf{a}\in P\}.\]
For any \(\mathbf{c}=(c_{1},\ldots,c_{n})\in\mathbb{Z}^{n}\), let
\[P+\mathbf{c}:=\{(a_{1}+c_{1},\ldots,a_{n}+c_{n})|(a_{1},\ldots,a_{n})\in P\}.\]
The _dual_ polymatroid of a polymatroid \(P\) is denoted by \(-P\), where
\[-P:=\{(-a_{1},\ldots,-a_{n})|(a_{1},\ldots,a_{n})\in P\}.\]
The following invariance properties of the polymatroid Tutte polynomial have been proved in [1].
**Theorem 2.6**.: _[_1_]_ _Let \(P\) be a polymatroid on \([n]\) and let \(-P\) be the dual polymatroid of \(P\). Then_
1. \(\mathscr{T}_{P}(x,y)=\mathscr{T}_{P+\boldsymbol{c}}(x,y)\) _for any_ \(\boldsymbol{c}\in\mathbb{Z}^{n}\) _(translation invariance);_
2. \(\mathscr{T}_{P}(x,y)=\mathscr{T}_{w(P)}(x,y)\) _for any_ \(w\in S_{n}\) _(_\(S_{n}\)_-invariance);_
3. \(\mathscr{T}_{P}(x,y)=\mathscr{T}_{-P}(y,x)\) _(duality)._
Regarding Theorem 2.6 (2), we note again that the order \(1<2<\ldots<n\) plays an implicit role in Definition 2.5, but it does turn out that \(\mathscr{T}_{P}\) depends only on \(P\) and not on this order. In particular, \(I_{P}\) and \(X_{P}\) also depend only on \(P\), that is, they also satisfy \(S_{n}\)-invariance. Moreover, for \(I_{P}\) and \(X_{P}\), duality takes the form
\[I_{-P}=X_{P}\text{ and }I_{P}=X_{-P}. \tag{4}\]
By the definition of the dual polymatroid, it is clear that for any polymatroid \(P\), we have that
\[-(-P)=P. \tag{5}\]
Hence, \(I_{-P}=X_{P}\) is equivalent to \(I_{P}=X_{-P}\).
## 3 A deletion-contraction formula
In this section, we will study a deletion-contraction formula of the polymatroid Tutte polynomial which answers Question 1.1.
Let \(P\) be a polymatroid on \([n]\) and \(f\) be its rank function. For convenience, for any \(t\in[n]\), let \(\alpha_{t}=f([n])-f([n]\setminus\{t\})\), \(\beta_{t}=f(\{t\})\) and \(T_{t}=\{\alpha_{t},\alpha_{t}+1,\ldots,\beta_{t}\}\). For any \(j\in T_{t}\), define
\[P_{j}^{t}:=\{(a_{1},\ldots,a_{n})\in P\mid a_{t}=j\}\]
and its projection
\[\widehat{P}_{j}^{t}:=\{(a_{1},\ldots,a_{t-1},a_{t+1},\ldots,a_{n})\in\mathbb{ Z}^{n-1}\mid(a_{1},\ldots,a_{n})\in P_{j}^{t}\}. \tag{6}\]
Here the range \(T_{t}\) is chosen so that \(P^{t}_{j}\) and \(\widehat{P}^{t}_{j}\) are nonempty if and only if \(j\in T_{t}\). By the Exchange Axiom, \(P^{t}_{j}\) and \(\widehat{P}^{t}_{j}\) are polymatroids on \([n]\) and on \([n]\setminus\{t\}\), respectively. We next study the relation between the rank function of \(P\) and the rank function of the polymatroid \(P^{t}_{j}\). We first state a result due to Kalman.
**Lemma 3.1** ([9]).: _Let \(P\) be a polymatroid on \([n]\). For any basis \(\textbf{a}\in P\) and any subset \(I\subset[n]\), if \(I\notin\mathcal{I}(\textbf{a})\), then there are \(j\in I\) and \(k\in[n]\setminus I\) so that \(\textbf{a}-\textbf{e}_{k}+\textbf{e}_{j}\in P\)._
**Proposition 3.2**.: Let \(P\) be a polymatroid on \([n]\) and \(f\) be its rank function. For some \(t\in[n]\), let \(\alpha_{t}\), \(\beta_{t}\), \(T_{t}\) and \(\widehat{P}^{t}_{j}\) be defined as above. Let \(f^{t}_{j}\) be the rank function of the polymatroid \(\widehat{P}^{t}_{j}\). Then for any subset \(I\subset[n]\setminus\{t\}\), we have \(f^{t}_{j}(I)=\min\{f(I),f(I\cup\{t\})-j\}\). In particular, \(f^{t}_{\alpha_{t}}(I)=f(I)\) and \(f^{t}_{\beta_{t}}(I)=f(I\cup\{t\})-f(\{t\})\).
Proof.: For any \((a_{1},\ldots,a_{n})\in P\), we know that \(\sum_{i\in I}a_{i}\leq f(I)\) and \(\sum_{i\in I}a_{i}+a_{t}\leq f(I\cup\{t\})\). Then for any \(j\in T_{t}\), we have that \(f^{t}_{j}(I)\leq\min\{f(I),f(I\cup\{t\})-j\}\) as \(a_{t}=j\).
We now show that there is a basis \(\textbf{c}=(c_{1},\ldots,c_{n})\in P^{t}_{j}\) so that \(\sum_{i\in I}c_{i}=\min\{f(I),f(I\cup\{t\})-j\}\). Let \(\textbf{b}=(b_{1},\ldots,b_{n})\in P\) be a basis of \(P\) satisfying \(\sum_{i\in I}b_{i}=f(I)\) and \(\sum_{i\in I}b_{i}+b_{t}=f(I\cup\{t\})\). (Such a basis must exist, for example, it can be taken as the lexicographically maximal basis of \(P\) with respect to some order in which \(I\) forms the first \(|I|\) elements of \([n]\) and \(t\) is the \((|I|+1)\)'st.)
1. Assume that \(f(I)=f(I\cup\{t\})-j\). Then the basis **b** belongs to \(P^{t}_{j}\) and satisfies the required property.
2. Assume that \(f(I)<f(I\cup\{t\})-j\). Then \(j<f(I\cup\{t\})-f(I)=b_{t}\). Note that \(j\geq\alpha_{t}=f([n])-f([n]\setminus\{t\})\). By Lemma 3.1, we have that for any \(\textbf{d}\in P\) satisfying \(d_{t}>j\), there is \(k\in[n]\setminus\{t\}\) so that \(\textbf{d}-\textbf{e}_{t}+\textbf{e}_{k}\in P\) as \([n]\setminus\{t\}\notin\mathcal{I}(\textbf{d})\). This implies that for any \(m\in[1,b_{t}-j]\), there is \(k_{m}\in[n]\setminus\{t\}\) so that \(\textbf{b}^{m+1}=\textbf{b}^{m}-\textbf{e}_{t}+\textbf{e}_{k_{m}}\in P\), where \(\textbf{b}^{1}=\textbf{b}\). Note also that \(k_{m}\notin I\) as \(I\in\mathcal{I}(\textbf{b}^{m})\) for all \(m\). Hence, the basis \(\textbf{c}=\textbf{b}^{b_{t}-j+1}\) satisfies the required property.
3. Assume that \(f(I)>f(I\cup\{t\})-j\). Similar to (ii), by Lemma 3.1, for any \(m\in[1,j-b_{t}]\), there is \(k^{\prime}_{m}\in[n]\setminus\{t\}\) so that \(\textbf{c}^{m+1}=\textbf{c}^{m}+\textbf{e}_{t}-\textbf{e}_{k^{\prime}_{m}}\in P\) since \(\{t\}\notin\mathcal{I}(\textbf{c}^{m})\), where \(\textbf{c}^{1}=\textbf{b}\). We know that \(k^{\prime}_{m}\notin[n]\setminus(I\cup\{t\})\) as \(I\cup\{t\}\in\mathcal{I}(\textbf{c}^{m})\), that is, \(k^{\prime}_{m}\in I\) for all \(m\). This implies the basis \(\textbf{c}=\textbf{c}^{j-b_{t}+1}\) satisfies the required property.
Therefore, the first claim holds.
By the submodularity of \(f\), for any subset \(I\subset[n]\setminus\{t\}\), we have
\[f(I\cup\{t\})+f([n]\setminus\{t\})\geq f([n])+f(I)\]
and
\[f(\{t\})+f(I)\geq f(I\cup\{t\}),\]
whereby the second claim also holds.
**Definition 3.3**.: Let \(P\) be a polymatroid on \([n]\) with rank function \(f\). For a subset \(A\subset[n]\), the _deletion_\(P\setminus A\) and _contraction_\(P/A\), which are polymatroids on \([n]\setminus A\), are given by the rank functions \(f_{P\setminus A}(T)=f(T)\) and \(f_{P/A}(T)=f(T\cup A)-f(A)\), for any subset \(T\subset[n]\setminus A\), respectively.
Hence, Proposition 3.2 implies that
\[\widehat{P}^{t}_{\alpha_{t}}=P\setminus\{t\}\text{ and }\widehat{P}^{t}_{ \beta_{t}}=P/\{t\}. \tag{7}\]
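For instance, for \(P=\{(2,0),(1,1),(0,2)\}\) and \(t=2\) we have \(\alpha_{2}=0\), \(\beta_{2}=2\) and \(T_{2}=\{0,1,2\}\), with slices \(\widehat{P}^{2}_{j}=\{(2-j)\}\subset\mathbb{Z}^{1}\) and, by Proposition 3.2, \(f^{2}_{j}(\{1\})=\min\{2,2-j\}=2-j\); in particular \(P\setminus\{2\}=\widehat{P}^{2}_{0}=\{(2)\}\) and \(P/\{2\}=\widehat{P}^{2}_{2}=\{(0)\}\).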
The following conclusion is given in Proposition 4.11 (f) of [1].
**Theorem 3.4**.: _[_1_]_ _Let \(P\) be a polymatroid on \([n]\). For some \(t\in[n]\), let \(\alpha_{t}\), \(\beta_{t}\), and \(\widehat{P}^{t}_{j}\) be defined as above._
1. _If_ \(\beta_{t}-\alpha_{t}=0\)_, then_ \[\mathscr{T}_{P}(x,y)=(x+y-1)\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x,y)=( x+y-1)\mathscr{T}_{\widehat{P}^{t}_{\beta_{t}}}(x,y).\]
2. _If_ \(\beta_{t}-\alpha_{t}=1\)_, then_ \[\mathscr{T}_{P}(x,y)=x\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x,y)+y \mathscr{T}_{\widehat{P}^{t}_{\beta_{t}}}(x,y).\]
In fact, this relation generalizes the deletion-contraction formula of the classical Tutte polynomial. In other words, if \(M\subset 2^{[n]}\) is a matroid of rank \(d\), and \(P=P(M)\subset\{0,1\}^{n}\) is its corresponding polymatroid, then Theorem 3.4 is consistent with the deletion-contraction formula of \(T_{M}(x,y)\). Theorem 3.4 (1) and the earlier relation (2) imply that
\[I_{\widehat{P}^{t}_{j}}=I_{P^{t}_{j}}\text{ and }X_{\widehat{P}^{t}_{j}}=X_{P^{t }_{j}}\text{ for any }j\in T_{t}. \tag{8}\]
That is, when we consider essentially the same \(P\) as a polymatroid over a ground set with an extra element, \(\mathscr{T}_{P}\) changes but \(I_{P}\) and \(X_{P}\) do not change. We now present a deletion-contraction formula for the case \(\beta_{t}-\alpha_{t}>0\) which generalizes Theorem 3.4 (2).
**Theorem 3.5**.: _Let \(P\) be a polymatroid on \([n]\). For some \(t\in[n]\), let \(\alpha_{t}\), \(\beta_{t}\), \(T_{t}\) and \(\widehat{P}^{t}_{j}\) be given by the above definitions. If \(\beta_{t}-\alpha_{t}>0\), then_
\[\mathscr{T}_{P}(x,y)=x\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x,y)+y \mathscr{T}_{\widehat{P}^{t}_{\beta_{t}}}(x,y)+\sum_{j\in T_{t}\setminus\{ \alpha_{t},\beta_{t}\}}\mathscr{T}_{\widehat{P}^{t}_{j}}(x,y).\]
Proof.: By the \(S_{n}\)-invariance of \(\mathscr{T}_{P}(x,y)\), we may let \(t=n\). Then we have the following two claims.
**Claim 1.** If \(\beta_{t}-\alpha_{t}>0\), then
1. \(t\in\mathrm{Int}_{P}(\mathbf{a})\setminus\mathrm{Ext}_{P}(\mathbf{a})\) for any \(\mathbf{a}\in P^{t}_{\alpha_{t}}\);
2. \(t\in\mathrm{Ext}_{P}(\mathbf{a})\setminus\mathrm{Int}_{P}(\mathbf{a})\) for any \(\mathbf{a}\in P^{t}_{\beta_{t}}\);
3. \(t\in[n]\setminus(\mathrm{Ext}_{P}(\mathbf{a})\cup\mathrm{Int}_{P}(\mathbf{a}))\) for any \(\mathbf{a}\in P^{t}_{j}\), where \(j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}\);
4. \(t\in\mathrm{Ext}_{P^{t}_{j}}(\mathbf{a})\cap\mathrm{Int}_{P^{t}_{j}}(\mathbf{a})\) for any \(\mathbf{a}\in P^{t}_{j}\), where \(j\in T_{t}\).
_Proof of Claim 1._ For any \(\mathbf{a}\in P^{t}_{\alpha_{t}}\), we have that \([n]\setminus\{t\}\in\mathcal{I}_{P}(\mathbf{a})\) by the definition of \(P^{t}_{\alpha_{t}}\). Then \(t\in\mathrm{Int}_{P}(\mathbf{a})\) by Theorem 2.4 (1). If \(\mathbf{a}\in P^{t}_{\beta_{t}}\), then \(t\in\mathrm{Ext}_{P}(\mathbf{a})\) by Theorem 2.4 (2) as \(\{t\}\in\mathcal{I}_{P}(\mathbf{a})\). It is clear that for any \(j\in T_{t}\), we have that \(t\in\mathrm{Int}_{P^{t}_{j}}(\mathbf{a})\cap\mathrm{Ext}_{P^{t}_{j}}(\mathbf{a})\) for any \(\mathbf{a}\in P^{t}_{j}\), since \(a_{t}=j\), that is, \([n]\setminus\{t\}\in\mathcal{I}_{P^{t}_{j}}(\mathbf{a})\) and \(\{t\}\in\mathcal{I}_{P^{t}_{j}}(\mathbf{a})\). The other results are true by the Exchange Axiom.
**Claim 2.** For any \(j\in T_{t}\), let \(\mathbf{a}\in P^{t}_{j}\) be a basis of the polymatroid \(P^{t}_{j}\). Then for any \(k\in[n]\setminus\{t\}\), we have that \(k\in\mathrm{Int}_{P}(\mathbf{a})\) if and only if \(k\in\mathrm{Int}_{P^{t}_{j}}(\mathbf{a})\), and \(k\in\mathrm{Ext}_{P}(\mathbf{a})\) if and only if \(k\in\mathrm{Ext}_{P^{t}_{j}}(\mathbf{a})\).
_Proof of Claim 2._ It is easy to see that \(\mathrm{Int}_{P}(\mathbf{a})\subset\mathrm{Int}_{P^{t}_{j}}(\mathbf{a})\) and \(\mathrm{Ext}_{P}(\mathbf{a})\subset\mathrm{Ext}_{P^{t}_{j}}(\mathbf{a})\) as \(P^{t}_{j}\subset P\).
If \(k\notin\mathrm{Int}_{P}(\mathbf{a})\), then there is \(k^{\prime}<k\) so that \(\mathbf{b}=\mathbf{a}-\mathbf{e}_{k}+\mathbf{e}_{k^{\prime}}\in P\). Note that \(b_{t}=a_{t}=j\), that is, \(\mathbf{b}\in P^{t}_{j}\). It implies that \(k\notin\mathrm{Int}_{P^{t}_{j}}(\mathbf{a})\). Similarly, if \(k\notin\mathrm{Ext}_{P}(\mathbf{a})\), then there is \(k^{\prime\prime}<k\) so that \(\mathbf{c}=\mathbf{a}+\mathbf{e}_{k}-\mathbf{e}_{k^{\prime\prime}}\in P\). Hence, \(k\notin\mathrm{Ext}_{P^{t}_{j}}(\mathbf{a})\) because \(\mathbf{c}\in P^{t}_{j}\). Thus, the claim holds.
We know that for any basis \(\mathbf{a}\in P\), there is an integer \(j\in T_{t}\) so that \(\mathbf{a}\in P^{t}_{j}\). Then by Claims 1 and 2,
\[\begin{aligned}\mathscr{T}_{P}(x,y)&=\sum_{\mathbf{a}\in P}x^{oi_{P}(\mathbf{a})}y^{oe_{P}(\mathbf{a})}(x+y-1)^{ie_{P}(\mathbf{a})}\\ &=\sum_{\mathbf{a}\in P^{t}_{\alpha_{t}}}x^{oi_{P}(\mathbf{a})}y^{oe_{P}(\mathbf{a})}(x+y-1)^{ie_{P}(\mathbf{a})}+\sum_{\mathbf{a}\in P^{t}_{\beta_{t}}}x^{oi_{P}(\mathbf{a})}y^{oe_{P}(\mathbf{a})}(x+y-1)^{ie_{P}(\mathbf{a})}\\ &\quad+\sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}\left[\sum_{\mathbf{a}\in P^{t}_{j}}x^{oi_{P}(\mathbf{a})}y^{oe_{P}(\mathbf{a})}(x+y-1)^{ie_{P}(\mathbf{a})}\right]\\ &=\sum_{\mathbf{a}\in P^{t}_{\alpha_{t}}}x^{oi_{P^{t}_{\alpha_{t}}}(\mathbf{a})+1}y^{oe_{P^{t}_{\alpha_{t}}}(\mathbf{a})}(x+y-1)^{ie_{P^{t}_{\alpha_{t}}}(\mathbf{a})-1}\\ &\quad+\sum_{\mathbf{a}\in P^{t}_{\beta_{t}}}x^{oi_{P^{t}_{\beta_{t}}}(\mathbf{a})}y^{oe_{P^{t}_{\beta_{t}}}(\mathbf{a})+1}(x+y-1)^{ie_{P^{t}_{\beta_{t}}}(\mathbf{a})-1}\\ &\quad+\sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}\left[\sum_{\mathbf{a}\in P^{t}_{j}}x^{oi_{P^{t}_{j}}(\mathbf{a})}y^{oe_{P^{t}_{j}}(\mathbf{a})}(x+y-1)^{ie_{P^{t}_{j}}(\mathbf{a})-1}\right]\\ &=(x+y-1)^{-1}\left[x\mathscr{T}_{P^{t}_{\alpha_{t}}}(x,y)+y\mathscr{T}_{P^{t}_{\beta_{t}}}(x,y)+\sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}\mathscr{T}_{P^{t}_{j}}(x,y)\right].\end{aligned}\]
Hence, \(\mathscr{T}_{P}(x,y)=x\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x,y)+y \mathscr{T}_{\widehat{P}^{t}_{\beta_{t}}}(x,y)+\sum_{j\in T_{t}\setminus\{ \alpha_{t},\beta_{t}\}}\mathscr{T}_{\widehat{P}^{t}_{j}}(x,y)\) since \(\mathscr{T}_{\widehat{P}^{t}_{j}}(x,y)=(x+y-1)^{-1}\mathscr{T}_{P^{t}_{j}}(x,y)\) for any \(j\in T_{t}\) by Theorem 3.4 (1).
Note that if \(P\) is a hypergraphical polymatroid, then both \(\widehat{P}^{t}_{\alpha_{t}}\) and \(\widehat{P}^{t}_{\beta_{t}}\) are hypergraphical. However, \(\widehat{P}^{t}_{j}\) is not necessarily hypergraphical for \(j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}\).
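As an illustration of Theorem 3.5, take again \(P=\{(2,0),(1,1),(0,2)\}\) and \(t=2\), so that \(\beta_{t}-\alpha_{t}=2\). Each slice \(\widehat{P}^{2}_{j}\) (\(j=0,1,2\)) is a single point in \(\mathbb{Z}^{1}\), hence \(\mathscr{T}_{\widehat{P}^{2}_{j}}(x,y)=x+y-1\), and Theorem 3.5 gives
\[\mathscr{T}_{P}(x,y)=x(x+y-1)+y(x+y-1)+(x+y-1)=(x+y+1)(x+y-1),\]
in agreement with the direct computation above.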
A deletion-contraction formula of interior and exterior polynomials of polymatroids can be obtained from Theorems 3.4 (1) and 3.5.
**Corollary 3.6**.: _Let \(P\) be a polymatroid on \([n]\). For some \(t\in[n]\), let \(\alpha_{t}\), \(\beta_{t}\), \(T_{t}\) and \(\widehat{P}^{t}_{j}\) be defined as above. Then_
\[I_{P}(x)=I_{\widehat{P}^{t}_{\alpha_{t}}}(x)+x\sum_{j\in T_{t}\setminus\{ \alpha_{t}\}}I_{\widehat{P}^{t}_{j}}(x)\]
_and_
\[X_{P}(y)=X_{\widehat{P}^{t}_{\beta_{t}}}(y)+y\sum_{j\in T_{t}\setminus\{\beta_ {t}\}}X_{\widehat{P}^{t}_{j}}(y).\]
Proof.: We first consider the statement of the interior polynomial. If \(\beta_{t}-\alpha_{t}=0\), then as \(T_{t}\setminus\{\alpha_{t}\}=\emptyset\), the conclusion is true by Theorem 3.4 (1) and the fact that \(I_{P}(x)=x^{n}\mathscr{T}_{P}(x^{-1},1)\).
If \(\beta_{t}-\alpha_{t}>0\), by Theorem 3.5, we have
\[\begin{aligned}I_{P}(x)&=x^{n}\mathscr{T}_{P}(x^{-1},1)\\ &=x^{n}\left(x^{-1}\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x^{-1},1)+\sum_{j\in T_{t}\setminus\{\alpha_{t}\}}\mathscr{T}_{\widehat{P}^{t}_{j}}(x^{-1},1)\right)\\ &=x^{n-1}\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x^{-1},1)+x^{n}\sum_{j\in T_{t}\setminus\{\alpha_{t}\}}\mathscr{T}_{\widehat{P}^{t}_{j}}(x^{-1},1)\\ &=I_{\widehat{P}^{t}_{\alpha_{t}}}(x)+x\sum_{j\in T_{t}\setminus\{\alpha_{t}\}}I_{\widehat{P}^{t}_{j}}(x).\end{aligned}\]
Similarly, the statement regarding the exterior polynomial also holds by \(X_{P}(y)=y^{n}\mathscr{T}_{P}(1,y^{-1})\), Theorems 3.4 (1) and 3.5.
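In the same example as above, \(I_{\widehat{P}^{2}_{j}}=X_{\widehat{P}^{2}_{j}}=1\) for every \(j\in T_{2}\), and Corollary 3.6 recovers \(I_{P}(x)=1+x(1+1)=1+2x\) and \(X_{P}(y)=1+y(1+1)=1+2y\).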
## 4 Monotonicity properties
In this section, we prove two monotonicity properties of interior and exterior polynomials of polymatroids. For two polynomials \(g_{1}\) and \(g_{2}\), we say \(g_{1}\leq g_{2}\) if each coefficient of \(g_{1}\) is less than or equal to the corresponding coefficient of \(g_{2}\).
We first consider the case when there is an inclusion of the two polymatroids. We start with a result from [6].
**Lemma 4.1** ([6]).: _Let \(P\) be a polymatroid on \([n]\) and \(f\) be its rank function. Then the coefficients of the terms of degree \(n-1\) for \(\mathscr{T}_{P}(x,1)\) and \(\mathscr{T}_{P}(1,y)\) are \(\sum_{i\in[n]}f(\{i\})-f([n])\) and \(\sum_{i\in[n]}f([n]\setminus\{i\})-(n-1)f([n])\), respectively._
Then our first main claim is as follows.
**Theorem 4.2**.: _Let \(P\) and \(P^{\prime}\) be two polymatroids on the same ground set \([n]\). If \(P^{\prime}\subset P\), then \(I_{P^{\prime}}\leq I_{P}\) and \(X_{P^{\prime}}\leq X_{P}\)._
Proof.: It is enough to prove the statement for the exterior polynomial because of (4) and because \(P^{\prime}\subset P\) implies \(-P^{\prime}\subset-P\). We now prove the claim by induction on \(n\).
Let \(f\) and \(f^{\prime}\) be the rank functions of \(P\) and \(P^{\prime}\), respectively. Then \(f^{\prime}([n])=f([n])\) and by (3) we also have \(f^{\prime}(I)\leq f(I)\) for all \(I\subset[n]\) as \(P^{\prime}\subset P\). Note that the constant term of the exterior polynomial of any polymatroid is \(1\). Then the statement is clear in the cases \(n=1\) and \(n=2\)
by Lemma 4.1. Assume that the conclusion holds for \(n\leq m-1\). Then for \(n=m\), we divide the proof into two cases.
**Case 1.** Assume that there is some \(t\in[n]\) so that \(f^{\prime}(\{t\})=f(\{t\})\). Let \(T_{t}=\{f([n])-f([n]\setminus\{t\}),\ldots,f(\{t\})\}\) and \(T^{\prime}_{t}=\{f^{\prime}([n])-f^{\prime}([n]\setminus\{t\}),\ldots,f^{\prime}(\{t\})\}\). Note that as \(P^{\prime}\subset P\), we have \(T^{\prime}_{t}\subset T_{t}\) and \(\widehat{P^{\prime}}^{t}_{j}\subset\widehat{P}^{t}_{j}\) (cf. (6)) for any \(j\in T^{\prime}_{t}\). By the induction hypothesis and Corollary 3.6, we know that
\[\begin{aligned}X_{P}(y)&=X_{\widehat{P}^{t}_{f(\{t\})}}(y)+y\sum_{j\in T_{t}\setminus\{f(\{t\})\}}X_{\widehat{P}^{t}_{j}}(y)\\ &=X_{\widehat{P}^{t}_{f(\{t\})}}(y)+y\sum_{j\in T^{\prime}_{t}\setminus\{f(\{t\})\}}X_{\widehat{P}^{t}_{j}}(y)+y\sum_{j\in T_{t}\setminus T^{\prime}_{t}}X_{\widehat{P}^{t}_{j}}(y)\\ &\geq X_{\widehat{P^{\prime}}^{t}_{f^{\prime}(\{t\})}}(y)+y\sum_{j\in T^{\prime}_{t}\setminus\{f^{\prime}(\{t\})\}}X_{\widehat{P^{\prime}}^{t}_{j}}(y)+y\sum_{j\in T_{t}\setminus T^{\prime}_{t}}X_{\widehat{P}^{t}_{j}}(y)\\ &=X_{P^{\prime}}(y)+y\sum_{j\in T_{t}\setminus T^{\prime}_{t}}X_{\widehat{P}^{t}_{j}}(y)\\ &\geq X_{P^{\prime}}(y).\end{aligned}\]
**Case 2.** Assume that \(f^{\prime}(\{t^{\prime}\})<f(\{t^{\prime}\})\) for all \(t^{\prime}\in[n]\). Then choose an arbitrary index \(t\in[n]\). By the Exchange Axiom, the set \(P^{\prime\prime}=\{{\bf a}\in P\mid a_{t}\leq f^{\prime}(\{t\})\}\) is also a polymatroid. Case 1 implies that \(X_{P^{\prime}}\leq X_{P^{\prime\prime}}\). We now prove that \(X_{P^{\prime\prime}}\leq X_{P}\). This in fact follows from Case 1 applied to any coordinate other than \(t\), but we will follow a different route.
By the \(S_{n}\)-invariance of the exterior polynomial, we may assume that \(t=1\). We claim that
\[{\rm Ext}_{P}({\bf b})={\rm Ext}_{P^{\prime\prime}}({\bf b})\mbox{ for any }{\bf b}\in P^{\prime\prime}. \tag{9}\]
* It is obvious that \({\rm Ext}_{P}({\bf b})\subset{\rm Ext}_{P^{\prime\prime}}({\bf b})\) as \(P^{\prime\prime}\subset P\).
* If \(i\notin{\rm Ext}_{P}({\bf b})\), then by definition, there exists some \(i^{\prime}\in[n]\) with \(i^{\prime}<i\) so that \({\bf c}={\bf b}-{\bf e}_{i^{\prime}}+{\bf e}_{i}\in P\). Note that \({\bf c}\in P^{\prime\prime}\) as \(c_{t}\leq b_{t}\leq f^{\prime}(\{t\})\). Hence, \(i\notin{\rm Ext}_{P^{\prime\prime}}({\bf b})\).
Hence, (9) implies that \(\overline{\epsilon}_{P}({\bf b})=\overline{\epsilon}_{P^{\prime\prime}}({\bf b})\) for any \({\bf b}\in P^{\prime\prime}\). Then
\[\begin{aligned}X_{P}(y)&=\sum_{{\bf b}\in P}y^{\overline{\epsilon}_{P}({\bf b})}\\ &=\sum_{{\bf b}\in P^{\prime\prime}}y^{\overline{\epsilon}_{P}({\bf b})}+\sum_{{\bf b}\in P\setminus P^{\prime\prime}}y^{\overline{\epsilon}_{P}({\bf b})}\\ &=\sum_{{\bf b}\in P^{\prime\prime}}y^{\overline{\epsilon}_{P^{\prime\prime}}({\bf b})}+\sum_{{\bf b}\in P\setminus P^{\prime\prime}}y^{\overline{\epsilon}_{P}({\bf b})}\\ &=X_{P^{\prime\prime}}(y)+\sum_{{\bf b}\in P\setminus P^{\prime\prime}}y^{\overline{\epsilon}_{P}({\bf b})}\\ &\geq X_{P^{\prime\prime}}(y).\end{aligned}\]
This completes the proof.
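For instance, \(P^{\prime}=\{(2,0),(1,1)\}\) is a polymatroid contained in \(P=\{(2,0),(1,1),(0,2)\}\), and indeed \(I_{P^{\prime}}(x)=1+x\leq 1+2x=I_{P}(x)\) and \(X_{P^{\prime}}(y)=1+y\leq 1+2y=X_{P}(y)\).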
We next consider the case of minors.
**Definition 4.3**.: Let \(P\) be a polymatroid on \([n]\). A polymatroid \(P^{\prime}\) is a _minor_ of \(P\) if \(P^{\prime}=(P\setminus A)/B\) or \(P^{\prime}=(P/B)\setminus A\) for disjoint subsets \(A\) and \(B\) of \([n]\).
The deletion and contraction are well-defined in the following sense.
**Proposition 4.4**.: Let \(P\) be a polymatroid on \([n]\). Then \((P\setminus A)\setminus B=(P\setminus B)\setminus A=P\setminus(A\cup B)\) and \((P/A)/B=(P/B)/A=P/(A\cup B)\) for disjoint subsets \(A\) and \(B\) of \([n]\).
Proof.: Let \(f_{P}\), \(f_{P\setminus A}\), \(f_{(P\setminus A)\setminus B}\), \(f_{P\setminus(A\cup B)}\), \(f_{(P/A)/B}\), \(f_{P/A}\) and \(f_{P/(A\cup B)}\) be the rank functions of the polymatroids \(P\), \(P\setminus A\), \((P\setminus A)\setminus B\), \(P\setminus(A\cup B)\), \((P/A)/B\), \(P/A\) and \(P/(A\cup B)\), respectively. Then for any subset \(T\subset[n]\setminus A\setminus B\),
\[f_{(P\setminus A)\setminus B}(T) = f_{P\setminus A}(T)\] \[= f_{P}(T)\] \[= f_{P\setminus(A\cup B)}(T)\]
and
\[f_{(P/A)/B}(T) = f_{P/A}(T\cup B)-f_{P/A}(B)\] \[= (f_{P}(T\cup B\cup A)-f_{P}(A))-(f_{P}(B\cup A)-f_{P}(A))\] \[= f_{P}(T\cup(B\cup A))-f_{P}(B\cup A)\] \[= f_{P/(A\cup B)}(T).\]
Hence, \((P\setminus A)\setminus B=P\setminus(A\cup B)\) and \((P/A)/B=P/(A\cup B)\). Similarly, we also have that \((P\setminus B)\setminus A=P\setminus(A\cup B)\) and \((P/B)/A=P/(A\cup B)\). This completes the proof.
For a polymatroid \(P\), let \(f_{P}\) and \(f_{-P}\) be the rank functions of the polymatroids \(P\) and \(-P\), respectively. Then by (3) and the definition of the dual polymatroid, we have that
\[f_{-P}(T)=f_{P}([n]\setminus T)-f_{P}([n])\]
for any subset \(T\subset[n]\). Then the relation of the dual polymatroid and contraction and deletion is as follows.
**Proposition 4.5**.: Let \(P\) be a polymatroid on \([n]\). Then \(-(P\setminus A)=(-P)/A\) and \(-(P/A)=(-P)\setminus A\) for any subset \(A\subset[n]\).
Proof.: Let \(f_{P}\), \(f_{P\setminus A}\), \(f_{-(P\setminus A)}\), \(f_{(-P)/A}\) and \(f_{-P}\) be the rank functions of the polymatroids \(P\), \(P\setminus A\), \(-(P\setminus A)\), \((-P)/A\) and \(-P\), respectively. Then for any subset \(T\subset[n]\setminus A\),
\[f_{-(P\setminus A)}(T) = f_{P\setminus A}([n]\setminus A\setminus T)-f_{P\setminus A}([n ]\setminus A)\] \[= f_{P}([n]\setminus A\setminus T)-f_{P}([n]\setminus A)\] \[= (f_{P}([n]\setminus(A\cup T))-f_{P}([n]))-(f_{P}([n]\setminus A )-f_{P}([n]))\] \[= f_{-P}(T\cup A)-f_{-P}(A)\] \[= f_{(-P)/A}(T).\]
Hence, \(-(P\setminus A)=(-P)/A\). This implies that \(-((-P)\setminus A)=P/A\). Therefore, by (5), \(-(P/A)=(-P)\setminus A\) also holds.
We study monotonicity properties of the deletion and contraction before our second main claim.
**Lemma 4.6**.: _Let \(P\) be a polymatroid on \([n]\). Then \(I_{P\setminus A}\leq I_{P}\), \(I_{P/A}\leq I_{P}\), \(X_{P\setminus A}\leq X_{P}\) and \(X_{P/A}\leq X_{P}\) for any subset \(A\subset[n]\)._
Proof.: It is enough to prove that the statements \(I_{P\setminus A}\leq I_{P}\) and \(I_{P/A}\leq I_{P}\) hold because of (4) and Proposition 4.5. By Proposition 4.4, it suffices to consider the case that \(A\) is a singleton set \(\{t\}\). In order to prove \(I_{P\setminus\{t\}}\leq I_{P}\) and \(I_{P/\{t\}}\leq I_{P}\), by (7) and (8), it is equivalent to prove that \(I_{P_{\alpha_{t}}^{t}}\leq I_{P}\) and \(I_{P_{\beta_{t}}^{t}}\leq I_{P}\). These follow from Theorem 4.2. This completes the proof.
**Theorem 4.7**.: _If \(P^{\prime}\) is a minor of a polymatroid \(P\), then \(I_{P^{\prime}}\leq I_{P}\) and \(X_{P^{\prime}}\leq X_{P}\)._
Proof.: Without loss of generality, we can assume that \(P\) is a polymatroid on \([n]\) and \(P^{\prime}=(P\setminus A)/B\) for disjoint subsets \(A\subset[n]\) and \(B\subset[n]\). (It is obvious that \((P\setminus A)/B=(P/B)\setminus A\).) Then \(I_{P^{\prime}}\leq I_{P\setminus A}\leq I_{P}\) and \(X_{P^{\prime}}\leq X_{P\setminus A}\leq X_{P}\) by Lemma 4.6.
Now let us focus on the hypergraphical cases. We can also obtain a monotonicity property of interior and exterior polynomials of hypergraphs by Theorems 4.2, 4.7 and 2.6 (1).
**Corollary 4.8**.: _Let \(\mathcal{H}=(V,E)\) and \(\mathcal{H}^{\prime}=(V^{\prime},E^{\prime})\) be two hypergraphs. Let \(\mathrm{Bip}\mathcal{H}\) and \(\mathrm{Bip}\mathcal{H}^{\prime}\) be the associated bipartite graphs of \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\), respectively. If \(\mathrm{Bip}\mathcal{H}^{\prime}\) is a subgraph of \(\mathrm{Bip}\mathcal{H}\), then \(I_{\mathcal{H}^{\prime}}\leq I_{\mathcal{H}}\) and \(X_{\mathcal{H}^{\prime}}\leq X_{\mathcal{H}}\)._
Proof.: Let \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\) be the bipartite graph obtained by removing all vertices of \(E\setminus E^{\prime}\) and all edges incident with elements of \(E\setminus E^{\prime}\) from \(\mathrm{Bip}\mathcal{H}\). Then \(I_{\mathcal{H}^{\prime\prime}}\leq I_{\mathcal{H}}\) and \(X_{\mathcal{H}^{\prime\prime}}\leq X_{\mathcal{H}}\) by Theorem 4.7.
Let \(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime}\) be the bipartite graph obtained from \(\mathrm{Bip}\mathcal{H}^{\prime}\) by adding the \(|V\setminus V^{\prime}|\) vertices of \(V\setminus V^{\prime}\) and \(|V\setminus V^{\prime}|-c\) edges, so that each vertex \(v^{\prime}\in V\setminus V^{\prime}\) that is incident with at least one edge in \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\) becomes incident with exactly one edge \(uv^{\prime}\), where \(uv^{\prime}\) is an edge of \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\), and \(c\) is the number of vertices of \(V\setminus V^{\prime}\) incident with no edge in \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\). Then \(I_{\mathcal{H}^{\prime\prime\prime}}=I_{\mathcal{H}^{\prime}}\) and \(X_{\mathcal{H}^{\prime\prime\prime}}=X_{\mathcal{H}^{\prime}}\) by Theorem 2.6 (1).
Figure 1: An example in Corollary 4.8
Let \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime})\) and \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime})\) be the number of connected components of \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\) and \(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime}\), respectively. Note that \(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime}\subset\mathrm{Bip}\mathcal{H}^{\prime\prime}\). Then \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime})\geq c(\mathrm{Bip}\mathcal{H}^{\prime\prime})\). Arbitrarily choose \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime})-c(\mathrm{Bip}\mathcal{H}^{\prime\prime})\) edges \(\widehat{E}\) in \(\mathrm{Bip}\mathcal{H}^{\prime\prime}\) so that \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime}+\widehat{E})=c(\mathrm{Bip}\mathcal{H}^{\prime\prime})\). Let \(\mathrm{Bip}\mathcal{H}^{4}\) be a bipartite graph obtained by adding \(c(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime})-c(\mathrm{Bip}\mathcal{H}^{\prime\prime})\) vertices \(\widehat{V}\) to \(V\) and replacing \(uv\) by an edge joining \(u\) to some vertex of \(\widehat{V}\) for each edge \(uv\in\widehat{E}\) from \(\mathrm{Bip}\mathcal{H}^{\prime\prime\prime}+\widehat{E}\), where \(u\in E^{\prime}\) and \(v\in V\), so that for each \(v\in\widehat{V}\), there is exactly one edge incident with \(v\). Note that \(\mu_{\mathrm{Bip}\mathcal{H}^{4}}(T)\leq\mu_{\mathrm{Bip}\mathcal{H}^{\prime\prime}}(T)\) for all \(T\subset E^{\prime}\), and \(\mu_{\mathrm{Bip}\mathcal{H}^{4}}(E^{\prime})=\mu_{\mathrm{Bip}\mathcal{H}^{\prime\prime}}(E^{\prime})\). Then by Theorem 4.2, we know that \(I_{\mathcal{H}^{4}}\leq I_{\mathcal{H}^{\prime\prime}}\) and \(X_{\mathcal{H}^{4}}\leq X_{\mathcal{H}^{\prime\prime}}\). Moreover, by Theorem 2.6 (1), we have that \(I_{\mathcal{H}^{\prime}}=I_{\mathcal{H}^{4}}\) and \(X_{\mathcal{H}^{\prime}}=X_{\mathcal{H}^{4}}\). Hence, the conclusion holds.
See Figure 1 for an illustration.
_Remark 4.9_.: Let \(P\) and \(P^{\prime}\) be polymatroids on the same set \([n]\). If \(P^{\prime}\subset P\), then it is possible that there are \(i,j\in\mathbb{Z}\) so that \(x^{i}y^{j}[\mathscr{T}_{P^{\prime}}(x,y)]>x^{i}y^{j}[\mathscr{T}_{P}(x,y)]\). Likewise, this monotonicity property of the polymatroid Tutte polynomial does not hold for minors either. Counterexamples can already be found among hypergraphical cases. Furthermore, for a polymatroid \(P\) over \([n]\) with rank \(d\), let us define
\[T_{P}(x,y):=\frac{(x+y-xy)^{n}}{x^{n-d}y^{d}}\mathscr{T}_{P}(\frac{x}{x+y-xy}, \frac{y}{x+y-xy}).\]
This is an equivalent form of \(\mathscr{T}_{P}\) and when written in this way, the invariant becomes a direct generalization of the Tutte polynomial of matroids, cf. (1). Moreover, \(T_{P}(x,y)\) does not satisfy the monotonicity property, either.
For example, let us consider the hypergraphical polymatroids \(P_{\cal H}\) and \(P_{\cal H^{\prime}}\) over \(\{1,2,3\}\), where the hypergraphs \({\cal H}\) and \({\cal H^{\prime}}\) are defined by their associated bipartite graphs in Figure 2 (1) and (2), respectively. Let also \(P_{{\cal H^{\prime\prime}}}=P_{\cal H}\setminus\{1\}\); see Figure 2 (3) for the associated bipartite graph. Then \(P_{{\cal H^{\prime}}}\subset P_{\cal H}\) and \(P_{{\cal H^{\prime\prime}}}\) is a minor of \(P_{\cal H}\). On the other hand,
\[\mathscr{T}_{P_{\cal H}}(x,y)=x^{3}+3x^{2}y+3xy^{2}+y^{3}+2x^{2}+3xy+y^{2}-x-2,\]
\[\mathscr{T}_{P_{{\cal H^{\prime}}}}(x,y)=x^{3}+3x^{2}y+3xy^{2}+y^{3}-2x^{2}-4 xy-2y^{2}+x+y,\]
and
\[\mathscr{T}_{P_{{\cal H^{\prime\prime}}}}(x,y)=x^{2}+2xy+y^{2}-x-y.\]
In particular, we have \(xy^{0}[\mathscr{T}_{P_{{\cal H^{\prime}}}}(x,y)]>xy^{0}[\mathscr{T}_{P_{{\cal H }}}(x,y)]\), \(x^{0}y[\mathscr{T}_{P_{{\cal H^{\prime}}}}(x,y)]>x^{0}y[\mathscr{T}_{P_{{\cal H }}}(x,y)]\), \(x^{0}y^{0}[\mathscr{T}_{P_{{\cal H^{\prime}}}}(x,y)]>x^{0}y^{0}[\mathscr{T}_{P _{{\cal H}}}(x,y)]\), and \(x^{0}y^{0}[\mathscr{T}_{P_{{\cal H^{\prime\prime}}}}(x,y)]>x^{0}y^{0}[\mathscr{ T}_{P_{{\cal H}}}(x,y)]\).
Note that \(\mathscr{T}_{P_{{\cal H}}}(x,y)-\mathscr{T}_{P_{{\cal H^{\prime}}}}(x,y)=4x^{2}+7xy+3y^{2}-2x-y-2.\) This yields
\[\begin{aligned}T_{P_{\cal H}}(x,y)-T_{P_{{\cal H}^{\prime}}}(x,y)&=\left(\mathscr{T}_{P_{\cal H}}\left(\frac{x}{x+y-xy},\frac{y}{x+y-xy}\right)-\mathscr{T}_{P_{{\cal H}^{\prime}}}\left(\frac{x}{x+y-xy},\frac{y}{x+y-xy}\right)\right)\cdot\frac{(x+y-xy)^{3}}{x^{-1}y^{4}}\\ &=xy^{-4}(2x^{3}y^{3}-8x^{3}y^{2}-7x^{2}y^{3}+11x^{2}y^{2}+6x^{3}y+5xy^{3}).\end{aligned}\]
Hence, we have \(x^{4}y^{-2}[T_{P_{{\cal H}^{\prime}}}(x,y)]>x^{4}y^{-2}[T_{P_{{\cal H}}}(x,y)]\) and \(x^{3}y^{-1}[T_{P_{{\cal H}^{\prime}}}(x,y)]>x^{3}y^{-1}[T_{P_{{\cal H}}}(x,y)]\).
As to \({\cal H^{\prime\prime}}\), whose \(n\) value differs from that of \({\cal H}\), we have
\[T_{P_{{\cal H^{\prime\prime}}}}(x,y)=x^{2}y^{-4}(x^{2}+2xy+y^{2}-(x+y)(x+y-xy)).\]
In particular, \(x^{4}y^{-2}[T_{P_{{\cal H^{\prime\prime}}}}(x,y)]=0\). On the other hand,
\[T_{P_{\cal H}}(x,y)=xy^{-4}\left(x^{3}+3x^{2}y+3xy^{2}+y^{3}+(2x^{2}+3xy+y^{2})(x+y-xy)-x(x+y-xy)^{2}-2(x+y-xy)^{3}\right)\]
has \(x^{4}y^{-2}[T_{P_{\cal H}}(x,y)]=-7<0=x^{4}y^{-2}[T_{P_{{\cal H}^{\prime \prime}}}(x,y)].\)
## 5 Further applications of the deletion-contraction formula
In this section, by applying the deletion-contraction formula of Section 3, we compute the coefficients of some terms of the polymatroid Tutte polynomial and characterize a family of hypergraphs.
### The extremal coefficients of \(\mathscr{T}_{P}(x,y)\)
In this subsection, we study the coefficients of some extremal terms of the polymatroid Tutte polynomial.
In [1, Proposition 4.11 (d)], Bernardi, Kalman, and Postnikov derived the coefficients of the top degree terms of the polymatroid Tutte polynomial.
**Lemma 5.1**.: _[_1_]_ _Let \(P\) be a polymatroid over \([n]\). Then for any \(k\in[n]\cup\{0\}\),_
\[x^{k}y^{n-k}[\mathscr{T}_{P}(x,y)]=\binom{n}{k}.\]
By Lemmas 4.1 and 5.1, we have the following statement.
**Lemma 5.2**.: _Let \(P\) be a polymatroid over \([n]\) with the rank function \(f\). Then_
\[x^{n-1}[\mathscr{T}_{P}(x,y)]=\sum_{i\in[n]}f(\{i\})-f([n])-n\]
_and_
\[y^{n-1}[\mathscr{T}_{P}(x,y)]=\sum_{i\in[n]}f([n]\setminus\{i\})-(n-1)f([n])-n.\]
We next study the coefficients of the degree \(n-1\) terms in \(\mathscr{T}_{P}(x,y)\), generalizing Lemma 5.2, using the deletion-contraction relation obtained in Section 3.
**Theorem 5.3**.: _Let \(P\) be a polymatroid on \([n]\) with the rank function \(f\). Then for any \(k\in[n]\),_
\[x^{n-k}y^{k-1}[\mathscr{T}_{P}(x,y)]=\sum_{\stackrel{{ S\subset[n]}}{{|S|=k-1}}}f(S)+\sum_{ \stackrel{{ S^{\prime}\subset[n]}}{{|S^{\prime}|=k}}}f(S^{\prime} )-\binom{n}{k-1}f([n])-k\binom{n}{k}. \tag{10}\]
Proof.: For any \(t\in[n]\), let \(\alpha_{t}=f([n])-f([n]\setminus\{t\})\) and \(\beta_{t}=f(\{t\})\).
If \(\alpha_{t^{\prime}}=\beta_{t^{\prime}}\) for all \(t^{\prime}\in[n]\), then \(P\) has a unique basis. Hence,
\[\mathscr{T}_{P}(x,y)=(x+y-1)^{n}.\]
Then the left-hand side of the equation (10) is
\[x^{n-k}y^{k-1}[\mathscr{T}_{P}(x,y)]=-\binom{n}{n-1}\binom{n-1}{k-1}=-n\binom{ n-1}{k-1}.\]
Note that in this case,
\[\sum_{\begin{subarray}{c}S\subset[n]\\ |S|=k-1\end{subarray}}f(S)=\binom{n-1}{k-2}f([n])\quad\text{and}\quad\sum_{\begin{subarray}{c}S^{\prime}\subset[n]\\ |S^{\prime}|=k\end{subarray}}f(S^{\prime})=\binom{n-1}{k-1}f([n]).\]
Then the right-hand side of the equation (10) is
\[\sum_{\begin{subarray}{c}S\subset[n]\\ |S|=k-1\end{subarray}}f(S)+\sum_{\begin{subarray}{c}S^{\prime}\subset[n]\\ |S^{\prime}|=k\end{subarray}}f(S^{\prime})-\binom{n}{k-1}f([n])-k\binom{n}{k}=-k \binom{n}{k}=-n\binom{n-1}{k-1}.\]
Assume that there exists some \(t\in[n]\) satisfying \(\alpha_{t}\neq\beta_{t}\). Then we prove the equation (10) by induction on \(n\). It is obvious in the cases \(n=1\) and \(n=2\) by Lemma 4.1. Suppose that \(n\geq 3\). Let \(T_{t}=\{\alpha_{t},\alpha_{t}+1,\ldots,\beta_{t}\}\). For any \(j\in T_{t}\), let \(P^{t}_{j}=\{(a_{1},\ldots,a_{n})\in P\mid a_{t}=j\}\) and \(\widehat{P}^{t}_{j}=\{(a_{1},\ldots,a_{t-1},a_{t+1},\ldots,a_{n})\in\mathbb{Z}^ {n-1}\mid(a_{1},\ldots,a_{n})\in P^{t}_{j}\}\), and let \(f^{t}_{j}\) be the rank function of \(\widehat{P}^{t}_{j}\).
By Proposition 3.2, for any subset \(I\subset[n]\setminus\{t\}\), we note that \(f^{t}_{\alpha_{t}}(I)=f(I)\) and \(f^{t}_{\beta_{t}}(I)=f(I\cup\{t\})-f(\{t\})\). Then by the induction hypothesis,
\[x^{n-k-1}y^{k-1}[\mathscr{T}_{\widehat{P}^{t}_{\alpha_{t}}}(x,y)]\] \[= \sum_{\begin{subarray}{c}T\subset[n]\setminus\{t\}\\ |T|=k-1\end{subarray}}f^{t}_{\alpha_{t}}(T)+\sum_{\begin{subarray}{c}T^{\prime }\subset[n]\setminus\{t\}\\ |T^{\prime}|=k\end{subarray}}f^{t}_{\alpha_{t}}(T^{\prime})-\binom{n-1}{k-1}f^{ t}_{\alpha_{t}}([n]\setminus\{t\})-k\binom{n-1}{k}\] \[= \sum_{\begin{subarray}{c}T\subset[n]\setminus\{t\}\\ |T|=k-1\end{subarray}}f(T)+\sum_{\begin{subarray}{c}T^{\prime}\subset[n] \setminus\{t\}\\ |T^{\prime}|=k\end{subarray}}f(T^{\prime})-\binom{n-1}{k-1}f([n]\setminus\{t\})- k\binom{n-1}{k}.\]
\[\begin{aligned}x^{n-k}y^{k-2}[\mathscr{T}_{\widehat{P}^{t}_{\beta_{t}}}(x,y)]&=\sum_{\begin{subarray}{c}T\subset[n]\setminus\{t\}\\ |T|=k-2\end{subarray}}f^{t}_{\beta_{t}}(T)+\sum_{\begin{subarray}{c}T^{\prime}\subset[n]\setminus\{t\}\\ |T^{\prime}|=k-1\end{subarray}}f^{t}_{\beta_{t}}(T^{\prime})-\binom{n-1}{k-2}f^{t}_{\beta_{t}}([n]\setminus\{t\})-(k-1)\binom{n-1}{k-1}\\ &=\sum_{\begin{subarray}{c}T\subset[n]\setminus\{t\}\\ |T|=k-2\end{subarray}}(f(T\cup\{t\})-f(\{t\}))+\sum_{\begin{subarray}{c}T^{\prime}\subset[n]\setminus\{t\}\\ |T^{\prime}|=k-1\end{subarray}}(f(T^{\prime}\cup\{t\})-f(\{t\}))\\ &\quad-\binom{n-1}{k-2}(f([n])-f(\{t\}))-(k-1)\binom{n-1}{k-1}.\end{aligned}\]
By Lemma 5.1,
\[\sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}x^{n-k}y^{k-1} [\mathscr{T}_{\hat{P}^{t}_{j}}(x,y)]\] \[= \sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}\binom{n-1}{k-1}\] \[= \binom{n-1}{k-1}(\beta_{t}-\alpha_{t}-1)\] \[= \binom{n-1}{k-1}(f(\{t\})+f([n]\setminus\{t\})-f([n])-1).\]
By Theorem 3.5,
\[x^{n-k}y^{k-1}[\mathscr{T}_{P}(x,y)]=x^{n-k-1}y^{k-1}[\mathscr{ T}_{\hat{P}^{t}_{\alpha_{t}}}(x,y)]+x^{n-k}y^{k-2}[\mathscr{T}_{\hat{P}^{t}_{ \beta_{t}}}(x,y)]\\ +\sum_{j\in T_{t}\setminus\{\alpha_{t},\beta_{t}\}}x^{n-k}y^{k-1 }[\mathscr{T}_{\hat{P}^{t}_{j}}(x,y)].\]
Hence, the equation (10) holds.
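For instance, let \(P=\{(2,0),(1,1)\}\), so that \(f(\{1\})=2\), \(f(\{2\})=1\), \(f(\{1,2\})=2\) and \(\mathscr{T}_{P}(x,y)=(x+y)(x+y-1)=x^{2}+2xy+y^{2}-x-y\). For \(k=1\), formula (10) gives \(f(\emptyset)+\big(f(\{1\})+f(\{2\})\big)-f([2])-\binom{2}{1}=0+3-2-2=-1\), and for \(k=2\) it gives \(\big(f(\{1\})+f(\{2\})\big)+f([2])-2f([2])-2=3+2-4-2=-1\), matching the coefficients of \(x\) and \(y\) above.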
Similarly, we can get formulas for the coefficients \(x^{n-2}[\mathscr{T}_{P}(x,y)]\) and \(y^{n-2}[\mathscr{T}_{P}(x,y)]\). The proof is left to the reader.
**Theorem 5.4**.: _Let \(P\) be a polymatroid on \([n]\) with the rank function \(f\). Then_
1. \(x^{n-2}[\mathscr{T}_{P}(x,y)]=\binom{\sum\limits_{i\in[n]}f(\{i\})-f([n])+1-n} {2}-\left(\sum\limits_{\{i,j\}\subset[n]}\binom{f(\{i\})+f(\{j\})-f(\{i,j\})}{2 }\right)\)_;_
2. \(y^{n-2}[\mathscr{T}_{P}(x,y)]=\binom{\sum\limits_{i\in[n]}f([n]\setminus\{i\})-(n-1)f([n])+1-n}{2}-\left(\sum\limits_{\{i,j\}\subset[n]}\binom{f([n]\setminus\{i\})+f([n]\setminus\{j\})-f([n]\setminus\{i,j\})-f([n])}{2}\right)\).
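For instance, for \(P=\{(2,0),(1,1),(0,2)\}\), part (1) gives \(x^{n-2}[\mathscr{T}_{P}(x,y)]=\binom{1}{2}-\binom{2}{2}=0-1=-1\) with \(n=2\), that is, the constant term of \(\mathscr{T}_{P}(x,y)=x^{2}+2xy+y^{2}-1\).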
Lemma 5.1, Theorems 5.3 and 5.4 imply the following conclusion.
**Corollary 5.5**.: _For a polymatroid \(P\) on \([n]\) with the rank function \(f\),_
1. \(x^{n-2}[\mathscr{T}_{P}(x,1)]=\binom{\sum\limits_{i\in[n]}f(\{i\})-f([n])+1} {2}-\left(\sum\limits_{\{i,j\}\subset[n]}\binom{f(\{i\})+f(\{j\})-f(\{i,j\})+ 1}{2}\right)\)_;_
2. \(y^{n-2}[\mathscr{T}_{P}(1,y)]=\binom{\sum\limits_{i\in[n]}f([n]\setminus\{i\})-(n-1)f([n])+1}{2}-\left(\sum\limits_{\{i,j\}\subset[n]}\binom{f([n]\setminus\{i\})+f([n]\setminus\{j\})-f([n]\setminus\{i,j\})-f([n])+1}{2}\right)\).
Proof.: This follows from Lemma 5.1, Theorems 5.3 and 5.4, since
\[x^{n-2}[\mathscr{T}_{P}(x,1)]=x^{n-2}[\mathscr{T}_{P}(x,y)]+x^{n-2}y[\mathscr{ T}_{P}(x,y)]+x^{n-2}y^{2}[\mathscr{T}_{P}(x,y)]\]
and
\[y^{n-2}[\mathscr{T}_{P}(1,y)]=y^{n-2}[\mathscr{T}_{P}(x,y)]+xy^{n-2}[\mathscr{ T}_{P}(x,y)]+x^{2}y^{n-2}[\mathscr{T}_{P}(x,y)]\]
For hypergraphical cases, Corollary 5.5 (1) is consistent with the result in [11].
**Corollary 5.6**.: _[_11_, Proposition 5.5]_ _Let \(\mathcal{H}\) be a hypergraph and let \(\mathrm{Bip}\mathcal{H}=(V\cup E,\mathcal{E})\) be its associated bipartite graph. Then \(x^{2}[I_{\mathcal{H}}(x)]=\binom{|\mathcal{E}|-|V|-|E|+2}{2}-N\), where \(N\) is the number of 4-cycles in \(\mathrm{Bip}\mathcal{H}\)._
### A characterization of the connectivity of hypergraphs
In this subsection, we characterize a family of hypergraphs using the exterior polynomial.
Kalman [9] obtained the exterior polynomial of hypergraphs induced by complete bipartite graphs.
**Observation 5.7** ([9]).: _Let \(P\) be a polymatroid on \([n]\) and \(f\) be its rank function. If \(f(I)=f([n])\) for all nonempty subsets \(I\subset[n]\), then \(y^{i}[X_{P}(y)]=\binom{f([n])+i-1}{i}\) for all nonnegative integers \(i\leq n-1\)._
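For instance, the polymatroid \(P=\{(2,0),(1,1),(0,2)\}\) satisfies \(f(I)=2\) for every nonempty \(I\subset[2]\), and indeed \(X_{P}(y)=1+2y\) with \(y^{0}[X_{P}(y)]=\binom{1}{0}=1\) and \(y^{1}[X_{P}(y)]=\binom{2}{1}=2\).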
By Corollary 4.8 and Observation 5.7, we know that the coefficient of the degree \(k\) term of the exterior polynomial of any hypergraph \(\mathcal{H}=(V,E)\) is at most \(\binom{|V|+k-2}{k}\). We now characterize hypergraphs attaining this maximal value. We first show the key lemma in this subsection.
**Lemma 5.8**.: _Let \(P\subset\mathbb{Z}_{\geq 0}^{n}\) be a polymatroid and let \(f\) be its rank function. Then for all \(k\geq 0\), \(f([n]\setminus J)=f([n])\) for all \(J\subset[n]\) with \(|J|=k\) if and only if \(y^{i}[X_{P}(y)]=\binom{f([n])+i-1}{i}\) for all \(i\leq k\)._
Proof.: It is obvious for the case \(k=0\) since the constant term of the exterior polynomial for any polymatroid is \(1\). We now assume that \(k\geq 1\). We first prove the necessity by induction on \(n\). If \(n=k+1\), then this follows from Observation 5.7.
Assume that \(n>k+1\). Note that if \(P\subset\mathbb{Z}_{\geq 0}^{n}\), then \(f\) is non-decreasing by (3). This implies that \(f([n]\setminus I)=f([n])\) for all subsets \(I\subset[n]\) with \(|I|\leq k\). Hence, \(T_{t}=\{0,1,2,\ldots,f(\{t\})\}\) for any \(t\in[n]\). For any \(j\in T_{t}\), let \(f_{j}^{t}\) be the rank function of the polymatroid \(\widehat{P}_{j}^{t}\). We have that
1. for any \(j\in T_{t}\setminus\{f(\{t\})\}\) and any subset \(J^{\prime}\subset[n]\setminus\{t\}\) with \(|J^{\prime}|=k-1\), \(f([n]\setminus J^{\prime})-j=f([n])-j\leq f([n])=f([n]\setminus\{t\}\setminus J ^{\prime})\). Then \(f_{j}^{t}([n]\setminus\{t\}\setminus J^{\prime})=f([n])-j=f_{j}^{t}([n] \setminus\{t\})\) by Proposition 3.2;
2. \(f_{f(\{t\})}^{t}([n]\setminus\{t\}\setminus J^{\prime\prime})=f([n]\setminus J ^{\prime\prime})-f(\{t\})=f([n])-f(\{t\})=f_{f(\{t\})}^{t}([n]\setminus\{t\})\) for any subset \(J^{\prime\prime}\subset[n]\setminus\{t\}\) with \(|J^{\prime\prime}|=k\) by Proposition 3.2.
By the induction hypothesis, for any nonnegative integer \(i\leq k\), we have \(y^{i}[X_{\widehat{P}_{f(\{t\})}^{t}}(y)]=\binom{f([n])+i-1-f(\{t\})}{i}\) and \(y^{i-1}[X_{\widehat{P}_{j}^{t}}(y)]=\binom{f([n])-j+i-1-1}{i-1}\) for any \(j\in T_{t}\setminus\{f(\{t\})\}\). Hence, by Corollary 3.6, we know that
\[y^{i}[X_{P}(y)] = y^{i}[X_{\widehat{P}_{f(\{t\})}^{t}}(y)]+\sum_{j\in T_{t} \setminus\{f(\{t\})\}}y^{i-1}[X_{\widehat{P}_{j}^{t}}(y)]\]
\[= \binom{f([n])+i-1-f(\{t\})}{i}+\sum_{j\in T_{t}\setminus\{f(\{t\})\}} \binom{f([n])-j+i-2}{i-1}\] \[= \binom{f([n])+i-1}{i}.\]
For the sufficiency, by induction on \(n\), we prove \(y^{i}[X_{P}(y)]<\binom{f([n])+i-1}{i}\) for some \(i\leq k\) if \(f([n]\setminus T^{\prime})<f([n])\) for some subset \(T^{\prime}\subset[n]\) of size \(k\).
If \(n=k+1\), then it is clear as \(|P|<\sum_{i=0}^{k}\binom{f([n])+i-1}{i}\). (This is because \(\mathbf{a}\notin P\) if \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}\) satisfies \(\sum_{i\in[n]\setminus T^{\prime}}a_{i}=f([n])\).)
Assume that \(n>k+1\). Let \(t^{\prime}\) be an element of \([n]\setminus T^{\prime}\). Then the subset \(T^{\prime}\subset[n]\setminus\{t^{\prime}\}\) satisfies \(f^{t^{\prime}}_{f(\{t^{\prime}\})}([n]\setminus\{t^{\prime}\}\setminus T^{\prime})=f([n]\setminus T^{\prime})-f(\{t^{\prime}\})<f([n])-f(\{t^{\prime}\})=f^{t^{\prime}}_{f(\{t^{\prime}\})}([n]\setminus\{t^{\prime}\})\). By the induction hypothesis, \(y^{i}[X_{\widehat{P}^{t^{\prime}}_{f(\{t^{\prime}\})}}]<\binom{f([n])+i-1-f(\{t^{\prime}\})}{i}\) for some \(i\leq k\). By Theorem 4.2 and Observation 5.7, for all \(j\in T_{t^{\prime}}\setminus\{f(\{t^{\prime}\})\}\), \(y^{i-1}[X_{\widehat{P}^{t^{\prime}}_{j}}]\leq\binom{f([n])-j+i-2}{i-1}\) as \(\widehat{P}^{t^{\prime}}_{j}\subset P^{\prime}\), where \(P^{\prime}\) is the polymatroid on the ground set \([n]\setminus\{t^{\prime}\}\) so that the rank of \(I\) equals \(f([n])-j\) for all nonempty subsets \(I\subset[n]\setminus\{t^{\prime}\}\).
\[y^{i}[X_{P}(y)]=y^{i}[X_{\widehat{P}^{t^{\prime}}_{f(\{t^{\prime}\})}}(y)]+\sum_{j\in T_{t^{\prime}}\setminus\{f(\{t^{\prime}\})\}}y^{i-1}[X_{\widehat{P}^{t^{\prime}}_{j}}(y)]<\binom{f([n])+i-1}{i},\]
a contradiction. This completes the proof.
**Theorem 5.9**.: _Let \(\mathcal{H}=(V,E)\) be a hypergraph and \(\mathrm{Bip}\mathcal{H}\) be its associated bipartite graph. Then for all \(k\geq 0\), we have \(y^{i}[X_{\mathcal{H}}]=\binom{|V|+i-2}{i}\) for all \(i\leq k\) if and only if \(\mathrm{Bip}\mathcal{H}-E^{\prime}\) is connected for any \(E^{\prime}\subset E\) with \(|E^{\prime}|=k\)._
Proof.: It follows from Lemma 5.8 since if \(\mathrm{Bip}\mathcal{H}-E^{\prime}\) is connected, then \(f(E\setminus E^{\prime})=f(E)=|V|-1\), where \(f\) is the rank function of \(\mathcal{H}\).
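For example, if \(\mathcal{H}\) consists of two parallel hyperedges \(e_{1}=e_{2}=\{v_{1},v_{2}\}\), then \(\mathrm{Bip}\mathcal{H}\) is a \(4\)-cycle, \(P_{\mathcal{H}}=\{(1,0),(0,1)\}\) and \(X_{\mathcal{H}}(y)=1+y\). Removing any single hyperedge from \(\mathrm{Bip}\mathcal{H}\) leaves a connected graph, and accordingly \(y^{i}[X_{\mathcal{H}}]=\binom{|V|+i-2}{i}=1\) for \(i\leq 1\); removing both hyperedges disconnects \(v_{1}\) from \(v_{2}\), and accordingly \(y^{2}[X_{\mathcal{H}}]=0<\binom{2}{2}\).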
**Corollary 5.10**.: _For a graph \(G=(V,E)\) and an integer \(k\geq 0\), \(y^{i}[T_{G}(1,y)]=\binom{|E|-i-1}{|E|-|V|+1-i}\) for all \(|E|-|V|+2-k\leq i\leq|E|-|V|+1\) if and only if \(G\) is \(k\)-edge connected._
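For instance, for the triangle \(C_{3}\) we have \(T_{C_{3}}(1,y)=2+y\), and with \(|E|=|V|=3\) and \(k=2\) this reads \(y^{0}[T_{C_{3}}(1,y)]=\binom{2}{1}=2\) and \(y^{1}[T_{C_{3}}(1,y)]=\binom{1}{0}=1\), which indeed holds since \(C_{3}\) is \(2\)-edge connected.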
_Remark 5.11_.: Let \(\mathcal{H}=(V,E)\) be a hypergraph and \(\mathrm{Bip}\mathcal{H}\) be its associated bipartite graph. For all \(k\geq 0\), if there is a subset \(E^{\prime}\subset E\) with \(|E^{\prime}|=k\) so that \(\mathrm{Bip}\mathcal{H}-E^{\prime}\) is connected and there are at least two edges incident with \(e\) in \(\mathrm{Bip}\mathcal{H}\) for all \(e\in E^{\prime}\), then \(y^{i}[X_{\mathcal{H}}]>0\) for all \(i\leq k\).
_Proof._ Given an order of \(E\) so that the elements of \(E^{\prime}\) are the last \(|E^{\prime}|\) elements of \(E\), there exists a vector \(\mathbf{a}=(a_{1},\ldots,a_{|E|-|E^{\prime}|},0,\ldots,0)\) that is a basis of its hypergraphical polymatroid \(P_{\mathcal{H}}\), since \(\mathrm{Bip}\mathcal{H}-E^{\prime}\) is connected. Let \(f\) be the rank function of \(P_{\mathcal{H}}\). Then for all \(e\in E^{\prime}\), \(f(E^{\prime\prime})>0\) for all \(E^{\prime\prime}\) with \(e\in E^{\prime\prime}\). This implies that \(\sum_{e\in E^{\prime\prime\prime}}a_{e}=0<f(E^{\prime\prime\prime})\) for all subsets \(E^{\prime\prime\prime}\subset E\) with \(e\in E^{\prime\prime\prime}\) and \(e=\min(E^{\prime\prime\prime})\). So, \(e\notin\mathrm{Ext}_{P_{\mathcal{H}}}(\mathbf{a})\). We have that \(\epsilon(\mathbf{a})\leq|E|-k\). By the interpolatory property of the exterior polynomial (see [6, Theorem 14]), \(y^{i}[X_{\mathcal{H}}]>0\) for all \(i\leq k\).
## Acknowledgements
This paper was written while the first author visited the Tokyo Institute of Technology in 2022-2023. It is a pleasure to acknowledge the hospitality, as well as the financial support from China Scholarship Council (No. 202206310079), that made the visit possible.
XJ was supported by National Natural Science Foundation of China (No. 12171402).
TK was supported by consecutive Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research C (Nos. 17K05244 and 23K03108).
|
2309.09661 | Effectively flat potential in the Friedberg-Lee-Sirlin model | The Friedberg-Lee-Sirlin (FLS) model is a well-known renormalizable theory of
scalar fields that provides for the existence of non-topological solitons.
Since this model was proposed, numerous works have been dedicated to studying
its classical configurations and its general suitability for various physical
problems in cosmology, quantum chromodynamics, etc. In this paper, we study how
Q-balls in effective field theory (EFT) reproduce non-topological solitons in
full FLS theory. We obtain an analytical description of the simplified model
and compare results with numerical calculations and perturbation theory. We
also study the condensation of charged bosons on the domain wall. A full
numerical solution allows us to check the EFT methods for this problem. The
latter analysis is based on the application of EFT methods to significantly
inhomogeneous configurations. We give an interpretation of the results in terms
of the shifted boson mass and the vacuum rearrangement. | Eduard Kim, Emin Nugaev | 2023-09-18T10:55:50Z | http://arxiv.org/abs/2309.09661v1 | # Effectively flat potential in the Friedberg-Lee-Sirlin model
###### Abstract
The Friedberg-Lee-Sirlin (FLS) model is a well-known renormalizable theory of scalar fields that provides for the existence of non-topological solitons. Since this model was proposed, numerous works have been dedicated to studying its classical configurations and its general suitability for various physical problems in cosmology, quantum chromodynamics, etc. In this paper, we study how Q-balls in effective field theory (EFT) reproduce non-topological solitons in full FLS theory. We obtain an analytical description of the simplified model and compare results with numerical calculations and perturbation theory. We also study the condensation of charged bosons on the domain wall. A full numerical solution allows us to check the EFT methods for this problem. The latter analysis is based on the application of EFT methods to significantly inhomogeneous configurations. We give an interpretation of the results in terms of the shifted boson mass and the vacuum rearrangement.
## I Introduction
Effective field theory is a theoretical instrument widely used in modern physics; see [1]. The success of the EFT method depends on the hierarchy of scales in the original theory, i.e., the parameters of the theory provide a hierarchy of lengths or energy scales. Specifically, decoupling allows one to obtain practical results. Using EFT, one can calculate S-matrix elements in a simplified model or find semi-classical solutions, for example, solitons or instantons. The benefit is that, in principle, calculations might be systematically improved without knowledge of the exact theory, even in a non-perturbative regime. The above-mentioned features made this instrument extremely efficient and convenient when applied to phenomenological theories related to quantum chromodynamics [2; 3], phase transitions in cosmology [4; 5], etc.
In this paper, we use the EFT technique for stationary problems in classical field theory. In particular, we apply it to the problem of finding non-topological solitons [6; 7; 8]. The Friedberg-Lee-Sirlin (FLS) model [9] was chosen on the basis of the following advantages: it is a renormalizable boson theory suitable for semi-classical description. Since this is a theory of two scalar fields, we are interested in integrating out the real field and obtaining a simplified single-field theory with Q-balls. When being applied (with possible modifications) to various phenomenological models in cosmology and particle physics [6; 10; 11], the FLS model showed itself as a useful analytical evaluation tool. Any comments on the quality of the EFT derived from the FLS model will be based on an analysis of the field configurations for the EFT and the original theory.
Let us now discuss the above-mentioned compact classical field object called the Q-ball in more detail. Localized solutions of the nonlinear equations of motion of classical field theory are solitary waves, more often called solitons in the physical literature [7; 12; 13]. These objects are essentially non-linear and can be studied only in the non-perturbative regime. The existence of solitons can be ensured by two mechanisms: topological and non-topological. The topological mechanism relies on the presence of a non-trivial vacuum structure in the theory under consideration and on the existence of solutions with a conserved non-zero topological charge [12]. Additionally, the presence in the theory of an unbroken, continuous internal symmetry provides a conserved charge \(Q\). A set of special conditions on the potential of a complex scalar field leads to the existence of solitons called Q-balls. Q-balls have been extensively investigated in numerous previous works [14; 15].
Under the assumption of a scale hierarchy among the parameters of the theory, one can use the methods of EFT. In particular, it is possible to integrate out the heavy real field in the FLS model to obtain a single-field theory with Q-balls. Surprisingly, the resulting effective potential1 is a smooth piece-wise function, partially consisting of a massive \((\phi^{*}\phi)^{2}\) model near the field origin and a flat potential elsewhere. The integral characteristics of Q-balls in the theory with the effective potential reproduce those of the FLS model. The same result might be reproduced in the gradient approximation of the theory.
Footnote 1: In contrast to the Coleman-Weinberg potential [16], we are constructing an EFT only at a classical level by using equations of motion to integrate out real field of the FLS model.
It is known from previous works that the FLS model differs significantly in \((3+1)\) and \((1+1)\) dimensions [17]. In the original paper [9], the model was studied in three spatial dimensions, and the non-topological localized solutions were found to be divided into two different branches. In this case, no static topological configurations of the real field can exist due to Derrick's theorem [18]. In \((1+1)\) dimensions, the situation is different and arguably more diverse. Non-topological solitons are present in only one branch of classically stable field configurations. For this model, Derrick's theorem does not restrict the existence of topological solitons (or domain walls) within the theory, and they were also studied both numerically and by analytical approximations [17; 19; 20]. To take into account the interaction of the domain wall and the complex scalar field, it is necessary to reconsider the method of constructing the effective potential. Indeed, in the case of domain walls, the presence of a non-trivial topological charge significantly distinguishes this vacuum from a usual homogeneous vacuum. In order to construct an appropriate effective theory, we applied both EFT methods and perturbation theory. The obtained results allow both analytical and numerical study. We found that bosons form a bound state within the potential of a kink and are able to form a condensate.
We construct an effective potential for non-topological solitons of the FLS model in Sec.II. The Q-balls of the resulting theory in \((1+1)\) and \((3+1)\) dimensions are studied in Sec.III and Sec.IV, respectively. Sec.V is dedicated to the analysis of Q-balls within the domain wall. We provide a discussion of our results in Sec.VI.
## II Effective potential
Let us write the Lagrangian1 of the Friedberg-Lee-Sirlin model [9] with two scalar fields: complex field \(\phi\) and real field \(\chi\)
Footnote 1: Throughout the paper, we will use the following metrics: \((+,-,-,-)\) and \((+,-)\) for the \((3+1)\) and \((1+1)\)-dimensional theories, respectively
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+\frac{1}{2}\partial_{\mu }\chi\partial^{\mu}\chi-V(|\phi|^{2},\chi) \tag{1}\]
in which potential is in the form of
\[V(|\phi|^{2},\chi)=h^{2}|\phi|^{2}\chi^{2}+\frac{m^{2}}{2}(\chi^{2}-v^{2})^{2} \tag{2}\]
Corresponding equations of motion are
\[\begin{cases}&\partial_{\mu}\partial^{\mu}\phi+\frac{1}{2}\frac{\partial V(| \phi|^{2},\chi)}{\partial|\phi|}=0,\\ &\partial_{\mu}\partial^{\mu}\chi+\frac{\partial V(|\phi|^{2},\chi)}{\partial \chi}=0\end{cases} \tag{3}\]
Theory (1) possesses an unbroken \(U(1)\) symmetry with a corresponding conserved Noether charge, as well as the discrete symmetry \(\chi\rightarrow-\chi\). The potential with spontaneous symmetry breaking may provide a non-zero topological charge in \((1+1)\) dimensions. Following [21], the ansatz for solutions of Eq.(3) might be chosen as
\[\begin{cases}&\phi(t,\vec{x})=e^{-i\omega t}f(\vec{x}),\\ &\chi(t,\vec{x})=\chi(\vec{x})\end{cases} \tag{4}\]
Equations (3) can now be rewritten as
\[\begin{split}\nabla^{2}f&=h^{2}\chi^{2}f-\omega^{2}f,\\ \nabla^{2}\chi&=2h^{2}f^{2}\chi+2m^{2}(\chi^{2}-v^{2})\chi \end{split} \tag{5}\]
From the formulation Eq.(5), we can see that the equations of motion of the theory involve several dimensionful quantities, such as \(\sqrt{h^{2}v^{2}-\omega^{2}}\), which controls the asymptotic behavior of the field \(\phi\), as well as \(h\phi\) and \(mv\).
In order to derive the effective potential, we propose two methods that, in the end, lead to the same result. The feature underlying their similarity is the assumed relation between the field masses, \(m_{\chi}=mv\gg hv=m_{\phi}\), which holds in any number of dimensions. This means that it is possible to integrate out the real field \(\chi\) by using the bottom equation of (3). Firstly, one can set up a hierarchy between the quantities introduced above and the gradient terms of Eq.(5); in this case, imposing an overwhelming value of \(mv\), corresponding to a steep variation of the field \(\chi\), leads to
\[\frac{\nabla^{2}\chi}{2m^{2}v^{2}\chi}=\left(\frac{h^{2}|\phi|^{2}}{m^{2}v^{2}}+ \left(\frac{\chi^{2}}{v^{2}}-1\right)\right)\to 0 \tag{6}\]
which, along with the second solution \(\chi=0\), performs the integration of the field \(\chi\). In other words, we want to study two different regions: one where the field \(\chi\) is a constant solution, and one where \(\chi\) changes steeply towards its vacuum values. The same results can be derived from the gradient approximation in Eq.(5) when
\[\nabla^{2}\chi=\frac{\partial V(|\phi|^{2},\chi)}{\partial\chi}=0 \tag{7}\]
Equations (6-7) lead to two possible cases. Firstly, if the field \(\chi\) equals zero, then the value of the potential (2) becomes \(V_{1}=\frac{m^{2}v^{4}}{2}\) and from Eq.(5) we get that \(\nabla^{2}\chi=0\). Secondly, if the field \(\chi\) does not equal zero, then
\[\chi^{2}=v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2} \tag{8}\]
with corresponding potential \(V_{2}=h^{2}v^{2}|\phi|^{2}-\frac{h^{4}}{2m^{2}}|\phi|^{4}\), where we can denote \(m_{\phi}=hv\). The same result can be obtained from the perspective of the quantum mechanical analog of Eq.(5) \(\frac{\nabla^{2}\chi}{2m^{2}v^{2}\chi}=\frac{\chi^{2}}{v^{2}}-1+\frac{h^{2}} {m^{2}v^{2}}|\phi|^{2}\) as long as we keep the \(\chi\) field "mass" term \(m^{2}v^{2}\chi\) to be greater than its momentum (\(\nabla^{2}\chi\leftrightarrow\hat{p}^{2}\chi\)). In this case, the lower equation of Eqs.(5) can be treated as algebraic with two possible outcomes.
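For clarity, substituting the non-trivial solution (8) back into the potential (2) reproduces this expression directly:

\[V_{2}=h^{2}|\phi|^{2}\left(v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2}\right)+\frac{m^{2}}{2}\left(\frac{h^{2}}{m^{2}}|\phi|^{2}\right)^{2}=h^{2}v^{2}|\phi|^{2}-\frac{h^{4}}{2m^{2}}|\phi|^{4}.\]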
The above results in the construction of a piece-wise effective potential. The two parts of the effective potential are linked (\(V_{1}=V_{2}\)) at the field value \(|\phi_{s}|\), reached at some radius \(R\). As discussed in [7], in order for a Q-ball to exist, the effective potential must be \(V_{eff}=V_{1}\), if \(|\phi|>|\phi_{s}|\), and \(V_{eff}=V_{2}\), if \(|\phi|<|\phi_{s}|\). Consideration of the \(|\phi|^{2}\) term contribution in Eq.(8) might appear crucial to having a correct representation of soliton configurations from the model (1). For the comparison between the full effective potential and the reduced one we will calculate the integral characteristics of Q-balls in both cases.
Thus, one can derive the Lagrangian form of the theory with effective potential
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi-\left(m_{\phi}^{2}|\phi| ^{2}-\frac{h^{4}}{2m^{2}}|\phi|^{4}\right)\theta\left(\frac{mv}{h}-|\phi| \right)-\frac{m^{2}v^{4}}{2}\theta\left(|\phi|-\frac{mv}{h}\right)=\partial_{ \mu}\phi^{*}\partial^{\mu}\phi-V(|\phi|^{2}) \tag{9}\]
where \(\theta\) is the Heaviside step function. The effective potential has the form
Figure 1: The effective potential Eq.(10) as a function of \(|\phi|\), together with the reduced parabolic piece-wise potential.
\[V(|\phi|^{2})=\begin{cases}m_{\phi}^{2}|\phi|^{2}-\frac{h^{4}}{2m^{2}}|\phi|^{4},& \text{if }|\phi|<|\phi_{s}|=\frac{mv}{h}\text{(region A)}\\ \frac{m^{2}v^{4}}{2},&\text{if }|\phi|>|\phi_{s}|=\frac{mv}{h}\text{ (region B)}\end{cases} \tag{10}\]
and the equations of motion
\[\begin{split}& A:\nabla^{2}f_{A}(\vec{x})=(m_{\phi}^{2}-\omega^{ 2})f_{A}(\vec{x})-\frac{h^{4}}{m^{2}}f_{A}^{3}(\vec{x}),\text{ outside of a Q-ball}\\ & B:\nabla^{2}f_{B}(\vec{x})=-\omega^{2}f_{B}(\vec{x}),\text{ inside the core of a Q-ball}\end{split} \tag{11}\]
We will use the mathematical convention \(C^{n}\), where \(n\) is the order of smoothness of a function. In these terms, \(V(|\phi|)\in C^{1}\), so at the very least we would expect \(f(\vec{x})\in C^{2}\). Further investigation of the model Eq.(9) in \((1+1)\)- and \((3+1)\)-dimensional space-time will be provided in Sec.III and Sec.IV.
## III \((1+1)\)-dimensional model
In \((1+1)\)-dimensional space-time Eqs.(11) are written as
\[\begin{split}& A:f_{A}^{{}^{\prime\prime}}(x)=(m_{\phi}^{2}- \omega^{2})f_{A}(x)-\frac{h^{4}}{m^{2}}f_{A}^{3}(x)\\ & B:f_{B}^{{}^{\prime\prime}}(x)=-\omega^{2}f_{B}(x)\end{split} \tag{12}\]
In the reduced single-field theory, the number of first integrals of Eqs.(12) allows us to find an analytical solution. Indeed, since the upper equation in Eqs.(12) is non-linear, the amplitude of the function is strictly fixed by the equation itself. The only remaining possibility is to introduce \(x_{0}\), which is the spatial center of the \(f_{A}\) solution. The bottom equation of Eqs.(12) is a linear equation with a solution containing the multiplier \(B_{\omega}\). With the typical Q-ball boundary conditions (finiteness of the Q-ball's energy requires \(\lim_{x\rightarrow\infty}f(x)=0\) and \(\lim_{x\rightarrow\infty}f^{{}^{\prime}}(x)=0\)) we find the function \(f_{A}\) to be of the form
\[f_{A}(x)=\sqrt{\frac{2m^{2}(m_{\phi}^{2}-\omega^{2})}{h^{4}}}\frac{1}{\cosh \left(\sqrt{m_{\phi}^{2}-\omega^{2}}(|x-x_{0}|)\right)} \tag{13}\]
where \(x_{0}\) is the integration constant which corresponds to the center of configuration.
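One way to see this is through the first integral of the upper equation in (12): multiplying it by \(f_{A}^{\prime}\) and integrating, with the integration constant set to zero by the boundary conditions, gives

\[f_{A}^{\prime 2}=(m_{\phi}^{2}-\omega^{2})f_{A}^{2}-\frac{h^{4}}{2m^{2}}f_{A}^{4},\]

which is solved by the profile (13).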
In the region \(B\) the solution profile should be even function and it takes form
\[f_{B}(x)=B_{\omega}\cos\left(\omega x\right) \tag{14}\]
Values of \(x_{0}\), \(R\) and \(B_{\omega}\) are determined from \(f_{A}(R)=f_{B}(R)=\frac{mv}{h}\) and \(f_{A}^{{}^{\prime}}(R)=f_{B}^{{}^{\prime}}(R)\) conditions, so that
\[\begin{split}& B_{\omega}=\frac{mv}{h\cos\left(\omega R \right)}\\ & x_{0}=R-\frac{\arcsin\left(\sqrt{2\left(1-\frac{\omega^{2}}{m_{ \phi}^{2}}\right)}\right)}{\sqrt{m_{\phi}^{2}-\omega^{2}}}\\ & R=\frac{\arctan\left(\sqrt{\frac{m_{\phi}^{2}}{2\omega^{2}}-1} \right)}{\omega}\end{split} \tag{15}\]
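For completeness, the last relation in (15) can be traced as follows: matching the logarithmic derivatives of the two profiles at \(x=R\) gives \(\sqrt{m_{\phi}^{2}-\omega^{2}}\tanh\left(\sqrt{m_{\phi}^{2}-\omega^{2}}(R-x_{0})\right)=\omega\tan\left(\omega R\right)\), while the condition \(f_{A}(R)=\frac{mv}{h}\) fixes \(\tanh\left(\sqrt{m_{\phi}^{2}-\omega^{2}}(R-x_{0})\right)=\sqrt{\frac{m_{\phi}^{2}-2\omega^{2}}{2(m_{\phi}^{2}-\omega^{2})}}\), so that

\[\omega\tan\left(\omega R\right)=\sqrt{\frac{m_{\phi}^{2}}{2}-\omega^{2}},\]

which is equivalent to the expression for \(R\) above.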
The determination of the integration constants reveals two remarkable things. Firstly, by determining the three parameters \(B_{\omega},x_{0}\) and \(R\) we can directly check that \(f_{A}^{{}^{\prime}}(R)=f_{B}^{{}^{\prime}}(R)\), so \(f(x)\in C^{2}\). Secondly, from the solution (15) for the matching radius \(R\) one finds that \(\omega\) is restricted to the interval \(\omega\in\left(0,\frac{m_{\phi}}{\sqrt{2}}\right)\). When \(\omega>\frac{m_{\phi}}{\sqrt{2}}\) the potential (10) turns into the plain massive \((\phi^{*}\phi)^{2}\) potential.
Now we are ready to present values of \(U(1)\) charge and energy for Q-ball in the theory (12)
\[Q=2|\phi_{s}|^{2}\left(\frac{\omega R}{\cos^{2}\left(\omega R\right)}+\tan \left(\omega R\right)\right)+\frac{8m^{2}\omega}{h^{4}}\left(\sqrt{m_{\phi}^{2 }-\omega^{2}}-\sqrt{\frac{m_{\phi}^{2}}{2}-\omega^{2}}\right) \tag{16}\]
\[E=\omega Q+2|\phi_{s}|^{2}\left(\frac{\omega^{2}}{\cos^{2}\left(\omega R\right)}\left(R-\frac{\sin\left(2\omega R\right)}{2\omega}\right)\right)+\frac{4m^{2}}{3h^{4}}(m_{\phi}^{2}-\omega^{2})^{\frac{3}{2}}\left(1-\tanh^{3}\left(\sqrt{m_{\phi}^{2}-\omega^{2}}(R-x_{0})\right)\right) \tag{17}\]
when \(\omega\in\left(0,\frac{m_{\phi}}{\sqrt{2}}\right)\). For the remaining interval of the parameter \(\omega\) the results are provided by the plain massive \((\phi^{*}\phi)^{2}\) theory
\[Q=\frac{8m^{2}\omega}{h^{4}}\sqrt{m_{\phi}^{2}-\omega^{2}} \tag{18}\]
\[E=\frac{8m^{2}}{h^{4}}\sqrt{m_{\phi}^{2}-\omega^{2}}\left(m_{\phi}^{2}-\frac{ 2}{3}(m_{\phi}^{2}-\omega^{2})\right) \tag{19}\]
One can check that the following differential relation
\[\frac{dE}{d\omega}=\omega\frac{dQ}{d\omega}\text{ or }\frac{dE}{dQ}=\omega \tag{20}\]
is satisfied. Eq.(20) is fulfilled in a majority of theories with \(U(1)\) symmetry.
### Parabolic piece-wise potential
Before going further, let us investigate how the presence of the non-linear \((\phi^{*}\phi)^{2}\) term in the effective potential affects its ability to reproduce the integral characteristics of the Friedberg-Lee-Sirlin model. The Lagrangian in this case is very similar to the one studied in [22]
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi-h^{2}v^{2}|\phi|^{2} \theta\left(\frac{mv}{h\sqrt{2}}-|\phi|\right)-\frac{m^{2}v^{4}}{2}\theta \left(|\phi|-\frac{mv}{h\sqrt{2}}\right)=\partial_{\mu}\phi^{*}\partial^{\mu} \phi-V(|\phi|^{2}) \tag{21}\]
where \(|\phi_{s}|=\frac{mv}{h\sqrt{2}}\). At first glance, this linear model seems to be more convenient than Eq.(9), and it provides an analytical solution in any number of dimensions. The potential is a piece-wise function of \(|\phi|\) with two main properties.

When \(|\phi|<|\phi_{s}|\) (region A) the model (21) is a free massive scalar field theory, and if \(|\phi|>|\phi_{s}|\) (region B) \(V_{eff}\) is a flat potential, which ensures the existence of a Q-ball. At the point \(|\phi|=|\phi_{s}|\), \(V_{eff}\) remains continuous, and \(f\) should be at least a \(C^{1}\) function.

The equations of motion in \((1+1)\)-dimensional space-time are the same as Eqs.(12), with the only difference being the absence of the non-linear terms. In the different regions, the solution takes the form
\[\begin{split} A:f_{A}(x)&=\frac{mv}{\sqrt{2}h}e^{ \sqrt{m_{\phi}^{2}-\omega^{2}}(R-x)}\\ B:f_{B}(x)&=\frac{mv}{\sqrt{2}h}\frac{\cos\left( \omega x\right)}{\cos\left(\omega R\right)}\end{split} \tag{22}\]
where \(f_{A}(R)=f_{B}(R)=\frac{mv}{\sqrt{2}h}\). The matching radius \(R=\frac{\arctan\left(\frac{\sqrt{m_{\phi}^{2}-\omega^{2}}}{\omega}\right)}{\omega}\) is determined from the condition \(f_{A}^{{}^{\prime}}(R)=f_{B}^{{}^{\prime}}(R)\). Now the \(U(1)\) charge and the energy of the Q-ball can be calculated as functions of the free parameter \(\omega\)
\[Q=\frac{m^{2}v^{2}}{h^{2}}\left(\frac{\omega}{\sqrt{m_{\phi}^{2}-\omega^{2}}}+ \frac{\omega R}{\cos^{2}\left(\omega R\right)}+\tan\left(\omega R\right)\right) \tag{23}\]
\[E=\frac{m^{2}v^{2}}{h^{2}}\left(\frac{m_{\phi}^{2}}{\sqrt{m_{\phi}^{2}-\omega^{ 2}}}+\frac{\omega^{2}R}{\cos^{2}\left(\omega R\right)}+m_{\phi}^{2}R\right) \tag{24}\]
Equations (23,24) can additionally be checked against the relation (20).
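As a quick consistency sketch (not part of the original analysis), the check of relation (20) against Eqs.(23,24) can be done numerically. The snippet below assumes the example values \(m=v=h=1\) and a sample frequency \(\omega=0.6\), and compares a finite-difference derivative ratio with \(\omega\):

```python
import numpy as np

# Finite-difference check of dE/domega = omega * dQ/domega, Eq. (20), for the
# (1+1)-d parabolic piece-wise potential, Eqs. (23)-(24); m = v = h = 1 assumed.
m = v = h = 1.0
m_phi = h * v

def R(w):
    return np.arctan(np.sqrt(m_phi**2 - w**2) / w) / w

def Q(w):
    r = R(w)
    return (m**2 * v**2 / h**2) * (w / np.sqrt(m_phi**2 - w**2)
                                   + w * r / np.cos(w * r)**2 + np.tan(w * r))

def E(w):
    r = R(w)
    return (m**2 * v**2 / h**2) * (m_phi**2 / np.sqrt(m_phi**2 - w**2)
                                   + w**2 * r / np.cos(w * r)**2 + m_phi**2 * r)

w, dw = 0.6, 1e-5                        # sample frequency inside (0, m_phi)
dE = (E(w + dw) - E(w - dw)) / (2 * dw)  # central finite differences
dQ = (Q(w + dw) - Q(w - dw)) / (2 * dw)
print(dE / dQ, "should be close to", w)  # Eq. (20)
```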
### Comparison with numerical results
From Fig.2 one can see that the effective potential and its parabolic approximation reproduce the correct asymptotics of the integral characteristics at large charges of stable Q-balls. In order to better understand the mathematical aspects underlying this result, one can examine how the field profiles from the effective potential differ from the numerical solutions of the full FLS model. Knowing the analytical solution of the model with the effective potential, one can see how the field \(\chi\) was integrated out
\[\begin{split}& A:\chi_{eff}=\sqrt{v^{2}-\left(\sqrt{\frac{2m^{2}(m_{ \phi}^{2}-\omega^{2})}{h^{4}}}\frac{1}{\cosh\left(\sqrt{m_{\phi}^{2}-\omega^{2 }}(x-x_{0})\right)}\right)^{2}}\\ & B:\chi_{eff}=0\end{split} \tag{25}\]
Fig. 2: The energies of the FLS non-topological solitons, effective and parabolic piece-wise potentials Q-balls vs. their \(U(1)\) charge plotted for the \((1+1)\)-dimensional theory.
A feature of Eq.(25), visible in Fig.4, is the presence of a bubble of the effectively massless field \(\phi\). A close look at the equations above and Figs.(3,4) reveals the limitations of describing the full FLS model Eq.(1) through the effective potential: Eq.(25) does not allow the emergence of kink-like solutions with a non-trivial vacuum structure. Nevertheless, the effective theory reproduces the profile of the complex scalar field \(\phi\), as shown in Fig.3.
Fig. 4: The comparison of the field \(\chi\) profile from the FLS model and one from the integration procedure Eq.(25) is shown for different values of parameter \(\omega\).
Fig. 3: The profiles of the field \(\phi\) from the FLS model and theory with the effective potential are plotted for different values of parameter \(\omega\).
The Friedberg-Lee-Sirlin model (1) is known not only for having a soliton stabilized by the \(U(1)\) charge when both fields \(|\phi|\) and \(\chi\) are even, but also for having configurations with non-trivial topology [17]. This model is of interest due to the possibility of applying EFT methods to inhomogeneous configurations, see also Sec.V.
## IV \((3+1)\)-dimensional model
In \((3+1)\) dimensions the equations of motion for the model with the effective potential Eq.(10) are of the form
\[\begin{cases}&A:\partial_{r}\left(r^{2}\partial_{r}f_{A}(r)\right)=r^{2}\left[ (m_{\phi}^{2}-\omega^{2})f_{A}(r)-\frac{h^{4}}{m^{2}}f_{A}^{3}(r)\right]\\ &B:\partial_{r}\left(r^{2}\partial_{r}f_{B}(r)\right)=-r^{2}\omega^{2}f_{B}(r) \end{cases} \tag{26}\]
where \(\partial_{r}\equiv\frac{\partial}{\partial r}\).
As mentioned before, the field \(\phi\) in the region \(A\) corresponds to the \((\phi^{*}\phi)^{2}\) theory, which has been extensively studied. From [23], we know that all particle-like solutions in the \((\phi^{*}\phi)^{2}\) theory are classically unstable. Indeed, there is only one branch of solutions, which is unstable both classically and kinematically. The difference in the effective potential (10) is that for large values of the field \(\phi\) the potential is flat. As shown in Fig.(5), the added flat potential region results in an additional second branch of solutions that is classically stable. The resulting effective theory reproduces the main peculiarities of the \((3+1)\)-dimensional FLS model, see Fig.6.
### Parabolic piece-wise potential
When applied to the \((3+1)\)-dimensional space-time model, calculations similar to those performed in Sec.III.1 result in
\[\frac{Q}{4\pi}=R^{2}\omega\frac{m^{2}v^{2}}{2h^{2}}\left(\frac{m_{\phi}^{2}(1 +R\sqrt{m_{\phi}^{2}-\omega^{2}})}{\omega^{2}\sqrt{m_{\phi}^{2}-\omega^{2}}}\right) \tag{27}\]
\[\frac{E}{4\pi}=\frac{\omega Q}{4\pi}+\frac{R^{3}}{3}m_{\phi}^{2}\frac{m^{2}v^ {2}}{2h^{2}} \tag{28}\]
Fig. 5: The energies of non-topological solitons for the \((\phi^{*}\phi)^{2}\) theory and model with the effective potential vs. their \(U(1)\) charge are plotted for the \((3+1)\)-dimensional theory.
where \(R=\frac{1}{\omega}\left(\pi-\arctan\left(\frac{\omega}{\sqrt{m_{\phi}^{2}-\omega^{2}}}\right)\right)\).
Now the relation (20) can be checked for Eqs.(27,28).
For comparison, we plotted the soliton energy as a function of its \(U(1)\) charge for the parabolic piece-wise and effective potentials in Fig.(6). The difference can be seen in the values of the critical parameters of the theories, and becomes explicit if the dependence of both integral quantities on \(\omega\) is taken into account.
### Comparison with numerical results
In this section, we are interested in studying solitons of the FLS model in \((3+1)\) dimensions in comparison to the ones from the effective potential (10). As can be seen from Eq.(29) and Fig.(6), the main differences from the \((1+1)\)-dimensional FLS model are the presence of the dissipative term and the existence of two branches of solutions [7; 9].
\[\left\{\begin{array}{c}\partial_{r}\left(r^{2}\partial_{r}\phi\right)=r^{2}\left[h^{2}\chi^{2}\phi-\omega^{2}\phi\right]\\ \partial_{r}\left(r^{2}\partial_{r}\chi\right)=2r^{2}\left[h^{2}|\phi|^{2}\chi+m^{2}(\chi^{2}-v^{2})\chi\right]\end{array}\right. \tag{29}\]
Fig.(6) shows that the effective potential is better at reproducing the results of the original model Eq.(1). At large \(U(1)\) charge the effective theory gives the correct asymptotic behavior for both branches of solutions. For the bottom branch the analytically predicted \(E\sim Q^{\frac{3}{2}}\) asymptotic behavior is recovered in both the full and the reduced EFT. In accordance with [7; 9], in three spatial dimensions the gradient term of the Lagrangian has a large contribution to the energy functional, and an effective action is required for more precise calculations.
## V Q-balls within the domain wall
The method of the effective potential is inappropriate for the description of topological configurations of the Friedberg-Lee-Sirlin model, for which the gradient term is crucial. For a large \(U(1)\) charge of the field \(\phi\) both non-topological and topological solutions behave remarkably similarly, and the effective potential is a reliable instrument for reproducing the integral characteristics of the theory. For small charges the description of the model by the effective potential (10) is not appropriate. However, in this case one can use perturbation theory in the background of the domain wall.
Fig. 6: The \(E(Q)\) plots for the FLS model non-topological solitons, effective, and parabolic piece-wise potentials Q-balls are shown for the \((3+1)\)-dimensional theory.
The main peculiarity of the topological configurations of the FLS model is the \(\chi\rightarrow-\chi\) symmetry. In order to use perturbation theory for topological configurations, we treat the field \(\phi\) as a constant background field and compare the FLS potential (2) to the kink potential
\[V_{k}(\chi)=\frac{m^{2}}{2}(\chi^{2}-(v^{{}^{\prime}})^{2})^{2} \leftrightarrow V(|\phi|^{2},\chi)=h^{2}|\phi|^{2}\chi^{2}+\frac{m^{2}}{2}( \chi^{2}-v^{2})^{2} \tag{30}\]
The matching can be explicitly seen from the equation of motion of the field \(\chi\)
\[\chi\left(\frac{h^{2}}{m^{2}}|\phi|^{2}+(\chi^{2}-v^{2})-(\chi^{2 }-(v^{{}^{\prime}})^{2})\right)=0 \tag{31}\]
In terms of Eq.(31) a deviation of original field \(\chi\) from kink configuration is also studied by application of condition (6).
Through these calculations, one can define \(v^{{}^{\prime}}=\sqrt{v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2}}\), and solution of
\[\chi^{{}^{\prime\prime}}(x)=2h^{2}|\phi|^{2}\chi+2m^{2}(\chi^{2}-v^{2})\chi=2m ^{2}\left(\chi^{2}-\left(v^{{}^{\prime}}\right)^{2}\right)\chi \tag{32}\]
will take the form of
\[\chi(|\phi|^{2})=\sqrt{v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2}}\tanh \left(mx\sqrt{v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2}}\right) \tag{33}\]
and can be used to integrate out the heavy field \(\chi\).
Let us revisit Sec.III and reproduce the effective potential using the Lagrangian instead of the equations of motion. It can be seen that the series of transformations \(\partial_{\mu}\chi\partial^{\mu}\chi\rightarrow-\chi\partial_{\mu}\partial^{ \mu}\chi\), \(\partial_{\mu}\partial^{\mu}\chi\rightarrow-\frac{\partial V(|\phi|^{2},\chi)} {\partial\chi}\), applied to the FLS Lagrangian, results in
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+\frac{m^{2}}{2}\left( \chi^{4}(|\phi|^{2})-v^{4}\right) \tag{34}\]
which leads to the effective potential in the presence of an inhomogeneous background field. A close look at Eq.(34) shows that, when applied as in Sec.II, it provides the same form of \(V_{eff}\). Substituting the field \(\chi\) as in Eq.(8) and as \(\chi=0\) leads to the potentials \(V_{2}\) and \(V_{1}\) respectively, with the only difference being that the transition can be done in a continuous way.
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+\frac{m^{2}}{2}(\chi^{4} (|\phi|^{2})-v^{4})=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+\frac{m^{2}}{2} \left((v^{2}-\frac{h^{2}}{m^{2}}|\phi|^{2})^{2}\tanh^{4}\left(mx\sqrt{v^{2}- \frac{h^{2}}{m^{2}}|\phi|^{2}}\right)-v^{4}\right) \tag{35}\]
when \(|\phi|\leq\frac{mv}{h}\), otherwise
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi-\frac{m^{2}v^{4}}{2} \tag{36}\]
Eqs.(35,36) can only be solved numerically and result in a qualitatively good approximation of the FLS model for topological configurations.
Figure 7: The energies (top figure) and \(U(1)\) charges (bottom figure) of topological configurations of the FLS model, effective potential Q-balls, and Q-balls within the domain wall are plotted as functions of parameter \(\omega\). The upper bound in \(\omega\) for topological configurations of the FLS model and Q-balls within domain walls appears due to the existence of a bosonic bound state on kink.
Information about the topological configurations of the Friedberg-Lee-Sirlin model can be extracted from Eq.(35) by using perturbation theory. A formal expansion of Eq.(33) in a Taylor series requires the condition \(\frac{h^{2}|\phi|^{2}}{m^{2}v^{2}}\ll 1\); therefore
\[\chi\approx v\tanh\left(mvx\right)+... \tag{37}\]
The equation of motion of the theory (35) at zeroth order in perturbation theory can be written by substituting Eq.(37) into the upper equation of Eqs.(3), using the ansatz (4), as
\[f^{{}^{\prime\prime}}(x)+\left[(\omega^{2}-m_{\phi}^{2})+\frac{m_{\phi}^{2}}{ \cosh^{2}\left(mvx\right)}\right]f(x)=0 \tag{38}\]
with the lowest frequency solution (see App.A)
\[\omega_{0}^{2}=\frac{m^{2}v^{2}}{2}\left(\sqrt{1+\frac{4h^{2}}{m^{2}}}-1\right),\qquad f(x)=\frac{A}{\cosh^{\frac{\sqrt{m_{\phi}^{2}-\omega_{0}^{2}}}{mv}}\left(mvx\right)} \tag{39}\]
which is a bound state of \(\phi\)-bosons on the domain wall. Even though the calculation was performed in the linear approximation, the result appears as a non-perturbative effect.
Figure 8: The energies of the FLS topological configurations, effective potential Q-balls, and Q-balls within domain wall vs. their \(U(1)\) charge.
Now we can perform integration of solution Eq.(A2) to obtain energy and \(U(1)\) charge
\[Q=2\omega_{0}A^{2}\int_{-\infty}^{\infty}dxf^{2}(x) \tag{40}\]
\[E=\frac{4}{3}mv^{3}+\omega_{0}Q \tag{41}\]
for which the relation \(\frac{dE}{dQ}=\omega_{0}\) is fulfilled. The \(E(Q)\) characteristics for the bound state of bosons on the kink and for the topological configurations of the FLS model, plotted in Fig.(9), raise new interesting questions. Firstly, the most noticeable change is the transformation of the asymptotic vacuum. The topological structure of the domain walls reshapes the vacuum, shifting the energy by the kink's mass. In contrast with non-topological solitons, which are compared to plane waves in terms of quantum mechanical stability, topological configurations are absolutely stable. One can see that at large charges we cannot restrict ourselves to the zeroth order of perturbation theory; the backreaction of the bound state on the kink must be taken into account (see Sec.V.1). Nonetheless, both the topological field configurations and the configuration (39) might be considered as separate states. Since, by definition, seeking soliton solutions means studying the true vacuum of the theory at a fixed symmetry charge (Noether or topological), it is not surprising that we have become interested in the mechanism of localization of bosons bound on the kink into an energetically more preferable soliton configuration. This issue may be revisited in the quantum theory of solitons. For example, it is interesting to reproduce the results of [24] in the \((1+1)\)-dimensional FLS model.
The results of this section are in agreement with the numerical results obtained for topological configurations. Another non-trivial issue is the form of Eq.(35) as a polynomial function of the field \(\phi\), which is crucial for an explicit understanding of the formation of the Q-ball. It can be derived via a series expansion of Eq.(33) in the spatial coordinate (see App.B).
### Q-ball on kink revisited
As can be seen from the analysis above, the topological configuration of the model Eq.(1) with non-zero \(U(1)\) charge is well described as a Q-ball on a scalar kink. Similar models with fermions and a kink were studied in detail in [25, 26, 27]. In the Friedberg-Lee-Sirlin model, in contrast to the referred studies of fermion fields coupled to a scalar field by a Yukawa interaction, we observed the absence of a zero mode and a severe modification of the vacuum of the theory. According to [28], a non-zero-mode bound state of a Q-ball localized on a kink implies a backreaction on the profile of the kink in higher orders of perturbation theory.
Figure 9: The comparison of the \(E(Q)\) characteristics between topological configurations of the FLS model and bosonic bound state on kink with an arbitrary number of particles is provided.
\[\chi\approx v\tanh(mvx)+\frac{\left(\frac{A}{\cosh^{\frac{\sqrt{m_{\phi}^{2}-\omega_{0}^{2}}}{mv}}(mvx)}\right)^{2}\left(-h^{2}mvx+h^{2}mvx\tanh^{2}(mvx)-h^{2}\tanh(mvx)\right)}{2m^{2}v}+... \tag{42}\]
From both Eq.(42) and the numerical analysis it can be seen that the back-reaction of the localization of the Q-ball on the static kink does not result in kink-antikink oscillations as it did in [28]. It is worth mentioning that the bound state Eq.(39) is not affected by Pauli's exclusion principle, and can be filled with numerous boson particles.
## VI Outlook
In this paper, we applied EFT methods to analyze classical field theory solutions. We looked specifically at how integrating out the real field in the FLS model affects the potential of the complex field. The effective potential was constructed by assuming the FLS model's parameter hierarchy. The resulting simplified single-scalar-field potential allows for the existence of Q-balls. Compared to the non-topological solitons of the original theory, Q-balls from the EFT were shown to reproduce the integral characteristics of the theory in both \((1+1)\) and \((3+1)\) dimensions. The presence of topological field configurations in the FLS model opened up a question about the possibility of constructing an EFT for the case of an inhomogeneous background. The new EFT is a theory with broken Lorentz symmetry, while still allowing for Q-balls that resemble the topological configurations of the FLS theory. In addition, by using perturbation theory in this new EFT, we clarified the interpretation of the bound state of bosons on the domain wall. This result was reaffirmed by numerical calculations.
Solitons in classical field theory have been found to be useful in various phenomenological models in cosmology, particle physics, etc. Typically, Q-balls appear in supersymmetric theories [29; 30; 31], contrary to the solitons of the FLS model. The effective theory developed in this work could be useful for applying the previously developed Q-ball formalism to the EFT Q-balls in the FLS model, for phase transitions and dark matter in cosmology [32; 33; 34; 10; 29], particle physics [35], etc. EFT potentials with flat directions are themselves of interest in early Universe inflation theories [36]. Recent observations of nHz gravitational waves (GW) [37; 38; 39; 40] revive the search for sources capable of producing such extreme GW. A discussion of the role of Q-balls in the formation of GW can be found in [41; 42; 43]. An oscillon is another type of localized lump of the classical field [44; 45; 46]. The main difference
Fig. 10: The profiles of kink and field \(\chi\) in the first order of perturbation theory Eq.(42) for different values of amplitude.
between non-topological solitons (or Q-balls) and oscillons is the absence of an unbroken internal symmetry. However, approximate conservation of the charge stabilizes the solution, resulting in oscillons being dissipative yet long-living objects [47; 48]. If both scalar fields are made real in the FLS model, oscillons can form. The results of Sec.II can provide a theory with an effectively flat potential that is suitable for the study of oscillons. This may be of research interest due to the proposed role of oscillons in cosmology (see references in [49]).
The topological structure of the vacuum is another non-trivial aspect of constructing an EFT in the context of studying Q-balls. As previously discussed, the effective theory (9) was shown to reproduce the integral characteristics of the FLS model solitons at large charges. Taking into account the topological configurations of the original theory, this effective potential was unsuitable for studying Q-balls within a domain wall. By constructing an effective theory in the presence of an inhomogeneous real field, this issue was resolved. Our method, combined with perturbation theory, made it possible to analyze the condensation of bosons on the domain wall as well as the rearrangement of the theory's vacuum. In conclusion, we would like to provide a brief discussion of future research on the current issue. Firstly, consistent development requires not only the construction of the effective potential but also a method of calculating an effective action for the FLS theory. The gradient terms of the Lagrangian will be accounted for as a result of this advancement. In order to have a more accurate matching between the EFT Q-balls and the FLS solitons, this step should be carried out. Moreover, constructing an effective action for the original theory not only improves the analysis of the classical solutions of the field theory but also allows quantum or thermal corrections to be considered. Secondly, the FLS model is a theory of two interacting scalar fields, which makes it possible to take quantum corrections into account and construct a Coleman-Weinberg-type effective potential [16] in this case. Models with soliton solutions can be modified by adding an Abelian gauge field coupled to the original field (or fields) [50; 51; 52]. The development of EFT for these models may be the subject of future research.
## VII Acknowledgments
The authors are grateful to Dmitry Levkov, Anuaruly Oraz, Andrey Shkerin, Yakov Shnir, Mikhail Smolyakov, and Sergey Troitsky for useful discussions and helpful comments on the paper. This work was supported by the grant RSF 22-12-00215.
## Appendix A Bosons on kink
In this appendix we will show that, like fermions [28], bosons can also be localized on a kink in the \(\chi^{4}\) theory in \((1+1)\)-dimensional space-time. We start with the differential equation that describes the dynamics of the field \(\phi\) interacting with the kink, within the ansatz (4)
\[f^{{}^{\prime\prime}}(x)+\left[(\omega^{2}-m_{\phi}^{2})+\frac{m_{\phi}^{2}}{ \cosh^{2}\left(mvx\right)}\right]f(x)=0 \tag{10}\]
This is also known as the bound-state problem in the modified Poschl-Teller potential [53]. We obtain solutions of this equation by first denoting
\[\xi=\tanh\left(mvx\right),\epsilon=\frac{\sqrt{m_{\phi}^{2}-\omega^{2}}}{mv},s=\frac{-1+\sqrt{1+\frac{4h^{2}}{m^{2}}}}{2}\]
After that, the solution of Eq.(10) can be expressed through the hypergeometric function as
\[f(x)=A(1-\xi^{2})^{\frac{5}{2}}F\left(\epsilon-s,\epsilon+s+1,\epsilon+1, \frac{1-\xi}{2}\right) \tag{11}\]
which, due to the arguments given below, reduces to Eq.(39).
In order for \(f(x)\) to be finite and \(f(\infty)=0\) we should keep \(\epsilon-s=-n\), where n is the principal quantum number of the corresponding bound state. Only the \(0^{\text{th}}\) bound state is of interest since bound states with higher number \(n\) experience wave function sign changes. Therefore, we obtain
\[\omega_{n}^{2}=\left(n+\frac{1}{2}\right)m^{2}v^{2}\sqrt{1+\frac{4h^{2}}{m^{2}}}-m^ {2}v^{2}(n^{2}+n)-\frac{m^{2}v^{2}}{2} \tag{10}\]
The classical limit \(\omega\leq m_{\phi}\) on the free parameter of the theory results in only one bound state of bosonic \(\phi\) particles on the static kink, due to Eq.(10).
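As a cross-check, setting \(n=0\) in the spectrum above gives

\[\omega_{0}^{2}=\frac{1}{2}m^{2}v^{2}\sqrt{1+\frac{4h^{2}}{m^{2}}}-\frac{m^{2}v^{2}}{2}=\frac{m^{2}v^{2}}{2}\left(\sqrt{1+\frac{4h^{2}}{m^{2}}}-1\right),\]

in agreement with Eq.(39) of the main text.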
## Appendix B Series expansion in topological configurations
In this appendix, we will illustrate how integrating out the field \(\chi\) in the form of Eq.(33) affects the dynamics of the field \(\phi\) in terms of the new polynomial potential \(\tilde{V}(|\phi|)\). These calculations are required for an extensive understanding of the structure of the effective potential in the theory (35,36). A convenient way to construct \(\tilde{V}(|\phi|)\) is to start from the equation of motion for the field \(\phi\)
\[\begin{split}&\partial_{\mu}\partial^{\mu}\phi=\frac{h^{2}|\phi| \left(h^{2}|\phi|^{2}-m^{2}v^{2}\right)\left(2mx\sqrt{v^{2}-\frac{h^{2}|\phi| ^{2}}{m^{2}}}+\sinh\left(2mx\sqrt{v^{2}-\frac{h^{2}|\phi|^{2}}{m^{2}}}\right) \right)\tanh^{3}\left(mx\sqrt{v^{2}-\frac{h^{2}|\phi|^{2}}{m^{2}}}\right)}{2 m^{2}}\times\\ &\times\frac{\operatorname{sech}^{2}\left(mx\sqrt{v^{2}-\frac{h^ {2}|\phi|^{2}}{m^{2}}}\right)}{2m^{2}}\end{split} \tag{11}\]
since the RHS of the equation is simply \(-\frac{d\tilde{V}(|\phi|)}{d\phi}\). A better understanding of the physics underlying Eq.(11) is obtained after a series expansion in the coordinate \(x\) is performed. Here we restrict ourselves to the fourth order of the expansion, and after integration we get
\[\tilde{V}(|\phi|)=2h^{2}m^{4}v^{6}x^{4}|\phi|^{2}-3h^{4}m^{2}v^{4}x^{4}|\phi| ^{4}+2h^{6}v^{2}x^{4}|\phi|^{6}-\frac{h^{8}x^{4}|\phi|^{8}}{2m^{2}} \tag{12}\]
After the calculations above, we can qualitatively see how \(\tilde{V}(|\phi|)\) is structured. The existence of a Q-ball is ensured by the form of the potential, similar to the effective potential (10).
Figure 11: The effective potential \(\tilde{V}\) obtained from the equation of motion (11) in a series expansion in the coordinate variable up to fourth order and amended with a flat potential as in Sec.II. The profile of the resulting potential (for a given value of the coordinate \(x=0.1\)) allows an analytical prediction of the existence of Q-balls in the theory (35,36).
## Appendix C Numerical procedure
In this section, we will briefly introduce our numerical method for solving the non-linear equations. The first unavoidable step is to transform the Lagrangian into a dimensionless form. When applied to the Friedberg-Lee-Sirlin model, the energy \(E\) and the \(U(1)\) charge were transformed to \(\tilde{E}\frac{v}{m}\) and \(\frac{\tilde{Q}}{m^{2}}\) in \((3+1)\) dimensions and to \(\tilde{E}mv\) and \(\tilde{Q}\) in \((1+1)\) dimensions, where the tilde denotes dimensionless parameters.
The remaining equations of motion are in the form of
\[\begin{cases}&\nabla^{2}f=h^{2}\chi^{2}f-\omega^{2}f\\ &\nabla^{2}\chi=2h^{2}f^{2}\chi+2\chi(\chi^{2}-1)\end{cases} \tag{10}\]
With boundary conditions
\[\begin{cases}&f^{{}^{\prime}}(\infty)=0\\ &f(\infty)=0\\ &\chi^{{}^{\prime}}(\infty)=0\\ &\chi(\infty)=1.\end{cases} \tag{11}\]
The calculations were performed at the fixed parameter \(h=1\). As in [9], the integration is by the fourth-order Runge-Kutta method (lattice spacing \(\epsilon=10^{-3}\)) with a shooting method for the initial conditions. Limitations on the shooting parameters can be derived from the analysis of the energy functional of the theory.
Since the non-topological configurations are even, this implies that \(\chi^{{}^{\prime}}(0)=0\) and \(f^{{}^{\prime}}(0)=0\). Overall, the restrictions on the initial values of the fields are of the form
\[\begin{cases}&\chi(0)\leq\omega\\ &f(0)\geq\frac{1-\chi^{2}(0)}{\sqrt{2(\omega^{2}-\chi^{2}(0))}}\end{cases} \tag{12}\]
and for topological configurations (non-zero topological charge implies \(\chi(0)=0\) and \(\mathcal{Z}_{2}\) symmetry causes \(f^{{}^{\prime}}(0)=0\))
\[\begin{cases}&\chi^{{}^{\prime}}(0)>0\\ &f(0)\geq\sqrt{\frac{1-(\chi^{{}^{\prime}}(0))^{2}}{2\omega^{2}}}\end{cases} \tag{13}\]
The same method was applied to the theories in which the field \(\chi\) was integrated out. In these cases, shooting is much easier due to there being only one parameter to shoot, \(f(0)\).
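For illustration, a minimal sketch (not the authors' code) of the described procedure for the \((1+1)\)-dimensional non-topological case might look as follows. The values of \(h\), \(\omega\), the coarser step size and the crude grid scan are illustrative choices only; the paper itself uses \(\epsilon=10^{-3}\) and a refined shooting of the initial values.

```python
import numpy as np

# Fourth-order Runge-Kutta integration of the dimensionless (1+1)-d FLS equations
#   f'' = h^2 chi^2 f - w^2 f,   chi'' = 2 h^2 f^2 chi + 2 chi (chi^2 - 1),
# with even initial data, plus a crude scan over f(0), chi(0) so that (f, chi) -> (0, 1).

h, w = 1.0, 0.6            # coupling and frequency (example values)
eps, x_max = 1e-2, 10.0    # step and range (coarser than the paper's 1e-3, for speed)

def rhs(y):
    f, df, c, dc = y
    return np.array([df, h**2 * c**2 * f - w**2 * f,
                     dc, 2 * h**2 * f**2 * c + 2 * c * (c**2 - 1.0)])

def endpoint(f0, c0):
    """RK4 from x = 0 with f'(0) = chi'(0) = 0; returns the final state."""
    y = np.array([f0, 0.0, c0, 0.0])
    for _ in range(int(x_max / eps)):
        k1 = rhs(y); k2 = rhs(y + 0.5 * eps * k1)
        k3 = rhs(y + 0.5 * eps * k2); k4 = rhs(y + eps * k3)
        y = y + eps * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if abs(y[0]) > 10 or abs(y[2]) > 10:   # runaway trajectory, stop early
            break
    return y

def miss(f0, c0):
    """Shooting score: distance of the endpoint from the vacuum f = 0, chi = 1."""
    y = endpoint(f0, c0)
    return np.hypot(y[0], y[2] - 1.0)

# Grid scan over initial values, respecting the constraint chi(0) <= omega from the text;
# in practice the best point is then refined iteratively (e.g. by bisection).
grid = [(f0, c0) for f0 in np.linspace(0.2, 2.0, 19) for c0 in np.linspace(0.0, w, 13)]
f0, c0 = min(grid, key=lambda p: miss(*p))
print("shooting estimate:  f(0) =", f0, "  chi(0) =", c0)
```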
|
2301.13782 | Active Nematic Multipoles: Flow Responses and the Dynamics of Defects
and Colloids | We introduce a general description of localised distortions in active
nematics using the framework of active nematic multipoles. We give the
Stokesian flows for arbitrary multipoles in terms of differentiation of a
fundamental flow response and describe them explicitly up to quadrupole order.
We also present the response in terms of the net active force and torque
associated to the multipole. This allows the identification of the dipolar and
quadrupolar distortions that generate self-propulsion and self-rotation
respectively and serves as a guide for the design of arbitrary flow responses.
Our results can be applied to both defect loops in three-dimensional active
nematics and to systems with colloidal inclusions. They reveal the
geometry-dependence of the self-dynamics of defect loops and provide insights
into how colloids might be designed to achieve propulsive or rotational
dynamics, and more generally for the extraction of work from active nematics.
Finally, we extend our analysis also to two dimensions and to systems with
chiral active stresses. | Alexander J. H. Houston, Gareth P. Alexander | 2023-01-31T17:31:46Z | http://arxiv.org/abs/2301.13782v1 | # Active Nematic Multipoles: Flow Responses and the Dynamics of Defects and Colloids
###### Abstract
We introduce a general description of localised distortions in active nematics using the framework of active nematic multipoles. We give the Stokesian flows for arbitrary multipoles in terms of differentiation of a fundamental flow response and describe them explicitly up to quadrupole order. We also present the response in terms of the net active force and torque associated to the multipole. This allows the identification of the dipolar and quadrupolar distortions that generate self-propulsion and self-rotation respectively and serves as a guide for the design of arbitrary flow responses. Our results can be applied to both defect loops in three-dimensional active nematics and to systems with colloidal inclusions. They reveal the geometry-dependence of the self-dynamics of defect loops and provide insights into how colloids might be designed to achieve propulsive or rotational dynamics, and more generally for the extraction of work from active nematics. Finally, we extend our analysis also to two dimensions and to systems with chiral active stresses.
## I Introduction
Active liquid crystals model a wide range of materials, both biological and synthetic [1; 2; 3], including cell monolayers [4], tissues [5], bacteria in liquid crystalline environments [6] and bacterial suspensions [7], and synthetic suspensions of microtubules [8]. Nematic and polar phases have been the focus of attention but smectic [9; 10], cholesteric [11; 12] and hexatic [13] phases have also been considered. Key features and motifs of the active nematic state include self-propelled topological defects [14; 15; 16] and spontaneous flows and vortices, with considerable interest in how these may be controlled through boundary conditions, confinement [17; 18; 19], external fields, geometry or topology. Active defects, in particular, have been related to processes of apoptosis in epithelial sheets [5], tissue dynamics, bacterial spreading and biofilm formation, and morphogenesis in _Hydra_[20].
In three-dimensional active nematics the fundamental excitations are defect loops and system-spanning lines [21; 22]. The defect loops actively self-propel [23], and self-orient [24], in addition to undergoing deformations in shape. Their finite extent means that they represent localised distortions to the nematic director, on scales larger than their size, and this facilitates a description through elastic multipoles [24]. It also invites comparison with colloidal inclusions in passive liquid crystals, which create localised realignments of the director and act as elastic multipoles [25; 26; 27]. These multipole distortions mediate interactions between colloids and allow for a means of controlling both the colloidal inclusions and the host material. For instance, they facilitate self-assembly and the formation of metamaterials [28; 29], and enable novel control of topological defects [27; 30; 31]. While there have been studies of active nematic droplets in a host passive liquid crystal [32; 33], colloidal inclusions in host active nematics have not been looked at previously.
The multipole approach to describing colloidal inclusions and localised director distortions in general, offers an equally fruitful paradigm in active nematics. Here, we present a generic analysis of the active flows generated by multipole director distortions in an active nematic and predict that the presence of colloids transforms their behaviour similarly to the passive case. These active multipole flows represent the responses of the active nematic both to localised features, such as defect loops, and to colloidal inclusions. This allows us to identify those distortions which produce directed or rotational flows and show that such distortions may be naturally induced by colloids. We also characterise the response in terms of the active forces and torques that they induce. This general connection can serve as a guide for using colloidal inclusions as a means to control active nematics, or how to design them to engineer a desired response, or extract work. The properties of inclusions have been studied in scalar active matter [34], as have active droplets in passive nematics [35], but while there have been specific demonstrations of propulsive colloids [36; 37] the general responses of inclusions in active nematics have not previously been considered. Understanding how such responses relate to local manipulations and molecular fields in active nematics will bring both fundamental insights and the potential for control of active metamaterials.
The remainder of this paper is structured as follows. In Section 2 we briefly review the equations of active nematohydrodynamics and describe the regime in which our linear multipole approach applies. In Section 3 we present these
multipoles as complex derivatives acting on \(1/r\), showing how this naturally elucidates their symmetries. In Section 4 we show that the linear active response to a harmonic distortion is generated by the same complex derivatives acting on fundamental flow and pressure solutions and highlight certain examples that illustrate the self-propulsive and rotational dynamics that can arise. We then show in Section 5 that these phenomenological responses can be discerned from integrals of the active stress, allowing the identification of the distortion which produces propulsion along or rotation about a given axis. Sections 6 and 7 contain extensions of our approach, first to two-dimensional systems and then to those with chiral active stresses. Section 8 gives a discussion and summary.
## II Hydrodynamics of active Nematics
We summarise the hydrodynamics of active nematics as described by their director field \(\mathbf{n}\) and fluid velocity \(\mathbf{u}\). The fluid flow satisfies the continuity \(\partial_{i}u_{i}=0\) and Stokes \(\partial_{j}\sigma_{ij}=0\) equations, with stress tensor [1; 2; 3]
\[\sigma_{ij}=-p\delta_{ij}+2\mu D_{ij}+\frac{\nu}{2}\big{(}n_{i}h_{j}+h_{i}n_{j }\big{)}+\frac{1}{2}\big{(}n_{i}h_{j}-h_{i}n_{j}\big{)}+\sigma_{ij}^{\rm E}- \zeta n_{i}n_{j}. \tag{1}\]
Here, \(p\) is the pressure, \(\mu\) is the viscosity, \(D_{ij}=\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i})\) is the symmetric part of the velocity gradients, \(\nu\) is the flow alignment parameter, \(h_{i}=-\delta F/\delta n_{i}\) is the molecular field associated with the Frank free energy \(F\), \(\sigma_{ij}^{\rm E}\) is the Ericksen stress, and \(\zeta\) is the magnitude of the activity. The active nematic is extensile when \(\zeta>0\) and contractile when \(\zeta<0\). The director field satisfies the relaxational equation
\[\partial_{t}n_{i}+u_{j}\partial_{j}n_{i}+\Omega_{ij}n_{j}=\frac{1}{\gamma}h_{ i}-\nu\big{[}D_{ij}n_{j}-n_{i}(n_{j}D_{jk}n_{k})\big{]}, \tag{2}\]
where \(\gamma\) is a rotational viscosity and \(\Omega_{ij}=\frac{1}{2}(\partial_{i}u_{j}-\partial_{j}u_{i})\) is the antisymmetric part of the velocity gradients. We adopt a one-elastic-constant approximation for the Frank free energy [38]
\[F=\int\frac{K}{2}\big{(}\partial_{i}n_{j}\big{)}\big{(}\partial_{i}n_{j}\big{)} \,dV, \tag{3}\]
for which the molecular field is \(h_{i}=K\big{(}\nabla^{2}n_{i}-n_{i}n_{j}\nabla^{2}n_{j}\big{)}\) and the Ericksen stress is \(\sigma_{ij}^{\rm E}=-K\partial_{i}n_{k}\,\partial_{j}n_{k}\).
An often-used analytical approximation is to consider the active flows generated by an equilibrium director field. This approximation has been used previously in the theoretical description of the active flows generated by defects in both two [39; 16] and three dimensions [23], including on curved surfaces [40], and in active turbulence [41]. It may be thought of as a limit of weak activity; however, even when the activity is strong enough to generate defects, their structure may still be close to that of equilibrium defects and the approximation remains good. The comparison of active defect motion and flows described in this way with full numerical simulations suggests that this is at least qualitatively the case. The equations can then be reduced to \(\mathbf{h}=\mathbf{0}\) for the director field and the Stokes equation
\[-\nabla p+\mu\nabla^{2}\mathbf{u}=\zeta\nabla\cdot\big{(}\mathbf{n}\mathbf{n}\big{)}, \tag{4}\]
for the active flow. Here we have neglected the Ericksen stress since for an equilibrium director field it can be balanced by a contribution to the pressure (representing nematic hydrostatic equilibrium).
We limit our analysis to director fields that can be linearised around a (locally) uniformly aligned state, \(\mathbf{n}=\mathbf{e}_{z}+\delta\mathbf{n}\), with \(\delta\mathbf{n}\cdot\mathbf{e}_{z}=0\), for which the equations reduce to
\[\nabla^{2}\delta\mathbf{n} =0, \tag{5}\] \[\nabla\cdot\mathbf{u} =0,\] (6) \[-\nabla p+\mu\nabla^{2}\mathbf{u} =\zeta\big{[}\mathbf{e}_{z}\big{(}\nabla\cdot\delta\mathbf{n} \big{)}+\partial_{z}\delta\mathbf{n}\big{]}. \tag{7}\]
These correspond to elastic multipole states in the director field, which are often thought of as an asymptotic description; however, they provide a close approximation even at only moderate distances outside a 'core' region that is the source of the multipole. To illustrate this we show in Fig. 1 a comparison between the exact director field (red streamlines) and linear multipole approximation (blue rods) for the most slowly varying monopole distortion created by uniformly rotating the director by an angle \(\theta_{0}\) within a sphere of radius \(a\). The agreement is close anywhere outside the sphere and only deviates significantly in the near-field region inside it. This is relevant to the active system as it is well-known that the uniformly aligned active nematic state is fundamentally unstable [42] and active nematics are turbulent on large enough scales. Our solutions should be interpreted as describing the behaviour on intermediate scales, larger than the core structure of the source but smaller than the scale on which turbulence takes over.
## III Multipole Director Distortions
In this section, we describe the multipole director fields satisfying (5). The far-field orientation \(\mathbf{e}_{z}\) gives a splitting of directions in space into those parallel and perpendicular to it. We complexify the perpendicular plane to give the decomposition as \(\mathbb{R}^{3}\cong\mathbb{C}\oplus\mathbb{R}\) and convert the director deformation \(\delta\mathbf{n}\) to the complex form \(\delta n=\delta n_{x}+i\delta n_{y}\). The real and imaginary parts of \(\delta n\) are harmonic, meaning that at order \(l\) they may be expressed as spherical harmonics \(1/r^{l+1}Y_{m}^{l}\) or, as we shall do, as \(l\) derivatives of \(1/r\)[43; 44; 45]. These order \(l\) multipole solutions form a \(2(2l+1)\)-real-dimensional vector space. Associated to the \(\mathbb{C}\oplus\mathbb{R}\) splitting is a local symmetry group isomorphic to \(U(1)\), preserving \(\mathbf{e}_{z}\), whose irreducible representations provide a natural basis for the vector space of multipoles at each order. We write the complex derivatives on \(\mathbb{C}\) as \(\partial_{w}=\frac{1}{2}(\partial_{x}-i\partial_{y})\) and \(\partial_{\bar{w}}=\frac{1}{2}(\partial_{x}+i\partial_{y})\) in terms of which the director deformation can be written
\[\delta n=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}q_{lm}\,a^{l+1}\,\partial_{\bar{w}} ^{m}\partial_{z}^{l-m}\,\frac{1}{r}, \tag{8}\]
where \(q_{lm}\) are complex coefficients and \(a\) is a characteristic length scale of the multipole, as might be set by the radius of a colloid. For compactness of notation it is to be understood that when \(m\) is negative \(\partial_{\bar{w}}^{m}\) represents \(\partial_{w}^{|m|}\). The index \(m\) denotes the topological charge of the phase winding of the spherical harmonic. This gives the spin of the corresponding vector field as \(1-m\), where the \(1\) is due to a vector (\(\delta\mathbf{n}\) or \(\delta n\)) being a spin-1 object. The multipoles at order \(l\) therefore have spins that range from \(1-l\) to \(1+l\). They are illustrated up to quadrupole order in Fig. 2, along with a representation in terms of topological defects which we shall elaborate upon shortly. The structure of Fig. 2 is such that differentiation maps the distortions of one order to the next, with \(\partial_{z}\) leaving the distortion in the same spin class, \(\partial_{\bar{w}}\) moving it one column to the left and \(\partial_{w}\) moving it one column to the right. The operators \(\partial_{w}\) and \(\partial_{\bar{w}}\) play the same role as the raising and lowering operators in quantum mechanics and the shift by one in the spin values simply results from the object on which they act being a spin-1 director deformation as opposed to a spin-0 wavefunction.
The monopole distortions, with \(l=0\), result from a rotation of the director by an angle \(\theta_{0}\) in a sphere of radius \(a\)[46]. They form a two-real-dimensional vector space for which a basis may be taken to be the distortions \(\frac{1}{r}\) and \(i\,\frac{1}{r}\). These are shown at the top of Fig. 2 and can be controllably created in passive nematics using platelet inclusions [47].
The director distortions of dipole type, with \(l=1\), form a six-real-dimensional vector space that splits into two
Figure 1: Comparison of the exact director field (red streamlines) and linearised multipole approximation (blue rods) for the most slowly decaying monopole distortion. This is produced by uniformly rotating the director by an angle \(\theta_{0}\) within a spherical volume of radius \(a\), indicated by the grey disc; the alignment inside the sphere is indicated by the thick red line. The figure shows only the \(xz\)-plane in which the director rotates and in which the comparison is most strict.
Figure 2: The multipolar director distortions up to quadrupole order. The director is shown on a planar cross-section as blue rods, along with a topological skeleton corresponding to the spherical harmonic, where appropriate. Defect loops are coloured according to wedge (blue) or twist (red-green) type and the charge of point defects is indicated through the use of opposing colour pairs: red (\(+1\)) and cyan (\(-1\)), yellow (\(+2\)) and blue (\(-2\)), and green (\(+3\)) and magenta (\(-3\)). Their charge is further indicated by a local decoration of the director with an orientation, indicated by black arrows. Each multipole order is classified into vertical pairs according to the spin of the distortion. For the chiral multipoles, the visualisation instead shows the director along some of its integral curves (orange).
real-dimensional subspaces for each value of the spin (0, 1, or 2) as
\[\mathbf{p}^{0} =\Big{\{}\partial_{\bar{w}}\frac{1}{r},i\,\partial_{\bar{w}}\frac{1 }{r}\Big{\}}\sim-\frac{1}{2r^{3}}\big{\{}x\,\mathbf{e}_{x}+y\,\mathbf{e}_{y},-y \,\mathbf{e}_{x}+x\,\mathbf{e}_{y}\big{\}}\sim\frac{1}{r^{2}}\big{\{}Y_{1}^{1}, i\,Y_{1}^{1}\big{\}}, \tag{9}\] \[\mathbf{p}^{1} =\Big{\{}\partial_{z}\frac{1}{r},i\,\partial_{z}\frac{1}{r} \Big{\}}\sim-\frac{1}{r^{3}}\big{\{}z\,\mathbf{e}_{x},z\,\mathbf{e}_{y}\big{\}} \sim\frac{1}{r^{2}}\big{\{}Y_{1}^{0},i\,Y_{1}^{0}\big{\}},\] (10) \[\mathbf{p}^{2} =\Big{\{}\partial_{w}\frac{1}{r},i\,\partial_{w}\frac{1}{r} \Big{\}}\sim-\frac{1}{2r^{3}}\big{\{}x\,\mathbf{e}_{x}-y\,\mathbf{e}_{y},y\, \mathbf{e}_{x}+x\,\mathbf{e}_{y}\big{\}}\sim\frac{1}{r^{2}}\big{\{}Y_{1}^{-1},i\,Y_{1}^{-1}\big{\}}. \tag{11}\]
For comparison, we have presented three representations for the distortions of each spin class: in terms of complex derivatives of \(1/r\), two-component vectors whose coefficients are homogenous polynomials of degree 1 and complex spherical harmonics. In the interest of space we have suppressed certain prefactors in the last of these, but note the difference in sign, and in some cases normalisation, between our representation as complex derivatives and the standard form of the harmonic distortions as two-component vectors [48]. The two basis functions of any spin class are related by a factor of \(i\), which corresponds to a local rotation of the transverse director distortion by \(\frac{\pi}{2}\). For a spin-\(s\) distortion this is equivalent to a global rotation by \(\frac{\pi}{2s}\), with the pair of distortions having the same character and simply providing a basis for all possible orientations. The exception is when \(s=0\), such distortions lack an orientation and the local rotation produces two distinct states that transform independently under rotations as a scalar and pseudoscalar. In the dipole case the first is the isotropic distortion recognisable as the UPenn dipole [25] and the second is an axisymmetric chiral distortion with the far-field character of left-handed double twist. Separating \(\mathbf{p}^{0}\) into its isotropic and chiral components allows a decomposition of the dipole director deformations into the basis
\[\mathbf{p}=p^{I}\oplus p^{C}\oplus\mathbf{p}^{1}\oplus\mathbf{p}^{2}, \tag{12}\]
a decomposition which was presented in [49].
Similarly, the quadrupolar distortions (\(l=2\)) form a ten-real-dimensional vector space that splits into a sum of two-real-dimensional subspaces for each value of the spin
\[\mathbf{Q}^{-1} =\Big{\{}\partial_{\bar{w}}^{2}\frac{1}{r},i\,\partial_{\bar{w}} ^{2}\frac{1}{r}\Big{\}}\sim\frac{3}{4r^{5}}\big{\{}(x^{2}-y^{2})\,\mathbf{e}_ {x}+2xy\,\mathbf{e}_{y},-2xy\,\mathbf{e}_{x}+(x^{2}-y^{2})\,\mathbf{e}_{y} \big{\}}\sim\frac{1}{r^{3}}\big{\{}Y_{2}^{2},i\,Y_{2}^{2}\big{\}}, \tag{13}\] \[\mathbf{Q}^{0} =\Big{\{}\partial_{\bar{w}z}^{2}\frac{1}{r},i\,\partial_{\bar{w}z }^{2}\frac{1}{r}\Big{\}}\sim\frac{3}{2r^{5}}\big{\{}xz\,\mathbf{e}_{x}+yz\, \mathbf{e}_{y},-yz\,\mathbf{e}_{x}+xz\,\mathbf{e}_{y}\big{\}}\sim\frac{1}{r^ {3}}\big{\{}Y_{2}^{1},i\,Y_{2}^{1}\big{\}},\] (14) \[\mathbf{Q}^{1} =\Big{\{}\partial_{z}^{2}\frac{1}{r},i\,\partial_{z}^{2}\frac{1} {r}\Big{\}}\sim\frac{1}{r^{5}}\big{\{}(2z^{2}-x^{2}-y^{2})\,\mathbf{e}_{x},(2z ^{2}-x^{2}-y^{2})\,\mathbf{e}_{y}\big{\}}\sim\frac{1}{r^{3}}\big{\{}Y_{2}^{0 },i\,Y_{2}^{0}\big{\}},\] (15) \[\mathbf{Q}^{2} =\Big{\{}\partial_{w}^{2}\frac{1}{r},i\,\partial_{w}^{2}\frac{1} {r}\Big{\}}\sim\frac{3}{2r^{5}}\big{\{}xz\,\mathbf{e}_{x}-yz\,\mathbf{e}_{y}, yz\,\mathbf{e}_{x}+xz\,\mathbf{e}_{y}\big{\}}\sim\frac{1}{r^{3}}\big{\{}Y_{2}^{-1},i\,Y_{2}^{-1}\big{\}},\] (16) \[\mathbf{Q}^{3} =\Big{\{}\partial_{w}^{2}\frac{1}{r},i\,\partial_{w}^{2}\frac{1} {r}\Big{\}}\sim\frac{3}{4r^{5}}\big{\{}(x^{2}-y^{2})\,\mathbf{e}_{x}-2xy\, \mathbf{e}_{y},2xy\,\mathbf{e}_{x}+(x^{2}-y^{2})\,\mathbf{e}_{y}\big{\}}\sim \frac{1}{r^{3}}\big{\{}Y_{2}^{-2},i\,Y_{2}^{-2}\big{\}}. \tag{17}\]
Once again the spin-0 distortions can be further partitioned into those that transform as a scalar and pseudoscalar, these being the Saturn's ring distortion [50] and a chiral quadrupole with opposing chirality in the two hemispheres, respectively. This yields the basis for the quadrupolar director deformations
\[\mathbf{Q}=\mathbf{Q}^{-1}\oplus Q^{I}\oplus Q^{C}\oplus\mathbf{Q}^{1}\oplus \mathbf{Q}^{2}\oplus\mathbf{Q}^{3}. \tag{18}\]
The well-known multipoles, the UPenn dipole and Saturn ring quadrupole, are associated to a configuration of topological defects in the core region and we describe now an extension of this association to all of the multipoles. In general, such an association is not unique, for instance, the colloidal 'bubblegum' configuration [51] represents the same far field quadrupole as the Saturn ring, however, for each multipole we can construct a representative arrangement of topological defects which produce it in the far field on the basis of commensurate symmetries and defects of a type and location corresponding to the nodal set of the harmonic. This correspondence allow us to condense the visualisation of complicated three-dimensional fields into a few discrete elements, suggests means by which such distortions might be induced and enables us to build an intuition for their behaviour in active systems through established results for defects [23].
We first describe some examples, shown in Fig. 3. On the left is the spherical harmonic that describes the UPenn dipole, with the form \(\partial_{\bar{w}}\frac{1}{r}\sim e^{i\phi}\sin\theta\), visualised on a spherical surface. This has nodes at the two poles about which the phase has \(-1\) winding and so we can infer similar winding of the director in the transverse plane. Supplementing
with the far-field alignment along \(\mathbf{e}_{z}\) yields the familiar picture of a pair of oppositely charged hedgehog defects. Similarly, the Saturn ring quadrupole, described by \(\partial_{\bar{w}z}\frac{1}{r}\sim e^{i\phi}\sin 2\theta\), has zeros at the poles and around the equator. The winding about the poles is still \(+1\), but the sign change in the lower hemisphere means that in the transverse plane around the south pole the vector points inwards, resulting in both point defects having topological charge \(+1\). With regards to the equatorial line, since the director is everywhere radial the winding vector must be tangential to the defect loop, shown by the red arrows in Fig. 3. As the phase changes by \(\pi\) on passing from one hemisphere to the other the winding must be \(\pm 1\) and the far-field alignment allows us to determine it to be \(-1\). For a general multipole distortion of the form \(\partial_{\bar{w}}^{m}\partial_{z}^{l-m}(1/r)\) the nodal set is the poles along with \(l-m\) lines of latitude. The phase winding of the spherical harmonic dictates the transverse winding of the director and, when supplemented with the far-field alignment, allows us to associate topological point defects with the poles. Similarly, nodal lines may be connected with defect loops with integer winding and a winding vector that rotates according to \(e^{im\phi}\). In Fig. 3 we illustrate this for the case \(\partial_{\bar{w}}^{2}\partial_{z}^{3}(1/r)\sim-Y_{2}^{5}/r^{6}\).
We now describe briefly the correspondence for our basis of dipolar and quadrupolar distortions. As already stated, the isotropic scalar in \(\mathbf{p}^{0}\) is the UPenn dipole, its pseudoscalar counterpart a chiral splay-free twist-bend distortion whose integral curves are shown in orange in Fig. 2. As a twist-bend mode it may be of particular relevance to extensional systems given their instability to bend distortions. The two dipoles of \(\mathbf{p}^{1}\) are transverse to the far-field alignment, they are related to those resulting from a defect loop of wedge-twist type [21]. The distortions of \(\mathbf{p}^{2}\) have a hyperbolic character; they describe the far field of a pair of point defects both of which have a hyperbolic structure. Such hyperbolic defect pairs arise in toron configurations in frustrated chiral nematics [52; 53].
Similarly, \(\mathbf{Q}^{0}\) contains the Saturn ring quadrupole as the scalar, with the pseudoscalar a pure bend chiral distortion. For the latter, the integral curves of the director possess opposing chirality in the two hemispheres, which could be generated by an appropriately coated Janus particle. The director distortion exhibits a helical perversion in the \(z=0\) plane and, being a local rotation of the Saturn ring distortion, may be viewed as resulting from a pair of vortex point defects along with a pure twist defect loop with integer winding. This is similar to the bubblegum defect lines [54; 51] that appear between a colloid diad with normal anchoring, suggesting that this chiral quadrupole could be formed by two colloids with opposing chiral tangential anchoring.
The spin-1 quadrupoles consist of pairs of wedge-twist defect loops. The distortions of \(\mathbf{Q}^{2}\) may be associated with a pair of hyperbolic defects along with a defect ring with the appropriate symmetry. The harmonics of spin \(-1\) and \(3\) contain no \(z\)-derivatives and so are associated with pairs of point defects only.
## IV Flows from multipole distortions
In this section we calculate the active flow generated by an arbitrary director multipole. We present this initially in vectorial form, converting to the complex representation subsequently. As (7) is linear the responses due to the two components of \(\delta\mathbf{n}\) are independent and so to simplify the derivation we consider only distortions in the \(x\)-component for now and extend to the general case afterwards. Within this restriction a generic multipole distortion at order
Figure 3: The connection between spherical harmonics and nematic topological defects. The coloured spheres indicate the phase of the complex spherical harmonics with the nodal set shown in white for simplicity. A representative skeleton of the corresponding nematic distortion is shown in black and the red arrows indicate the winding vector of the director.
may be written as
\[\delta n_{x}=a^{l}\nabla_{{\bf v}_{1}}\cdots\nabla_{{\bf v}_{l}}\frac{a}{r}, \tag{19}\]
where \({\bf v}_{1},\ldots,{\bf v}_{l}\) are \(l\) directions for the differentiation. Substituting this into (7) gives the Stokes equation in the form
\[-\nabla p^{(x)}+\mu\nabla^{2}{\bf u}^{(x)}=a^{l+1}\zeta\nabla_{{\bf v}_{1}} \cdots\nabla_{{\bf v}_{l}}\Bigl{[}{\bf e}_{x}\,\partial_{z}+{\bf e}_{z}\, \partial_{x}\Bigr{]}\frac{1}{r}, \tag{20}\]
where the use of the superscript \({}^{(x)}\) is to emphasise that we are only treating the response to distortions in the \(x\)-component of the director. Taking the divergence of both sides we have
\[-\nabla^{2}p^{(x)}+\mu\nabla^{2}\nabla\cdot{\bf u}^{(x)}=a^{l+1}\zeta\nabla_{{ \bf v}_{1}}\cdots\nabla_{{\bf v}_{l}}\partial_{xz}^{2}\frac{2}{r}. \tag{21}\]
Making use of the continuity equation \(\nabla\cdot{\bf u}^{(x)}=0\) in conjunction with the identity \(\nabla^{2}r=\frac{2}{r}\) we arrive at the solution for the pressure
\[p^{(x)}=-a^{l+1}\zeta\nabla_{{\bf v}_{1}}\cdots\nabla_{{\bf v}_{l}}\,\partial _{x}\partial_{z}r=a^{l+1}\zeta\nabla_{{\bf v}_{1}}\cdots\nabla_{{\bf v}_{l}} \,\frac{xz}{r^{3}}. \tag{22}\]
Substituting this back into the Stokes equation (20) we obtain
\[\mu\nabla^{2}{\bf u}^{(x)}=a^{l+1}\zeta\nabla_{{\bf v}_{1}}\cdots\nabla_{{\bf v }_{l}}\biggl{\{}{\bf e}_{x}\,\partial_{z}\biggl{[}\frac{1}{r}-\partial_{x} \partial_{x}r\biggr{]}-{\bf e}_{y}\,\partial_{x}\partial_{y}\partial_{z}r+{ \bf e}_{z}\,\partial_{x}\biggl{[}\frac{1}{r}-\partial_{z}\partial_{z}r\biggr{]} \biggr{\}}, \tag{23}\]
which can be integrated using the identity \(\nabla^{2}r^{3}=12r\) to find
\[{\bf u}^{(x)}=a^{l+1}\frac{\zeta}{4\mu}\nabla_{{\bf v}_{1}}\cdots\nabla_{{\bf v }_{l}}\biggl{\{}{\bf e}_{x}\biggl{[}\frac{z}{r}+\frac{x^{2}z}{r^{3}}\biggr{]} +{\bf e}_{y}\,\frac{xyz}{r^{3}}+{\bf e}_{z}\biggl{[}\frac{x}{r}+\frac{xz^{2}}{ r^{3}}\biggr{]}\biggr{\}}. \tag{24}\]
Both the pressure and flow solutions for a generic multipole distortion are given in terms of derivatives of a fundamental response to a monopole deformation, namely
\[p^{(x)}=a\zeta\frac{xz}{r^{3}}, \tag{25}\]
\[{\bf u}^{(x)}=\frac{a\zeta}{4\mu}\left\{{\bf e}_{x}\biggl{[}\frac{z}{r}+\frac {x^{2}z}{r^{3}}\biggr{]}+{\bf e}_{y}\,\frac{xyz}{r^{3}}+{\bf e}_{z}\biggl{[} \frac{x}{r}+\frac{xz^{2}}{r^{3}}\biggr{]}\right\}. \tag{26}\]
This flow response, shown as the top panel in Fig. 4, is primarily extensional in the \(xz\)-plane. Interestingly, the flow solution (26) does not decay with distance; this reflects the generic hydrodynamic instability of active nematics [42] providing a real-space local response counterpart to the usual Fourier mode analysis. However, the active flow produced by any higher multipole does decay and vanishes at large distances.
The pressure and flow solutions in (25) and (26) are complemented by analogous ones resulting from distortions in the \(y\)-component of the director, obtained by simply interchanging \(x\) and \(y\). The linearity of (7) makes these fundamental responses sufficient to obtain the active flow induced by an arbitrary multipole distortion through taking derivatives appropriate to describe the \(x\) and \(y\) components of the director, respectively.
We now convert this description to the complex notation used in SS III. This is achieved by taking the combinations \(p=p^{(x)}-ip^{(y)}\) and \({\bf u}={\bf u}^{(x)}-i{\bf u}^{(y)}\). To see this consider the multipole distortion \(\delta n=({\cal L}_{x}+i{\cal L}_{y})1/r\), where the \({\cal L}_{i}\) are generic real differential operators which generate the \(i\)-component of the director by acting on \(1/r\). This distortion has a conjugate partner given by \(i({\cal L}_{x}+i{\cal L}_{y})1/r=(-{\cal L}_{y}+i{\cal L}_{x})1/r\). Acting with this same operator on \({\bf u}^{(x)}-i{\bf u}^{(y)}\) we have
\[({\cal L}_{x}+i{\cal L}_{y})({\bf u}^{(x)}-i{\bf u}^{(y)})=({\cal L}_{x}{\bf u }^{(x)}+{\cal L}_{y}{\bf u}^{(y)})-i(-{\cal L}_{y}{\bf u}^{(x)}+{\cal L}_{x}{ \bf u}^{(y)}), \tag{27}\]
and can see that the flow response for our original distortion forms the real part and that for its conjugate partner the coefficient of \(-i\) and the same holds for the pressure response. This leads us to a complex fundamental pressure response
\[\tilde{p}=a\zeta\frac{\bar{w}z}{r^{3}}, \tag{28}\]
and, introducing complex basis vectors \(\mathbf{e}_{w}=\mathbf{e}_{x}+i\mathbf{e}_{y}\) and \(\mathbf{e}_{\bar{w}}=\mathbf{e}_{x}-i\mathbf{e}_{y}\), a complex-valued fundamental flow vector
\[\tilde{\mathbf{u}}=\frac{a\zeta}{4\mu}\left\{\mathbf{e}_{w}\,\frac{\bar{w}^{2}z }{2r^{3}}+\mathbf{e}_{\bar{w}}\!\left[\frac{z}{r}+\frac{w\bar{w}z}{r^{3}} \right]+\mathbf{e}_{z}\,\frac{\bar{w}}{r}\left(1+\frac{z^{2}}{r^{2}}\right) \right\}. \tag{29}\]
We use a tilde to distinguish these fundamental responses from those that result due to a generic distortion and which may be found by appropriate differentiation. This provides a unified framework in which the active response to a generic nematic multipole can be calculated through the application of the same complex derivatives that we have used to describe the director distortion. The resulting active flows for distortions up to quadrupole order are shown
Figure 4: The active flows due to three-dimensional nematic multipole distortions up to quadrupole order. The flows are grouped according to their spin, in correspondence with the distortions in Fig. 2. Green and red arrows indicate the net active force and torque for the relevant dipoles and quadrupoles respectively, see §V.
in Fig. 4, with their layout corresponding to that of the nematic distortions in Fig. 2 which induce them. We now describe some examples in more detail.
### UPenn and chiral dipole
Typically the active responses induced by the two distortions in a spin class will, like the distortions themselves, be related by a global rotation such that while both are needed to form a sufficient basis, the real part essentially serves as a proxy for the pair. This is not true for the spin-0 distortions, due to their rotational symmetry, and so we use them in providing an explicit illustration of the active flow calculation. We begin with the UPenn dipole [25] and its partner the chiral dipole, for which the far-field transverse director is
\[\delta n\approx\alpha a\,\partial_{\bar{w}}\frac{a}{r}, \tag{30}\]
where \(\alpha\) is a dimensionless coefficient, and the corresponding derivative of the fundamental flow solution in (29) gives
\[\alpha a\partial_{\bar{w}}\bar{\mathbf{u}}=\frac{\zeta\alpha a^{2}}{4\mu r^{ 5}}\left\{\mathbf{e}_{w}\,z\bar{w}(4z^{2}+w\bar{w})-\mathbf{e}_{\bar{w}}\,3zw ^{2}\bar{w}+\mathbf{e}_{z}\,2\left[3z^{4}+(z^{2}+w\bar{w})^{2}\right]\right\}. \tag{31}\]
Taking the real part gives, after some manipulation, the flow induced by the UPenn dipole as
\[\mathbf{u}=\alpha a\,\mathfrak{R}\,\partial_{\bar{w}}\bar{\mathbf{u}}=\frac{ \zeta\alpha a^{2}}{8\mu}\bigg{\{}\mathbf{e}_{z}\bigg{(}\frac{1}{r}+\frac{z^{2 }}{r^{3}}\bigg{)}+\mathbf{e}_{r}\frac{z}{r^{2}}\bigg{(}\frac{3z^{2}}{r^{2}}-1 \bigg{)}\bigg{\}}, \tag{32}\]
where \(\mathbf{e}_{r}\) is the unit vector in the radial direction. The flow response to the conjugate distortion, the isotropic chiral dipole is given by
\[\mathbf{u}=-\alpha a\,\mathfrak{I}\,\partial_{\bar{w}}\bar{\mathbf{u}}=- \frac{\zeta\alpha a^{2}}{4\mu}\frac{z}{r^{2}}\mathbf{e}_{\phi}, \tag{33}\]
with \(\mathbf{e}_{\phi}\) the azimuthal unit vector. Both flows decay at large distances like \(1/r\) and are highlighted in the top row of Fig. 5. The UPenn dipole flow has a striking net flow directed along the \(z\)-axis, reminiscent of that of the Stokeslet flow [55; 56] associated with a point force along \(\mathbf{e}_{z}\). The chiral dipole generates an axisymmetric flow composed of two counter-rotating vortices aligned along \(\mathbf{e}_{z}\), mirroring the circulating flows produced by spiral defects in two dimensions [57]. The \(1/r\) decay of these active vortex flows is unusually slow, slower than the decay of a point torque in Stokesian hydrodynamics [56].
Despite the similarity between the active flow induced by the UPenn dipole and a Stokeslet, there is a key difference in their angular dependence. In a Stokeslet, and all related squirming swimmer flows [58; 59] that result from derivatives of it, the terms with higher angular dependence decay more quickly such that the lowest order terms dominate the far field. By contrast, distortions in active nematics produce asymptotic flow fields in which all terms decay at the same rate regardless of their angular dependence as they all result from the same derivative of the fundamental flow. Thus, even if the same angular terms are present in both systems, the lowest order ones will dominate in the squirming case while the far field will bear the signature of the highest order in the active nematics.
A closer point of comparison comes from the flows induced by active colloids within a passive nematic [60; 35]. Calculation of the relevant Green's functions [61] has shown that the anisotropy of the medium leads to a difference in effective viscosities such that a Stokeslet aligned along the director pumps more fluid in this direction. This fits with the anisotropy displayed in (32), reaffirming the similarity between the flow induced by the UPenn dipole and the Stokeslet.
Considering the pressure response for these distortions in the same way we have
\[\alpha a\partial_{\bar{w}}\tilde{p}=\frac{\zeta\alpha a^{2}}{2r^{5}}z(2z^{2}- w\bar{w})=\frac{\zeta\alpha a^{2}z}{2r^{3}}\left(\frac{3z^{2}}{r^{2}}-1\right). \tag{34}\]
As this expression is purely real it comprises the response due to the UPenn dipole in its entirety; the vanishing of the imaginary part shows that the chiral dipole is compatible with a zero pressure solution. Our complexified construction allows this property to be read off immediately, since \(\partial_{\bar{w}}(\bar{w}z^{m}/r^{n})\) will be real for any \(m\) and \(n\), with this also resulting in the vanishing \(z\)-component of flow for the chiral dipole. Indeed, this property of pure realness is unchanged by the action of \(\partial_{z}\), it being real itself, and so extends to higher order distortions.
### Saturn ring and chiral quadrupole
Proceeding in the same fashion for the spin-0 quadrupoles, for which \(\delta n\approx\alpha a^{2}\partial_{\bar{w}z}^{2}a/r\), we find that the complexified flow is
\[\begin{split}\alpha a^{2}\partial_{\bar{w}z}^{2}\tilde{\mathbf{u}}& =-\frac{\zeta\alpha a^{3}}{4\mu r^{7}}\left\{-\mathbf{e}_{w}\bar{w}(w^{2} \bar{w}^{2}+8w\bar{w}z^{2}-8z^{4})+\mathbf{e}_{\bar{w}}3w^{2}\bar{w}(w\bar{w}- 4z^{2})\right.\\ &\left.+\mathbf{e}_{z}2z(w^{2}\bar{w}^{2}-10w\bar{w}z^{2}+4z^{2}) \right\}.\end{split} \tag{35}\]
Taking the real part gives the flow induced by the Saturn ring quadrupole as
\[\mathbf{u}=\alpha a^{2}\Re\partial_{\bar{w}z}^{2}\tilde{\mathbf{u}}=-\frac{ \zeta\alpha a^{3}}{2\mu r^{6}}(r^{4}-12z^{2}r^{2}+15z^{4})\mathbf{e}_{\tau}, \tag{36}\]
that is a purely radial flow reminiscent of a stresslet along \(\mathbf{e}_{z}\), shown in the bottom left of Fig. 5. The purely radial nature is a result of the divergencelessness of the flow, combined with the \(1/r^{2}\) decay and rotational invariance about \(\mathbf{e}_{z}\). Working in spherical coordinates we have
\[\nabla\cdot\mathbf{u}=\frac{1}{r^{2}}\partial_{r}(r^{2}u_{r})+\frac{1}{r\sin \theta}\left[\partial_{\theta}(u_{\theta}\sin\theta)+\partial_{\phi}u_{\phi} \right]=0 \tag{37}\]
All active flows induced by quadrupole distortions decay as \(1/r^{2}\) and so \(\partial_{r}(r^{2}u_{r})=0\). The distortion is rotationally symmetric and achiral, meaning \(u_{\phi}=0\) and the condition of zero divergence reduces to
\[\frac{1}{r\sin\theta}\partial_{\theta}(u_{\theta}\sin\theta)=0. \tag{38}\]
The only non-singular solution is \(u_{\theta}=0\), resulting in \(u_{r}\) being the only non-zero flow component. The corresponding pressure is given by
\[\alpha a^{2}\partial_{\bar{w}z}^{2}\tilde{p}=-\frac{3\alpha a^{3}}{2r^{7}}(r^{ 4}-12z^{2}r^{2}+15z^{4}). \tag{39}\]
Figure 5: The active flows induced by spin 0 dipole (top row) and quadrupole (bottom row) distortions. The flow is indicated by blue arrows and superposed upon integral curves of the director, shown in orange. On the left are the UPenn dipole and Saturn ring quadrupole and on the right their chiral counterparts.
Taking the imaginary part of (35) reveals the flow response of the chiral quadrupole to be
\[\mathbf{u}=-\alpha a^{2}\mathcal{I}\partial_{\bar{w}z}^{2}\tilde{\mathbf{u}}= \frac{\zeta\alpha a^{3}}{\mu r^{2}}(3\cos^{2}\theta-1)\sin\theta\mathbf{e}_{ \phi}. \tag{40}\]
As illustrated in Fig. 5 this is a purely azimuthal flow corresponding to rotation about the \(z\) axis and, as for the chiral dipole, is compatible with a zero pressure solution. The \(1/r^{2}\) decay of this rotational flow is the same as that which results from the rotlet [55; 56], but unlike the rotlet the flow direction is not uniform. Rather, as can be seen in Fig. 5, there is an equatorial band of high-velocity flow accompanied by two slowly counter-rotating polar regions. The distribution of flow speeds is such that the net flow is along \(-\mathbf{e}_{\phi}\), consistent with a rotlet along \(-\mathbf{e}_{z}\).
### Other multipoles
For the remaining multipoles up to quadrupole order we do not provide the same explicit calculation but instead highlight the key features of the active flows they induce. In full we find that half of the dipole distortions contain directed components in their active flow responses. Along with the isotropic UPenn dipole which produces flow along \(\mathbf{e}_{z}\) the two spin-1 dipoles produce directed flows transverse to it. These directed flows indicated that were the source of the distortion free to move it would exhibit active self-propulsion. The net transverse flows for the dipoles of \(\mathbf{p}^{1}\) is in accordance with the previously established motile nature of such defect loops [23]. A more complete description of the active dynamics of defect loops via their multipole distortions is presented in Section IV.4 and [24].
Along with the chiral dipole, the two additional dipoles which do not generate directed flows are those with spin 2. These produce active flows which are extensional with the expected two-fold rotational symmetry about the \(z\)-axis. Direct calculation shows that the flows resulting from spin-2 distortions have zero azimuthal component. Once again, this observation is unaffected by \(z\)-derivatives and so holds true for the higher-order multipoles of the form \(\partial_{z}^{n}\partial_{w}(1/r)\).
Similarly, there are ten linearly independent quadrupoles, five of which can be seen from Fig. 4 to generate rotational flows. As expected, it is the four modes of \(\mathbf{Q}^{\pm 1}\) that generate rotations about transverse directions and \(Q^{C}\) that produces rotation around \(\mathbf{e}_{z}\). For two of these, namely those in \(\mathbf{Q}^{1}\), the director distortions are planar, suggesting a two-dimensional analogue and the potential to generate them with cogs or gears [62]. These distortions may be associated with a pair of opposingly oriented charge-neutral defect loops and so the rotational flow generated by these distortions is in accordance with their antiparallel self-propulsion.
The quadrupoles of \(\mathbf{Q}^{-1}\) are composed of pairs of point defects with topological charge \(+2\). Using \(\partial_{\bar{w}}^{2}\frac{1}{r}\) as an example, the rotation can be understood by considering the splay distortions in the \(xz\) plane. The splay changes sign for positive and negative \(x\), leading to antiparallel forces. The active forces are greatest in this plane, as this is where the transverse distortion is radial resulting in splay and bend distortions. Along \(\mathbf{e}_{y}\) the distortions are of twist type and so do not contribute to the active force. This results in the rotational flow shown in Fig. 4. The stretching of the flow along \(\mathbf{e}_{z}\) is as observed for a rotlet in a nematic environment [61].
Although they lack the rotational symmetry of a stresslet, the flows produced by the quadrupoles of \(\mathbf{Q}^{2}\) are also purely radial. The argument is largely the same as for the Saturn ring distortion, except that the vanishing of \(u_{\phi}\) is not due to rotational invariance but a property inherited from the spin-2 dipoles.
The quadrupoles of \(\mathbf{Q}^{3}\) produce extensional flows whose spin-3 behaviour under rotations about \(\mathbf{e}_{z}\) is commensurate with that of the distortions. Although they visually resemble the similarly extensional flows produced by the dipoles of \(\mathbf{p}^{2}\), they do not share the property of a vanishing azimuthal flow component.
### Defect loops
Of particular relevance to the dynamics of three-dimensional active nematics are charge-neutral defect loops [23; 21; 24]. For such defect loops the director field has the planar form
\[\mathbf{n}=\cos\frac{\Upsilon}{4}\,\mathbf{e}_{z}+\sin\frac{\Upsilon}{4}\, \mathbf{e}_{x}, \tag{41}\]
where \(\Upsilon\) is the solid angle function for the loop [63; 43], and is a critical point of the Frank free energy in the one-elastic-constant approximation [64]. This allows a multipole expansion for the director at distances larger than the loop size in which the multipole coefficients are determined explicitly by the loop geometry [24]
\[\Upsilon(\mathbf{x})=\frac{1}{2}\int_{K}\epsilon_{ijk}\,y_{j}\,\mathrm{d}y_{k }\,\partial_{i}\frac{1}{r}-\frac{1}{3}\int_{K}\epsilon_{ikl}\,y_{l}y_{k}\, \mathrm{d}y_{l}\,\partial_{i}\partial_{j}\frac{1}{r}+\ldots, \tag{42}\]
where \(\mathbf{y}\) labels the points of the loop \(K\) and \(r=|\mathbf{x}|\) with the 'centre of mass' of the loop defined to be at \(\mathbf{x}=0\). The dipole moment vector is the projected area of the loop, while the quadrupole moment is a traceless and symmetric tensor with an interpretation via the first moment of area or, in the case of loops weakly perturbed from circular, the torsion of the curve.
The planar form of the director field (41) corresponds to a restricted class of director deformations in which \(\delta n\) is purely real. This disrupts the complex basis we have adopted for the representation of multipoles, so that another choice is to be preferred. We may say that the planar director selects a real structure for the orthogonal plane \(\mathbb{C}\), breaking the \(U(1)\) symmetry, and the restricted multipoles should then be decomposed with respect to this real structure. Accordingly, the pressure and flow responses may be generated by derivatives of the fundamental responses for distortions in \(\mathbf{e}_{x}\), (25) and (26), with these derivatives corresponding to the multipole expansion of the solid angle shown in (42). The details of this approach along with the consequences it has for both the self-propulsive and self-rotational dynamics of active nematic defect loops are given in [24].
### Technical note
We conclude this section with a technical note on the flow solutions that we have presented. The construction for calculating active flow responses that we have developed in this section requires knowledge of the multipole as a specified set of derivatives of \(1/r\). The harmonic director components satisfy \(\nabla^{2}n_{i}\propto\delta(\mathbf{r})\) and while this delta function does not affect the far-field director it impacts the flow solutions. Consequently, at quadrupole order and higher, distinct derivatives of \(\frac{1}{r}\) can produce the same multipole distortion in the director but have different associated active flows. As an explicit example we take the spin-1 quadrupole shown in Fig. 2, which may be written as \(\mathbf{n}=a^{2}\partial_{z}^{2}\,\mathbf{e}_{x}+\mathbf{e}_{z}\) and therefore induces an active flow given by the action of \(a^{2}\partial_{z}^{2}\) on 29, as is illustrated in Fig. 4. However the same director distortion is captured by \(\mathbf{n}=-4a^{2}\partial_{w\bar{w}}^{2}\,\mathbf{e}_{x}+\mathbf{e}_{z}\), for which the corresponding active flow is shown in Fig. 6. A partial resolution to this ambiguity is that any non-equilibrium phenomenological features such as propulsion or rotation will be invariant to this choice of derivatives since, as we shall show in the following section, they can be expressed directly in terms of the director components. As a more complete resolution we reiterate that whenever an exact solution for the director is known the appropriate derivatives can be determined, as demonstrated earlier for defect loops [24], and so the apparent ambiguity disappears.
## V Active forces and torques
The directed and rotational active flow components highlighted above result in viscous stresses whose net effect must be balanced by their active counterparts, since the net force and torque must be zero. Consequently, these generic aspects of the response of an active nematic can be identified by considering the contribution that the active
Figure 6: Additonal flow solutions induced by spin-1 nematic multipoles. The nematic multipoles which induce the flows are shown below them as complex derivatives of \(1/r\). The red arrows indicate the net active torque.
stresses make to the force and torque
\[\mathbf{f}^{a}=\int\zeta\mathbf{n}\mathbf{n}\cdot\mathrm{d}\mathbf{A} \approx\int\zeta\bigg{\{}\mathbf{e}_{x}\frac{z\,\delta n_{x}}{r}+\mathbf{e}_{y} \frac{z\,\delta n_{y}}{r}+\mathbf{e}_{z}\frac{x\,\delta n_{x}+y\,\delta n_{y}}{ r}\bigg{\}}\mathrm{d}A, \tag{43}\] \[\boldsymbol{\tau}^{a}=\int\mathbf{x}\times\zeta\mathbf{n} \mathbf{n}\cdot\mathrm{d}\mathbf{A} \approx\int\zeta\bigg{\{}\mathbf{e}_{x}\bigg{[}\frac{xy\,\delta n_{x}}{r }+\frac{(y^{2}-z^{2})\delta n_{y}}{r}\bigg{]}+\mathbf{e}_{y}\bigg{[}\frac{(z^ {2}-x^{2})\delta n_{x}}{r}-\frac{xy\,\delta n_{y}}{r}\bigg{]}\] (44) \[\qquad\qquad\qquad\qquad\qquad+\mathbf{e}_{z}\frac{z(-y\,\delta n _{x}+x\,\delta n_{y})}{r}\bigg{\}}\mathrm{d}A,\]
integrating over a large sphere of radius \(r\). These integrals depend on the surface of integration, as the active stresses are neither divergenceless nor compactly supported. However, a spherical surface is concordant with the multipole approach we are taking and the results are then independent of the radius, as a direct consequence of the orthogonality of spherical harmonics. From these expressions we can read off the multipole that will generate any desired active force or torque; dipoles generate forces and quadrupoles generate torques. When the active torque is non-zero, the compensating viscous torque will drive a persistent rotation of the multipole, creating an active ratchet; similarly, a non-zero active force will generate directed fluid flow. The above integrals therefore provide a solution to the inverse problem: given a particular non-equilibrium response, which distortion induces it? Hence they serve as a design guide for generating out of equilibrium responses in active nematics.
If the multipole is free to move it will self-propel and rotate. The translational and rotational velocities are related to the viscous forces and torques by a general mobility matrix [65]. In passive nematics, experiments [66] and simulations [67; 68] have found that it is sufficient to take a diagonal form for the mobility (no translation-rotation coupling) with separate viscosities for motion parallel, \(\mu_{\parallel}\), and perpendicular, \(\mu_{\perp}\), to the director, with typical ratio of viscosities \(\mu_{\perp}/\mu_{\parallel}\sim 1.6\)[66; 67; 68]. This has the consequence that in general the force and velocity are not colinear
\[\mathbf{U}=\frac{-1}{6\pi a}\bigg{[}\frac{1}{\mu_{\parallel}}f_{\parallel}^{a }\,\mathbf{e}_{z}+\frac{1}{\mu_{\perp}}\,\mathbf{f}_{\perp}^{a}\bigg{]}. \tag{45}\]
We again use the UPenn dipole as an example. Integrating the active stresses over a spherical surface of radius \(R\) we find an active force
\[\int\zeta\mathbf{n}\mathbf{n}\cdot\mathrm{d}\mathbf{A}\approx-\frac{\zeta \alpha a^{2}}{2}\int\biggl{\{}\mathbf{e}_{x}\frac{xz}{R^{4}}+\mathbf{e}_{y} \frac{yz}{R^{4}}+\mathbf{e}_{z}\bigg{[}\frac{z}{R}+\frac{x^{2}+y^{2}}{R^{4}} \bigg{]}\biggr{\}}\mathrm{d}A=-\frac{4\pi\zeta\alpha a^{2}}{3}\,\mathbf{e}_{z}. \tag{46}\]
Balancing this against Stokes drag predicts a'self-propulsion' velocity for the active dipole of
\[\mathbf{U}=\frac{2\zeta\alpha a}{9\mu_{\parallel}}\,\mathbf{e}_{z}. \tag{47}\]
For extensile activity (\(\zeta>0\)) the dipole moves 'hyperbolic hedgehog first' and with a speed that increases linearly with the core size \(a\). This self-propulsion is in accordance with the directed component of the active flow, as can be seen in Fig. 5. The same self-propulsion speed along \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) is found for the transverse dipoles of \(\mathbf{p}^{1}\), except that the parallel viscosity \(\mu_{\parallel}\) should be replaced with \(\mu_{\perp}\). Again, this self-propulsion agrees with the directed flow induced by these distortions, as calculated through the multipole approach, shown in Fig. 4[24] and also with the results of both a local flow analysis and simulations [23]. The same directed motion has been observed in a related system of an active droplet within a passive nematic [35], with the droplet inducing a UPenn dipole in the nematic and moving in the direction of the hedgehog defect at a speed that grew with the droplet radius. The mechanism at play is different however; the motion results from directional differences in viscosity resulting from the anisotropic environment.
To illustrate the rotational behaviour we use a member of \(\mathbf{Q}^{1}\), \(\partial_{z}^{2}(1/r)\), as an example. We find an active torque
\[\int\zeta\mathbf{x}\times\mathbf{n}\mathbf{n}\cdot\mathrm{d} \mathbf{A} \approx\zeta\alpha a^{3}\int\frac{1}{r^{6}}\bigl{(}2z^{2}-x^{2}-y^ {2}\bigr{)}\,\bigl{\{}xy\mathbf{e}_{x}+(z^{2}-x^{2})\mathbf{e}_{y}-yz\mathbf{e }_{z}\bigr{\}}\,\mathrm{d}A \tag{48}\] \[=\frac{8\pi\zeta\alpha a^{3}}{5}\mathbf{e}_{y}. \tag{49}\]
Balancing against Stokes drag as was done in the dipole case gives an angular velocity
\[\mathbf{\Omega}=-\frac{\zeta\alpha}{5\mu}\mathbf{e}_{y}. \tag{50}\]
We note that for this and all other distortions which result in net torques the angular velocity is independent of the colloid size. In accordance with the relation \(\partial_{z}^{2}+4\partial_{w\bar{w}}^{2}(1/r)=0\), the torque resulting from \(\partial_{w\bar{w}}^{2}(1/r)\) is of the opposite sign and a quarter the strength. The net active torques due to harmonics of \(\mathbf{Q}^{0}\) and \(\mathbf{Q}^{-1}\) have the directions indicated in Fig. 4 and half the magnitude of (49).
Let us consider the approximate magnitude of the effects we have described. Beginning with the self-propulsion speed, the fluid viscosity is roughly \(10^{-2}\) Pa s [17], although effects due to the elongated form of the nematogens could increase this by a factor of 30 or so [69; 70]. Both the activity [16] and the dipole moment constant [48] are of order unity, meaning the colloid would approximately cover its radius in a second. Similar approximations for the quadrupole give an angular velocity of about \(2/3\) rad s\({}^{-1}\). For a colloid of radius \(10\)\(\mu\)m this has an associated power of the order of femtowatts, the same as predicted for bacterial ratchets [71].
## VI Two-dimensional systems and ratchets
As noted above, the planar nature of the rotational distortions in \(\mathbf{Q}^{1}\) suggests the existence of two-dimensional analogues. In part motivated by this we now discuss the active response of multipolar distortions in two dimensions, again beginning with the connection between these multipoles and topological defect configurations.
### Multipoles and topological defects
The categorisation of the harmonic distortions in two dimensions is much simpler, but we provide it here for completeness. Taking the asymptotic alignment to be along \(\mathbf{e}_{y}\) the symmetry of the far-field director is now described by the order 2 group \(\{1,R_{y}\}\), with \(R_{y}\) reflection with axis \(\mathbf{e}_{y}\), under which the monopole distortion \(n_{x}\sim A\log(r/a)\) is antisymmetric. The higher-order distortions are once again generated via differentiation of the monopole, with \(\partial_{y}\) leaving the symmetry under \(R_{y}\) unchanged and \(\partial_{x}\) inverting it.
It should be noted that the potential multiplicity of differential representations of harmonics that arose in three dimensions does not occur in two dimensions. This is because, under the assumption of a single elastic constant, the director angle \(\phi\) may be written as the imaginary part of a meromorphic function of a single complex variable and this naturally defines the appropriate set of derivatives. Making \(z=x+iy\) our complex variable we write \(\phi=\mathfrak{I}\left\{\mathfrak{f}(z)\right\}\) which upon performing a Laurent expansion of \(\mathfrak{f}(z)\) around \(z=0\) and assuming the existence of a uniform far-field alignment gives
\[\phi=\mathfrak{I}\left\{\sum_{n=-\infty}^{0}a_{n}z^{n}\right\}=\mathfrak{I} \left\{a_{0}+\sum_{n=1}^{\infty}(-1)^{n-1}\frac{a_{n}}{(n-1)!}\partial_{z}^{ n}(\ln z)\right\}. \tag{51}\]
Hence at every order there is a one parameter family of distortions, corresponding to the phase of the \(a_{n}\). A natural basis at order \(n\) is provided by \(\left\{\mathfrak{R}\left\{\partial_{z}^{n}(\ln z)\right\},\mathfrak{I}\left\{ \partial_{z}^{n}(\ln z)\right\}\right\}\). This basis consists of a symmetric and anti-symmetric distortion under the action of \(R_{y}\), the roles alternating with order, and of course correspond to the two harmonic functions \(\cos n\theta/r^{n}\) and \(\sin n\theta/r^{n}\).
In two dimensions the connection between defect configurations and far-field multipole distortions can be made concrete, and also serves as an illustration of how a particular set of derivatives is determined. For defects with topological charges \(s_{j}\) at locations \(z_{j}\) the angle that the director makes to \(\mathbf{e}_{x}\) is given by
\[\phi=\phi_{0}+\sum_{j}s_{j}\mathfrak{I}\left\{\ln\left(\frac{z-z_{j}}{a} \right)\right\}, \tag{52}\]
which, upon performing a series expansion, gives
\[\phi =\phi_{0}+\sum_{j}s_{j}\mathfrak{I}\left\{\ln(z/a)\right\}-\sum _{n=1}^{\infty}\frac{\mathfrak{I}\left\{\sum_{j}s_{j}z_{j}^{n}\bar{z}^{n} \right\}}{n|z|^{2n}}, \tag{53}\] \[=\phi_{0}+\sum_{j}s_{j}\mathfrak{I}\left\{\ln(z/a)\right\}+\sum _{n=1}^{\infty}\frac{(-1)^{n}\mathfrak{I}\left\{\sum_{j}s_{j}z_{j}^{n}\partial _{z}^{n}\ln z\right\}}{n!}, \tag{54}\]
Provided the total topological charge is zero the winding term proportional to \(\ln w\) vanishes and \(\phi_{0}\) is the far-field alignment. The distortions are given as a series of harmonics in which the coefficient of the \(n^{\text{th}}\) harmonic is determined by a sum of \(z_{j}^{n}\) weighted by the defect charges.
We would like to have a basis of representative defect configurations for each harmonic distortion. However, it can be seen from (54) that the correspondence between arrangements of topological defects and the leading order nematic multipole is not one-to-one. Two defect-based representations of harmonic will prove particularly useful to us. The first, which we develop in this chapter, provides a representation in terms of half-integer defects on the disc and allows an intuition for the response to multipole distortions in active nematics through known results for such defects [15; 16]. The second uses the method of images to construct defect arrangements corresponding to a specific anchoring condition on the disc, with the same multipoles dominating the nematic distortion in the far field. This representation naturally lends itself to the control of induced multipoles through colloidal geometry and is explored fully in [62]. Nonetheless, both of these representations will be of use to us in the remainder of this chapter and as they are equally valid near-field representations for the asymptotic distortions that we are considering we will pass fairly freely between them.
With this aforementioned half-integer representation in mind, let us consider sets of \(2m\) defects sitting on the unit circle, with \(-1/2\) defects at the \(m^{\text{th}}\) roots of unity and \(+1/2\) defects at the intermediate points. A useful formula here is the following for the sum of a given power of these roots of unity, after first rotating them all by a given angle \(\theta\)
\[\sum_{k=0}^{m-1}\left(e^{i\theta}e^{i\frac{2\pi}{\text{s}}k}\right)^{n}= \begin{cases}me^{in\theta},&\text{if }m|n\\ 0,&\text{otherwise}\end{cases}. \tag{55}\]
The vanishing of this sum for values of \(n\) that are not multiples of \(m\) comes directly from the expression for the geometric sum and is a consequence of the cyclic group structure of the roots of unity. It means that the lowest order multipole distortion induced by such an arrangement of defects is order \(m\) and so allows a desired multipole distortion to be selected as the dominant far-field contribution. Explicitly, the director angle is given by
\[\phi=\phi_{0}+\sum_{k\text{ odd}}\frac{\mathfrak{I}\left\{\bar{z}^{mk}\right\}}{k |z|^{2mk}}=\phi_{0}+\frac{\mathfrak{I}\left\{\bar{z}^{m}\right\}}{|z|^{2m}}+O \left(\frac{1}{z^{3m}}\right), \tag{56}\]
with the approximation becoming rapidly better for higher-order multipoles due to the condition that \(n\) must be an odd multiple of the number of defects. Rotating the entire set of defects rigidly by an angle \(-\pi/(2m)\) generates the conjugate multipole as the dominant far-field contribution
\[\phi=\phi_{0}+\sum_{k\text{ odd}}\frac{\mathfrak{I}\left\{(-i)^{k}\bar{z}^{ mk}\right\}}{k|z|^{2mk}}=\phi_{0}-\frac{\mathfrak{R}\left\{\bar{z}^{m}\right\}}{|z|^{ 2m}}+O\left(\frac{1}{z^{3m}}\right), \tag{57}\]
with the natural interpolation between these two harmonics as the defect configuration is rigidly rotated.
Hence we can interchange between a given harmonic distortion and a defect arrangement which has this harmonic as its dominant far-field contribution, with the correspondence becoming rapidly more accurate for higher orders, allowing us to relate the existing results for the behaviour of active defects [15; 16] to ours and vice versa. This correspondence is illustrated in Fig. 7. The locations of \(+1/2\) and \(-1/2\) defects are indicated with red and cyan dots respectively and the background colouring denotes the phase of the complex function \(\sum s_{j}\ln(z-z_{j})\), whose imaginary part provides the director angle for the given defect arrangement. The integral curves of this director field are shown in black and are remarkably well matched by those of the leading multipole, shown in white, despite the asymptotic nature of the approximation. In this context we are able to make precise the notion of a core region of a singular distortion, outside of which our multipole approach applies. The series in (54) is attained through a Taylor series of terms of the form \(\ln(1-1/z)\), which are convergent for \(|z|>1\). More generally the greatest radial displacement of a defect defines a core radius, outside of which the multipole series converges onto the exact director angle.
### Flows from multipole distortions
We can proceed analogously to our three-dimensional calculation in generating the active flows from a fundamental response in two dimensions, provided we are mindful of the logarithmic form that the monopole now has. A director rotation by \(\theta_{0}\) inside a disc of radius \(a\) results in an equilibrium texture given by
\[\mathbf{n}=\cos\left(\frac{\theta_{0}\log(r/R)}{\log(a/R)}\right)\mathbf{e}_{y }+\sin\left(\frac{\theta_{0}\log(r/R)}{\log(a/R)}\right)\mathbf{e}_{x}, \tag{58}\]
which in the far field tends to a monopole distortion \(\mathbf{n}\approx\mathbf{e}_{y}+\frac{\theta_{0}\log(r/R)}{\log(a/R)}\mathbf{ e}_{x}\). Due to the logarithmic divergence of the fundamental harmonic in two dimensions it is necessary to normalise through a large length \(R\) such that a uniformly aligned far-field director is recovered.
Following our three-dimensional analysis we solve Stokes' equations to linear order in nematic deformations for a monopole distortion. We write Stokes' equations in terms of complex derivatives as
\[2\partial_{\bar{z}}(-p+i\mu\omega)=f, \tag{59}\]
where we have used that \(2\partial_{z}u=\nabla\cdot\mathbf{u}+i\omega\), with \(\omega\) the vorticity. Hence we seek \(f\) as a \(\bar{z}\)-derivative, implicitly performing a Helmholtz derivative with the real and imaginary parts of the differentiated term corresponding to the scalar and vector potentials respectively. Expressing the active force in this way we have
\[2\partial_{\bar{z}}(-p+i\mu\omega)=\frac{\zeta\theta_{0}}{\log(a/R)}\partial_{ \bar{z}}\left(\frac{i\bar{z}}{z}\right) \tag{60}\]
and so
\[-p+i\mu\omega=\frac{\zeta\theta_{0}}{2\log(a/R)}\frac{i\bar{z}}{z}. \tag{61}\]
Reading off the pressure and vorticity, solving for the flow and converting back to Cartesians the fundamental flow response is now found to be
\[\tilde{\mathbf{u}}=\frac{\zeta\theta_{0}}{8\mu\log(a/R)}\bigg{[} \frac{x^{2}-y^{2}}{r^{2}}(-y\mathbf{e}_{x}+x\mathbf{e}_{y})+2\log\left(\frac{r }{R}\right)(y\mathbf{e}_{x}+x\mathbf{e}_{y})\bigg{]}, \tag{62}\] \[\tilde{p}=-\frac{\zeta\theta_{0}}{\log(a/R)}\frac{xy}{r^{2}}. \tag{63}\]
There is a clear similarity between these solutions and their three-dimensional counterparts, but while the fundamental flow response is still extensional it now grows linearly with distance from the distortion, with this change in scaling inherited by the subsequent harmonics.
Figure 7: Representative defect configurations for nematic multipoles in two dimensions. The red and cyan dots indicate the locations of \(+1/2\) and \(-1/2\) defects respectively. The black curves are the integral curves of the corresponding director field and the background colour shows the phase of the complex function whose imaginary part gives the exact director angle, as in (52). The white lines are the integral curves of the dominant multipole, that is the leading term of (54). The multipole series converges onto the exact director angle outside a core region, shown as a white disc, and the leading multipole provides a remarkably good approximation in this region.
As in the three-dimensional case we can gain general insight into the active response of a nematic by considering the net contribution of the active stresses to the force and torque when integrated over a large circle of radius \(r\)
\[\int\zeta{\bf nn}\cdot{\bf e}_{r}{\rm d}r\approx\int\zeta\left\{ \frac{y\delta n_{x}}{r}{\bf e}_{x}+\frac{x\delta n_{x}}{r}{\bf e}_{y}\right\}{ \rm d}r, \tag{64}\] \[\int{\bf x}\times\zeta{\bf nn}\cdot{\bf e}_{r}{\rm d}r\approx \int\zeta\frac{(y^{2}-x^{2})\delta n_{x}}{r}{\rm d}r. \tag{65}\]
We see that in two dimensions both dipoles will self-propel if free to move and there is a single chiral quadrupole which produces rotations.
The far-field flow solutions for distortions up to dipole order are illustrated in Fig. 8, superposed over the nematic director. Both dipoles are now motile and as in the three-dimensional case they set up flows reminiscent of the Stokeslet. Vertical and horizontal self-propulsive modes may be viewed as resulting from normal and tangential anchoring respectively of the nematic on a disc. Interpolating between these orthogonal modes the angle of motility changes commensurately with the anchoring angle, such that sufficient control of the boundary conditions would allow for self-propulsion at an arbitrary angle with respect to the far-field alignment. This change in the dipole character can be represented by rigidly rotating the defect pair around the unit circle and the resulting motility is as would be expected from the position and orientation of the \(+1/2\) defect [72; 16; 73]. Determining the motility induced by these dipolar modes is complicated by the Stokes paradox and although this can be circumvented by various means we do not pursue this here. If such dipolar colloids were fixed within the material they would pump the ambient fluid and so it should be possible to use them to produce the concentration, filtering and corralling effects observed previously by funneling motile bacteria [74].
In line with our discussion at the beginning of this section, the basis quadrupoles are given by the real and imaginary parts of \(\partial_{z}^{2}\), these being an achiral and chiral mode respectively, which are shown along with their flows in Fig. 8. The flow generated by the achiral quadrupole in Fig. 8(d) is purely radial and resembles the stresslet flow, unsurprising as it results from differentiating the vertical dipole in the same way as the stresslet is related to the Stokeslet. It is produced by a quadrupole distortion which may be associated with normal anchoring on the disc - its counterpart with tangential anchoring has all the charges in its representative defect configuration inverted and a reversed flow response. Just as for the dipole distortions, the character of the quadrupole can be smoothly varied through adapting the boundary condition and the topological defects which represent the harmonic rotate rigidly in step with the changing anchoring angle. A generic anchoring angle will produce a net active torque, maximised for an angle of \(\pi/4\) as illustrated for the chiral quadrupole shown in Fig. 8(e). For extensile activity this distortion generates clockwise rotation, as can easily be justified via our representation of the far-field director structure as arising from a square arrangement of two \(+1/2\) and two \(-1/2\) defects - the dual mode with the defect charges interchanged rotates anticlockwise. By choosing boundary conditions such that the defects are positioned closer to the mid-line of the colloid the strength of the active torque can be tuned.
## VII Chiral active stresses
Chirality is a ubiquitous trait, in living systems and liquid crystals alike. In active matter it opens a wealth of new phenomena, including odd viscous [75] and elastic responses [76; 77], surface waves, rotating crystals [78] and non-reciprocal interactions [79]. Chiral active stresses induce vortex arrays in active cholesterics [12] and have also been shown to be important in nematic cell monolayers where they modify collective motion, the motility of topological defects and generate edge currents [80; 81]. We now consider the effects of such chiral active stresses on nematic multipoles, both in two and three dimensions.
### Two dimensions
For chiral stresses in two dimensions, the active stress tensor has the form \(\mathbf{\sigma}^{\rm c}=\chi J({\bf nn}-{\bf n}_{\perp}{\bf n}_{\perp})/2\), where \(J\) is the complex structure defined by \(J{\bf n}={\bf n}_{\perp}\) and \(J{\bf n}_{\perp}=-{\bf n}\). The chiral active force is
\[\nabla\cdot\mathbf{\sigma}^{\rm c}=\chi J\Big{(}\nabla\cdot({\bf nn})\Big{)}, \tag{66}\]
and is simply a \(\pi/2\) rotation of the achiral active force. Accordingly we can modify (61) to give
\[-p+i\mu\omega=-\frac{\zeta\theta_{0}}{2\log(a/R)}\frac{\bar{z}}{z}, \tag{67}\]
and solve as before to find
\[\tilde{\mathbf{u}}=\frac{\chi\theta_{0}}{8\mu\log(a/R)}\bigg{[}\frac{2xy}{r^{2}}(- y\mathbf{e}_{x}+x\mathbf{e}_{y})+2\log\left(\frac{r}{R}\right)(-x\mathbf{e}_{x}+y \mathbf{e}_{y})\bigg{]}, \tag{68}\]
\[\tilde{p}=\frac{\chi\theta_{0}}{\log(a/R)}\frac{x^{2}-y^{2}}{2r^{2}}. \tag{69}\]
Another way to understand the relation between achiral and chiral stresses is that, since the monopole active force field is spin-2, the \(\pi/2\) local rotation of the active force results in a global rotation by \(\pi/4\) of the force field and hence the fundamental flow responses. The action of this global rotation, denoted \(R_{\pi/4}\), may be seen by comparing the monopole flow responses for achiral and chiral stresses, shown in Fig. 8(a) and Fig. 9(a) respectively. For distortions of order \(n\) there are two basis flows, \(u_{r}\) and \(u_{i}\), corresponding to the real and imaginary parts of \(\partial_{z}^{n}\) respectively. The rotation of the monopole response has the consequence that for achiral and chiral active stresses these flows are related by
\[u_{r}^{c} =R_{\pi/4}\left[\cos\left(\frac{n\pi}{4}\right)u_{r}^{a}-\sin \left(\frac{n\pi}{4}\right)u_{i}^{a}\right], \tag{70}\] \[u_{i}^{c} =R_{\pi/4}\left[\sin\left(\frac{n\pi}{4}\right)u_{r}^{a}+\cos \left(\frac{n\pi}{4}\right)u_{i}^{a}\right], \tag{71}\]
Figure 8: Distortions up to quadrupole order in two-dimensional active nematics. The active flow in white is superposed on the pressure field, with the integral curves of the director shown in black. (a) The fundamental monopole response is extensional and grows linearly with distance from the distortion. (b) and (c) show the flows induced by dipole distortions, labelled by the appropriate derivative of the nematic monopole, with the green arrows indicating the direction of self-propulsion that would result from net active forces in extensile systems. The vertical and horizontal dipoles are the far-field director responses to normal and tangential anchoring respectively and may also be interpreted as arising from a pair of \(+1/2\) (cyan) and \(-1/2\) (red) defects. The self-propulsion matches that expected for the \(+1/2\) defect.
where the superscripts denote the nature of the stresses as achiral or chiral. Hence flow solutions for chiral and achiral stresses are related by a clockwise rotation by \(n\pi/4\) in the space of solutions followed by a rigid spatial rotation anticlockwise by \(\pi/4\), as can be seen in Fig. 9. At dipole order the chiral flow fields are rotated superpositions of the achiral ones, with the overall effect of chirality being to rotate the self-propulsion direction anticlockwise by \(\pi/2\), interchanging the roles of horizontal and vertical propulsion. For a generic mixture of achiral and chiral stresses the direction of self-propulsion is rotated from the achiral case by an angle \(\arctan(\chi/\zeta)\), mirroring the effect such stresses have on the flow profile of a \(+1/2\) defect [80]. For the quadrupole distortions we have \(u_{i}^{c}=R_{\pi/4}u_{r}^{a}\) and \(u_{r}^{c}=R_{\pi/4}(-u_{i}^{a})=u_{i}^{a}\), again swapping which distortion produces a chiral or achiral flow response. It is worth emphasising that the sign of the macroscopic rotation is not necessarily the same as the sign of the chiral stresses, rather it is the product of the signs of the activity and the distortion, just as for achiral stresses.
Figure 9: Distortions up to quadrupole order in two-dimensional active nematics with purely chiral stresses. The active flow in white is superposed on the pressure field, with the integral curves of the director shown in black. (a) The fundamental monopole response is extensional and grows linearly with distance from the distortion. (b) and (c) show the flows induced by dipole distortions, labelled by the appropriate derivative of the nematic monopole, with the green arrows indicating the direction of self-propulsion that would result from net active forces in extensile systems.
### Three dimensions
In three dimensions the chiral active force is \(\chi\nabla\times[\nabla\cdot(\mathbf{nn})]\)[12] and so, by linearity, the fundamental flow responses are given by the curl of those derived earlier, namely
\[\mathbf{u}^{(x)} =\frac{a\chi}{2\mu r^{3}}\left[-\mathbf{e}_{x}xy+\mathbf{e}_{y}(x^{2}-z^{2})+\mathbf{e}_{z}yz\right], \tag{72}\] \[\mathbf{u}^{(y)} =\frac{a\chi}{2\mu r^{3}}\left[-\mathbf{e}_{x}(y^{2}-z^{2})+\mathbf{e}_{y}xy-\mathbf{e}_{z}xz\right], \tag{73}\]
for monopole distortions in the \(x\)- and \(y\)-components respectively. Just as for achiral active stresses, we can combine these into a single complex fundamental flow response as \(\mathbf{u}^{(x)}-i\mathbf{u}^{(y)}\), giving
\[\tilde{u}=\frac{i}{r^{3}}\left[-\bar{w}^{2}\mathbf{e}_{w}+(w\bar{w}-2z^{2}) \mathbf{e}_{\bar{w}}+2\bar{w}z\mathbf{e}_{z}\right]. \tag{74}\]
Since the active chiral force is a pure curl the corresponding pressure is constant.
Owing to the additional derivative the functional behaviour of the flow responses is shifted up one order of distortion compared to achiral stresses, meaning dipole distortions induce rotations, although it should be noted that monopoles do not produce propulsive flows. The monopole flow responses are still spin-1, but since the flow response for a monopole distortion in \(n_{x}\) for achiral stresses is primarily in the \(x-z\) plane, the action of curl produces a flow that is dominantly in the \(y\)-direction and similarly the response to a monopole distortion in \(n_{y}\) is mainly along \(\mathbf{e}_{x}\). Together these ingredients mean that heuristically the flow response of a given distortion with chiral active stresses will resemble the achiral active stress flow response of the conjugate distortion at one higher order and with the same spin, that is the distortion reached by the action of \(i\partial_{z}\). This is illustrated in Fig. 10 for the spin-0 dipoles. The UPenn dipole induces rotation about \(\mathbf{e}_{z}\) while the chiral dipole produces a purely radial flow, resembling the achiral flow responses of the chiral quadrupole and Saturn's ring quadrupole respectively.
The phenomenological response can again be captured through integration of the stress tensor over a large sphere of radius \(r\), just as was done for achiral active stresses. To enable us to reduce the active torque to a single boundary integral we use the symmetric form of the chiral active stress tensor [12], \(\sigma_{ij}^{c}=\left[\nabla\times(\mathbf{nn})\right]_{ij}+\left[\nabla \times(\mathbf{nn})\right]_{ji}\), such that
Figure 10: The active flows induced by spin 0 dipole distortions with chiral active stresses. The flow is superposed upon the integral curves of the director, shown in orange, for the UPenn dipole (left) and chiral dipole (right).
to linear order in director distortions we have
\[\mathbf{f}^{a}=\int\chi\sigma^{\mathbf{c}}\cdot\mathrm{d}\mathbf{A}\approx 0, \tag{75}\]
\[\mathbf{\tau}^{a} =\int\mathbf{x}\times\chi\sigma^{\mathbf{c}}\cdot\mathrm{d} \mathbf{A}\approx\int\chi\bigg{\{}\mathbf{e}_{x}\bigg{[}\frac{xz\,\partial_{x} \delta n_{x}-2yz\partial_{y}\delta n_{x}+(y^{2}-z^{2})\partial_{z}\delta n_{x}} {r}\bigg{]} \tag{76}\] \[+\mathbf{e}_{y}\bigg{[}\frac{yz\partial_{y}\delta n_{y}-2xz \partial_{x}\delta n_{y}+(x^{2}-z^{2})\partial_{z}\delta n_{y}}{r}\bigg{]}\] \[+\mathbf{e}_{z}\frac{-2xy(\partial_{y}\delta n_{x}+\partial_{x} \delta n_{y})-(x^{2}-y^{2})(\partial_{x}\delta n_{x}-\partial_{y}\delta n_{y} )+z(x\partial_{z}\delta n_{x}+y\partial_{z}\delta n_{y})}{r}\bigg{\}}{\mathrm{ d}}A.\]
From the first of these equations we see that, to linear order, there are no harmonic distortions which produce net forces in a nematic with chiral active stresses. With regard to the net active torques, the \(x-\) and \(y-\) components involve only \(\delta n_{x}\) and \(\delta n_{y}\) respectively and each term yields a non-zero integral only for \(\delta n_{i}\sim\partial_{z}1/r\), hence the two spin-1 dipoles produce transverse torques. Turning to the \(z\)-component, each term gives a non-zero integral only for \(\delta n_{i}\sim\partial_{i}1/r\), and as the expression is symmetric under interchange of \(x\) and \(y\) we see that only the UPenn dipole produces torques around \(\mathbf{e}_{z}\). In other words, a dipolar director distortion which produces a net active force along a given direction in an achiral active nematic produces a net torque around the same direction in a chiral active nematic. These results of course accord with our earlier statements regarding the spins of distortions which are capable of producing torques about given axes. Performing the integrals we find that in each case the net active torque has magnitude \(-12\pi\chi\alpha a^{2}/5\). Balancing this against Stokes drag gives, using the UPenn dipole as an example, an angular velocity
\[\mathbf{\Omega}=\frac{3\chi\alpha}{10\mu a}\mathbf{e}_{z}. \tag{77}\]
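The prefactor follows from balancing the net active torque against the rotational Stokes drag on a sphere of radius \(a\), \(|\boldsymbol{\tau}|=8\pi\mu a^{3}|\boldsymbol{\Omega}|\), so that \(|\boldsymbol{\Omega}|=(12\pi\chi\alpha a^{2}/5)/(8\pi\mu a^{3})=3\chi\alpha/(10\mu a)\).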
While the angular velocity in achiral active nematics is independent of the distortion size, in chiral active nematics it is inversely proportional to the radius, a direct consequence of the additional derivative in the active stress tensor. Accordingly, in chiral active nematics the rotational velocity is largest for smaller colloids.
## VIII Discussion
We have introduced active nematic multipoles as a novel framework for understanding the dynamics of active nematics. Although only formally valid on mesoscopic lengthscales, this approach produces results for the propulsive dynamics of defect loops that agree with those of a local analysis [23; 24]. It also provides various testable predictions, for example for the axis of self-propulsion or rotation induced by a distortion or how the corresponding velocities would scale with the size of a colloid.
More broadly, our results reveal self-propulsion and rotation as generic non-equilibrium responses that naturally arise due to colloidal inclusions in active nematics but also provide a template for the tailored design of particular dynamics. This provides insight into the issue of harnessing the energy of active systems to perform useful work, something which has been demonstrated in bacterial suspensions [82; 71] and is now receiving greater attention in the nematic context [83; 36; 37; 84]. Specific anchoring conditions on colloids have been investigated as a means of generating directed motion [36]. Our results suggest that sufficient control of the anchoring conditions would allow for steerable and targeted colloidal delivery [85], although there may be routes to a similar degree of dynamical control through colloidal geometry alone [62].
The transformative power of colloids in passive nematics was revealed in their collective behaviour, forming crystalline structures [86; 87; 88; 89; 28] which can serve as photonic metamaterials [90]. While our predictions for the dynamics of individual colloids have utility in their own right, there is again considerable interest in the collective dynamics which might emerge [91]. Although our results are insufficient to fully address these questions, some basic points can nonetheless be extracted from the flow solutions. The long-range nature of the active flows suggests that the hydrodynamic interactions will be dominant over elastic ones. The leading contribution to the pair-wise hydrodynamic interactions will be the advection of each colloid by the flow field generated by the other, and the even inversion symmetry of dipole flows implies that this provides a mechanism for pair-wise propulsion, even for colloids which are not self-propulsive themselves.
To conclude, it has been long-established that the distinct symmetries of \(\pm 1/2\) nematic defects can be directly related to the qualitatively different dynamics they display in active systems [15; 16]. The aim of this paper is to bring the insights of this symmetry-based approach to generic nematic distortions.
###### Acknowledgements.
This work was supported by the UK EPSRC through Grant No. EP/N509796/1.
|
2308.16497 | Moore-Penrose Dagger Categories | The notion of a Moore-Penrose inverse (M-P inverse) was introduced by Moore
in 1920 and rediscovered by Penrose in 1955. The M-P inverse of a complex
matrix is a special type of inverse which is unique, always exists, and can be
computed using singular value decomposition. In a series of papers in the
1980s, Puystjens and Robinson studied M-P inverses more abstractly in the
context of dagger categories. Despite the fact that dagger categories are now a
fundamental notion in categorical quantum mechanics, the notion of a M-P
inverse has not (to our knowledge) been revisited since their work. One purpose
of this paper is, thus, to renew the study of M-P inverses in dagger
categories.
Here we introduce the notion of a Moore-Penrose dagger category and provide
many examples including complex matrices, finite Hilbert spaces, dagger
groupoids, and inverse categories. We also introduce generalized versions of
singular value decomposition, compact singular value decomposition, and polar
decomposition for maps in a dagger category, and show how, having such a
decomposition is equivalent to having M-P inverses. This allows us to provide
precise characterizations of which maps have M-P inverses in a dagger
idempotent complete category, a dagger kernel category with dagger biproducts
(and negatives), and a dagger category with unique square roots. | Robin Cockett, Jean-Simon Pacaud Lemay | 2023-08-31T07:00:02Z | http://arxiv.org/abs/2308.16497v1 | # Moore-Penrose Dagger Categories
###### Abstract
The notion of a Moore-Penrose inverse (M-P inverse) was introduced by Moore in 1920 and rediscovered by Penrose in 1955. The M-P inverse of a complex matrix is a special type of inverse which is unique, always exists, and can be computed using singular value decomposition. In a series of papers in the 1980s, Puystjens and Robinson studied M-P inverses more abstractly in the context of dagger categories. Despite the fact that dagger categories are now a fundamental notion in categorical quantum mechanics, the notion of a M-P inverse has not (to our knowledge) been revisited since their work. One purpose of this paper is, thus, to renew the study of M-P inverses in dagger categories.
Here we introduce the notion of a Moore-Penrose dagger category and provide many examples including complex matrices, finite Hilbert spaces, dagger groupoids, and inverse categories. We also introduce generalized versions of singular value decomposition, compact singular value decomposition, and polar decomposition for maps in a dagger category, and show how, having such a decomposition is equivalent to having M-P inverses. This allows us to provide precise characterizations of which maps have M-P inverses in a dagger idempotent complete category, a dagger kernel category with dagger biproducts (and negatives), and a dagger category with unique square roots.
## 1 Introduction
The Moore-Penrose inverse of an \(n\times m\) complex matrix \(A\) is an \(m\times n\) complex matrix \(A^{\circ}\) such that: \(AA^{\circ}A=A\), \(A^{\circ}AA^{\circ}=A^{\circ}\), \((AA^{\circ})^{\dagger}=AA^{\circ}\), and \((A^{\circ}A)^{\dagger}=A^{\circ}A\), where \(\dagger\) is the conjugate transpose operator. For any complex matrix, its Moore-Penrose inverse exists, is unique, and can be computed using singular value decomposition - see Example 2.9. The Moore-Penrose inverse is named after E. H. Moore and R. Penrose. Moore first described the notion in 1920 in terms of orthogonal projectors [19]. Without knowing about Moore's work, in 1955 Penrose described the notion using the identities above [21]. Curious readers can learn more about the fascinating history of the Moore-Penrose inverse and its origin in [2, 5]. Many useful - and quite recent - applications of the Moore-Penrose inverse in mathematics, physics, and computer science are described by Baksalary and Trenkler in [2].
The Moore-Penrose inverse can be generalized to other contexts besides complex matrices. For example, one may consider the Moore-Penrose inverse of a matrix over an involutive ring. While the Moore-Penrose inverse may not always exist, for certain involutive rings it is possible to precisely characterize which matrices have Moore-Penrose inverses. One can also consider Moore-Penrose inverses in involutive semigroups, and in particular in \(C^{*}\)-algebras. It is also possible to define the notion of Moore-Penrose inverses for bounded linear operators between Hilbert spaces, and to characterize precisely which have a Moore-Penrose inverse. Following in this direction, one can in fact define the notion of a Moore-Penrose inverse for maps in _dagger categories_.
Selinger in [28] introduced the term "dagger category", based on the use in physics of the symbol \(\dagger\) for conjugate transpose. Dagger categories are simply categories equipped with an involution on maps
(Def 2.1). In a dagger category, a Moore-Penrose inverse of a map \(f:A\to B\) is a map in the reverse direction \(f^{\circ}:B\to A\) satisfying the equations above (Def 2.3). The existence and computations of Moore-Penrose inverses for maps in general dagger categories were studied by Puystjens and Robinson in a series of papers in the 1980s [22, 23, 24, 25, 26]. Since Puystjens and Robinson's work, there does not appear to have been any further development of Moore-Penrose inverses in dagger categories. This, despite the fact that the theory of dagger categories itself has undergone significant development. Indeed, in the last decade, dagger categories have become a fundamental component of categorical quantum mechanics (see Heunen and Vicary's introductory level book on the subject [16]). Therefore, it makes perfect sense to revisit Moore-Penrose inverses in the context of dagger categories.
The main objective of this paper is to revisit and renew the study of Moore-Penrose inverses in dagger categories, in the hope that this will lead to new applications in categorical quantum mechanics and elsewhere. We shall apply techniques which have been developed since Puystjens and Robinson's work, such as dagger idempotent splitting and dagger kernels, to Moore-Penrose inverses. We also introduce and study the natural concept of a **Moore-Penrose dagger category**, which is a dagger category where every map has a Moore-Penrose inverse. We provide many examples of Moore-Penrose dagger categories including well-known ones, such as the category of complex matrices or finite-dimensional Hilbert spaces, and also various new ones, such as dagger groupoids and inverse categories.
As was mentioned above, singular value decomposition can be used to compute Moore-Penrose inverses of complex matrices. In Section 4, we introduce a generalized version of singular value decomposition for maps in a dagger category with dagger biproducts. Then, by using dagger kernels, we show how having a generalized singular value decomposition is equivalent to having a Moore-Penrose inverse (Thm 4.9). Another way to compute the Moore-Penrose inverse is by using _compact_ singular value decomposition: this is often easier to compute than full singular value decomposition. In Section 3, we introduce a generalized version of compact singular value decomposition for maps in any dagger category and then prove that having a generalized compact singular value decomposition is equivalent to having a Moore-Penrose inverse when dagger idempotents split (Prop 3.9). Therefore, we obtain a precise characterization of maps that have a Moore-Penrose inverse in any dagger idempotent complete category (Thm 3.10). Lastly in Section 5, we give a novel application of Moore-Penrose inverses by introducing the notion of a Moore-Penrose polar decomposition, which captures precisely polar decomposition for complex matrices.
**Acknowledgements:** The authors would like to thank Chris Heunen for useful discussions and support of this project, as well as thank Ben MacAdam and Cole Comfort for initial discussions on Moore-Penrose inverses and possible relations to restriction categories. The authors would also like to thank Masahito Hasegawa and RIMS at Kyoto University for helping fund research visits so that the authors could work together on this project.
## 2 Moore-Penrose Inverses
In this section, we discuss Moore-Penrose inverses and some basic properties thereof. In addition, Moore-Penrose dagger categories are introduced and various examples are provided. To set up notation and terminology, we begin by quickly reviewing the basics of dagger categories. For a more in-depth introduction to dagger categories, we refer the reader to [16]. For an arbitrary category \(\mathbb{X}\), we denote objects by capital letters \(A,B,X,Y\), etc. and maps by lowercase letters \(f,g,h\), etc. Identity maps are denoted as \(1_{A}:A\to A\). Composition is written in _diagrammatic order_, that is, the composition of a map \(f:A\to B\) followed by \(g:B\to C\) is denoted \(fg:A\to C\).
**Definition 2.1**: _[_15_, Def 2.32]_ _A **dagger** on a category \(\mathbb{X}\) is a contravariant functor \((\_)^{\dagger}:\mathbb{X}\to\mathbb{X}\) which is the identity on objects and involutive. We refer to \(f^{\dagger}\) as the **adjoint** of \(f\). A **dagger category** is a pair \((\mathbb{X},\dagger)\) consisting of a category \(\mathbb{X}\) equipped with a dagger \(\dagger\)._
Concretely, a dagger category can be described as a category \(\mathbb{X}\) where for each map \(f:A\to B\), there is a chosen map of dual type \(f^{\dagger}:B\to A\) such that \(1_{A}^{\dagger}=1_{A}\), \((fg)^{\dagger}=g^{\dagger}f^{\dagger}\), and \((f^{\dagger})^{\dagger}=f\). Thus, \((\_)^{\dagger}\) is a contravariant functor which is, furthermore, an involution - so the adjoint of the adjoint of \(f\) is \(f\) itself. It is important to note that a category \(\mathbb{X}\) can have multiple different daggers. This means that a dagger on a category is structure which must be chosen. Examples of dagger categories can be found below. Here are some special maps in a dagger category:
**Definition 2.2**: _[_15_, Def 2.34]_ _In a dagger category \((\mathbb{X},\dagger)\):_
1. \(A\) _map_ \(s:A\to B\) _is an_ **isometry** _if_ \(ss^{\dagger}=1_{A}\)_;_
2. \(A\) _map_ \(r:A\to B\) _is a_ **coisometry** _if_ \(r^{\dagger}r=1_{B}\)_;_
3. \(A\) _map_ \(u:A\to B\) _is_ \(a\) **unitary isomorphism** _if_ \(uu^{\dagger}=1_{A}\) _and_ \(u^{\dagger}u=1_{B}\)_;_
4. \(A\) _map_ \(q:A\to B\) _is a_ **partial isometry** _if_ \(qq^{\dagger}q=q\)_;_
5. \(A\) _map_ \(h:A\to A\) _is_ **self-adjoint** _(or_ **Hermitian**_) if_ \(h^{\dagger}=h\)_;_
6. \(A\) _map_ \(p:A\to A\) _is_ **positive** _if there exists a map_ \(f:A\to X\) _such that_ \(p=ff^{\dagger}\)_;_
7. \(A\) _map_ \(e:A\to A\) _is a_ \(\dagger\)_-_**idempotent** _if it self-adjoint and idempotent, that is,_ \(e^{\dagger}=e\) _and_ \(ee=e\)_._
This allows us to define the main concept of interest for this paper:
**Definition 2.3**: _In a dagger category \((\mathbb{X},\dagger)\), a **Moore-Penrose inverse** (M-P inverse) of a map \(f:A\to B\) is a map \(f^{\circ}:B\to A\) such that the following equalities hold:_
**[MP.1]** \(ff^{\circ}f=f\)  **[MP.2]** \(f^{\circ}ff^{\circ}=f^{\circ}\)  **[MP.3]** \((ff^{\circ})^{\dagger}=ff^{\circ}\)  **[MP.4]** \((f^{\circ}f)^{\dagger}=f^{\circ}f\)
_If \(f\) has a M-P inverse, we say that \(f\) is **Moore-Penrose invertible** (M-P invertible). A **Moore-Penrose dagger category** is a dagger category such that every map is M-P invertible._
**[MP.1]** and **[MP.2]** say that \(f^{\circ}\) is a "regular" inverse of \(f\), while **[MP.3]** and **[MP.4]** say that \(ff^{\circ}\) and \(f^{\circ}f\) are self-adjoint. This allows us to interpret \(ff^{\circ}\) as the projection of the domain of \(f\), while \(f^{\circ}f\) is the projection of the range of \(f\). Examples of Moore-Penrose dagger categories can be found below. However, before looking at examples, we state some basic results for M-P inverses. Most importantly, M-P inverses (if they exist) are unique:
**Lemma 2.4**: _In a dagger category \((\mathbb{X},\dagger)\), if a map \(f:A\to B\) has a M-P inverse \(f^{\circ}:B\to A\), then \(f^{\circ}\) is the unique map which satisfies **[MP.1]** to **[MP.4].**_
Proof: Suppose that for a map \(f:A\to B\), there exist maps \(f^{\circ}:B\to A\) and \(f^{\bullet}:B\to A\) which are both M-P inverses of \(f\). Then we first compute that:
\[f^{\bullet}f=f^{\bullet}ff^{\circ}f=(f^{\bullet}f)^{\dagger}(f^{\circ}f)^{ \dagger}=f^{\dagger}(f^{\bullet})^{\dagger}f^{\dagger}(f^{\circ})^{\dagger}=( ff^{\bullet}f)^{\dagger}(f^{\circ})^{\dagger}=f^{\dagger}(f^{\circ})^{ \dagger}=(f^{\circ}f)^{\dagger}=f^{\circ}f.\]
So \(f^{\bullet}f=f^{\circ}f\) and, similarly, we can also compute that \(ff^{\bullet}=ff^{\circ}\). This allows the observation that:
\[f^{\bullet}=f^{\bullet}ff^{\bullet}=f^{\circ}ff^{\bullet}=f^{\circ}ff^{\circ}= f^{\circ}\]
So \(f^{\circ}=f^{\bullet}\) and therefore Moore-Penrose inverses are unique. \(\Box\)
An important consequence of the above lemma is that, for a dagger category, being Moore-Penrose is a property rather than a structure. That said, it is important to note that a map can have a M-P inverse with respect to one dagger structure but fail to have one for another; see Example 2.10. Having a M-P inverse has a number of consequences:
**Lemma 2.5**: _In a dagger category \((\mathbb{X},\dagger)\), if \(f\) has a M-P inverse \(f^{\circ}\) then:_
* \(f^{\circ}\) _is also M-P invertible where_ \(f^{\circ\circ}=f\)_;_
* \(f^{\dagger}\) _is also M-P invertible where_ \(f^{\dagger\circ}=f^{\circ\dagger}\)_;_
* \(ff^{\circ}\) _and_ \(f^{\circ}f\) _are_ \(\dagger\)_-idempotents and M-P invertible where_ \((ff^{\circ})^{\circ}=ff^{\circ}\) _and_ \((f^{\circ}f)^{\circ}=f^{\circ}f\)_;_
* \(ff^{\dagger}\) _and_ \(f^{\dagger}f\) _are M-P invertible where_ \((ff^{\dagger})^{\circ}=f^{\dagger\circ}f^{\circ}\) _and_ \((f^{\dagger}f)^{\circ}=f^{\circ}f^{\dagger\circ}\)_;_
* \(ff^{\circ}=f^{\dagger\circ}f^{\dagger}\) _and_ \(f^{\circ}f=f^{\dagger}f^{\dagger\circ}\)_;_
* \(f=ff^{\dagger}f^{\dagger\circ}=f^{\dagger\circ}f^{\dagger}f\)_;_
* \(f^{\circ}=f^{\circ}f^{\dagger\circ}f^{\dagger}=f^{\dagger}f^{\dagger\circ}f^{\circ}\)_;_
* \(f^{\dagger}=f^{\dagger}ff^{\circ}=f^{\circ}ff^{\dagger}\)_;_
* _If_ \(f\) _is self-adjoint, then_ \(f^{\circ}\) _is also self-adjoint (i.e._ \(f^{\circ\dagger}=f^{\circ}\)_) and_ \(f^{\circ}f=ff^{\circ}\)_;_
* _If_ \(f^{\circ}=f^{\dagger}\)_, then_ \(f\) _is a partial isometry._
Proof: These are straightforward to check, so we leave them as an exercise for the reader. \(\Box\)
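For instance, item (ii) can be checked directly: taking \(f^{\circ\dagger}\) as the candidate M-P inverse of \(f^{\dagger}\), applying \(\dagger\) to the axioms for \(f\) gives

\[f^{\dagger}f^{\circ\dagger}f^{\dagger}=(ff^{\circ}f)^{\dagger}=f^{\dagger},\qquad f^{\circ\dagger}f^{\dagger}f^{\circ\dagger}=(f^{\circ}ff^{\circ})^{\dagger}=f^{\circ\dagger},\qquad f^{\dagger}f^{\circ\dagger}=(f^{\circ}f)^{\dagger}=f^{\circ}f,\qquad f^{\circ\dagger}f^{\dagger}=(ff^{\circ})^{\dagger}=ff^{\circ},\]

and the last two composites are self-adjoint by **[MP.4]** and **[MP.3]** respectively. The remaining items are similar exercises.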
It is known that computing M-P inverses of complex matrices can be reduced to computing the M-P inverses of Hermitian positive semi-definite matrices. The same is true in dagger categories:
**Lemma 2.6**: _In a dagger category \((\mathbb{X},\dagger)\), for any map \(f:A\to B\), the following are equivalent:_
* \(f\) _is M-P invertible;_
* \(f^{\dagger}f\) _is M-P invertible and_ \(f(f^{\dagger}f)^{\circ}f^{\dagger}f=f\)_;_
* \(ff^{\dagger}\) _is M-P invertible and_ \(ff^{\dagger}(ff^{\dagger})^{\circ}f=f\)_._

_Therefore \((\mathbb{X},\dagger)\) is Moore-Penrose if and only if every map \(f\) satisfies \((ii)\) or \((iii)\)._
Proof: Lemma 2.5.(iv) and (vii) gives us \((i)\Rightarrow(ii)\) and \((i)\Rightarrow(iii)\). Conversely, if \(f^{\dagger}f\) (resp. \(ff^{\dagger}\)) is M-P invertible, then \((f^{\dagger}f)^{\circ}f^{\dagger}\) (resp. \(f^{\dagger}(ff^{\dagger})^{\circ}\)) will always satisfy **[MP.2], [MP.3]**, and **[MP.4]**. The extra assumption that \(f(f^{\dagger}f)^{\circ}f^{\dagger}f=f\) (resp. \(ff^{\dagger}(ff^{\dagger})^{\circ}f=f\)) is precisely **[MP.1]**. So we have that \(f\) is M-P invertible, giving \((ii)\Rightarrow(i)\) and \((iii)\Rightarrow(i)\). \(\Box\)
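For instance, writing \(h:=f^{\dagger}f\) (which is self-adjoint) and taking \(h^{\circ}f^{\dagger}\) as the candidate inverse of \(f\), one has

\[h^{\circ}f^{\dagger}fh^{\circ}f^{\dagger}=h^{\circ}hh^{\circ}f^{\dagger}=h^{\circ}f^{\dagger},\qquad(h^{\circ}f^{\dagger}f)^{\dagger}=(h^{\circ}h)^{\dagger}=h^{\circ}h,\qquad(fh^{\circ}f^{\dagger})^{\dagger}=fh^{\circ\dagger}f^{\dagger}=fh^{\circ}f^{\dagger},\]

which are **[MP.2]**, **[MP.4]** and **[MP.3]** respectively; the last step uses that \(h^{\circ}\) is self-adjoint by Lemma 2.5.(ix).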
In any dagger category, there are some maps that always have M-P inverses:
**Lemma 2.7**: _In a dagger category \((\mathbb{X},\dagger)\):_
1. _Identity maps_ \(1_{A}\) _are M-P invertible where_ \(1_{A}^{\circ}=1_{A}\)_;_
2. _If_ \(f\) _is an isomorphism, then_ \(f\) _is M-P invertible where_ \(f^{\circ}=f^{-1}\)_;_
3. _If_ \(f\) _is a partial isometry or a (co)isometry or unitary, then_ \(f\) _is M-P invertible where_ \(f^{\circ}=f^{\dagger}\)_;_
4. _If_ \(e\) _is a_ \(\dagger\)_-idempotent, then_ \(e\) _is M-P invertible where_ \(e^{\circ}=e\)_;_
5. _If_ \(p\) _is a positive map such that there exists a M-P invertible map_ \(f\) _such that_ \(p=ff^{\dagger}\)_, then_ \(p\) _is M-P invertible where_ \(p^{\circ}=f^{\circ\dagger}f^{\circ}\)_, and so_ \(p^{\circ}\) _is also positive;_
6. _If_ \(p\) _is a positive map and M-P invertible, then for any map_ \(f\) _such that_ \(p=ff^{\dagger}\) _and_ \(pp^{\circ}f=f\)_,_ \(f\) _is also M-P invertible where_ \(f^{\circ}=f^{\dagger}p^{\circ}\)_._
Proof: These are straightforward to check, so we leave them as an exercise for the reader. \(\Box\)
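For instance, for item (iii): take \(q^{\circ}=q^{\dagger}\). Then **[MP.1]** is exactly the partial isometry equation \(qq^{\dagger}q=q\) (which isometries, coisometries and unitaries all satisfy), applying \(\dagger\) to it gives **[MP.2]**, \(q^{\dagger}qq^{\dagger}=q^{\dagger}\), and **[MP.3]** and **[MP.4]** hold automatically since \(qq^{\dagger}\) and \(q^{\dagger}q\) are always self-adjoint.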
It is important to note that, in general, Moore-Penrose inverses are not compatible with composition. Indeed, even if \(f\) and \(g\) have M-P inverses, \(fg\) might not have a M-P inverse and, even if it does, \((fg)^{\circ}\) is not necessarily equal to \(g^{\circ}f^{\circ}\). Here are some conditions for when \((fg)^{\circ}=g^{\circ}f^{\circ}\) holds:
**Lemma 2.8**: _In a dagger category \((\mathbb{X},\dagger)\), if \(f:A\to B\) and \(g:B\to C\) are M-P invertible then:_
1. \(fg\) _is M-P invertible with_ \((fg)^{\circ}=g^{\circ}f^{\circ}\) _if and only if_ \(f^{\circ}fgg^{\circ}\) _and_ \(gg^{\circ}f^{\circ}f\) _are idempotent, and both_ \(fgg^{\circ}f^{\circ}=f^{\circ\dagger}gg^{\circ}f^{\dagger}\) _and_ \(g^{\circ}f^{\circ}fg=g^{\dagger}f^{\circ}fg^{\circ\dagger}\)_;_
2. _The following conditions_\({}^{1}\) _are equivalent and imply_ \((fg)^{\circ}=g^{\circ}f^{\circ}\)_:_
   1. \(gg^{\circ}f^{\circ}\)_,_ \(fgg^{\circ}f^{\circ}\) _and_ \(g^{\circ}f^{\circ}fg\) _are self-dual;_
   2. \(gg^{\dagger}f^{\circ}f\) _and_ \(f^{\dagger}fgg^{\circ}\) _are self-dual;_
   3. \(f^{\circ}fgg^{\dagger}f^{\dagger}=gg^{\dagger}f^{\dagger}\) _and_ \(gg^{\circ}f^{\dagger}fg=f^{\dagger}fg\)_._

Footnote 1: For complex matrices, the conditions of _(ii)_ are equivalent to \((fg)^{\circ}=g^{\circ}f^{\circ}\) [5, Sec 1.4 & 1.5]. However, for general dagger categories it appears that the conditions in _(ii)_ are sufficient – but not necessary – to obtain \((fg)^{\circ}=g^{\circ}f^{\circ}\).
Proof: These can be checked by lengthy and brute-force calculations. \(\Box\)
Here are some examples of Moore-Penrose dagger categories, as well as some non-examples but where we can still fully characterize the M-P invertible maps:
**Example 2.9**: _Let \(\mathbb{C}\) be the field of complex numbers and let \(\mathsf{MAT}(\mathbb{C})\) be the category whose objects are natural numbers \(n\in\mathbb{N}\) and where a map \(A:n\to m\) is an \(n\times m\) complex matrix. \((\mathsf{MAT}(\mathbb{C}),\dagger)\) is a dagger category where \(\dagger\) is the conjugate transpose operator, \(A^{\dagger}(i,j)=\overline{A(j,i)}\). Furthermore, \((\mathsf{MAT}(\mathbb{C}),\dagger)\) is also a Moore-Penrose dagger category where the M-P inverse of a matrix can be constructed from its singular value decomposition (SVD). For a \(n\times m\)\(\mathbb{C}\)-matrix \(A\), let \(d_{1},\ldots,d_{k}\) be the non-zero singular values of \(A\), so \(d_{i}\in\mathbb{R}\) with \(d_{i}>0\), and \(k\leq\min(n,m)\). Then there exists a unitary \(n\times n\) matrix \(U\) and a unitary \(m\times m\) matrix \(V\) such that:_
\[A=U\begin{bmatrix}D&0\\ 0&0\end{bmatrix}_{n\times m}V^{\dagger}\qquad\qquad\text{where $D$ is the diagonal $k\times k$ matrix $D=\begin{bmatrix}d_{1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&d_{k}\end{bmatrix}$}\]
_Then the M-P inverse of \(A\) is the \(m\times n\) matrix \(A^{\circ}\) defined as follows:_
\[A^{\circ}=V\begin{bmatrix}D^{-1}&0\\ 0&0\end{bmatrix}_{m\times n}U^{\dagger}\qquad\quad\text{where $D^{-1}$ is the diagonal $k\times k$ matrix $D^{-1}=\begin{bmatrix}\frac{1}{d_{1}}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\frac{1}{d_{k}}\end{bmatrix}$}\]
_Since M-P inverses are unique, the construction does not depend on the choice of SVD._
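As a concrete illustration, here is a minimal NumPy sketch of this construction with an arbitrarily chosen matrix; note that the products below are written in the usual applicative matrix order rather than the diagrammatic order used in the text.

```python
import numpy as np

# Arbitrarily chosen 3x2 complex matrix (illustrative only).
A = np.array([[1 + 1j, 0],
              [2, 1j],
              [0, 3]])

# Singular value decomposition: A = U @ Sigma @ Vh, with U and Vh unitary.
U, s, Vh = np.linalg.svd(A)

# Pseudo-invert the rectangular singular-value block: transpose its shape
# and invert the non-zero singular values.
Sigma_pinv = np.zeros((A.shape[1], A.shape[0]), dtype=complex)
tol = max(A.shape) * np.finfo(float).eps * s.max()
for i, d in enumerate(s):
    if d > tol:
        Sigma_pinv[i, i] = 1 / d

# Moore-Penrose inverse: A_mp = V @ Sigma_pinv @ U^dagger.
A_mp = Vh.conj().T @ Sigma_pinv @ U.conj().T

dag = lambda M: M.conj().T
assert np.allclose(A @ A_mp @ A, A)            # [MP.1]
assert np.allclose(A_mp @ A @ A_mp, A_mp)      # [MP.2]
assert np.allclose(dag(A @ A_mp), A @ A_mp)    # [MP.3]
assert np.allclose(dag(A_mp @ A), A_mp @ A)    # [MP.4]

# Uniqueness (Lemma 2.4): agrees with NumPy's built-in pseudo-inverse.
assert np.allclose(A_mp, np.linalg.pinv(A))
```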
**Example 2.10**: _On the other hand, \(\mathsf{MAT}(\mathbb{C})\) has another dagger given instead simply by the transpose operator, \(A^{\top}(i,j)=A(j,i)\). However, the dagger category \((\mathsf{MAT}(\mathbb{C}),\top)\) is not Moore-Penrose. For example, the matrix \(\begin{bmatrix}i&1\end{bmatrix}\) does not have a M-P inverse with respect to the transpose. If it did, one could obtain the contradiction that \(i=0\), which we leave as an exercise for the reader._
**Example 2.11**: _Recall that an involutive ring is a ring R equipped with a unary operation \(*\), called the involution, such that \((x+y)^{*}=x^{*}+y^{*}\), and \((xy)^{*}=y^{*}x^{*}\), and \(x^{**}=x\). Let \(\mathsf{MAT}(R)\) be the category of matrices over R, that is, the category whose objects are natural numbers \(n\in\mathbb{N}\) and where a map \(A:n\to m\) is an \(n\times m\) matrix \(A\) with coefficients in R. Then \((\mathsf{MAT}(R),\dagger)\) is a dagger category where \(\dagger\) is given by the involution transpose operator, that is, \(A^{\dagger}(i,j)=A(j,i)^{*}\). In general \((\mathsf{MAT}(R),\dagger)\) will not necessarily be Moore-Penrose. However, in certain cases, it is possible to precisely characterize which \(R\)-matrices do have a M-P inverse. For example, if \(R\) is an involutive field, then an R-matrix \(A\) has a M-P inverse if and only if \(\mathsf{rank}(AA^{\dagger})=\mathsf{rank}(A)=\mathsf{rank}(A^{\dagger}A)\)[20, Thm 1]. Necessary and sufficient conditions for when an R-matrix has a M-P inverse have also been described in the case when R is an integral domain [3], a commutative ring [4], or even a semi-simple artinian ring [18]._
**Example 2.12**: _Let \(\mathsf{HILB}\) be the category of (complex) Hilbert spaces and bounded linear operators between them. Then \((\mathsf{HILB},\dagger)\) is a dagger category where the dagger is given by the adjoint, that is, for a bounded linear operator \(f:H_{1}\to H_{2}\), \(f^{\dagger}:H_{2}\to H_{1}\) is the unique bounded linear operator such that \(\langle f(x)|y\rangle=\langle x|f^{\dagger}(y)\rangle\) for all \(x\in H_{1}\) and \(y\in H_{2}\). \((\mathsf{HILB},\dagger)\) is not Moore-Penrose but there is a characterization of the M-P invertible maps: a bounded linear operator is M-P invertible if and only if its range is closed [12, Thm 2.4]. Explicitly, for a bounded linear map \(f:H_{1}\to H_{2}\), let \(\mathsf{Ker}(f)\subseteq H_{1}\) be its kernel and \(\mathsf{im}(f)\subseteq H_{2}\) be its range, and let \(\mathsf{Ker}(f)^{\perp}\) and \(\mathsf{im}(f)^{\perp}\) be their orthogonal complements. If \(\mathsf{im}(f)\) is closed, then we have that \(H_{2}=\mathsf{im}(f)\oplus\mathsf{im}(f)^{\perp}\) and also that \(f|_{\mathsf{Ker}(f)^{\perp}}:\mathsf{Ker}(f)^{\perp}\to\mathsf{im}(f)\) is a bounded linear isomorphism. Then define the M-P inverse \(f^{\circ}:H_{2}\to H_{1}\) as \(f^{\circ}(y)=f^{-1}|_{\mathsf{Ker}(f)^{\perp}}(y)\) for \(y\in\mathsf{im}(f)\) and \(f^{\circ}(y)=0\) for \(y\in\mathsf{im}(f)^{\perp}\). For more details, see [12, Ex 2.16]. Now let \(\mathsf{FHLB}\) be the subcategory of finite dimensional Hilbert spaces. Then \((\mathsf{FHLB},\dagger)\) is also a dagger category and it is well known that \((\mathsf{FHLB},\dagger)\simeq(\mathsf{MAT}(\mathbb{C}),\dagger)\). As such, \((\mathsf{FHLB},\dagger)\) is also a Moore-Penrose dagger category where we this time use SVD on linear operators to construct the M-P inverse. So let \(H_{1}\) be a Hilbert space of dimension \(n\) and \(H_{2}\) a Hilbert space of dimension \(m\). Then for any linear operator \(f:H_{1}\to H_{2}\), if \(d_{1},\ldots,d_{k}\in\mathbb{R}\) are the non-zero singular values of \(f\) (so \(k\leq\min(n,m)\)), then there exists orthonormal bases \(u_{i}\in H_{1}\) and \(v_{j}\in H_{2}\) such that \(f(x)=\sum_{i=1}^{k}d_{i}\langle u_{i}|x\rangle v_{i}\) for all \(x\in H_{1}\). Then \(f^{\circ}:H_{2}\to H_{1}\) is defined as follows \(f^{\circ}(y):=\sum_{i=1}^{k}\frac{1}{d_{i}}\langle v_{i}|y\rangle u_{i}\)._
**Example 2.13**: _Any field gives a simple example of a Moore-Penrose dagger category. So let \(k\) be a field, and let \(\bullet_{k}\) be the category with one object and whose maps are elements of \(k\), where composition is given by the multiplication and the identity map is the unit of \(k\). Then \((\bullet_{k},\dagger)\) is a Moore-Penrose dagger category where for all \(x\in k\), \(x^{\dagger}=x\) and \(x^{\circ}=x^{-1}\) if \(x\neq 0\) or \(x^{\circ}=0\) if \(x=0\). In fact, a Moore-Penrose dagger category with only one object is precisely a \(*\)-regular monoid [9]._
**Example 2.14**: _Let \(\mathsf{REL}\) be the category of sets and relations, that is, the category whose objects are sets and where a map \(R:X\to Y\) is a subset \(R\subseteq X\times Y\). \((\mathsf{REL},\dagger)\) is a dagger category where \(\dagger\) is given by the converse relation, that is, \((y,x)\in R^{\dagger}\subseteq Y\times X\) if and only if \((x,y)\in R\subseteq X\times Y\). While \((\mathsf{REL},\dagger)\) is not a Moore-Penrose dagger category, it turns out that the M-P invertible maps are precisely the partial isometries (which recall by Lemma 2.7.(iii) always have M-P inverses). A partial isometry in \((\mathsf{REL},\dagger)\) is a difunctional relation [11, Def 1], which is a relation \(R\subseteq X\times Y\) which satisfies that if \((x,b),(a,b)\) and \((a,y)\in R\), then \((x,y)\in R\). It was previously observed that a relation between finite sets has a M-P inverse if and only if it is a difunctional relation/partial isometry - since relations between finite sets
correspond to Boolean matrices, and Boolean matrices with M-P inverses were fully characterized in [27, Thm 4.3]. From this, it is not difficult to see that this can be extended to relations between arbitrary sets. Thus, in \((\mathsf{REL},\dagger)\), \(R\subseteq X\times Y\) has a M-P inverse if and only if \(R\) is a difunctional relation/partial isometry, which in this case means that the M-P inverse is the converse relations \(R^{\circ}=R^{\dagger}\subseteq Y\times X\). In fact, the same is true for allegories. Briefly, an **allegory**[10, Chap 2] is a dagger category \((\mathbb{X},\dagger)\) which is poset enriched and has meets, so in particular each homset \(\mathbb{X}(A,B)\) is a poset with order \(\leq\) and binary meets \(\cap\), and such that the modular law \(fg\cap h\leq(f\cap hg^{\dagger})g\) holds. Well-known examples of allegories include \((\mathsf{REL},\dagger)\) and more generally the category of relations of a regular category [10, Sec 2.111]. From the modular law, it follows that every map \(f\) in an allegory \((\mathbb{X},\dagger)\) satisfies \(f\leq ff^{\dagger}f\)[10, Sec 2.112]. Therefore, if \(f\) has a M-P inverse, using Lemma 2.5.(vii) and (viii), we easily compute that:_
\[f^{\dagger}=f^{\circ}ff^{\dagger}\leq f^{\circ}f^{\circ\dagger}f^{\circ}ff^{ \dagger}=f^{\circ}f^{\circ\dagger}f^{\dagger}=f^{\circ}\]
\[f^{\circ}=f^{\circ}f^{\circ\dagger}f^{\dagger}\leq f^{\circ}f^{\circ\dagger}f^ {\dagger}ff^{\dagger}=f^{\circ}ff^{\dagger}=f^{\dagger}.\]
_So we conclude that \(f^{\circ}=f^{\dagger}\), and so by Lemma 2.5.(x), \(f\) is a partial isometry. Thus, a map \(f\) in an allegory \((\mathbb{X},\dagger)\) has a M-P inverse if and only if \(f\) is a partial isometry, which means that its M-P inverse is its adjoint \(f^{\circ}=f^{\dagger}\)._
**Example 2.15**: _A **dagger groupoid** is a dagger category \((\mathbb{X},\dagger)\) where every map in \(\mathbb{X}\) is an isomorphism (though not necessarily a unitary). Every dagger groupoid \((\mathbb{X},\dagger)\) is a Moore-Penrose dagger category where \(f^{\circ}=f^{-1}\). In particular, from any dagger category, we can always construct a dagger groupoid via its subcategory of isomorphisms. So for any category \(\mathbb{X}\), let \(\mathbb{X}_{\mathsf{iso}}\) be the subcategory of isomorphisms of \(\mathbb{X}\). If \((\mathbb{X},\dagger)\) is a dagger category, then \((\mathbb{X}_{\mathsf{iso}},\dagger)\) is a dagger groupoid since if \(f\) is an isomorphism, then so is \(f^{\dagger}\) with inverse \((f^{\dagger})^{-1}:=(f^{-1})^{\dagger}\). Therefore \((\mathbb{X}_{\mathsf{iso}},\dagger)\) is a Moore-Penrose dagger category._
**Example 2.16**: _An **inverse category**[7, Sec 2.3.2] is a dagger category \((\mathbb{X},\dagger)\) where \(ff^{\dagger}f=f\) for all maps \(f\) and \(ff^{\dagger}gg^{\dagger}=gg^{\dagger}ff^{\dagger}\) for all parallel maps \(f\) and \(g\). Inverse categories play an important role in the theory of restriction categories [7], since the subcategory of partial isomorphisms of a restriction category is an inverse category. Every inverse category \((\mathbb{X},\dagger)\) is a Moore-Penrose dagger category where the M-P inverse of \(f\) is its adjoint \(f^{\circ}=f^{\dagger}\) (since every map in an inverse category is a partial isometry by definition). So in particular, for any restriction category, its subcategory of partial isomorphisms is a Moore-Penrose dagger category. As a concrete example, let \(\mathsf{PINJ}\) be the category of sets and partial injections, which is the subcategory of partial isomorphisms of the restriction category of sets and partial functions. Then \((\mathsf{PINJ},\dagger)\) is an inverse category where for a partial injection \(f:X\to Y\), \(f^{\dagger}:Y\to X\) is defined as \(f^{\dagger}(y)=x\) if \(f(x)=y\) and is undefined otherwise._
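As a small sketch (with made-up data), partial injections can be modelled as Python dictionaries with pairwise-distinct values; the dagger is the inverted dictionary, and since every map in \(\mathsf{PINJ}\) is a partial isometry, its M-P inverse is simply its adjoint.

```python
# A partial injection f : X -> Y as a dict with pairwise-distinct values
# (illustrative data only); f is undefined on inputs missing from the dict.
f = {0: 'a', 2: 'c'}

def dagger(g):
    """Adjoint in PINJ: dagger(g)(y) = x exactly when g(x) = y."""
    return {v: k for k, v in g.items()}

def compose(g, h):
    """Diagrammatic composition g;h -- apply g first, then h."""
    return {x: h[g[x]] for x in g if g[x] in h}

f_dag = dagger(f)
# f is a partial isometry, so f is its own M-P inverse up to adjoint;
# [MP.3] and [MP.4] hold automatically since f;f_dag and f_dag;f are
# identities on subsets, hence self-adjoint.
assert compose(compose(f, f_dag), f) == f          # [MP.1]
assert compose(compose(f_dag, f), f_dag) == f_dag  # [MP.2]
```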
**Example 2.17**: _If \((\mathbb{X}_{1},\dagger_{1})\) and \((\mathbb{X}_{2},\dagger_{2})\) are both Moore-Penrose dagger categories, then their product \((\mathbb{X}_{1}\times\mathbb{X}_{2},\dagger_{1}\times\dagger_{2})\) is also a Moore-Penrose dagger category. In particular, we can combine Example 2.13 and Example 2.16. So if \((\mathbb{X},\dagger)\) is an inverse category and \(k\) is a field, let \(\mathbb{X}_{k}\) be the category whose objects are those of \(\mathbb{X}\) but whose maps are pairs \((f,x)\) consisting of a map \(f\) in \(\mathbb{X}\) and an element \(x\in k\), so we may think of \(x\) as adding a weight or a cost to \(f\). Then \((\mathbb{X}_{k},\dagger)\) is a Moore-Penrose dagger category where \((f,x)^{\dagger}=(f^{\dagger},x)\) and \((f,x)^{\circ}=(f^{\dagger},x^{\circ})\)._
## 3 Compact Singular Value Decomposition
In Example 2.9, we explained how to construct the M-P inverse of a complex matrix using SVD. However, there is an alternative way to construct the M-P inverse using _compact_ singular value decomposition
(CSVD). This decomposition tells us that for any \(n\times m\) complex matrix \(A\), again with non-zero singular values \(d_{1},\ldots,d_{k}\) and associated diagonal matrix \(D\), there exists an \(n\times k\) matrix \(R\) and an \(m\times k\) matrix \(S\) such that \(A=RDS^{\dagger}\) and \(R^{\dagger}R=S^{\dagger}S=I_{k}\). The decomposition allows one to construct the M-P inverse as \(A^{\circ}:=SD^{-1}R^{\dagger}\). In dagger categorical terms, \(R\) and \(S\) are coisometries, and \(D\) is an isomorphism\({}^{2}\). Thus, generalized CSVD in an arbitrary dagger category is a factorization into a coisometry, followed by an isomorphism, followed by an isometry. We shall discuss the generalization of CSVD for dagger categories before discussing SVD because generalizing SVD requires dagger biproducts and dagger kernels, while generalizing CSVD can be explained without introducing further structure.
Footnote 2: The fact that \(D\) is a diagonal matrix of singular values is not relevant to this way of constructing the M-P inverse.
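For concreteness, here is a minimal NumPy sketch of this compact decomposition and the resulting M-P inverse (the example matrix is an arbitrary choice, and products are in the usual applicative matrix order):

```python
import numpy as np

# Illustrative 2x3 complex matrix (arbitrary choice).
A = np.array([[1, 1j, 0],
              [0, 2, 0]])

U, s, Vh = np.linalg.svd(A)
k = int(np.sum(s > 1e-12))            # number of non-zero singular values

R = U[:, :k]                          # n x k with R† R = I_k
S = Vh[:k, :].conj().T                # m x k with S† S = I_k
D = np.diag(s[:k])                    # invertible k x k diagonal block

assert np.allclose(R.conj().T @ R, np.eye(k))
assert np.allclose(S.conj().T @ S, np.eye(k))
assert np.allclose(R @ D @ S.conj().T, A)        # A = R D S†

A_mp = S @ np.linalg.inv(D) @ R.conj().T         # A° = S D^{-1} R†
assert np.allclose(A_mp, np.linalg.pinv(A))
```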
This generalized CSVD not only provides a simple way of computing M-P inverses, but is also directly related to the splitting of dagger idempotents, an important dagger category concept that was introduced by Selinger in [29]. Generalized CSVD allows us to precisely characterize the M-P invertible maps in dagger categories which are dagger idempotent complete. Furthermore, the dagger idempotent splitting completion leads us to an important reinterpretation of M-P inverses as being _actual_ inverses between dagger idempotents. As such, we begin this section by discussing the relationship between M-P inverses and dagger idempotent splitting.
**Definition 3.1**: _[_29_, Def 3.6]_ _In a dagger category \((\mathbb{X},\dagger)\), a **dagger idempotent \(e:A\to A\)** is an idempotent which is self-adjoint, \(ee=e=e^{\dagger}\). A dagger idempotent is said to \(\dagger\)**-split** if there exists a map \(r:A\to X\) such that \(rr^{\dagger}=e\) and \(r^{\dagger}r=1_{X}\) (so \(r\) is a coisometry). A **dagger idempotent complete category** is a dagger category \((\mathbb{X},\dagger)\) such that all \(\dagger\)-idempotents \(\dagger\)-split._
In Lemma 2.5.(iii), we saw that in any dagger category \((\mathbb{X},\dagger)\), if a map \(f\) has a M-P inverse, then \(ff^{\circ}\) and \(f^{\circ}f\) were both \(\dagger\)-idempotents. As such, we may ask these \(\dagger\)-idempotents to also be \(\dagger\)-split:
**Definition 3.2**: _In a dagger category \((\mathbb{X},\dagger)\), a map \(f\) is **Moore-Penrose split** (M-P split) if \(f\) has a M-P inverse \(f^{\circ}\) and the \(\dagger\)-idempotents \(ff^{\circ}\) and \(f^{\circ}f\)\(\dagger\)-split. A Moore-Penrose category in which all maps are M-P split is said to be **Moore-Penrose complete**._
A dagger category which is Moore-Penrose complete is the same thing as a Moore-Penrose category in which _all_ dagger idempotents split:
**Proposition 3.3**: _A dagger category \((\mathbb{X},\dagger)\) is Moore-Penrose complete if and only if \((\mathbb{X},\dagger)\) is dagger idempotent complete and Moore-Penrose._
Proof: The \(\Leftarrow\) direction is immediate by definition. For the \(\Rightarrow\) direction, suppose that \((\mathbb{X},\dagger)\) is Moore-Penrose complete. By definition, this means every map has a M-P inverse, so \((\mathbb{X},\dagger)\) is indeed Moore-Penrose. Now let \(e:A\to A\) be a \(\dagger\)-idempotent. By Lemma 2.7.(iv), \(e\) is its own M-P inverse, so \(e^{\circ}=e\), and therefore \(e^{\circ}e=e=ee^{\circ}\). However, by assumption, \(e\) is M-P split, which therefore implies that \(e\) is \(\dagger\)-split. So \((\mathbb{X},\dagger)\) is indeed \(\dagger\)-idempotent complete. \(\Box\)
We will now explain how every Moore-Penrose dagger category embeds into a Moore-Penrose complete dagger category. Let us first review how every dagger category embeds into a dagger idempotent complete category via the dagger version of the idempotent splitting completion, also called the dagger Karoubi envelope [29, Def 3.13]. So for a dagger category \((\mathbb{X},\dagger)\), define the dagger category \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) whose objects are pairs \((A,e)\) consisting of an object \(A\) and a \(\dagger\)-idempotent \(e:A\to A\) in \((\mathbb{X},\dagger)\), and whose maps \(f:(A_{1},e_{1})\to(A_{2},e_{2})\) are maps \(f:A_{1}\to A_{2}\) in \(\mathbb{X}\) such that \(e_{1}fe_{2}=f\) (or equivalently \(e_{1}f=f=fe_{2}\)). Composition in \(\mathsf{Split}_{\dagger}(\mathbb{X})\) is defined as in \(\mathbb{X}\), while identity maps
\(1_{(A,e)}:(A,e)\rightarrow(A,e)\) are defined as \(1_{(A,e)}:=e\). Lastly, the dagger of \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) is defined as in \((\mathbb{X},\dagger)\), and furthermore \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) is a dagger idempotent complete category [28, Prop 3.12]. There is also an embedding \(\mathcal{I}:(\mathbb{X},\dagger)\rightarrow(\mathsf{Split}_{\dagger}( \mathbb{X}),\dagger)\) which is defined on objects as \(\mathcal{I}(A)=(A,1_{A})\) and on maps as \(\mathcal{I}(f)=f\).
**Lemma 3.4**: _Let \((\mathbb{X},\dagger)\) be a Moore-Penrose dagger category. Then \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) is a Moore-Penrose complete category._
Proof: Let \(f:(A,e)\rightarrow(B,e^{\prime})\) be a map in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Since composition and the dagger of \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) are the same as in \((\mathbb{X},\dagger)\), it suffices to show that \(f^{\circ}:B\to A\) is also a map of type \((B,e^{\prime})\rightarrow(A,e)\) in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). So we must show that \(e^{\prime}f^{\circ}e=f^{\circ}\). To do so we use Lemma 2.5.(vii) and that \(f^{\dagger}:(B,e^{\prime})\rightarrow(A,e)\) is also a map in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\):
\[e^{\prime}f^{\circ}e=e^{\prime}f^{\circ}f^{\dagger^{\circ}}f^{\dagger}e=e^{ \prime}f^{\circ}f^{\dagger^{\circ}}f^{\dagger}=e^{\prime}f^{\dagger}f^{\dagger^ {\circ}}f^{\circ}=f^{\dagger}f^{\dagger^{\circ}}f^{\circ}=f^{\circ}\]
So \(f^{\circ}:(B,e^{\prime})\rightarrow(A,e)\) is a map in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). \(\Box\)
We are now ready to discuss a generalization of CSVD in an arbitrary dagger category, and show that having a generalized CSVD is equivalent to being M-P split.
**Definition 3.5**: _In a dagger category, a **generalized compact singular value decomposition** (GCSVD) of a map \(f:A\to B\) is a triple \((r:A\to X,d:X\to Y,s:Y\to B)\), where \(r\) is a coisometry, \(d\) is an isomorphism, and \(s\) is an isometry, such that \(f=rds\)._
**Lemma 3.6**: _In a dagger category \((\mathbb{X},\dagger)\), if the two triples \((r_{1}:A\to X_{1},d_{1}:X_{1}\to Y_{1},s_{1}:Y_{1}\to B)\) and \((r_{2}:A\to X_{2},d_{2}:X_{2}\to Y_{2},s_{2}:Y_{2}\to B)\) are GCSVDs of \(f:A\to B\), then there exist unique unitary maps \(u:X_{1}\to X_{2}\) and \(v:Y_{1}\to Y_{2}\) such that \(r_{1}u=r_{2}\), \(d_{1}v=ud_{2}\), and \(s_{1}=vs_{2}\)._
Proof: Define \(u\) and \(v\) as the composites, \(u:=r_{1}^{\dagger}r_{2}\) and \(v:=s_{1}s_{2}^{\dagger}\). The necessary identities are checked via some straightforward diagram chasing. \(\Box\)
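For instance, post-composing \(r_{1}d_{1}s_{1}=r_{2}d_{2}s_{2}\) with \(s_{2}^{\dagger}d_{2}^{-1}\) and using \(s_{2}s_{2}^{\dagger}=1_{Y_{2}}\) gives \(r_{2}=r_{1}d_{1}s_{1}s_{2}^{\dagger}d_{2}^{-1}\), so that

\[r_{1}u=r_{1}r_{1}^{\dagger}r_{2}=r_{1}r_{1}^{\dagger}r_{1}d_{1}s_{1}s_{2}^{\dagger}d_{2}^{-1}=r_{1}d_{1}s_{1}s_{2}^{\dagger}d_{2}^{-1}=r_{2},\qquad u^{\dagger}u=r_{2}^{\dagger}(r_{1}r_{1}^{\dagger}r_{2})=r_{2}^{\dagger}r_{2}=1_{X_{2}},\]

with the remaining identities, and the uniqueness of \(u\) and \(v\), following in the same way.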
In order to show that having a GCSVD is equivalent to being M-P split, it will be useful to first observe that maps with M-P inverses in the base dagger category are actual isomorphisms in the dagger idempotent splitting completion:
**Lemma 3.7**: _A map \(f:A\to B\) in a dagger category \((\mathbb{X},\dagger)\) has a M-P inverse if and only if there exists \(\dagger\)-idempotents \(e_{1}:A\to A\) and \(e_{2}:B\to B\) such that \(f:(A,e_{1})\rightarrow(B,e_{2})\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Explicitly:_
1. _If_ \(f:A\to B\) _has a M-P inverse_ \(f^{\circ}:B\to A\)_, then_ \(f:(A,ff^{\circ})\rightarrow(B,f^{\circ}f)\) _is an isomorphism in_ \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) _with inverse_ \(f^{\circ}:(B,f^{\circ}f)\rightarrow(A,ff^{\circ})\)_;_
2. _If_ \(f:(A,e_{1})\rightarrow(B,e_{2})\) _is an isomorphism in_ \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) _with inverse_ \(f^{\circ}:(B,e_{2})\rightarrow(A,e_{1})\)_, then_ \(f\) _is M-P invertible in_ \((\mathbb{X},\dagger)\) _with M-P inverse_ \(f^{\circ}\)_._
Proof: To start, let us explicitly spell out what it means for \(f:(A,e_{1})\rightarrow(B,e_{2})\) to be an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Firstly, we need that \(e_{1}fe_{2}=f\) (or equivalently \(e_{1}f=f=fe_{2}\)). Secondly, we also need a map \(g:(B,e_{2})\rightarrow(A,e_{1})\) in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\), so \(e_{2}ge_{1}=g\) (or equivalently \(e_{2}g=g=ge_{1}\)), and such that \(fg=1_{(A,e_{1})}=e_{1}\) and \(gf=1_{(B,e_{2})}=e_{2}\).
Suppose that \(f:A\to B\) has a M-P inverse \(f^{\circ}:B\to A\). By Lemma 2.5.(iii), \((A,ff^{\circ})\) and \((B,f^{\circ}f)\) are well-defined objects in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). On the other hand, by **[MP.1]** and **[MP.2]**, it is easy to check that
\(f:(A,ff^{\circ})\to(B,f^{\circ}f)\) and \(f^{\circ}:(B,f^{\circ}f)\to(A,ff^{\circ})\) are well-defined maps in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Lastly, by definition we have that \(ff^{\circ}=1_{(A,ff^{\circ})}\) and \(f^{\circ}f=1_{(B,f^{\circ}f)}\). Thus we conclude that \(f:(A,ff^{\circ})\to(B,f^{\circ}f)\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\).
Conversely, suppose that \(f:(A,e_{1})\to(B,e_{2})\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) with inverse \(f^{\circ}:(B,e_{2})\to(A,e_{1})\). In particular, this implies that \(ff^{\circ}=e_{1}\) and \(f^{\circ}f=e_{2}\). So \(ff^{\circ}\) and \(f^{\circ}f\) are both \(\dagger\)-idempotents, thus **[MP.3]** and **[MP.4]** hold. By the assumed properties of maps in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\), we have that \(ff^{\circ}f=e_{1}f=f\) and \(f^{\circ}ff^{\circ}=e_{2}f^{\circ}=f^{\circ}\), and so **[MP.1]** and **[MP.2]** hold. Therefore, we conclude that \(f\) is M-P invertible with M-P inverse \(f^{\circ}\). \(\Box\)
**Corollary 3.8**: _A map \(f:A\to B\) in a dagger category \((\mathbb{X},\dagger)\) is M-P split if and only if there exists \(\dagger\)-split \(\dagger\)-idempotents \(e_{1}:A\to A\) and \(e_{2}:B\to B\) such that \(f:(A,e_{1})\to(B,e_{2})\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\)._
Proof: Suppose that \(f:A\to B\) is M-P split with M-P inverse \(f^{\circ}:B\to A\). By definition \(ff^{\circ}\) and \(f^{\circ}f\) are \(\dagger\)-split \(\dagger\)-idempotents and by Lemma 3.7, \(f:(A,ff^{\circ})\to(B,f^{\circ}f)\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Conversely, suppose that \(e_{1}:A\to A\) and \(e_{2}:B\to B\) are \(\dagger\)-idempotents that \(\dagger\)-split via the coisometry \(r:A\to X\) and isometry \(s:Y\to B\) respectively, and also that \(f:(A,e_{1})\to(B,e_{2})\) is an isomorphism in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\) with inverse \(f^{\circ}:(B,e_{2})\to(A,e_{1})\). Then by Lemma 3.7, \(f^{\circ}\) is the M-P inverse of \(f\), and by assumption we also have that \(ff^{\circ}=e_{1}\) and \(f^{\circ}f=e_{2}\). So \(ff^{\circ}\) and \(f^{\circ}f\) are \(\dagger\)-split, and therefore we conclude that \(f\) is M-P split. \(\Box\)
We may now state the main result of this section:
**Proposition 3.9**: _In a dagger category \((\mathbb{X},\dagger)\), a map \(f\) has a GCSVD if and only if \(f\) is M-P split._
Proof: Suppose that \(f:A\to B\) has a GCSVD \((r:A\to X,d:X\to Y,s:Y\to B)\). Define \(f^{\circ}:=s^{\dagger}d^{-1}r^{\dagger}\). First note that \(rr^{\dagger}\) and \(s^{\dagger}s\) are \(\dagger\)-split \(\dagger\)-idempotents, so \((A,rr^{\dagger})\) and \((B,s^{\dagger}s)\) are well-defined objects in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). We then compute that:
\[rr^{\dagger}fs^{\dagger}s=rr^{\dagger}rdss^{\dagger}s =rds=f\] \[s^{\dagger}sf^{\circ}rr^{\dagger}=s^{\dagger}ss^{\dagger}d^{-1}r^{ \dagger}rr^{\dagger}=s^{\dagger}d^{-1}r^{\dagger}=f^{\circ}\]
So \(f:(A,rr^{\dagger})\to(B,s^{\dagger}s)\) and \(f^{\circ}:(B,s^{\dagger}s)\to(A,rr^{\dagger})\) are maps in \((\mathsf{Split}_{\dagger}(\mathbb{X}),\dagger)\). Furthermore, we can also compute that:
\[ff^{\circ} =rdss^{\dagger}d^{-1}r^{\dagger}=rdd^{-1}r^{\dagger}=rr^{\dagger}= 1_{(A,rr^{\dagger})}\] \[f^{\circ}f =s^{\dagger}d^{-1}r^{\dagger}rds=s^{\dagger}d^{-1}ds=s^{\dagger}s= 1_{(B,s^{\dagger}s)}\]
Therefore, \(f:(A,rr^{\dagger})\to(B,s^{\dagger}s)\) is an isomorphism with inverse \(f^{\circ}:(B,s^{\dagger}s)\to(A,rr^{\dagger})\). So by Corollary 3.8, \(f\) is M-P split with M-P inverse \(f^{\circ}\).
Conversely, suppose that \(f:A\to B\) is M-P split, where \(ff^{\circ}\) and \(f^{\circ}f\) both \(\dagger\)-split via, respectively, the coisometry \(r:A\to X\) and isometry \(s:Y\to B\). Now define \(d:X\to Y\) as the composite \(d:=r^{\dagger}fs^{\dagger}\). We then immediately have \(f=rds\), since \(rds=rr^{\dagger}fs^{\dagger}s=ff^{\circ}ff^{\circ}f=f\). So it remains to show that \(d\) is an isomorphism. So define \(d^{-1}:Y\to X\) as the composite \(d^{-1}:=sf^{\circ}r\). We compute that:
\[dd^{-1} =r^{\dagger}fs^{\dagger}sf^{\circ}r=r^{\dagger}ff^{\circ}ff^{\circ}r =r^{\dagger}rr^{\dagger}rr^{\dagger}r=1_{X}\] \[d^{-1}d =sf^{\circ}rr^{\dagger}fs^{\dagger}=sf^{\circ}ff^{\circ}fs^{ \dagger}=ss^{\dagger}ss^{\dagger}ss^{\dagger}=1_{Y}\]
Therefore, \((r:A\to X,d:X\to Y,s:Y\to B)\) is a GCSVD of \(f\). \(\Box\)
We can now precisely characterize M-P invertible maps in a dagger idempotent complete category:
**Theorem 3.10**: _In a dagger idempotent complete category, a map is M-P invertible if and only if it has a GCSVD._
**Corollary 3.11**: _A dagger category is Moore-Penrose complete if and only if every map has a GCSVD._
Observe that Lemma 3.4 tells us that every Moore-Penrose dagger category embeds into a dagger category where every map has a GCSVD.
## 4 Singular Value Decomposition
The objective of this section is to generalize SVD for maps in a dagger category in such a way that we may compute M-P inverses in the same way that was done in Example 2.9. So generalized SVD can be described as a special factorization in terms of two unitaries and an isomorphism. However, in order to describe the middle component as a square matrix with the isomorphism in the top corner and zeroes everywhere else, we need to work in a setting with _dagger biproducts_. It is worth mentioning that in [23], Puystjens and Robinson do discuss how the existence of a M-P inverse for a map with an epic-monic factorization is essentially equivalent to a factorization via dagger biproducts. Here, we drop the epic-monic factorization requirement, which allows us to provide a story of how M-P inverses are equivalent to a dagger biproduct factorization which more closely resembles the generalized version of SVD.
Let us begin by quickly recalling the definition of dagger biproducts. For a refresher on biproducts and zero objects, we refer the reader to [16, Chap 2]. So for a category \(\mathbb{X}\) that has finite biproducts, we denote the biproduct as \(\oplus\), the projections as \(\pi_{j}:A_{1}\oplus\ldots\oplus A_{n}\to A_{j}\), the injections as \(\iota_{j}:A_{j}\to A_{1}\oplus\ldots\oplus A_{n}\), the zero object as 0, the sum of maps as \(f+g\), and lastly the zero maps as 0.
**Definition 4.1**: _[_16_, Def 2.39]_ _A dagger category \((\mathbb{X},\dagger)\) has **finite \(\dagger\)-biproducts** if \(\mathbb{X}\) has finite biproducts such that the adjoints of the projections are the injections, that is, \(\pi_{j}^{\dagger}=\iota_{j}\)._
Using dagger biproducts, we may now introduce generalized SVD:
**Definition 4.2**: _In a dagger category \((\mathbb{X},\dagger)\) with finite \(\dagger\)-biproducts, a **generalized singular value decomposition** (GSVD) of a map \(f:A\to B\) is a triple of maps \((u:A\to X\oplus Z,d:X\to Y,v:Y\oplus W\to B)\) such that \(u\) and \(v\) are unitary and \(d\) is an isomorphism, and such that \(f=u(d\oplus 0)v\)._
**Lemma 4.3**: _In a dagger category \((\mathbb{X},\dagger)\) with finite \(\dagger\)-biproducts, if for a map \(f:A\to B\), we have that \((u_{1}:A\to X_{1}\oplus Z_{1},d_{1}:X_{1}\to Y_{1},v_{1}:Y_{1}\oplus W_{1}\to B)\) and \((u_{2}:A\to X_{2}\oplus Z_{2},d_{2}:X_{2}\to Y_{2},v_{2}:Y_{2}\oplus W_{2}\to B)\) are both GSVDs of \(f\), then there exist unique unitary maps \(x:X_{1}\to X_{2}\), \(y:Y_{1}\to Y_{2}\), \(z:Z_{1}\to Z_{2}\), and \(w:W_{1}\to W_{2}\) such that \(u_{1}(x\oplus z)=u_{2}\), \(d_{1}y=xd_{2}\), and \(v_{1}=(y\oplus w)v_{2}\)._
Proof: Define \(x\), \(y\), \(z\), and \(w\) as the composites \(x:=\iota_{1}u_{1}^{\dagger}u_{2}\pi_{1}\), \(y:=\iota_{1}v_{1}v_{2}^{\dagger}\pi_{1}\), \(z:=\iota_{2}u_{1}^{\dagger}u_{2}\pi_{2}\), and lastly \(w:=\iota_{2}v_{1}v_{2}^{\dagger}\pi_{2}\). By straightforward diagram chasing, one can check all the necessary identities. \(\Box\)
We will explain below why this recaptures precisely SVD for complex matrices. We first observe that every GSVD induces a GCSVD. Therefore by applying the results of the previous section, having a GSVD implies that we have a M-P inverse:
**Proposition 4.4**: _In a dagger category \((\mathbb{X},\dagger)\) with \(\dagger\)-biproducts, suppose that a map \(f:A\to B\) has a GSVD \((u:A\to X\oplus Z,d:X\to Y,v:Y\oplus W\to B)\). Then \((u\pi_{1}:A\to X,d:X\to Y,\iota_{1}v:Y\to B)\) is a GCSVD of \(f\), and therefore \(f\) is M-P split where \(f^{\circ}:=v^{\dagger}(d^{-1}\oplus 0)u^{\dagger}\)._
Proof: A unitary composed with a (co)isometry is always a (co)isometry. So \(u\pi_{1}\) is a coisometry and \(\iota_{1}v\) is an isometry. Next, note that the \(\dagger\)-biproduct structure gives us that \(a\oplus b=\pi_{1}a\iota_{1}+\pi_{2}b\iota_{2}\). So in our case, we have that \(d\oplus 0=\pi_{1}d\iota_{1}\). Therefore, we have that \(f=u\pi_{1}d\iota_{1}v\). So we conclude that \((u\pi_{1},d,\iota_{1}v)\) is a GCSVD of \(f\). Applying Proposition 3.9 we get that \(f^{\circ}=v^{\dagger}\pi_{1}d^{-1}\iota_{1}u^{\dagger}\), which can alternatively be written as \(f^{\circ}=v^{\dagger}(d^{-1}\oplus 0)u^{\dagger}\). \(\Box\)
Let us explain how GSVD does indeed generalize how SVD is used to compute M-P inverses for matrices. As explained in [16, Sec 2.2.4], in a dagger category with finite dagger biproducts, a map \(F:A_{1}\oplus\ldots\oplus A_{n}\to B_{1}\oplus\ldots\oplus B_{m}\) is uniquely determined by a family of maps \(f_{i,j}:A_{i}\to B_{j}\). Therefore \(F\) can be represented as a \(n\times m\) matrix where the term in the \(i\)-th row and \(j\)-th column is \(f_{i,j}\). So if \(f\) has a GSVD \((u,d,v)\), we may expand \(d\oplus 0\) as a \(2\times 2\) matrix, and therefore write \(f\) and \(f^{\circ}\) as:
\[f =u\begin{bmatrix}d&0\\ 0&0\end{bmatrix}v f^{\circ} =v^{\dagger}\begin{bmatrix}d^{-1}&0\\ 0&0\end{bmatrix}u^{\dagger}\]
which recaptures precisely how M-P inverses were constructed using SVD in Example 2.9. We now wish to go in the other direction, that is, going from a M-P inverse to a GSVD. To do so, we will need to use dagger kernels. For a refresher on ordinary kernels, we refer the reader to [16, Sec 2.4.2].
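As a purely illustrative aside, this factorization can be checked numerically in the motivating dagger category of complex matrices. The following sketch (assuming NumPy, and written in standard matrix notation, so products compose right-to-left rather than in the diagrammatic order used in the text) builds the candidate inverse from an SVD and verifies the four Moore-Penrose identities:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
A[:, 2] = A[:, 0] + A[:, 1]                    # force a rank-deficient example

U, s, Vh = np.linalg.svd(A)                    # A = U @ Sigma @ Vh, Sigma of shape (4, 3)
Sigma_plus = np.zeros((3, 4))
Sigma_plus[: len(s), : len(s)] = np.diag([1 / x if x > 1e-10 else 0.0 for x in s])

A_mp = Vh.conj().T @ Sigma_plus @ U.conj().T   # matrix counterpart of v†(d^{-1} ⊕ 0)u†

checks = [
    np.allclose(A @ A_mp @ A, A),               # [MP.1]
    np.allclose(A_mp @ A @ A_mp, A_mp),         # [MP.2]
    np.allclose((A @ A_mp).conj().T, A @ A_mp), # [MP.3]
    np.allclose((A_mp @ A).conj().T, A_mp @ A), # [MP.4]
    np.allclose(A_mp, np.linalg.pinv(A)),       # agrees with NumPy's built-in pinv
]
print(checks)                                   # expected: all True
```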
**Definition 4.5**: _[_13_, Def 2.1]_ _In a dagger category \((\mathbb{X},\dagger)\) with a zero object, a map \(f:A\to B\) has a \(\dagger\)-**kernel** if \(f\) has a kernel \(k:\ker(f)\to A\) such that \(k\) is an isometry. A **dagger kernel category** is a dagger category with a zero object such that every map has a dagger kernel._
In [25], Puystjens and Robinson describe many necessary and sufficient conditions for when a map that has a kernel has a M-P inverse in a dagger category which is enriched over Abelian groups. However, dagger kernels are not discussed in [25]. Therefore, one could specialize certain results in [25] for dagger kernels instead. In this paper, we will show that having a M-P inverse _and_ a dagger kernel is equivalent to having a GSVD. Also note that, unlike in [25], we do not assume that we are working in a setting with negatives (i.e. additive inverses). Because of this, the statement does require a modest extra compatibility condition between the M-P inverse and the dagger kernel.
**Proposition 4.6**: _In a dagger category \((\mathbb{X},\dagger)\) with \(\dagger\)-biproducts, a map \(f\) has a GSVD if and only if \(f\) is M-P split and \(f\) has a \(\dagger\)-kernel \(k:\ker(f)\to A\) and \(f^{\dagger}\) has a \(\dagger\)-kernel \(c:\ker(f^{\dagger})\to B\) such that \(ff^{\circ}+k^{\dagger}k=1_{A}\) and \(f^{\circ}f+c^{\dagger}c=1_{B}\)._
Proof: Suppose that \(f:A\to B\) has a GSVD \((u:A\to X\oplus Z,d:X\to Y,v:Y\oplus W\to B)\). We have already explained why \(f\) is M-P split in the above lemma. Using the \(\dagger\)-biproduct identity that \(\iota_{i}\pi_{j}=0\) if \(i\neq j\) and \(\iota_{j}\pi_{j}=1\), it is straightforward to check that \(\iota_{2}u^{\dagger}:Z\to A\) is a \(\dagger\)-kernel of \(f\) and that \(\iota_{2}v:W\to B\) is a \(\dagger\)-kernel of \(f^{\dagger}\). For the extra identities, we first note that \(ff^{\circ}=u(1_{X}\oplus 0)u^{\dagger}\) and \(f^{\circ}f=v^{\dagger}(1_{Y}\oplus 0)v\), which we can alternatively write as \(ff^{\circ}=u\pi_{1}\iota_{1}u^{\dagger}\) and \(f^{\circ}f=v^{\dagger}\pi_{1}\iota_{1}v\). Then using the other \(\dagger\)-biproduct identity that \(\pi_{1}\iota_{1}+\pi_{2}\iota_{2}=1\), it follows that \(ff^{\circ}+k^{\dagger}k=1_{A}\) and \(f^{\circ}f+c^{\dagger}c=1_{B}\) as desired.
Conversely, suppose that \(f\) is M-P split, and has a \(\dagger\)-kernel \(k:\ker(f)\to A\) and \(f^{\dagger}\) has a \(\dagger\)-kernel \(c:\ker(f^{\dagger})\to B\) such that the two equalities \(ff^{\circ}+k^{\dagger}k=1_{A}\) and \(f^{\circ}f+c^{\dagger}c=1_{B}\) also hold. Then by Prop 3.9, \(f\) also has a GCSVD \((r:A\to X,d:X\to Y,s:Y\to B)\), so, in particular, \(f=rds\) and \(d\) is an isomorphism. Then, using matrix notation, define \(u:A\to X\oplus\ker(f)\) and \(v:Y\oplus\ker(f^{\dagger})\to B\) respectively as \(u:=\begin{bmatrix}r&k^{\dagger}\end{bmatrix}\) and \(v:=\begin{bmatrix}s\\ c\end{bmatrix}\). We first compute that:
\[u(d\oplus 0)v =\begin{bmatrix}r&k^{\dagger}\end{bmatrix}\begin{bmatrix}d&0\\ 0&0\end{bmatrix}\begin{bmatrix}s\\ c\end{bmatrix} =\begin{bmatrix}rd&0\end{bmatrix}\begin{bmatrix}s\\ c\end{bmatrix} =rds=f\]
So \(f=u(d\oplus 0)v\) as desired. We must also show that \(u\) and \(v\) are unitary. To show that \(u\) is unitary, recall that \(rr^{\dagger}=ff^{\circ}\) and also that, since \(krds=kf=0\) with \(d\) invertible and \(s\) monic, it follows that \(kr=0\). Therefore we compute:
\[uu^{\dagger}=\begin{bmatrix}r&k^{\dagger}\end{bmatrix}\begin{bmatrix}r&k^{ \dagger}\end{bmatrix}^{\dagger}=\begin{bmatrix}r&k^{\dagger}\end{bmatrix} \begin{bmatrix}r^{\dagger}\\ k\end{bmatrix}=rr^{\dagger}+k^{\dagger}k=ff^{\circ}+k^{\dagger}k=1_{A}\]
\[u^{\dagger}u=\begin{bmatrix}r^{\dagger}\\ k\end{bmatrix}\begin{bmatrix}r&k^{\dagger}\end{bmatrix}=\begin{bmatrix}r^{\dagger}r&r^{\dagger}k^{\dagger}\\ kr&kk^{\dagger}\end{bmatrix}=\begin{bmatrix}1_{X}&0\\ 0&1_{\ker(f)}\end{bmatrix}=1_{X\oplus\ker(f)}\]
So \(u\) is unitary. Similarly, we can show that \(v\) is unitary. So we conclude that \((u,d,v)\) is a GSVD of \(f\). \(\Box\)
**Corollary 4.7**: _In a dagger category \((\mathbb{X},\dagger)\) with finite \(\dagger\)-biproducts and negatives, a map \(f\) has a GSVD if and only if \(f\) is M-P split and both \(f\) and \(f^{\dagger}\) have \(\dagger\)-kernels._
Proof: We need only show that \(ff^{\circ}+k^{\dagger}k=1_{A}\) and \(f^{\circ}f+c^{\dagger}c=1_{B}\). First note that \((1_{A}-ff^{\circ})f=0\) and \((1_{B}-f^{\circ}f)f^{\dagger}=0\) (the latter of which is by Lemma 2.5.(viii)). So by universal property of the kernel, there exist unique maps \(z_{1}\) and \(z_{2}\) such that \(z_{1}k=1_{A}-ff^{\circ}\) and \(z_{2}c=1_{B}-f^{\circ}f\). By post-composing by \(k^{\dagger}\) and \(c^{\dagger}\) respectively, and also by using that \(f^{\circ}k^{\dagger}=0\) (which follows from Lemma 2.5.(vii)) and \(fc^{\dagger}=0\), we then obtain that \(z_{1}=k^{\dagger}\) and \(z_{2}=c^{\dagger}\). Therefore, \(k^{\dagger}k=1_{A}-ff^{\circ}\) and \(c^{\dagger}c=1_{B}-f^{\circ}f\), which in turn implies the desired equalities. \(\Box\)
Therefore, in a setting with negatives and all dagger kernels, we may state that:
**Corollary 4.8**: _In a dagger kernel category \((\mathbb{X},\dagger)\) with finite \(\dagger\)-biproducts and negatives, a map \(f\) has a GSVD if and only if \(f\) is M-P split._
Finally, assuming also that we are in a dagger idempotent complete setting, we obtain a precise characterization of M-P invertible maps in terms of a generalized version of SVD:
**Theorem 4.9**: _In a dagger kernel category \((\mathbb{X},\dagger)\) that is \(\dagger\)-idempotent complete and which has finite \(\dagger\)-biproducts and negatives, a map \(f\) is M-P invertible if and only if \(f\) has a GSVD._
## 5 Polar Decomposition
It is straightforward to give a generalized version of polar decomposition (PD) in a dagger category, it is the statement that a map factorizes as a partial isometry followed by a positive map. However, the statement of PD for bounded linear maps between Hilbert spaces is stronger: it also involves a requirement on the kernel (or range) of the partial isometry. In [17, Thm 8.3], Higham nicely explains how M-P inverses can play a role in the PD of complex matrices and can be used to replace that extra requirement. So recall that for an \(n\times m\) complex matrix, \(A\), there exists a unique partial isometry \(U\) and unique a positive semi-definite Hermitian matrix \(H\) such that \(A=UH\) and \(\mathsf{range}(U^{\dagger})=\mathsf{range}(H)\). The matrix \(H\) is given by the square root of the matrix \(A^{\dagger}A\), so \(H=(A^{\dagger}A)^{\frac{1}{2}}\), while the matrix \(U\) is constructed using the M-P inverse of \(H\), so \(U=AH^{\circ}\). Furthermore, the condition \(\mathsf{range}(U^{*})=\mathsf{range}(H)\) can be equivalently described in terms of M-P inverses as the equality \(U^{\dagger}U=HH^{\circ}\) (where note that \(U^{\circ}=U^{\dagger}\) since \(U\) is a partial isometry). Therefore PD of complex matrices can be completely expressed in terms of M-P inverses. As such in this section, we introduce the notion of a Moore-Penrose polar decomposition of maps in an arbitrary dagger category, which recaptures precisely PD for complex matrices.
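As with SVD above, the matrix case can be checked directly. A small numerical sketch (assuming NumPy, standard matrix notation) computes \(H=(A^{\dagger}A)^{1/2}\) by an eigendecomposition, sets \(U=AH^{\circ}\) using the Moore-Penrose inverse of \(H\), and verifies that \(A=UH\), that \(U\) is a partial isometry, and that \(U^{\dagger}U=HH^{\circ}\):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
A[:, 2] = A[:, 0]                          # rank-deficient, so H is not invertible

# H = (A† A)^{1/2} via an eigendecomposition (A† A is positive semi-definite)
evals, V = np.linalg.eigh(A.conj().T @ A)
H = V @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ V.conj().T

U = A @ np.linalg.pinv(H)                  # U = A H°, using the M-P inverse of H

checks = [
    np.allclose(U @ H, A),                               # A = U H
    np.allclose(U @ U.conj().T @ U, U),                  # U is a partial isometry
    np.allclose(U.conj().T @ U, H @ np.linalg.pinv(H)),  # U†U = H H°
]
print(checks)                              # expected: all True
```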
**Definition 5.1**: _In a dagger category \((\mathbb{X},\dagger)\), for a map \(f:A\to B\),_
1. \(A\) _generalized polar decomposition (GPD)_ _of_ \(f\) _is a pair of maps_ \((u:A\to B,h:B\to B)\) _where_ \(u\) _is a partial isometry and_ \(h\) _is a positive map such that_ \(f=uh\)_;_
2. \(A\) _Moore-Penrose polar decomposition (M-P PD)_ _of_ \(f\) _is a GPD_ \((u:A\to B,h:B\to B)\) _of_ \(f\) _such that_ \(h\) _is M-P invertible and_ \(u^{\dagger}u=hh^{\circ}\)_._
We will show that for \(f\) to have a M-P PD is equivalent to requiring that \(f\) be M-P invertible and \(f^{\dagger}f\) has a square-root. The following definition is a Moore-Penrose version of Selinger's definition [29, Def 5.13].
**Definition 5.2**: _In a dagger category \((\mathbb{X},\dagger)\), a M-P invertible positive map \(p:A\to A\) has a **Moore-Penrose square root** (M-P square root) if there exists a M-P invertible positive map \(\sqrt{p}:A\to A\) such that \(\sqrt{p}\sqrt{p}=p\). A dagger category is said to have **(unique) M-P square roots** if all M-P invertible positive maps have a (unique) M-P square root._
**Proposition 5.3**: _In a dagger category \((\mathbb{X},\dagger)\), a map \(f\) has a M-P PD if and only if \(f\) is M-P invertible and \(f^{\dagger}f\) has a M-P square root._
Proof: Suppose that \((u:A\to B,h:B\to B)\) is a M-P PD of a map \(f:A\to B\). Since \(u\) is a partial isometry, \(u\) is M-P invertible where \(u^{\circ}=u^{\dagger}\). We also have that since \(h\) is positive, it is self-dual \(h^{\dagger}=h\), and by Lemma 2.5.(ix) we have that \(h^{\circ}h=hh^{\circ}\). Therefore by the assumption that \(u^{\dagger}u=hh^{\circ}\), it easily follows that \(u^{\dagger}uhh^{\circ}=hh^{\circ}u^{\dagger}u\). Therefore by Lemma 2.8.(ii), we have that \(f=uh\) is M-P invertible whose M-P inverse is \(f^{\circ}=(uh)^{\circ}=h^{\circ}u^{\dagger}\). By Lemma 2.5.(iv), \(f^{\dagger}f\) is a M-P invertible positive map. So it remains to compute that:
\[hh=hhh^{\circ}h=hu^{\dagger}uh=(uh)^{\dagger}uh=f^{\dagger}f\]
So \(hh=f^{\dagger}f\), and therefore \(h\) is a M-P square root of \(f^{\dagger}f\).
Conversely, suppose that \(f\) is M-P invertible and \(f^{\dagger}f\) has a M-P square root \(\sqrt{f^{\dagger}f}\). So define \(h:B\to B\) as \(h:=\sqrt{f^{\dagger}f}\), and define \(u:A\to B\) as the composite \(u:=fh^{\circ}\). We then compute the following:
\[uu^{\dagger}u=fh^{\circ}(fh^{\circ})^{\dagger}fh^{\circ}=fh^{ \circ}h^{\circ}f^{\dagger}fh^{\circ}=fh^{\circ}hhh^{\circ}=fh^{\circ}hh^{ \circ}hh^{\circ}=fh^{\circ}hh^{\circ}=fh^{\circ}=u\] \[u^{\dagger}u=(fh^{\circ})^{\dagger}fh^{\circ}=h^{\circ}f^{ \dagger}fh^{\circ}=h^{\circ}hhh^{\circ}=hh^{\circ}hh^{\circ}=hh^{\circ}\] \[uh=fh^{\circ}h=fh^{\circ}hh^{\circ}h=fh^{\circ}hhh=f(hh)^{ \circ}hh=f(f^{\dagger}f)^{\circ}f^{\dagger}f=ff^{\circ}f=f\]
So \(u\) is an isometry, \(u^{\dagger}u=hh^{\circ}\), and \(f=uh\). So we conclude that \((u,h)\) is a M-P PD of \(f\). \(\Box\)
**Corollary 5.4**: _In a dagger category \((\mathbb{X},\dagger)\) with M-P square roots, a map \(f\) is M-P invertible if and only if \(f\) has a M-P PD._
Unlike PD which is always unique, M-P PD is not necessarily unique in an arbitrary dagger category. The reason PD is unique is due to the fact that positive semi-definite Hermitian matrices have unique square roots. Therefore, if we work in a dagger category where the positive maps do have unique square roots, then M-P PD is also unique as desired.
**Lemma 5.5**: _In a dagger category \((\mathbb{X},\dagger)\) with unique M-P square roots, M-P PDs are unique._
Proof: Suppose that \((u:A\to B,h:B\to B)\) and \((v:A\to B,k:B\to B)\) are both M-P PDs of a map \(f:A\to B\). By Proposition 5.3, we have that \(u=fh^{\circ}\) and \(v=fk^{\circ}\), and that \(h\) and \(k\) are positive maps such that \(hh=f^{\dagger}f=kk\). By the uniqueness of M-P square roots, this implies that \(h=k\). In turn, this also implies that \(u=v\). So we conclude that a M-P PD is unique.
## 6 Conclusion
In this paper, we revisited and added to the story of Moore-Penrose inverses in a dagger category. This work was motivated in part by wishing to understand how partial isomorphisms (in the restriction categories sense) generalize to dagger categories: Moore-Penrose inverses seem to provide the appropriate generalization. However, their theory is more sophisticated and could be better understood. In particular, although a start has been made here, there is more to be understood about their compositional behaviour and their relation to dagger idempotents. Moore-Penrose inverses should also be considered in relation to other dagger structures, such as dagger limits [15], dagger monads [14], and dagger compact closedness [28]. One should also find other interesting examples of Moore-Penrose dagger categories. We conjecture that certain fragments of the ZX-calculus [8] and possibly PROPs with weights on strings will be Moore-Penrose dagger categories. Finally, as the Moore-Penrose inverse has many practical applications, it would also be worthwhile generalizing these applications to Moore-Penrose dagger categories. This may in turn lead to further applications for Moore-Penrose inverses.
|
2309.09846 | Matter Wave Isotope Separation in a Ring Trap | We devise a novel mechanism of isotope separation from a mixture of
Bose-Einstein condensate in the presence of interspecies interaction.
Fractional revivals of this miscible system are studied inside a ring waveguide
for spatially resolving the isotopes of $Rb$. The characteristic time scale is
influenced by the ring radius and the strength of interspecies interaction. We
identify the physical parameters for which the autocorrelation function
displays the signature of distinguishability. A study of the separability
function further suggests favourable time instances for separating the isotopes
with greater yields. The precise ranges of ring radius and interspecies
interaction strength are revealed. We illustrate condensate densities at
proposed time instances, which confirms our results and also validates our
method. | Sriganapathy Raghav, Suranjana Ghosh, Barun Halder, Utpal Roy | 2023-09-18T15:00:08Z | http://arxiv.org/abs/2309.09846v1 | # Matter Wave Isotope Separation in a Ring Trap
###### Abstract
We devise a novel mechanism of isotope separation from a mixture of Bose-Einstein condensate in the presence of interspecies interaction. Fractional revivals of this miscible system are studied inside a ring waveguide for spatially resolving the isotopes of \(R\!b\). The characteristic time scale is influenced by the ring radius and the strength of interspecies interaction. We identify the physical parameters for which the autocorrelation function displays the signature of distinguishability. A study of the separability function further suggests favourable time instances for separating the isotopes with greater yields. The precise ranges of ring radius and interspecies interaction strength are revealed. We illustrate condensate densities at proposed time instances, which confirms our results and also validates our method.
## 1 Introduction
The natural abundances of isotopes are mostly in the form of mixtures. Often, a particular isotope is required in pure form and thus, isotope separation has been an enduring problem in science. Separation of stable isotopes involves methods such as diffusion or centrifugation [1], ion-exchange chromatography [2], light-induced drift isotope separation (LIDIS) [3, 4]_etc._, which rely on the difference in the isotopic mass, isotopic charge and isotopic shift in atomic or molecular spectral lines, respectively. The isotopes of the alkali metals can be separated from an isotopic mixture of Bose-Einstein
condensate (BEC), where phase separation is exploited by tuning the interspecies Feshbach resonance [5; 6; 7]. One could note that the regime of separation for the ground state of the two species lies at the greater values of the interspecies interaction [6; 8; 9]. This is because, at higher values of interspecies interaction, the energy of the inhomogeneous state is lower than that of the homogeneous state, which favours the spatial separation of the isotopes. The isotope separation is also predicted for a BEC mixture under the Thomas-Fermi limit [10; 11; 12]. Experimental observation of spatial separation is also achieved, but by neglecting the role of mass-imbalance between the isotopes [5]. The miscible-immiscible transition is shown in the absence of an external trap, where the transition is governed by the strength of the interspecies interaction in comparison to the intraspecies interaction [10].
On the other hand, the phase boundary of such transition is also shown to alter by changing the trap frequency [13; 14; 15]. The external trap, being the most favourable physical quantity to control the dynamics of a BEC, drives a quick emergence of various technological applications [16; 17; 18]. A large amount of literature exists towards the theoretical and experimental studies of efficient trap engineering in BEC [19; 20; 21; 22; 23; 24; 25]. A ring-shaped waveguide is one of the most useful traps in 2D, which is formed by overlaying a blue-detuned laser in the middle of harmonic confinement [26; 27], where the radius of the ring can be efficiently controlled. BEC inside such a ring waveguide manifests a number of interesting physics. The phenomenon of fractional revivals (FR) is recently reported in this system [28] and is a very well-studied effect in diverse quantum systems in their time evolutions [29; 30; 31; 32; 33; 34; 35; 36; 37].
In this work, we present a novel technique of isotope separation from the isotopic mixture of trapped Bose-Einstein condensates. The underlying model involves the dynamics of an isotopic mixture of a binary BEC inside the ring trap. The inclusion of interspecies interaction makes the FR-physics more rigorous and interesting due to the emergence of two time-scales, which are influenced by each other. We provide a systematic study of the time-evolution and identify the region of experimental parameters, the trap parameters and the interspecies interaction strength, for which the isotopes of Rubidium, \({}^{85}Rb\) and \({}^{87}Rb\), will get spatially resolved. Employing FR of mixed BEC in a ring waveguide to spatially separate its isotopes will be the first-of-its-kind technique.
The paper is organized as follows. The next section deals with the model of interacting BEC mixture in a ring trap, along with the numerical technique to be adopted. Section III includes a detailed analysis of the combined dynamics through a modified FR phenomenon, where the influence of interspecies interaction on the individual time scales becomes apparent. A range of interspecies interactions is proposed in Sec.IV, which later suggests physical situations for isotope separation through autocorrelation function. In Sec.V, we quantify the degree of isotope separation at different time instances and identify the favourable situations for greater isotopic yields in experiments. A precise parameter domain is also revealed for ring radius and interspecies interaction. Visualization of condensate densities at the identified instances clearly manifests the spatial separation of the isotopes, which validates our model for practical applications. The paper concludes in Sec.VI with a summary and possible implications.
## 2 Basic Formulation of the System and the Method
We describe the method for the isotopic mixture of \({}^{85}Rb\) and \({}^{87}Rb\) BECs, having atomic masses \(m_{1}\) and \(m_{2}\), number of atoms \(N_{1}\) and \(N_{2}\), respectively. Both the components have their respective intra-species scattering lengths, \(a_{11}\) and \(a_{22}\), whereas inter-species coupling is governed by the scattering length \(a_{12}\) and is tuned to the desired values through the inter-species Feshbach resonance [38, 39, 40, 41, 42]. The dynamics of a two-component BEC in a ring trap is described by the three-dimensional (3D) mean-field Gross Pitaevskii Equation (GPE), which is made dimensionless by scaling the position, time, and energy by \(a_{\perp}\), \(1/\omega_{\perp}\) and \(\hbar\omega_{\perp}\), respectively. Here, \(a_{\perp}=a_{1,\perp}=\sqrt{\hbar/2m_{1}\omega_{\perp}}\) is the harmonic oscillator length in the transverse direction when the ring trap is created in the \(x\)-\(y\) plane. \(\omega_{\perp}\) is taken as \(\omega_{1,z}\), the trap frequency as experienced by \({}^{85}Rb\) in \(z\)-direction. All the physical quantities are chosen as per the experiment [5, 39]: \(N_{1}=N_{2}=10^{3}\); \(m_{1}=85\) a.u., \(m_{2}=87\) a.u., \(\omega_{\perp}=2\pi\times 130\) Hz, \(a_{\perp}=0.675\mu\)m; \(a_{11}=a_{22}=2.698\times 10^{-9}\) m; and the radial frequencies, \(\omega_{r}=\omega_{1,r}=\omega_{2,r}\)[5]. The GPE is written after dimensional reduction to quasi-2D as follows [43, 44, 45].
\[i\frac{\partial\psi_{i}}{\partial t}=[\mathcal{L}+\mathcal{N}]\psi_{i}, \tag{1}\]
where \(\psi_{i}\equiv\psi_{i}(x,y,t)\) and \(\mathcal{L}=-\frac{m_{1}}{2m_{i}}{\nabla_{x,y}}^{2}\) with \(\nabla_{x,y}^{2}=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{ \partial y^{2}}\). The second term, \(\mathcal{N}=\sum_{j=1}^{2}g_{ij}|\psi_{j}|^{2}+V_{i}(x,y)\), contains the tunable quantities, such as the couplings, \(g_{ij}=\frac{\sqrt{2\pi\lambda}(m_{1}+m_{2})a_{ij}N_{j}}{m_{2}a_{\perp}}\) and the ring trap,
\[V_{i}(x,y)=\frac{1}{4}\rho_{i}\omega^{2}(x^{2}+y^{2})+V_{0}e^{-\frac{2(x^{2}+ y^{2})}{\sigma^{2}}}, \tag{2}\]
which is a combination of a 2D harmonic potential and a Gaussian potential. Here \(\rho_{i}=m_{i}/m_{1}\), \(\omega=\omega_{r}/\omega_{\perp}\). \(\sigma\) and \(V_{0}\) are the waist and amplitude of the Gaussian spike, respectively. The oscillations along the radial direction are suppressed by placing the condensate in the exact minimum (\(r_{0}\)) of the potential [46].
_Numerical Method_: We adopt the Split-Step Fourier Method (SSFM) [47, 48, 49] for numerically solving the system, where the two parts of the dynamical equation (Eq. 1) are treated separately. The first term is evolved in momentum space, and the second term, involving the nonlinearity and the trap, is evolved in coordinate space [47]. The \(x\)- and \(y\)-coordinates are each discretized into 512 grid points with a step size of 0.1841. The time step is 0.0915, with a total of 16384 steps up to the second revival time.
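For illustration, the structure of one SSFM step for Eq. (1) can be sketched as follows (Python/NumPy). The grid and step sizes are those quoted above, but the trap parameters \(V_{0}\), \(\sigma\), \(\omega\), the couplings \(g_{ij}\) and the initial state are placeholders rather than the values used for the production runs, so this sketch (written here in Strang-split form) is only meant to show the scheme, not to reproduce the reported results:

```python
import numpy as np

# Grid and step sizes quoted in the text; all other numbers below are placeholders.
N, dx, dt = 512, 0.1841, 0.0915
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

m1, m2 = 85.0, 87.0
rho = [1.0, m2 / m1]                                  # rho_i = m_i / m_1
r0, d0 = 12.0, 0.75                                   # ring radius and initial waist
V0, sigma, omega = 60.0, 4.0, 1.0                     # assumed (placeholder) trap parameters
g = np.array([[1.0, 0.3], [0.3, 1.0]]) * 100.0        # placeholder couplings g_ij

R2 = X**2 + Y**2
V = [0.25 * rho[i] * omega**2 * R2 + V0 * np.exp(-2 * R2 / sigma**2) for i in range(2)]

# Both species start as a pair of Gaussian peaks at (+-r0, 0), normalized to one.
peak = lambda x0: np.exp(-(((X - x0) ** 2 + Y**2) / (2 * d0**2)))
psi = [(peak(r0) + peak(-r0)).astype(complex) for _ in range(2)]
psi = [p / np.sqrt(np.sum(np.abs(p) ** 2) * dx * dx) for p in psi]

def half_nonlinear(psi, dt):
    """Half step of the trap + mean-field part, applied in coordinate space."""
    dens = [np.abs(p) ** 2 for p in psi]
    return [psi[i] * np.exp(-0.5j * dt * (V[i] + g[i, 0] * dens[0] + g[i, 1] * dens[1]))
            for i in range(2)]

def kinetic(psi, dt):
    """Full kinetic step in momentum space; kinetic prefactor m_1/(2 m_i) = 1/(2 rho_i)."""
    return [np.fft.ifft2(np.exp(-1j * dt * K2 / (2 * rho[i])) * np.fft.fft2(psi[i]))
            for i in range(2)]

def ssfm_step(psi, dt):
    return half_nonlinear(kinetic(half_nonlinear(psi, dt), dt), dt)

for _ in range(200):                                   # evolve for a short illustrative time
    psi = ssfm_step(psi, dt)
```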
## 3 Dynamics of the Condensate Mixture
We consider that the condensates of the two isotopes initially coexist in the form of a binary peak with waist, \(d_{0}=0.75\;a_{\perp}\), and at the diametrically opposite points of the ring with coordinates, \((\pm r_{0},0)\). The mixed cloud disperses along the ring waveguide
in clockwise and anti-clockwise directions. They will start interfering at \((0,\pm r_{0})\), and also continue to spread further.
It is important to note that a single component BEC (not the system under study) in a ring trap, after some specific time interval, revives in its initial position and shape. The time when the condensate replicates the initial configuration is termed the revival time, \(T_{R}\). This revival phenomenon has been thoroughly examined recently, where the exact revival time is given by \(T_{R}=\pi r_{0}^{2}\)[28]. Moreover, at some specific fractions of this revival time (\(t=T_{R}\times p/q\)), several mini replicas of the initial condensate are formed, and this phenomenon is known as fractional revivals (FR), where \(p\) and \(q\) are mutually prime integers and decide the number of splits [28]. According to the model, at time \(\frac{T_{R}}{4}\) (\(p=1\), \(q=4\)), a single initial cloud will split into two, and a dual initial cloud will split into four daughter condensates. Moreover, two components of a binary BEC with no interspecies interaction will independently show FR and follow the above model, which is not the case for an isotopic mixture of BECs where interspecies interactions are considered.
To display the resultant cloud of the isotopic mixture in the presence of interspecies interactions, we choose a FR time of \(T_{R}/4\); the condensate density is delineated in Fig. 1(a).
Figure 1: (a) The density of the BEC mixture of \({}^{85}Rb\) and \({}^{87}Rb\) with interspecies interaction strength \(a_{12}=1.0\)\(a_{11}\), at the quarter revival time, \(t=T_{R}/4\). The densities of the two isotopes are shown separately in (b) and (c). The autocorrelation functions of the two interacting species are shown in (d). The ring radius is taken as \(r_{0}=12a_{\perp}\). \(x\) and \(y\) are in the units of \(a_{\perp}=0.675\)\(\mu\)m and, \(t\) is in the units of \(1/\omega_{\perp}=1.224\) ms.
It is apparent that the condensates of both isotopes are mixed in a nontrivial manner due to their coexistence. This becomes clearer when the condensate densities of \({}^{85}Rb\) and \({}^{87}Rb\) are shown separately in Figs. 1(b) and 1(c), respectively. These densities look identical rather than distinct, resulting in the miscible cloud shown in Fig. 1(a). To separate the two constituent clouds of the interacting isotopes, we first need to understand the physics of this miscibility and then devise a way around it. The well-known autocorrelation (AC) function helps us in the first stage. The AC is the modulus square of the inner product of the initial and the temporally evolved wavefunctions, defined mathematically as
\[|A(t)|^{2}=|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\psi^{*}(x,y,0)\psi( x,y,t)dxdy|^{2}. \tag{3}\]
For a dispersing cloud, the AC function decays with time, reflecting a gradually decreasing fidelity with the initial structure. However, significantly dominant peaks have a different physical origin, and such peaks are indeed observed in Fig. 1(d) for the mixed cloud of Fig. 1(a). Periodic AC peaks of magnitude close to one are the signature of revivals, whereas the other periodic peaks of lower magnitude correspond to FR instances. Overall, the AC time series shows comparable characteristic time scales for both isotopic condensates in the presence of interspecies interactions, due to which the components remain indistinguishable.
Therefore, Figs. 1(a)-1(d) suggest designing a physical situation in which the AC peaks of the two species become separated in time at some FR instances, so that the two isotopic condensates can be distinguished and measured from their mixture. The key physical parameters controlling the dynamics are the radius of the ring trap (\(r_{0}\)), the interspecies interaction (\(a_{12}\)) and time. It is clear from Fig. 1 that the interspecies interaction \(a_{12}=1.0\ a_{11}\) and ring radius \(r_{0}=12a_{\perp}\) do not lead to isotope separation at any point of the temporal dynamics. Below, we explore the influence of the interspecies interaction on the individual revival dynamics of the isotopes.
### Revival Dynamics of the Mixture without and with Interspecies Interactions
For a two-component BEC, we have two time scales, one for each component. First, we discuss the expressions for the revival times of the two species at zero interspecies interaction. It is known that an initial Gaussian wave packet of two isotopes with width \(w_{i,1}=w_{i,2}=w_{i}\), centred at \((r_{0},0)\), propagates along the ring and interferes with itself at \((-r_{0},0)\). The resulting interference maxima of the two species are proportional to the oscillating terms in their density [43] and are given by
\[I_{max,1} \propto\cos{\left(\frac{2Dtd}{w_{i}^{2}w_{t,1}^{2}}\right)},\] \[I_{max,2} \propto\cos{\left(\frac{2m_{1}Dtd}{m_{2}w_{i}^{2}w_{t,2}^{2}} \right)}, \tag{4}\]
where \(I_{max,1}\) and \(I_{max,2}\) are the interference maxima of \({}^{85}Rb\) and \({}^{87}Rb\), respectively. The condensates are initially separated by distance \(D\) at position \(d\) and with width \(w_{i}\). \(w_{t,1}\) and \(w_{t,2}\) are their widths at a later time \(t\). These widths are related by
\[w_{t,1}=\sqrt{w_{i}^{2}+\left(\frac{2t}{w_{i}}\right)^{2}},\] \[w_{t,2}=\sqrt{w_{i}^{2}+\left(\frac{2m_{1}t}{m_{2}w_{i}}\right)^ {2}}, \tag{5}\]
respectively. The difference in the interference maxima is brought out due to the difference in the revival times of the isotopes. Hence, we have different effective fringe separations for the two species:
\[\Delta d_{1}^{\prime}=\frac{4\pi t}{D},\ \ \Delta d_{2}^{\prime}=\frac{4\pi m_{1 }t}{m_{2}D}.\]
At revival times, the fringe separation becomes \(2\pi r_{0}\times p\) due to the circular geometry of the ring, where \(p\) denotes the winding number. Since the initial separation \(D\) between the wave packets is \(2\pi r_{0}\), the revival time for \({}^{85}Rb\) and \({}^{87}Rb\) are obtained as,
\[T_{R,1}=\pi r_{0}^{2}\times p,\] \[T_{R,2}=\frac{m_{2}}{m_{1}}\pi r_{0}^{2}\times p. \tag{6}\]
The difference of the revival times of \({}^{85}Rb\) and \({}^{87}Rb\) in the absence of interspecies interaction becomes
\[\Delta T_{R}=\left(\frac{m_{2}}{m_{1}}-1\right)\pi r_{0}^{2}. \tag{7}\]
It is clear from the above equation that the difference in the revival time scales for two noninteracting isotopes is merely due to the mass-imbalance of the two species. The solid line with squares in Fig. 2 shows \(\Delta T_{R}\) with the radius of the ring in the absence of interspecies interaction. This variation is quite straightforward from the above analytical expression. However, the presence of interspecies interaction makes the variation quite nontrivial, and it no longer follows Eq. 7. In this case, \(\Delta T_{R}\) is plotted after numerically solving the dynamical equation and finding the revival times for an interacting mixture, as depicted by the dotted lines with circles and triangles in Fig. 2 for two nonzero interactions, \(a_{12}=0.5a_{11}\) and \(a_{12}=1.0a_{11}\), respectively. The difference in the time scales of the mixed constituents is not significant enough, and hence does not offer a favourable situation for separating the isotopes. This also explains why we did not obtain separation for \(a_{12}=1.0\ a_{11}\) in Fig. 1. The important points from Fig. 2 are i) the nonuniform variation of the time scale with the ring radius and ii) the possibility of influencing the time-scale variation through different interspecies interaction strengths.
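For orientation, Eqs. (6)-(7) give the following numbers in physical units for the non-interacting case with \(p=1\) and \(r_{0}=12a_{\perp}\) (a small back-of-the-envelope check, not a simulation):

```python
import numpy as np

r0 = 12.0                        # ring radius in units of a_perp
time_unit_ms = 1.224             # 1/omega_perp in ms (see Sec. 2)
m1, m2 = 85.0, 87.0

T_R1 = np.pi * r0**2             # 85Rb revival time (p = 1), in units of 1/omega_perp
T_R2 = (m2 / m1) * T_R1          # 87Rb revival time
print(T_R1 * time_unit_ms, T_R2 * time_unit_ms, (T_R2 - T_R1) * time_unit_ms)
# ~553.7 ms, ~566.8 ms, and a bare mass-imbalance splitting of only ~13.0 ms
```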
## 4 Identifying Appropriate Interspecies Interaction for Isotope Separation
We have seen that a mixture of isotopes with interspecies interaction doesn't follow Eq.[6] and both the isotopes have their revival times altered due to interaction strength for a given external trap. We evaluate the revival times of both the species for a wide range of interaction strengths with constant ring radii and depict them in Fig.(3). A merging of the time scales is observed at higher interspecies interactions. This behaviour is also seen for other fractional and revival times of the two species at greater interspecies interaction. We have shown the merging of timescales for ring radii \(r_{0}=10a_{\perp}\), \(r_{0}=12a_{\perp}\) and, \(r_{0}=14a_{\perp}\), in the Fig.3(a), 3(b) and 3(c), respectively.
The key point here is the wide difference in the time scales of the interacting species over a significant range of interspecies interaction. Moreover, the greater the radius, the greater the difference in the revival times at \(a_{12}=0\), as shown in Fig. 2. The separation of isotopes is not possible for interspecies interaction strength \(a_{12}\gtrsim 0.5a_{11}\), due to insignificant differences in the revival times of the two species. Therefore, for separating isotopes, one needs to choose 1) an interspecies interaction strength for which the revival times of the two species are fairly separated and 2) a ring radius for which the difference in the revival times, in the absence of the interspecies interaction, is considerably large. To examine whether such a situation is indeed favourable for separating the BEC of two miscible isotopes, we now choose a set of preferred parameters.
Figure 2: Difference in the revival times of the two isotopes \(\Delta T_{R}\), for various radii are shown. The solid line with squares is in the absence of interspecies interaction, whereas the dotted line with circles and triangles is in the presence of interspecies interaction \(a_{12}=0.5a_{11}\) and \(a_{12}=1.0a_{11}\), respectively. Here, radius \(r_{0}\), revival time \(T_{R}\), and interspecies interaction \(a_{12}\) are in the units of \(a_{\perp}\), \(1/\omega_{\perp}\), and \(a_{11}\), respectively.
We take \(r_{0}=12a_{\perp}\) and interspecies interaction strength \(a_{12}=0.3a_{11}\). We repeat the AC function plot with these parameters in Fig. 4 for the \({}^{85}Rb\) (solid line) and \({}^{87}Rb\) (dotted line) interacting condensates, where every peak corresponds to a fractional revival time, and we compare it with the previous AC function plot (Fig. 1(d)), given in the inset of Fig. 4. The AC peaks are now seen to separate, unlike in the previous case. The FR times of both species are indicated by the notation \((t_{1},t_{2})\), such that \((\frac{3}{4},\frac{3}{4})\) corresponds to the \(\frac{3}{4}^{th}\) revivals of both \({}^{85}Rb\) and \({}^{87}Rb\). The spacing of the peaks increases with time, thereby making the \({}^{85}Rb\) and \({}^{87}Rb\) peaks increasingly distinct, making
it possible to identify time instances where the difference in the AC functions of the two species is maximum. As discussed earlier, the greater the difference between the two AC functions, the smaller the spatial overlap between the two isotopes. The two condensate wavefunctions become distinguishable, and hence separately measurable, when a pair of peaks is well separated; in other words, the maximum of one AC peak should coincide with the minimum of the other. Since this is a dynamical system, such a situation must be maintained over a finite time window for separation. The pair of peaks \((1,1)\) is one such instance. Hence, our methodology is indeed helpful in choosing the optimal values of \(r_{0}\) and \(a_{12}\) for which isotope separation is possible.
## 5 Separability of the Isotopes
From the autocorrelation functions of the two species, one could notice the separation of the isotopes. However, it requires careful analysis of how much they are separated and at what times for efficient implementation of the isotope separation scheme. To obtain the degree of separation, we define a quantity called 'Separability (\(S\))' between the two isotopes in the mixed BEC:
\[\begin{array}{l}S=1-\Delta,\\ \Delta=\dfrac{\left[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|\psi_{1}(x,y)|^{2}|\psi_{2}(x,y)|^{2}\,dx\,dy\right]^{2}}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|\psi_{1}(x,y)|^{4}\,dx\,dy\,\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|\psi_{2}(x,y)|^{4}\,dx\,dy}.\end{array} \tag{8}\]
Figure 5: Variation of the isotope Separability with Time. The prominent separability peaks are indicated, such as \(S_{\frac{1}{2}}\) denotes the time near the \(1/2\)-th fractional revival of \({}^{87}Rb\). The inter-species interaction is taken as \(a_{12}=0.3a_{11}\) and the ring radius is \(r_{0}=12a_{\perp}\). \(t\) is in the unit of \(1/\omega_{\perp}=1.224\) ms.
The quantity \(\Delta\) is the square of the inner product of the probability densities of \({}^{85}Rb\) and \({}^{87}Rb\), normalized by the product of the fourth-order moments of the individual wavefunctions. Squaring the numerator helps to amplify the tiny variations in the overlap of the wavefunctions of the two species. The separability takes values from 0 to 1, where 0 implies zero separation and 1 corresponds to 100% separation. We choose the parameters of Fig. 4 for calculating \(S\) and present the result in Fig. 5. A separability peak close to 1 is the desired situation. For times up to 1000 in units of \(\frac{1}{\omega_{\perp}}\), we find four prominent peaks, \(S_{\frac{1}{2}}\), \(S_{1}\), \(S_{\frac{3}{2}}\) and \(S_{2}\), occurring close to \(\frac{1}{2}\), 1, \(\frac{3}{2}\) and 2 times the revival time of \({}^{87}Rb\). At these times, the separation of the isotopes is maximum, i.e., \(S\approx 1\), compared to other times.
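On a numerical grid, Eq. (8) reduces to sums over the sampled densities. A short sketch (Python/NumPy; the Gaussian test clouds below are illustrative only) is:

```python
import numpy as np

def separability(psi1, psi2, dx):
    """S = 1 - Delta of Eq. (8) for two wavefunctions sampled on a uniform 2D grid."""
    n1 = np.abs(psi1) ** 2
    n2 = np.abs(psi2) ** 2
    overlap = np.sum(n1 * n2) * dx * dx
    delta = overlap ** 2 / ((np.sum(n1 ** 2) * dx * dx) * (np.sum(n2 ** 2) * dx * dx))
    return 1.0 - delta

# Example: two Gaussian clouds; S -> 0 when they coincide, S -> 1 when far apart.
x = np.linspace(-20, 20, 400)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
g = lambda x0: np.exp(-((X - x0) ** 2 + Y ** 2) / 2.0)
print(separability(g(0.0), g(0.0), dx), separability(g(0.0), g(10.0), dx))
# ~0.0 (fully overlapping clouds)      ~1.0 (well-separated clouds)
```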
The times corresponding to \(S_{\frac{1}{2}}\), \(S_{1}\), \(S_{\frac{3}{2}}\) and \(S_{2}\) are \(0.286s\), \(0.575s\), \(0.855s\) and, \(1.138s\), respectively. At these times, the condensate densities of the two species have \(>95\%\) separation. The exact values of the percentage of separation of the two isotopes for the above Separability peaks are given in Table.1.
In addition, the most favourable instance also depends on the following factors: 1) at longer evolution times, one observes an overall decay of the autocorrelation function (Fig. 4) due to dispersion, and hence earlier instances are better for isotope separation; 2) a broader temporal width of the separability peak is preferable, as it provides a longer time window in which the isotopes remain separated in the experiment.
_Parameter Contours for Maximal Isotope Separation_: In our work, the separability of the isotopes is tuned by two physical parameters, the radius of the ring \(r_{0}\) and the interspecies interaction \(a_{12}\). The separability peaks in Fig. 5 offers the times for maximal isotope separations, designated by \(S_{\frac{1}{2}}\), \(S_{1}\), \(S_{\frac{3}{2}}\) and \(S_{2}\) and having \(>95\%\) isotope yields. The exact values of the percentage of separation are given in Table.1. These, along with the above points (1 and 2), suggest one of the most favourable instances as \(S_{1}\) with a wider separability peak and \(98.9\%\) yield. We identify the parameter contours, comprising of \(r_{0}\) and \(a_{12}\), for the instance, \(S_{1}\), and depict it in Fig. 6.
The diagram highlights the regions above 90% separability. Though the whole highlighted region provides a wide parameter range for isotope separation, one can further improve it by choosing greater yields, as shown by different contours. The white dotted line in the middle indicates the maximum separability value for both parameters. It is interesting to note that the maximum separability line lies within the window \(0.2a_{11}>a_{12}>0.4a_{11}\) of interspecies interaction strength. We also draw the regions for various percentage yields, 90%, 95% and 98%, by dashed contours.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Peak & \(S_{\frac{1}{2}}\) & \(S_{1}\) & \(S_{\frac{3}{2}}\) & \(S_{2}\) \\ \hline Time (s) & 0.286 & 0.575 & 0.855 & 1.138 \\ \hline Separability & 95.7\% & 98.9\% & 98.7\% & 94.5\% \\ \hline \end{tabular}
\end{table}
Table 1: Percentage of separation of the two isotopes for the four Separability peaks and their corresponding times from Fig. 5. The interspecies interaction is taken as \(a_{12}=0.3a_{11}\) and the ring radius is \(r_{0}=12a_{\perp}\).
For a desired separation line, the favourable values for the radius and interspecies interaction can be chosen in the experiments.
_Condensate Densities Upon Isotope Separation_: One can visualize the individual condensates of the isotopes for any set of parameters within the contour plot of Fig. 6 to verify their spatial separation. Here, we choose one representative set, the interspecies interaction \(a_{12}=0.3a_{11}\) and the ring trap radius \(r_{0}=12a_{\perp}\), in Fig. 7. The condensate densities of both isotopes, \({}^{85}Rb\) and \({}^{87}Rb\), are plotted separately at times \(0.286\ \mathrm{s},\,0.575\ \mathrm{s},\,0.855\ \mathrm{s}\), and \(1.138\ \mathrm{s}\), corresponding to \(S_{\frac{1}{2}}\), \(S_{1}\), \(S_{\frac{3}{2}}\) and \(S_{2}\), respectively. These instances, near multiples of the half revival, do not further split the initially considered dual clouds and hence show clean separation in the preferred parameter window. At \(S_{\frac{1}{2}}\), the isotopes are separated with respect to their common centre of mass and remain separated only for a short time interval, as reflected in the narrow width of the corresponding separability peak in Fig. 5. It is fascinating to observe that each of the other pairs of plots for the \({}^{85}Rb\) and \({}^{87}Rb\) condensates in Fig. 7 shows the two species positioned orthogonally with respect to each other, implying a clear isotope separation inside the ring trap.
## 6 Conclusion
We have reported a method to separate isotopes from a mixed-binary BEC of \(Rb\) isotopes, which is made to evolve along a ring waveguide. The difference in the fractional revival time scales of the two interacting isotopes is employed in spatially separating them. We have quantified the degree of spatial separation of isotopes by a quantity
Figure 6: The preferred parameter regions for isotope separation in experiments, following the Separability, \(S_{1}\), as an example. The interspecies interaction \(a_{12}\) and ring radius \(r_{0}\) can be chosen as per the need, where the white dotted line indicates the maximum separability. We also draw the regions for various percentage yields, \(90\%,\ 95\%\) and \(98\%\), by dashed contours. The ring radius is in the unit of \(a_{\perp}=0.675\ \mu\mathrm{m}\), and the interspecies interaction is in the unit of \(a_{11}=2.698\times 10^{-9}\ \mathrm{m}\).
called Separability. The separability as a function of time gives specific instances at which the separation of the isotopes is maximum. The times corresponding to \(S_{\frac{1}{2}}\), \(S_{1}\), \(S_{\frac{3}{2}}\) and \(S_{2}\) are identified, where the condensate densities of the two species show \(>95\%\) separation. The separation reaches up to \(\sim 99\%\), which is higher than previously reported isotope separation yields [4, 50] obtained using the LIDIS technique [4, 51, 52, 53]. The physically supported parameter contours offer a wide range of trap and cross-interaction values under the present scheme. The domain of interspecies interaction strength is unique (\(a_{12}<a_{11}\)) in the context of high-fidelity spatial separation of isotopes in comparison to past works [6, 8, 9].
|
2309.16966 | The Mahler measure of a family of polynomials with arbitrarily many
variables | We present an exact formula for the Mahler measure of an infinite family of
polynomials with arbitrarily many variables. The formula is obtained by
manipulating the Mahler measure integral using certain transformations,
followed by an iterative process that reduces this computation to the
evaluation of certain polylogarithm functions at sixth roots of unity. This
yields values of the Riemann zeta function and the Dirichlet $L$-function
associated to the character of conductor 3. | Siva Sankar Nair | 2023-09-29T04:24:07Z | http://arxiv.org/abs/2309.16966v1 | # The Mahler measure of a family of polynomials with arbitrarily many variables
###### Abstract.
We present an exact formula for the Mahler measure of an infinite family of polynomials with arbitrarily many variables. The formula is obtained by manipulating the Mahler measure integral using certain transformations, followed by an iterative process that reduces this computation to the evaluation of certain polylogarithm functions at sixth roots of unity. This yields values of the Riemann zeta function and the Dirichlet \(L\)-function associated to the character of conductor \(3\).
Key words and phrases:Mahler measure; zeta values; \(L\)-values; polylogarithms 2020 Mathematics Subject Classification: Primary 11R06; Secondary 11M06, 11G55
## 1. Introduction
For a non-zero rational function in \(n\) variables, \(P\in\mathbb{C}(x_{1},\ldots,x_{n})\), the (logarithmic) Mahler measure of \(P\) is defined to be
\[\mathrm{m}(P):=\frac{1}{(2\pi i)^{n}}\int_{\mathbb{T}^{n}}\log|P(x_{1},\ldots,x_{n})|\frac{\mathrm{d}x_{1}}{x_{1}}\cdots\frac{\mathrm{d}x_{n}}{x_{n}},\]
where the integration path is taken along the unit \(n\)-torus \(\mathbb{T}^{n}=\{(x_{1},\ldots,x_{n})\in\mathbb{C}^{n}:|x_{1}|=\cdots=|x_{n}| =1\}\) with respect to the Haar measure.
While Jensen had already studied this integral for single-variable holomorphic functions in the late 19\({}^{\mathrm{th}}\) century, it was in Lehmer's work [10] related to Mersenne numbers where this quantity first appeared in the context of single-variable polynomial functions. One may note that using Jensen's formula, the Mahler measure of a univariate polynomial can be expressed in terms of the absolute value of its roots that lie outside the unit circle. The generalization to multivariate polynomials was made by Mahler [11] when he studied the quantity \(\mathrm{M}(P)=\exp(\mathrm{m}(P))\) in his work related to polynomial heights. In the 1980s, it was Smyth [13, 1] who first observed a relation between Mahler measures and values of certain \(L\)-functions when he showed the following:
\[\mathrm{m}(x+y+1)=\frac{3\sqrt{3}}{4\pi}L(\chi_{-3},2), \tag{1}\]
\[\mathrm{m}(x+y+z+1)=\frac{7}{2\pi^{2}}\zeta(3), \tag{2}\]
where \(L(\chi_{-3},s)\) is the Dirichlet \(L\)-function in the character of conductor \(3\) and \(\zeta(s)\) is the Riemann zeta function.
Interest in Mahler measures grew following these results and several such relations were observed with \(L\)-functions associated to various arithmetic objects. For instance, Deninger [3] and Boyd [2] conjectured the following relation involving an elliptic curve, which was later proven by Rogers and Zudilin [12]:
\[\mathrm{m}\left(x+\frac{1}{x}+y+\frac{1}{y}+1\right)=\frac{15}{4\pi^{2}}L(E_{ 15a8},2)=L^{\prime}(E_{15a8},0).\]
Here \(L(E_{15a8},s)\) denotes the \(L\)-function associated to the elliptic curve with Cremona label \(15a8\). While much remains unknown regarding this mysterious link between Mahler measures and \(L\)-functions, there has been remarkable work that sheds light on this phenomenon. In the 1990s, Deninger [3] related Mahler measures to certain regulator values from \(K\)-theory. In view of Beilinson's conjectures, these regulators are expected to appear as values of \(L\)-functions, as well as polylogarithm functions evaluated at algebraic numbers. In turn, certain combinations of these polylogarithms yield values of the Riemann zeta function and various other Dirichlet \(L\)-functions, explaining their presence in Mahler measure computations. For example, in the univariate case, the Mahler measure is expressed as the sum of certain logarithms (or unilogarithms); Smyth's first example (1) is the measure of a two-variable polynomial expressed in terms of a dilogarithm; and his second example (2) is that of a three-variable polynomial in terms of a trilogarithm. On a slightly different but related note, one may view this in context of Zagier's grand conjecture [14] - the value \(\zeta_{K}(n)\) of the Dedekind zeta function corresponding to a number field \(K\) evaluated at an integer \(n>1\) can be expressed as the determinant of a matrix whose entries are linear combinations of polylogarithms evaluated at certain elements of \(K\). Once again, this has connections to the (Borel) regulator, as seen in the proof for the case \(n=3\) by Goncharov [4].
A particularly interesting result was given by Lalin [7] involving rational functions with arbitrarily many variables whose Mahler measure can be computed explicitly. These formulae arise by evaluating polylogarithms at the fourth root of unity which yield values of the Dirichlet \(L\)-function \(L(\chi_{-4},s)\) associated to the character of conductor \(4\). This is one of the few results in the literature where the exact Mahler measure of multivariable polynomials with arbitrarily many variables has been calculated. The result is as follows. Let
\[R_{m}(z_{1},\ldots,z_{m},y):=y+\left(\frac{1-z_{1}}{1+z_{1}}\right)\cdots\left( \frac{1-z_{m}}{1+z_{m}}\right),\]
and for \(a_{1},\ldots a_{m}\in\mathbb{C}\), define
\[s_{\ell}(a_{1},\ldots,a_{m})=\left\{\begin{array}{ll}1&\mbox{if $\ell=0$},\\ \sum_{i_{1}<\cdots<i_{\ell}}a_{i_{1}}\cdots a_{i_{\ell}}&\mbox{if $0<\ell\leq m$},\\ 0&\mbox{if $m<\ell$}.\end{array}\right.\]
We have an explicit formula for the Mahler measure of the rational polynomial \(R_{m}\)
**Theorem**.: _([7, 8]) For \(n\geq 1\),_
\[\mathrm{m}\left(R_{2n}\right)=\sum_{h=1}^{n}\frac{s_{n-h}(2^{2},4^{2},\ldots,( 2n-2)^{2})}{(2n-1)!}\left(\frac{2}{\pi}\right)^{2h}\mathcal{A}(h),\]
_where_
\[\mathcal{A}(h):=(2h)!\left(1-\frac{1}{2^{2h+1}}\right)\zeta(2h+1).\]
_For \(n\geq 0\),_
\[\mathrm{m}\left(R_{2n+1}\right)=\sum_{h=0}^{n}\frac{s_{n-h}(1^{2},3^{2},\ldots,(2n-1)^{2})}{(2n)!}\left(\frac{2}{\pi}\right)^{2h+1}\mathcal{B}(h),\]
_where_
\[\mathcal{B}(h):=(2h+1)!L(\chi_{-4},2h+2).\]
The purpose of this article is to express the Mahler measure of another family of polynomials with arbitrarily many variables in terms of zeta values and certain \(L\)-values. The method of obtaining these results is similar to that of Lalin [7] applied to a more complicated family of
functions; however, the polynomials we consider have complex coefficients rather than integer ones. The interesting outcome is that in this case, the calculations involve the polylogarithm evaluated at sixth roots of unity, and we obtain values of the \(L\)-function \(L(\chi_{-3},s)\) associated to the character of conductor \(3\). This is the first example of an infinite family of polynomials with arbitrarily many variables yielding Mahler measures that involve \(L(\chi_{-3},s)\). In particular, we consider the rational polynomial
\[Q_{n}(z_{1},\ldots,z_{n},y)=y+\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1} +1}\right)\cdots\left(\frac{\overline{\omega}z_{n}+\omega}{z_{n}+1}\right),\]
where
\[\omega=e^{2\pi i/3}=-\frac{1}{2}+\frac{\sqrt{3}i}{2}\]
is a third root of unity. This time, we let
\[\mathcal{A}(h)=(2h)!\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+1 }}\right)\zeta(2h+1),\]
and
\[\mathcal{B}(h)=(2h+1)!\left(1+\frac{1}{2^{2h+2}}\right)L(\chi_{-3},2h+2).\]
Then
**Theorem 1**.: _For \(n\geq 1\),_
\[\mathrm{m}(Q_{2n})=\frac{2}{12^{n}}\Bigg{(}\sum_{h=1}^{n}a_{n,h-1}\left(\frac{ 3}{\pi}\right)^{2h}\mathcal{A}(h)\ +\ \sum_{h=0}^{n-1}b_{n,h}\left(\frac{3}{\pi}\right)^{2h+1} \mathcal{B}(h)\Bigg{)},\]
_and for \(n\geq 0\) we have_
\[\mathrm{m}(Q_{2n+1})=\frac{1}{12^{n}\sqrt{3}}\Bigg{(}\sum_{h=1}^{n}c_{n,h-1} \left(\frac{3}{\pi}\right)^{2h}\mathcal{A}(h)\ +\ \sum_{h=0}^{n}d_{n,h}\left(\frac{3}{\pi}\right)^{2h+1} \mathcal{B}(h)\Bigg{)},\]
_where the coefficients \(a_{r,s}\,,b_{r,s}\,,c_{r,s},d_{r,s}\) are real numbers given recursively by equations (46)-(51) starting from the initial value \(d_{0,0}=1\)._
The first few examples of this family are given by
\begin{tabular}{|l|l|}
\hline
\(\mathrm{m}\left(y+\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1}\right)\right)\) & \(\frac{5\sqrt{3}}{4\pi}\,L(\chi_{-3},2)\) \\
\hline
\(\mathrm{m}\left(y+\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1}\right)\left(\frac{\overline{\omega}z_{2}+\omega}{z_{2}+1}\right)\right)\) & \(\frac{91}{18\pi^{2}}\zeta(3)+\frac{5}{4\sqrt{3}\pi}\,L(\chi_{-3},2)\) \\
\hline
\(\mathrm{m}\left(y+\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1}\right)\cdots\left(\frac{\overline{\omega}z_{3}+\omega}{z_{3}+1}\right)\right)\) & \(\frac{91}{36\pi^{2}}\zeta(3)+\frac{5}{4\sqrt{3}\pi}\,L(\chi_{-3},2)+\frac{153\sqrt{3}}{16\pi^{3}}L(\chi_{-3},4)\) \\
\hline
\(\mathrm{m}\left(y+\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1}\right)\cdots\left(\frac{\overline{\omega}z_{4}+\omega}{z_{4}+1}\right)\right)\) & \(\frac{91}{36\pi^{2}}\zeta(3)+\frac{3751}{108\pi^{4}}\zeta(5)+\frac{35}{36\sqrt{3}\pi}\,L(\chi_{-3},2)+\frac{51\sqrt{3}}{8\pi^{3}}L(\chi_{-3},4)\). \\
\hline
\end{tabular}
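The first entry of this table can be checked numerically in the same spirit as Smyth's formula (1): Jensen's formula in the variable \(y\) reduces \(\mathrm{m}(Q_{1})\) to a one-dimensional integral over the unit circle. A sketch (assuming NumPy and mpmath; a midpoint rule is used to avoid the integrable singularity at \(z=-1\)):

```python
import numpy as np
from mpmath import mp, mpf, zeta, sqrt, pi

# m(Q_1) = (1/2pi) Int_0^{2pi} log^+ | (wbar e^{it} + w) / (e^{it} + 1) | dt, by Jensen in y.
N = 400000
t = (np.arange(N) + 0.5) * 2 * np.pi / N           # midpoints; avoids t = pi exactly
w = np.exp(2j * np.pi / 3)                         # omega = e^{2 pi i / 3}
c = (np.conj(w) * np.exp(1j * t) + w) / (np.exp(1j * t) + 1.0)
lhs = np.mean(np.log(np.maximum(np.abs(c), 1.0)))  # m(Q_1), numerically

mp.dps = 25
L_chi3_2 = (zeta(2, mpf(1) / 3) - zeta(2, mpf(2) / 3)) / 9   # L(chi_{-3}, 2)
rhs = 5 * sqrt(3) / (4 * pi) * L_chi3_2            # closed form stated in the table
print(lhs, float(rhs))                             # both ~ 0.5384
```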
To evaluate these Mahler measures, we first derive a general formula for certain integrals involving arbitrary powers of the logarithm using contour integration. This is done in Section 3. The multiple integral appearing in the Mahler measure computation can then be evaluated using these formulae one after the other for each variable. To systematically capture the details of this process, we lay out an iterative procedure in Section 4. This lets us express the Mahler measure as an integral of a single variable, and these integrals in turn can be related to values of certain polylogarithms at sixth roots of unity via hyperlogarithms, which we introduce in Section 2. The polylogarithm values can then be expressed in terms of the Riemann Zeta function and Dirichlet \(L\)-functions giving us the desired result.
## Acknowledgements
The author would like to thank his Ph.D. supervisor, Matilde Lalin, for her valuable guidance and support, both throughout the course of this project and beyond. The author is also grateful for support from the Institut des sciences mathematiques, the Centre de recherches mathematiques, the Natural Sciences and Engineering Research Council of Canada, and the Fonds de recherche du Quebec - Nature et technologies.
## 2. Hyperlogarithms and multiple polylogarithms
We first give a brief introduction to hyperlogarithms and multiple polylogarithms and relate them via iterated integrals. One may refer to [5],[6] and [7] for more details. We follow the same notation to denote an iterated integral
\[\int_{0}^{a_{k+1}}\frac{\mathrm{d}t}{t-a_{1}}\circ\cdots\circ\frac{\mathrm{d} t}{t-a_{k}}:=\int\limits_{P}\cdots\int\!\frac{\mathrm{d}t_{1}}{t_{1}-a_{1}} \cdots\frac{\mathrm{d}t_{k}}{t_{k}-a_{k}},\]
where the \(a_{j}\)'s are complex numbers and the path of integration \(P\) is a path \(0\to t_{1}\to t_{2}\to\cdots\to t_{k}\to a_{k+1}\) joining \(0\) and \(a_{k+1}\). The value of this integral depends on the homotopy class of \(P\) in \(\mathbb{C}\setminus\{a_{1},a_{2},\ldots,a_{k}\}\). With this notation in mind, the _hyperlogarithm_ is defined as
\[\mathrm{I}_{n_{1},\ldots,n_{m}}(a_{1}:a_{2}:\cdots:a_{m+1}):=\\ \int_{0}^{a_{m+1}}\underbrace{\frac{\mathrm{d}t}{t-a_{1}}\circ \frac{\mathrm{d}t}{t}\circ\cdots\circ\frac{\mathrm{d}t}{t}}_{n_{1}\text{ times}}\circ\underbrace{\frac{\mathrm{d}t}{t-a_{2}}\circ\frac{\mathrm{d}t}{t} \circ\cdots\circ\frac{\mathrm{d}t}{t}}_{n_{2}\text{ times}}\circ\cdots\circ \underbrace{\frac{\mathrm{d}t}{t-a_{m}}\circ\frac{\mathrm{d}t}{t}\circ\cdots \circ\frac{\mathrm{d}t}{t}}_{n_{m}\text{ times}},\]
where \(n_{1},\ldots,n_{m}\) are positive integers. We also define the _multiple polylogarithm_ of length \(m\) and weight \(w=n_{1}+n_{2}+\cdots+n_{m}\) as

\[\operatorname{Li}_{n_{1},\ldots,n_{m}}(x_{1},\ldots,x_{m}):=\sum_{1\leq k_{1}<k_{2}<\cdots<k_{m}}\frac{x_{1}^{k_{1}}x_{2}^{k_{2}}\cdots x_{m}^{k_{m}}}{k_{1}^{n_{1}}k_{2}^{n_{2}}\cdots k_{m}^{n_{m}}}.\]
The sum is absolutely convergent for \(|x_{j}|<1\), and also for \(|x_{j}|\leq 1\) if \(n_{m}\geq 2\). One can show (see [6]) the following identities
\[\operatorname{Li}_{n_{1},\ldots,n_{m}}(x_{1},\ldots,x_{m}) =(-1)^{m}\operatorname{I}_{n_{1},\ldots,n_{m}}\left(\frac{1}{x_{1}\cdots x_{m}}:\frac{1}{x_{2}\cdots x_{m}}:\cdots:\frac{1}{x_{m}}:1\right), \tag{3}\] \[\operatorname{I}_{n_{1},\ldots,n_{m}}(a_{1}:\cdots:a_{m+1}) =(-1)^{m}\operatorname{Li}_{n_{1},\ldots,n_{m}}\left(\frac{a_{2}}{a_{1}},\frac{a_{3}}{a_{2}},\ldots,\frac{a_{m+1}}{a_{m}}\right). \tag{4}\]
Note that identity (3) enables an analytic continuation of the multiple polylogarithm using the definition of the hyperlogarithm. The value of the hyperlogarithm function depends on the homotopy class of the path joining \(0\) and \(a_{m+1}\) in \(\mathbb{C}\setminus\{a_{1},\ldots,a_{m}\}\). This means that these multivalued functions may not be defined uniquely. However, we will always encounter linear combinations of these functions that yield single-valued functions. In Section 4.2, we will use the above identities to write the Mahler measure integral as polylogarithms, which would then give us certain \(L\)-values.
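As a simple illustration, for \(m=n_{1}=1\) and real \(0<x<1\), identity (3) reduces to the familiar

\[\operatorname{Li}_{1}(x)=-\log(1-x)=-\int_{0}^{1}\frac{\mathrm{d}t}{t-1/x}=-\operatorname{I}_{1}\left(\frac{1}{x}:1\right),\]

since \(\int_{0}^{1}\frac{\mathrm{d}t}{t-1/x}=\log\!\left(\frac{1}{x}-1\right)-\log\frac{1}{x}=\log(1-x)\).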
## 3. Some general integrals
We begin with the rational polynomial
\[Q_{n}(z_{1},\ldots,z_{n},y)=\left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1 }\right)\cdots\left(\frac{\overline{\omega}z_{n}+\omega}{z_{n}+1}\right)+y,\]
where
\[\omega=e^{2\pi i/m}=\cos\frac{2\pi}{m}+i\sin\frac{2\pi}{m}\]
is an \(m^{\text{th}}\)-root of unity. We wish to evaluate \(\text{m}(Q_{n})\), the Mahler measure of \(Q_{n}\). Note that the Mahler measure of the polynomial \(P_{\gamma}(y)=\gamma+y\) is given by \(\log^{+}|\gamma|\), and we may write the Mahler measure integral of \(Q_{n}\) as
\[\frac{1}{(2\pi)^{n}}\int_{-\pi}^{\pi}\cdots\int_{-\pi}^{\pi}\text{m}\left(P_{ \left(\frac{\overline{\omega}z_{1}+\omega}{z_{1}+1}\right)\cdots\left(\frac{ \overline{\omega}z_{n}+\omega}{z_{n}+1}\right)}\right)\;\text{d}\theta_{1} \cdots\;\text{d}\theta_{n},\]
where \(z_{j}=e^{i\theta_{j}}.\) Let
\[\tan\frac{\theta_{j}}{2}=\frac{x_{j}-\cos\frac{2\pi}{m}}{\sin\frac{2\pi}{m}}, \tag{5}\]
so that
\[x_{j}=\frac{\overline{\omega}z_{j}+\omega}{z_{j}+1}.\]
Differentiating equation (5) gives
\[\left(1+\tan^{2}\frac{\theta_{j}}{2}\right)\frac{\text{d}\theta_{j }}{2} =\frac{\text{d}x_{j}}{\sin\frac{2\pi}{m}},\] \[\text{d}\theta_{j} =\frac{2\sin\frac{2\pi}{m}}{x_{j}^{2}-2\cos\frac{2\pi}{m}x_{j}+1} \;\text{d}x_{j}\] \[=\frac{2\sin\frac{2\pi}{m}}{(x_{j}-\omega)(x_{j}-\overline{\omega })}\;\text{d}x_{j}.\]
Thus, the Mahler measure is given by
\[\left(\frac{\sin\frac{2\pi}{m}}{\pi}\right)^{n}\int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty}\text{m}(P_{x_{1}\cdots x_{n}})\frac{\text{d}x_{1}}{(x _{1}-\omega)(x_{1}-\overline{\omega})}\cdots\frac{\text{d}x_{n}}{(x_{n}-\omega )(x_{n}-\overline{\omega})}.\]
In this discussion, we take \(\omega\) to be the third root of unity, \(e^{2\pi i/3}\). Taking \(m=3\), the above integral can be written as
\[\left(\frac{\sqrt{3}}{2\pi}\right)^{n}\int_{-\infty}^{\infty}\cdots\int_{- \infty}^{\infty}\text{m}(P_{x_{1}\cdots x_{n}})\frac{\text{d}x_{1}}{(x_{1}^{2} +x_{1}+1)}\cdots\frac{\text{d}x_{n}}{(x_{n}^{2}+x_{n}+1)},\]
and we make the transformation \(y_{1}=x_{1},\;y_{2}=x_{1}x_{2},\;\ldots,y_{n}=x_{1}\cdots x_{n}\) to obtain
\[\left(\frac{\sqrt{3}}{2\pi}\right)^{n}\int_{y_{n}}^{*}\cdots\int_{y_{1}}^{*} \operatorname{m}(P_{y_{n}})\frac{y_{1}\;\mathrm{d}y_{1}}{(y_{1}^{2}+y_{1}+1)} \cdot\frac{y_{2}\;\mathrm{d}y_{2}}{(y_{2}^{2}+y_{2}y_{1}+y_{1}^{2})}\cdots \frac{\mathrm{d}y_{n}}{(y_{n}^{2}+y_{n}y_{n-1}+y_{n-1}^{2})}. \tag{6}\]
The limits of integration for the variables \(y_{j}\) have not been described above. We will discuss them in more detail in Section 4.1. Throughout this article, we will denote these limits using the symbol \(\int_{y_{j}}^{*}\).
In order to proceed with the computation of the multiple integral in (6), we will first evaluate certain single-variable integrals involving arbitrary powers of the logarithm. In Section 4 we will compute the integral (6) using these formulae for each variable \(y_{j}\) one by one, until the last variable \(y_{n}\), after which we may express them as polylogarithm values.
### The integral \(f_{1}(k)\)
We now wish to evaluate the following integral which we denote by \(f_{1}(k)\):
\[f_{1}(k)=\int_{0}^{\infty}\frac{t\log^{k}t}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})} \;\mathrm{d}t,\]
for any integer \(k\geq 0\) and \(a,b\in\mathbb{R}^{+}\). For this we carry out a contour integral
\[\oint_{C}\frac{z\log^{k+1}z}{(z^{2}+az+a^{2})(z^{2}+bz+b^{2})}\;\mathrm{d}z,\]
with contour as given by Figure 1, consisting of the paths \(C_{\varepsilon},\gamma_{1},C_{R}\) and \(\gamma_{2}\). Note that the contour is drawn in this way in order to skip the point \(z=0\). The integrals corresponding to the paths \(C_{\varepsilon}\) and \(C_{R}\) can be shown to vanish as \(\varepsilon\to 0\) and \(R\to\infty\). What remains is the combination of \(\gamma_{1}\) and \(\gamma_{2}\), which gives
\[\int_{0}^{\infty}\frac{t\log^{k+1}t}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})}\;\mathrm{d}t+\int_{\infty}^{0}\frac{t(\log t+2\pi i)^{k+1}}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})}\;\mathrm{d}t.\]
Writing \(\theta=2\pi/3\) so that \(2\pi=3\theta\), and using the binomial expansion, we find that the above equals

\[-\int_{0}^{\infty}\frac{t\sum_{j=1}^{k+1}\binom{k+1}{j}\log^{k+1-j}t\;(3\theta)^{j}i^{j}}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})}\;\mathrm{d}t. \tag{7}\]
By the residue theorem, this expression equals \(2\pi i\) times the sum of the residues at the poles lying inside the contour of integration. To calculate the value of \(f_{1}(k)\), we will compare the imaginary part of equation (7) with the imaginary part of the sum of residues. Collecting the purely imaginary terms in the contour integration (7) gives
\[-\sum_{j=1,\;\mathrm{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{j}\,i^{j}\,\int_{0}^{\infty}\frac{t\log^{k+1-j}t}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})}\;\mathrm{d}t\] \[= -\sum_{j=1,\;\mathrm{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{j}\,i^{j}\,f_{1}(k+1-j) \tag{8}\] \[= -3i\theta(k+1)f_{1}(k)-\sum_{j>1,\;\mathrm{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{j}\,i^{j}\,f_{1}(k+1-j).\]
Next, we compute the sum of residues at the poles of the integrand in \(f_{1}(k)\) lying inside the contour. These poles are denoted by black dots at \(z=a\omega,a\omega^{2},b\omega\) and \(b\omega^{2}\) in Figure 1. The residue calculation is given by
\[2\pi i\Bigg{[}\frac{a\omega(\log a+i\theta)^{k+1}}{(a\omega-a\omega ^{2})(a\omega-b\omega)(a\omega-b\omega^{2})}+\frac{a\omega^{2}(\log a+2i\theta) ^{k+1}}{(a\omega^{2}-a\omega)(a\omega^{2}-b\omega)(a\omega^{2}-b\omega^{2})}\\ +\frac{b\omega(\log b+i\theta)^{k+1}}{(b\omega-a\omega)(b\omega- a\omega^{2})(b\omega-b\omega^{2})}+\frac{b\omega^{2}(\log b+2i\theta)^{k+1}}{(b \omega^{2}-a\omega)(b\omega^{2}-a\omega^{2})(b\omega^{2}-b\omega)}\Bigg{]} \tag{9}\] \[= 2\pi i\left[\frac{(a\omega^{2}-b\omega)\Big{(}(\log a+i\theta) ^{k+1}-(\log b+2i\theta)^{k+1}\Big{)}+(a\omega-b\omega^{2})\Big{(}(\log b+i \theta)^{k+1}-(\log a+2i\theta)^{k+1}\Big{)}}{(\omega-\omega^{2})(a^{3}-b^{3 })}\right].\]
Figure 1. The contour for \(f_{j}(k)\): the black dots denote the poles for \(f_{1}(k)\), while the red dots denote the poles for \(f_{2}(k)\).
Again, isolating the purely imaginary terms in the sum of residues in (9) we get
\[\frac{-\pi}{\sqrt{3}(a^{3}-b^{3})}\Bigg{[}\sqrt{3}(a+b)\sum_{j=0,\; \text{even}}^{k+1}\binom{k+1}{j}(2^{j}+1)i^{j+1}\theta^{j}(\log^{k+1-j}a-\log^{k +1-j}b)\\ +(a-b)\sum_{j=1,\;\text{odd}}^{k+1}\binom{k+1}{j}(2^{j}-1)i^{j} \theta^{j}(\log^{k+1-j}a+\log^{k+1-j}b)\Bigg{]}. \tag{10}\]
Finally, we equate (8) and (10):
\[-3i\theta(k+1)f_{1}(k)-\sum_{j>1,\;\text{odd}}^{k+1}\binom{k+1}{j }(3\theta)^{j}\,i^{j}\,f_{1}(k+1-j)\\ =\frac{-\pi}{\sqrt{3}(a^{3}-b^{3})}\Bigg{[}\sqrt{3}(a+b)\sum_{j=0,\;\text{even}}^{k+1}\binom{k+1}{j}(2^{j}+1)i^{j+1}\theta^{j}(\log^{k+1-j}a- \log^{k+1-j}b)\\ +(a-b)\sum_{j=1,\;\text{odd}}^{k+1}\binom{k+1}{j}(2^{j}-1)i^{j} \theta^{j}(\log^{k+1-j}a+\log^{k+1-j}b)\Bigg{]}.\]
Recall again that \(3\theta=2\pi\). Dividing throughout by \(3\theta i(k+1)\) and solving for \(f_{1}(k)\), we obtain
\[f_{1}(k) =\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j+1}{2}} \binom{k+1}{j}(3\theta)^{j-1}f_{1}(k+1-j)\] \[+\frac{(a+b)}{2(k+1)(a^{3}-b^{3})}\sum_{j=0,\;\text{even}}^{k+1}( -1)^{\frac{j}{2}}\theta^{j}\binom{k+1}{j}(2^{j}+1)(\log^{k+1-j}a-\log^{k+1-j}b)\] \[+\frac{1}{2\sqrt{3}(k+1)(a^{2}+ab+b^{2})}\sum_{j=1,\;\text{odd}} ^{k+1}(-1)^{\frac{j+1}{2}}\theta^{j}\binom{k+1}{j}(2^{j}-1)(\log^{k+1-j}a+ \log^{k+1-j}b). \tag{11}\]
To write the above expression in a more systematic way, we will define two polynomials in a recursive manner. Observe that the expression for \(f_{1}(k)\) in (11) consists of three parts. The first involves the lower terms \(f_{1}(k+1-j)\):
\[\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j+1}{2}}\binom{k+1}{j} (3\theta)^{j-1}f_{1}(k+1-j),\]
where \(k+1-j\) is strictly smaller than \(k\); the second part, which corresponds to the even values of \(j\):
\[\frac{(a+b)}{2(k+1)(a^{3}-b^{3})}\sum_{j=0,\;\text{even}}^{k+1}(-1)^{\frac{j}{ 2}}\theta^{j}\binom{k+1}{j}(2^{j}+1)(\log^{k+1-j}a-\log^{k+1-j}b);\]
and finally the third part that corresponds to the odd values of \(j\):
\[\frac{1}{2\sqrt{3}(k+1)(a^{2}+ab+b^{2})}\sum_{j=1,\;\text{odd}}^{k+1}(-1)^{ \frac{j+1}{2}}\theta^{j}\binom{k+1}{j}(2^{j}-1)(\log^{k+1-j}a+\log^{k+1-j}b).\]
We will define two polynomials \(R_{k}(x)\) and \(S_{k}(x)\), one for the even values of \(j\) (the second part), and one for the odd values (the third part), respectively. The first part will define the recursive property of both of these polynomials. The idea is to write \(\frac{\log a}{\theta}\) as \(x\), so that we may replace
\[\theta^{j}\cdot(\log^{k+1-j}a)\quad\text{by}\quad\theta^{k+1}x^{k+1-j},\]
and similarly for \(\frac{\log b}{\theta}\). Consequently, we define
\[R_{k}(x)=\frac{1}{k+1}\sum_{j>1,\text{ odd}}^{k+1}(-1)^{\frac{j+ 1}{2}}\binom{k+1}{j}3^{j-1}R_{k+1-j}(x)\\ +\frac{1}{2(k+1)}\sum_{j=0,\text{ even}}^{k+1}(-1)^{\frac{j}{2}} \binom{k+1}{j}(2^{j}+1)x^{k+1-j}, \tag{12}\]
and
\[S_{k}(x)=\frac{1}{k+1}\sum_{j>1,\text{ odd}}^{k+1}(-1)^{\frac{j+ 1}{2}}\binom{k+1}{j}3^{j-1}S_{k+1-j}(x)\\ +\frac{1}{2(k+1)}\sum_{j=1,\text{ odd}}^{k+1}(-1)^{\frac{j+1}{2} }\binom{k+1}{j}(2^{j}-1)x^{k+1-j}. \tag{13}\]
Usually, initial values must be specified to completely determine a recursively defined family of polynomials. In our case this is unnecessary, since at \(k=0\) the recursive sums are empty and the non-recursive part of the definition already determines the initial values. For instance, we have
\[R_{0}(x)=\frac{1}{2}(2)x=x,\]
and
\[S_{0}(x)=\frac{1}{2}(-1)(1)=-\frac{1}{2}.\]
One may also note that the degree of \(R_{k}(x)\) is \(k+1\) while that of \(S_{k}(x)\) is \(k\). Lastly, we write the final expression for \(f_{1}(k)\) from (11),
\[f_{1}(k)=\frac{\theta^{k+1}(a+b)}{a^{3}-b^{3}}\Bigg{[}R_{k}\left( \frac{\log a}{\theta}\right)-R_{k}\left(\frac{\log b}{\theta}\right)\Bigg{]}\\ +\frac{\theta^{k+1}}{\sqrt{3}(a^{2}+ab+b^{2})}\Bigg{[}S_{k}\left( \frac{\log a}{\theta}\right)+S_{k}\left(\frac{\log b}{\theta}\right)\Bigg{]}. \tag{14}\]
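The recursions (12) and (13) together with the closed form (14) are easy to test numerically. The following is a minimal sketch of such a check, assuming the Python library mpmath is available; it simply transcribes (12)-(14) and compares the result with direct numerical quadrature of \(f_{1}(k)\).

```python
# Sketch: compare formula (14) with direct quadrature of f_1(k), for a=1, b=2.
from mpmath import mp, mpf, log, pi, quad, binomial, sqrt, inf

mp.dps = 25
theta = 2 * pi / 3

def R(k, x):
    # Recursion (12): recursive part over odd j > 1, non-recursive part over even j.
    s = mpf(0)
    for j in range(3, k + 2, 2):
        s += (-1) ** ((j + 1) // 2) * binomial(k + 1, j) * 3 ** (j - 1) * R(k + 1 - j, x)
    s /= (k + 1)
    for j in range(0, k + 2, 2):
        s += (-1) ** (j // 2) * binomial(k + 1, j) * (2 ** j + 1) * x ** (k + 1 - j) / (2 * (k + 1))
    return s

def S(k, x):
    # Recursion (13): recursive part over odd j > 1, non-recursive part over odd j.
    s = mpf(0)
    for j in range(3, k + 2, 2):
        s += (-1) ** ((j + 1) // 2) * binomial(k + 1, j) * 3 ** (j - 1) * S(k + 1 - j, x)
    s /= (k + 1)
    for j in range(1, k + 2, 2):
        s += (-1) ** ((j + 1) // 2) * binomial(k + 1, j) * (2 ** j - 1) * x ** (k + 1 - j) / (2 * (k + 1))
    return s

def f1_direct(k, a, b):
    g = lambda t: t * log(t) ** k / ((t * t + a * t + a * a) * (t * t + b * t + b * b))
    return quad(g, [0, 1, inf])

def f1_formula(k, a, b):
    # Formula (14)
    return (theta ** (k + 1) * (a + b) / (a ** 3 - b ** 3)
            * (R(k, log(a) / theta) - R(k, log(b) / theta))
            + theta ** (k + 1) / (sqrt(3) * (a * a + a * b + b * b))
            * (S(k, log(a) / theta) + S(k, log(b) / theta)))

for k in range(4):
    print(k, f1_direct(k, mpf(1), mpf(2)), f1_formula(k, mpf(1), mpf(2)))
```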
### The integral \(f_{2}(k)\)
Next, we evaluate an integral similar to \(f_{1}(k)\) from Section 3.1 but with a minor modification. We take \(a,b\in\mathbb{R}^{+}\) and consider the integral
\[f_{2}(k)=\int_{0}^{\infty}\frac{t\log^{k}t}{(t^{2}-at+a^{2})(t^{2}-bt+b^{2})} \;\mathrm{d}t.\]
Note the sign changes in the denominator as compared with \(f_{1}(k)\). The contour integral we evaluate in this case is
\[\oint_{C}\frac{z\log^{k+1}z}{(z^{2}-az+a^{2})(z^{2}-bz+b^{2})}\;\mathrm{d}z,\]
with contour as given by Figure 1. As before, combining the contributions along \(\gamma_{1}\) and \(\gamma_{2}\) and collecting the imaginary terms using the binomial expansion we get a term similar to (8):
\[-3i\theta(k+1)f_{2}(k)-\sum_{j>1,\;\text{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{j} \,i^{j}\,f_{2}(k+1-j). \tag{15}\]
Now we compute the residues, which are obtained at \(z=-a\omega,-a\omega^{2},-b\omega\) and \(-b\omega^{2}\). The only difference is that the arguments are now \(\delta=\pi/3\) and \(5\delta=5\pi/3\). The residue calculation is given by
\[2\pi i\Bigg{[}\frac{-a\omega(\log a+5i\delta)^{k+1}}{(-a\omega+a\omega^{2})(-a\omega+b\omega)(-a\omega+b\omega^{2})}+\frac{-a\omega^{2}(\log a+i\delta)^{k+1}}{(-a\omega^{2}+a\omega)(-a\omega^{2}+b\omega)(-a\omega^{2}+b\omega^{2})}\\ +\frac{-b\omega(\log b+5i\delta)^{k+1}}{(-b\omega+a\omega)(-b\omega+a\omega^{2})(-b\omega+b\omega^{2})}+\frac{-b\omega^{2}(\log b+i\delta)^{k+1}}{(-b\omega^{2}+a\omega)(-b\omega^{2}+a\omega^{2})(-b\omega^{2}+b\omega)}\Bigg{]}\] \[= 2\pi i\left[\frac{(a\omega^{2}-b\omega)\Big{(}(\log a+5i\delta)^{k+1}-(\log b+i\delta)^{k+1}\Big{)}+(a\omega-b\omega^{2})\Big{(}(\log b+5i\delta)^{k+1}-(\log a+i\delta)^{k+1}\Big{)}}{(\omega-\omega^{2})(a^{3}-b^{3})}\right].\]
Collecting imaginary terms together gives
\[\frac{-\pi}{\sqrt{3}(a^{3}-b^{3})}\Bigg{[}\sqrt{3}(a+b)\sum_{j=0, \;\text{even}}^{k+1}\binom{k+1}{j}(5^{j}+1)i^{j+1}\delta^{j}(\log^{k+1-j}a- \log^{k+1-j}b)\\ +(a-b)\sum_{j=1,\;\text{odd}}^{k+1}\binom{k+1}{j}(5^{j}-1)i^{j} \delta^{j}(\log^{k+1-j}a+\log^{k+1-j}b)\Bigg{]} \tag{16}\]
Equating (15) and (16) and rearranging, we obtain
\[f_{2}(k) =\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j+1}{2}} \binom{k+1}{j}(3\theta)^{j-1}f_{2}(k+1-j)\] \[+\frac{(a+b)}{2(k+1)(a^{3}-b^{3})}\sum_{j=0,\;\text{even}}^{k+1}( -1)^{\frac{j}{2}}\delta^{j}\binom{k+1}{j}(5^{j}+1)(\log^{k+1-j}a-\log^{k+1-j}b)\] \[+\frac{1}{2\sqrt{3}(k+1)(a^{2}+ab+b^{2})}\sum_{j=1,\;\text{odd}}^ {k+1}(-1)^{\frac{j-1}{2}}\delta^{j}\binom{k+1}{j}(5^{j}-1)(\log^{k+1-j}a+\log ^{k+1-j}b). \tag{17}\]
Once again, we use two polynomials: \(P_{k}(x)\) corresponding to the even indices and \(Q_{k}(x)\) to the odd indices in (17), both having a recursive part corresponding to the recursive part in (17). Note that \(3\theta=6\delta=2\pi\). As before, we replace \(\frac{\log a}{\delta}\) by \(x\), so that
\[\delta^{j}\cdot(\log^{k+1-j}a)=\delta^{k+1}x^{k+1-j},\]
and similarly for \(\frac{\log b}{\delta}\). We define
\[P_{k}(x)=\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j+1 }{2}}\binom{k+1}{j}6^{j-1}P_{k+1-j}(x)\\ +\frac{1}{2(k+1)}\sum_{j=0,\;\text{even}}^{k+1}(-1)^{\frac{j}{2}} \binom{k+1}{j}(5^{j}+1)x^{k+1-j}, \tag{18}\]
and
\[Q_{k}(x)=\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j +1}{2}}\binom{k+1}{j}6^{j-1}Q_{k+1-j}(x)\\ +\frac{1}{2(k+1)}\sum_{j=1,\;\text{odd}}^{k+1}(-1)^{\frac{j-1}{2 }}\binom{k+1}{j}(5^{j}-1)x^{k+1-j}. \tag{19}\]
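As with \(R_{0}\) and \(S_{0}\), the non-recursive parts determine the initial values; here \(P_{0}(x)=x\) and \(Q_{0}(x)=2\).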
Thus, using the above definitions in (17), we obtain
\[f_{2}(k)=\frac{\delta^{k+1}(a+b)}{a^{3}-b^{3}}\Bigg{[}P_{k} \left(\frac{\log a}{\delta}\right)-P_{k}\left(\frac{\log b}{\delta}\right) \Bigg{]}\\ +\frac{\delta^{k+1}}{\sqrt{3}(a^{2}+ab+b^{2})}\Bigg{[}Q_{k}\left( \frac{\log a}{\delta}\right)+Q_{k}\left(\frac{\log b}{\delta}\right)\Bigg{]}. \tag{20}\]
### The integral \(g_{1}(k)\)
Now we look at evaluating
\[g_{1}(k)=\int_{0}^{\infty}\frac{t(t+a)\log^{k}t}{(t^{3}-a^{3})(t^{2}+bt+b^{2})}\;\mathrm{d}t,\]
with \(a,b\in\mathbb{R}^{+}\). As before, we consider the integral
\[\oint_{C}\frac{z(z+a)\log^{k+1}z}{(z^{3}-a^{3})(z^{2}+bz+b^{2})}\;\mathrm{d}z,\]
with contour as given in Figure 2. Note that in this case, the contour skips the points \(z=0\) and \(z=a\).
The integral along \(C_{1}\) gives
\[-2\pi i\frac{\log^{k+1}a}{3(a^{2}+ab+b^{2})},\]
and over \(C_{2}\) gives
\[-2\pi i\frac{(\log a+2\pi i)^{k+1}}{3(a^{2}+ab+b^{2})}.\]
The contribution towards the imaginary terms from the two expressions above is
\[\frac{-2\pi i}{3(a^{2}+ab+b^{2})}\left[\log^{k+1}a+\sum_{j=0,\;\text{even}}^{k +1}(-1)^{\frac{j}{2}}\binom{k+1}{j}(2\pi)^{j}\log^{k+1-j}a\right].\]
The integrals along \(C_{R}\) and \(C_{\varepsilon}\) vanish in the limit, and the purely imaginary terms corresponding to the remaining paths \(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\) give an expression similar to (8), but with \(g_{1}(k)\):
\[-3i\theta(k+1)g_{1}(k)-\sum_{j>1,\;\text{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{j }\,i^{j}\,g_{1}(k+1-j). \tag{21}\]
The next step is calculating the residues, which are obtained at \(a\omega,a\omega^{2},b\omega\) and \(b\omega^{2}\) (denoted by black dots in Figure 2). The residue calculation gives
\[\frac{2\pi i}{3(a^{3}-b^{3})}\Bigg{[}(a\omega^{2}-b\omega)(\log a+i\theta)^{k +1}-(a\omega-b\omega^{2})(\log a+2i\theta)^{k+1}\\ +(b\omega+a)(\omega^{2}-1)(\log b+i\theta)^{k+1}+(b\omega^{2}+a) (\omega-1)(\log b+2i\theta)^{k+1}\Bigg{]}.\]
Figure 2. The contour for \(g_{j}(k)\): the black dots denote the poles for \(g_{1}(k)\), while the red dots denote the poles for \(g_{2}(k)\). Note that \(z=a\) is outside the contour.
Collecting the imaginary terms together, we obtain from above
\[\frac{2\pi i}{6(a^{3}-b^{3})}\Bigg{[}(a-b)\sum_{j=0,\text{ even}}^{k+1}(-1)^{\frac{j}{2}}\theta^{j}\binom{k+1}{j}(2^{j}+1)\Big{(}\!\log^{k+1-j}a-3 \log^{k+1-j}b\Big{)}\\ +\sqrt{3}(a+b)\sum_{j=1,\text{ odd}}^{k+1}(-1)^{\frac{j-1}{2}} \theta^{j}\binom{k+1}{j}(2^{j}-1)\Big{(}\!\log^{k+1-j}a-\log^{k+1-j}b\Big{)} \Bigg{]}. \tag{22}\]
Finally, equating (21) and (22) and rearranging, we have
\[g_{1}(k)=\frac{1}{k+1}\sum_{j>1,\text{ odd}}^{k+1}(-1)^{\frac{j+ 1}{2}}\binom{k+1}{j}(3\theta)^{j-1}g_{1}(k+1-j)\\ -\frac{1}{6(k+1)(a^{2}+ab+b^{2})}\left[\sum_{j=0,\text{ even}}^{k+ 1}(-1)^{\frac{j}{2}}\theta^{j}\binom{k+1}{j}(2^{j}+1)(\log^{k+1-j}a-3\log^{k+1 -j}b)\right]\\ -\frac{a+b}{2\sqrt{3}(k+1)(a^{3}-b^{3})}\left[\sum_{j=1,\text{ odd}}^{k+1}(-1)^{\frac{j-1}{2}}\theta^{j}\binom{k+1}{j}(2^{j}-1)(\log^{k+1-j}a- \log^{k+1-j}b)\right]\\ -\frac{1}{3(k+1)(a^{2}+ab+b^{2})}\left[\log^{k+1}a+\sum_{j=0,\text { even}}^{k+1}(-1)^{\frac{j}{2}}\binom{k+1}{j}(2\pi)^{j}\log^{k+1-j}a\right]. \tag{23}\]
Note that this time in (23), we have an additional sum along with the usual three parts. To incorporate this extra sum, we need to define another polynomial at this stage:
\[Y_{k}(x)=\frac{1}{k+1}\sum_{j>1,\text{ odd}}^{k+1}(-1)^{\frac{j +1}{2}}\binom{k+1}{j}3^{j-1}Y_{k+1-j}(x)\\ -\frac{1}{(k+1)}\left[x^{k+1}+\sum_{j=0,\text{ even}}^{k+1}(-1)^ {\frac{j}{2}}\binom{k+1}{j}3^{j}x^{k+1-j}\right].\]
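For instance, the recursive sum is empty at \(k=0\) and \(Y_{0}(x)=-2x\).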
This means, using the definitions of the polynomials \(R_{k}\) in (12) and \(S_{k}\) in (13), we can write
\[g_{1}(k)=\frac{\theta^{k+1}}{3(a^{2}+ab+b^{2})}\left[-R_{k}\left( \frac{\log a}{\theta}\right)+3R_{k}\left(\frac{\log b}{\theta}\right)+Y_{k} \left(\frac{\log a}{\theta}\right)\right]\\ +\frac{\theta^{k+1}(a+b)}{\sqrt{3}(a^{3}-b^{3})}\left[S_{k}\left( \frac{\log a}{\theta}\right)-S_{k}\left(\frac{\log b}{\theta}\right)\right]. \tag{24}\]
In our calculations, \(a\) will invariably be equal to \(1\), so that all the \(\log a\) terms above vanish.
### The integral \(g_{2}(k)\)
The final integral we will consider is
\[g_{2}(k)=\int_{0}^{\infty}\frac{t(t-a)\log^{k}t}{(t^{3}+a^{3})(t^{2}-bt+b^{2})}\;\mathrm{d}t,\]
again with \(a,b\in\mathbb{R}^{+}\), and the corresponding contour integral
\[\oint_{C}\frac{z(z-a)\log^{k+1}z}{(z^{3}+a^{3})(z^{2}-bz+b^{2})}\;\mathrm{d}z.\]
In this case the integrals along \(C_{1}\) and \(C_{2}\) are zero; however, there is an additional residue contribution from the pole at \(z=-a\). The contour integral evaluates, as before, to
\[-3i\theta(k+1)g_{2}(k)-\sum_{j>1,\;\text{odd}}^{k+1}\binom{k+1}{j}(3\theta)^{ j}\,i^{j}\,g_{2}(k+1-j), \tag{25}\]
and the residues at \(z=-a\omega,-a\omega^{2},-b\omega,-b\omega^{2}\) and \(-a\) (denoted by red dots in Figure 2) are given by
\[\frac{2\pi i}{3(a^{3}-b^{3})}\Bigg{[}- (\log a+5i\delta)^{k+1}(a\omega^{2}-b\omega)-(\log a+i\delta)^{k+ 1}(a\omega-b\omega^{2})\] \[+(\log b+5i\delta)^{k+1}(\omega^{2}-1)(b\omega+a)+(\log b+i \delta)^{k+1}(\omega-1)(b\omega^{2}+a)\Bigg{]}\] \[+\frac{4\pi i(\log a+\pi i)^{k+1}}{3(a^{2}+ab+b^{2})}.\]
Again, we collect the purely imaginary terms in the above expression
\[\frac{2\pi i}{6(a^{3}-b^{3})}\Bigg{[}(a-b)\sum_{j=0,\;\text{even }}^{k+1}\binom{k+1}{j}i^{j}\delta^{j}(5^{j}+1)(\log^{k+1-j}a-3\log^{k+1-j}b)\\ -\sqrt{3}(a+b)\sum_{j=1,\;\text{odd}}^{k+1}\binom{k+1}{j}i^{j-1} \delta^{j}(5^{j}-1)(\log^{k+1-j}a-\log^{k+1-j}b)\Bigg{]}\\ +\frac{4\pi i}{3(a^{2}+ab+b^{2})}\sum_{j=0,\;\text{even}}^{k+1} \binom{k+1}{j}i^{j}\pi^{j}\log^{k+1-j}a. \tag{26}\]
Therefore, by equating (25) and (26), we get
\[g_{2}(k)=\frac{1}{k+1}\sum_{j>1,\;\text{odd}}^{k+1}(-1)^{\frac{j+1}{2}}\binom{k+1}{j}(3\theta)^{j-1}g_{2}(k+1-j)\\ -\frac{1}{6(k+1)(a^{2}+ab+b^{2})}\left[\sum_{j=0,\;\text{even}}^{k+1}(-1)^{\frac{j}{2}}\delta^{j}\binom{k+1}{j}(5^{j}+1)(\log^{k+1-j}a-3\log^{k+1-j}b)\right]\\ +\frac{a+b}{2\sqrt{3}(k+1)(a^{3}-b^{3})}\left[\sum_{j=1,\;\text{odd}}^{k+1}(-1)^{\frac{j-1}{2}}\delta^{j}\binom{k+1}{j}(5^{j}-1)(\log^{k+1-j}a-\log^{k+1-j}b)\right]\\ -\frac{2}{3(k+1)(a^{2}+ab+b^{2})}\left[\sum_{j=0,\;\text{even}}^{k+1}(-1)^{\frac{j}{2}}\binom{k+1}{j}\pi^{j}\log^{k+1-j}a\right]. \tag{27}\]
Once again, owing to the presence of an additional sum in (27), we define another recursive polynomial:
\[Z_{k}(x)=\frac{1}{k+1}\sum_{j>1,\;\mathrm{odd}}^{k+1}(-1)^{\frac{j +1}{2}}\binom{k+1}{j}6^{j-1}Z_{k+1-j}(x)\\ -\frac{2}{(k+1)}\sum_{j=0,\;\mathrm{even}}^{k+1}(-1)^{\frac{j}{2} }\binom{k+1}{j}3^{j}x^{k+1-j}.\]
Finally, using this and the polynomials \(P_{k}\) and \(Q_{k}\) defined in (18) and (19) respectively, we have
\[g_{2}(k)=\frac{\delta^{k+1}}{3(a^{2}+ab+b^{2})}\left[-P_{k}\left( \frac{\log a}{\delta}\right)+3P_{k}\left(\frac{\log b}{\delta}\right)+Z_{k} \left(\frac{\log a}{\delta}\right)\right]\\ +\frac{\delta^{k+1}(a+b)}{\sqrt{3}(a^{3}-b^{3})}\left[Q_{k}\left( \frac{\log a}{\delta}\right)-Q_{k}\left(\frac{\log b}{\delta}\right)\right]. \tag{28}\]
## 4. Computing the Mahler measure
### The iteration step
Now that we have explicit formulae for the integrals in the previous section, we proceed to calculate the Mahler measure. Recall from (6) that, up to the prefactor \(\left(\frac{\sqrt{3}}{2\pi}\right)^{n}\), the Mahler measure is given by the following multi-variable integral:
\[\int_{x_{n}=-\infty}^{\infty}\cdots\int_{x_{1}=-\infty}^{\infty}\mathrm{m}(P_{ x_{1}\cdots x_{n}})\frac{\mathrm{d}x_{1}}{(x_{1}^{2}+x_{1}+1)}\cdots\frac{ \mathrm{d}x_{n}}{(x_{n}^{2}+x_{n}+1)},\]
and we make the transformation \(y_{1}=x_{1},\;y_{2}=x_{1}x_{2},\;\ldots,y_{n}=x_{1}\cdots x_{n}\) to obtain
\[\int_{y_{n}}^{*}\cdots\int_{y_{1}}^{*}\mathrm{m}(P_{y_{n}})\frac{y_{1}\; \mathrm{d}y_{1}}{(y_{1}^{2}+y_{1}+1)}\cdot\frac{y_{2}\;\mathrm{d}y_{2}}{(y_{2} ^{2}+y_{2}y_{1}+y_{1}^{2})}\cdots\frac{\mathrm{d}y_{n}}{(y_{n}^{2}+y_{n}y_{n-1 }+y_{n-1}^{2})}, \tag{29}\]
where \(\mathrm{m}(P_{x})=\log^{+}|x|\) and \(\int_{y_{j}}^{*}\) is the symbol we use to denote the limits for each of the variables \(y_{j}\). For \(k\geq 2\), since \(y_{k}=x_{k}y_{k-1}\), we cannot specify the exact limits of \(y_{k}\) without knowing the limits of the preceding variable \(y_{k-1}\). However, we know \(y_{1}=x_{1}\) varies from \(-\infty\) to \(\infty\), which will determine the limits of the variable \(y_{2}\), and in turn of \(y_{3},y_{4}\) and so on. Starting with \(y_{1}\), we evaluate each integral with respect to the variable \(y_{k}\), using the formulae obtained in Section 3 at each step. For \(1\leq k\leq n-1\), let the \(k^{\mathrm{th}}\) step denote the stage when \((k-1)\) integrals have been performed. Then we make the following claim:
**Claim 1**.: _On evaluating the \(n\)-fold integral (29) starting with the variable \(y_{1}\), the integral at the \(k^{\mathrm{th}}\) step can be written as sums of integrals \(J\) given below:_
\[\int_{y_{n}}^{*}\cdots\underbrace{\left(\int_{y_{k+1}}^{*}\int_{y _{k}=-\infty}^{\infty}G(y_{k},y_{k+1})\;\mathrm{d}y_{k}\;\;\frac{y_{k+1}\; \mathrm{d}y_{k+1}}{(y_{k+1}^{2}+y_{k+1}y_{k+2}+y_{k+2}^{2})}\right)}_{J}\cdots\\ \cdots\frac{y_{n-1}\;\mathrm{d}y_{n-1}}{(y_{n-1}^{2}+y_{n-1}y_{n }+y_{n}^{2})}\;\mathrm{m}(P_{y_{n}})\;\mathrm{d}y_{n}, \tag{30}\]
where \(G(t,u)\) has one of the following forms:
\[\frac{t\log^{m}|t|}{(t^{2}+t+1)(t^{2}+ut+u^{2})},\]
or
\[\frac{t(t+1)\log^{m}|t|}{(t^{3}-1)(t^{2}+ut+u^{2})},\]
for some integer \(m\geq 0\).
Proof of Claim: We proceed by induction. First note that since \(y_{1}=x_{1}\), the interval of integration for \(y_{1}\) is from \(-\infty\) to \(\infty\). Thus, by taking
\[G(y_{1},y_{2})=\frac{y_{1}}{(y_{1}^{2}+y_{1}+1)(y_{2}^{2}+y_{1}y_{2}+y_{1}^{2} )},\]
the claim is verified for the base case when \(k=1\). Now suppose the claim is true for all \(l\) such that \(1\leq l\leq k\). We will show it is true for \(l=k+1\) by evaluating integral \(J\) with respect to \(y_{k}\)
\[J=\int_{y_{k+1}}^{*}\left(\int_{y_{k}=-\infty}^{\infty}G(y_{k},y_{k+1})\; \mathrm{d}y_{k}\right)\frac{y_{k+1}\;\mathrm{d}y_{k+1}}{(y_{k+1}^{2}+y_{k+1}y_ {k+2}+y_{k+2}^{2})}. \tag{31}\]
Consider the case when \(G(y_{k},y_{k+1})\) is of the form
\[G(t,u)=\frac{t\log^{m}|t|}{(t^{2}+t+1)(t^{2}+ut+u^{2})}.\]
The evaluation of this integral depends on the sign of the variables \(y_{k}\) and \(y_{k+1}\). Recall that
\[y_{k+1}=x_{k+1}\cdot y_{k},\]
and both \(x_{k+1}\) and \(y_{k}\) vary from \(-\infty\) to \(\infty\). To determine the limits of \(y_{k+1}\), we fix a value of \(y_{k}\) in the given range and see how \(y_{k+1}=x_{k+1}y_{k}\) varies as \(x_{k+1}\) varies. We have the following four intervals:
1. When \(x_{k+1}\geq 0\) and \(y_{k}\geq 0\). If \(y_{k}=c\geq 0\), then as \(x_{k+1}\) varies from \(0\) to \(\infty\), \(y_{k+1}\) must also vary from \(0\) to \(\infty\). The limits in this case are \[\int_{y_{k+1}=0}^{\infty}\;\int_{y_{k}=0}^{\infty}.\]
2. When \(x_{k+1}\leq 0\) and \(y_{k}\leq 0\). We fix \(y_{k}=c\leq 0\), then as \(x_{k+1}\) is negative and varies from \(-\infty\) to \(0\), \(y_{k+1}=cx_{k+1}\) is positive and must vary from \(\infty\) to \(0\). Here, the limits are \[\int_{y_{k+1}=\infty}^{0}\;\int_{y_{k}=-\infty}^{0}.\]
3. When \(x_{k+1}\leq 0\) and \(y_{k}\geq 0\). Here \(y_{k}=c\geq 0\), and \(x_{k+1}\) varies from \(-\infty\) to \(0\). This means that \(y_{k+1}\) must vary from \(-\infty\) to \(0\) as well and the limits in this case are \[\int_{y_{k+1}=-\infty}^{0}\;\int_{y_{k}=0}^{\infty}.\]
4. When \(x_{k+1}\geq 0\) and \(y_{k}\leq 0\). Here, if \(y_{k}=c\leq 0\) then as \(x_{k+1}\) varies from \(0\) to \(\infty\), we have \(y_{k+1}\) to be negative varying from \(0\) to \(-\infty\). The limits are \[\int_{y_{k+1}=0}^{-\infty}\;\int_{y_{k}=-\infty}^{0}.\]
Evaluating integral \(J\) over all four intervals of \(y_{k}\) and \(y_{k+1}\) (after making the transformation \(y_{k}\to-y_{k}\) if needed) gives
\[\int_{y_{k+1}=0}^{\infty}\Bigg{(}\overbrace{\int_{y_{k}=0}^{\infty }G(y_{k},y_{k+1})\;\mathrm{d}y_{k}}^{I_{1}}+\overbrace{\int_{y_{k}=0}^{\infty }-G(-y_{k},y_{k+1})\;\mathrm{d}y_{k}}^{I_{2}}\Bigg{)}\frac{y_{k+1}\;\mathrm{d}y _{k+1}}{(y_{k+1}^{2}+y_{k+1}y_{k+2}+y_{k+2}^{2})}\\ +\int_{y_{k+1}=-\infty}^{0}\Bigg{(}\underbrace{\int_{y_{k}=0}^{ \infty}G(y_{k},y_{k+1})\;\mathrm{d}y_{k}}_{I_{3}}+\underbrace{\int_{y_{k}=0}^{ \infty}-G(-y_{k},y_{k+1})\;\mathrm{d}y_{k}}_{I_{4}}\Bigg{)}\frac{y_{k+1}\; \mathrm{d}y_{k+1}}{(y_{k+1}^{2}+y_{k+1}y_{k+2}+y_{k+2}^{2})}, \tag{32}\]
where integral \(I_{j}\) corresponds to interval \((j)\) in the list of four intervals discussed above. We will show that after evaluating each of these integrals with respect to \(y_{k}\), the sum of the integrals corresponding to \(I_{1}\) and \(I_{2}\) is the same as that for \(I_{3}\) and \(I_{4}\). This would mean that after integrating with respect to \(y_{k}\), we can write the above sum of integrals as a single integral with respect to \(y_{k+1}\) varying from \(-\infty\) to \(\infty\). Indeed, if we can calculate
\[f(m)=\int_{0}^{\infty}\frac{t\log^{m}t}{(t^{2}+at+a^{2})(t^{2}+bt+b^{2})}\; \mathrm{d}t,\]
for \(a,b\in\mathbb{R}\) and \(m\in\mathbb{Z}_{\geq 0}\), then we can evaluate the integral of \(G(t,u)\) with respect to \(t\) by plugging in \(a=\pm 1\) and \(b=\pm y_{k+1}\) as the case may be. Note that integrals \(f_{1}(m)\) and \(f_{2}(m)\) computed in Sections 3.1 and 3.2 give us \(f(m)\) for the cases \(a,b>0\) and \(a,b<0\) respectively. We will now write a more general expression for \(f(m)\) for any \(a\) and \(b\) using the same contour as in Figure 1 and the contour integral
\[\oint_{C}\frac{z\log^{m+1}z}{(z^{2}+az+a^{2})(z^{2}+bz+b^{2})}\;\mathrm{d}z.\]
The purely imaginary term in the contour integration above is given by an expression similar to equation (8) but with \(f(m)\):
\[-3i\theta(m+1)f(m)-\sum_{j>1,\;\mathrm{odd}}^{m+1}\binom{m+1}{j}(3\theta)^{j} \,i^{j}\,f(m+1-j), \tag{33}\]
which is independent of the values of \(a\) and \(b\). Thus, when one of \(a\) and \(b\) is negative, the only change in our computations from Section 3.1 occurs in the arguments of the poles in the residue calculation. Going back to equation (9), this residue calculation can be written for general \(a,b\in\mathbb{R}\) as
\[2\pi i\left[\frac{(a\omega^{2}-b\omega)\Big{(}(\log|a|+i\theta_{1})^{m+1}-(\log|b|+i\gamma_{2})^{m+1}\Big{)}}{(\omega-\omega^{2})(a^{3}-b^{3})}\\ +\,\frac{(a\omega-b\omega^{2})\Big{(}(\log|b|+i\gamma_{1})^{m+1}-(\log|a|+i\theta_{2})^{m+1}\Big{)}}{(\omega-\omega^{2})(a^{3}-b^{3})}\right],\]
where \(\theta_{1},\theta_{2}\) are the arguments of \(a\omega,a\omega^{2}\) respectively, and \(\gamma_{1},\gamma_{2}\) are those of \(b\omega,b\omega^{2}\). For example, if \(a\) is positive, we have
\[\theta_{1}=2\pi/3\quad\text{and}\quad\theta_{2}=4\pi/3,\]
and if \(a\) is negative, then
\[\theta_{1}=5\pi/3\quad\text{and}\quad\theta_{2}=\pi/3,\]
and similarly for \(b\) and \(\gamma_{j}\). We isolate the purely imaginary terms in the expression above to get
\[\frac{-\pi}{\sqrt{3}(a^{3}-b^{3})}\Bigg{[}\sqrt{3}(a+b)\sum_{j=0,\text{ even}}^{m+1}\binom{m+1}{j}i^{j+1}\bigg{(}(\theta_{1}^{j}+\theta_{2}^{j})\log^{m+1-j}|a|-(\gamma_{1}^{j}+\gamma_{2}^{j})\log^{m+1-j}|b|\bigg{)}\\ +(a-b)\sum_{j=1,\text{ odd}}^{m+1}\binom{m+1}{j}i^{j}\bigg{(}(\theta_{1}^{j}-\theta_{2}^{j})\log^{m+1-j}|a|+(\gamma_{1}^{j}-\gamma_{2}^{j})\log^{m+1-j}|b|\bigg{)}\Bigg{]}. \tag{34}\]
This forms the purely imaginary term in the sum of residues. We equate (34) to equation (33), rearrange and solve for \(f(m)\) to obtain
\[f(m)=\frac{1}{m+1}\sum_{j>1,\text{ odd}}^{m+1}(-1)^{\frac{j+1}{2}}\binom{m+1}{j}(3\theta)^{j-1}f(m+1-j)\\ +\frac{(a+b)}{2(m+1)(a^{3}-b^{3})}\sum_{j=0,\text{ even}}^{m+1}(-1)^{\frac{j}{2}}\binom{m+1}{j}\bigg{(}(\theta_{1}^{j}+\theta_{2}^{j})\log^{m+1-j}|a|-(\gamma_{1}^{j}+\gamma_{2}^{j})\log^{m+1-j}|b|\bigg{)}\\ +\frac{1}{2\sqrt{3}(m+1)(a^{2}+ab+b^{2})}\sum_{j=1,\text{ odd}}^{m+1}(-1)^{\frac{j+1}{2}}\binom{m+1}{j}\bigg{(}(\theta_{1}^{j}-\theta_{2}^{j})\log^{m+1-j}|a|+(\gamma_{1}^{j}-\gamma_{2}^{j})\log^{m+1-j}|b|\bigg{)}. \tag{35}\]
Recall that to compute the integrals \(I_{j}\) in (32), we have \(a=\pm 1\) and \(b=\pm y_{k+1}\) as shown in Table 1 along with the corresponding values of \(\theta_{1},\theta_{2},\gamma_{1}\) and \(\gamma_{2}\). Plugging in these values of \(a\) and \(b\) into equation (35), we may note that
\[f(m)\bigg{|}_{I_{1}}+f(m)\bigg{|}_{I_{2}}=f(m)\bigg{|}_{I_{3}}+f(m)\bigg{|}_{ I_{4}}.\]
This means that the results of the integrations \(I_{1}+I_{2}\) and \(I_{3}+I_{4}\) are identical. Therefore, we can simply evaluate the sum of the integrals \(I_{1}\) and \(I_{2}\) with respect to \(y_{k}\) and then evaluate this integral with respect to \(y_{k+1}\) over a single interval varying from \(-\infty\) to \(\infty\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(a,b\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\gamma_{1}\) & \(\gamma_{2}\) \\ \hline \(I_{1}\) & \(a=1,b=y_{k+1}>0\) & \(2\pi/3\) & \(4\pi/3\) & \(2\pi/3\) & \(4\pi/3\) \\ \hline \(I_{2}\) & \(a=-1,b=-y_{k+1}<0\) & \(5\pi/3\) & \(\pi/3\) & \(5\pi/3\) & \(\pi/3\) \\ \hline \(I_{3}\) & \(a=1,b=y_{k+1}<0\) & \(2\pi/3\) & \(4\pi/3\) & \(5\pi/3\) & \(\pi/3\) \\ \hline \(I_{4}\) & \(a=-1,b=-y_{k+1}>0\) & \(5\pi/3\) & \(\pi/3\) & \(2\pi/3\) & \(4\pi/3\) \\ \hline \end{tabular}
\end{table}
Table 1. The values of \(a\) and \(b\) and the respective arguments.

The same argument follows in the case where \(G(y_{k},y_{k+1})\) is of the form
\[G(t,u)=\frac{t(t+1)\log^{m}|t|}{(t^{3}-1)(t^{2}+ut+u^{2})}.\]
Plugging \(a=\pm 1\) and \(b=\pm y_{k+1}\) into the formulae in Section 3, and after multiplying by the remaining term
\[\frac{y_{k+1}}{(y_{k+1}^{2}+y_{k+1}y_{k+2}+y_{k+2}^{2})},\]
the resulting integrand in both these cases will again be of the form \(G(y_{k+1},y_{k+2})\). This means that integral \(J\) in (31) can be evaluated with respect to \(y_{k}\) and written as a sum of integrals of the following form
\[\int_{y_{k+1}=-\infty}^{\infty}G(y_{k+1},y_{k+2})\;\mathrm{d}y_{k+1}.\]
Going back to the original integral in (30), we have shown that after the \((k+1)^{\mathrm{th}}\) step, this integral can be further written as a sum of integrals of the form
\[\int_{y_{n}}^{*}\cdots\left(\int_{y_{k+2}}^{*}\int_{y_{k+1}=- \infty}^{\infty}G(y_{k+1},y_{k+2})\;\mathrm{d}y_{k+1}\;\;\frac{y_{k+2}\; \mathrm{d}y_{k+2}}{(y_{k+2}^{2}+y_{k+2}y_{k+3}+y_{k+3}^{2})}\right)\cdots\\ \cdots\frac{y_{n-1}\;\mathrm{d}y_{n-1}}{(y_{n-1}^{2}+y_{n-1}y_{n }+y_{n}^{2})}\;\mathrm{m}(P_{y_{n}})\;\mathrm{d}y_{n},\]
proving the claim for the case \(l=k+1\). Therefore, by induction, the claim is true for all \(1\leq k\leq n-1\).
We may thus apply the above procedure iteratively up to the last variable. Indeed, when \(k=n-1\), we will have terms of the form
\[\int_{y_{n}}^{*}\left(\int_{y_{n-1}=-\infty}^{\infty}G(y_{n-1},y_{n})\; \mathrm{d}y_{n-1}\right)\mathrm{m}(P_{y_{n}})\;\mathrm{d}y_{n}.\]
We may integrate once again with respect to \(y_{n-1}\), as done in the proof of Claim 1. This will leave us with just one variable \(y_{n}\) and the limits of the integral will simply be \(\int_{y_{n}=-\infty}^{\infty}\). More precisely, if we denote by \(F(k)\) the following integral over \(k\) variables,
\[F(k):=\int_{y_{k}}^{*}\cdots\int_{y_{1}}^{*}\mathrm{m}(P_{y_{k}})\frac{y_{1} \;\mathrm{d}y_{1}}{(y_{1}^{2}+y_{1}+1)}\cdot\frac{y_{2}\;\mathrm{d}y_{2}}{(y_ {2}^{2}+y_{2}y_{1}+y_{1}^{2})}\cdots\frac{\mathrm{d}y_{k}}{(y_{k}^{2}+y_{k}y_{ k-1}+y_{k-1}^{2})}, \tag{36}\]
then, after the final iteration, we can write for \(n\geq 1\) and \(k=2n\),
\[F(2n)=\sum_{h=1}^{n}a_{n,h-1}\;\left(\frac{\pi}{3}\right)^{2n-2h }\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3}-1}\log^ {2h-1}|y_{k}|\;\mathrm{d}y_{k}\\ +\sum_{h=0}^{n-1}b_{n,h}\;\left(\frac{\pi}{3}\right)^{2n-2h-1} \int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{y_{k}^{2} +y_{k}+1}\;\mathrm{d}y_{k}, \tag{37}\]
and for \(n\geq 0\) and \(k=2n+1\), we write
\[F(2n+1)=\sum_{h=1}^{n}c_{n,h-1}\ \left(\frac{\pi}{3}\right)^{2n-2h+1} \int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3}-1}\log^{2 h-1}|y_{k}|\ \mathrm{d}y_{k}\\ +\sum_{h=0}^{n}d_{n,h}\ \left(\frac{\pi}{3}\right)^{2n-2h}\int_{- \infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{y_{k}^{2}+y_{k}+1 }\,\mathrm{d}y_{k}, \tag{38}\]
where \(a_{r,s}\,,b_{r,s}\,,c_{r,s},d_{r,s}\) are real numbers which will be defined recursively in Section 4.3.
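For instance, for \(n=0\) (so \(k=1\)) there is no iteration to perform, and (38) reduces to

\[F(1)=d_{0,0}\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{1}})\,\frac{\mathrm{d}y_{1}}{y_{1}^{2}+y_{1}+1},\]

which agrees with the definition (36) exactly when \(d_{0,0}=1\), the initial value appearing in Theorem 1.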
### Expressing the integrals in terms of \(L\)-functions
Here we write the Mahler measure integrals obtained in equations (37) and (38) in terms of polylogarithm values which can in turn be written as special values of the Riemann zeta function and a Dirichlet \(L\)-function.
Note that \(\mathrm{m}(P_{y_{k}})=\log^{+}|y_{k}|\), and we can write
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3}-1}\log^ {2h-1}|y_{k}|\ \mathrm{d}y_{k}=\int_{0}^{1}\log^{2h}t\,\frac{1+t}{1-t^{3}}\ \mathrm{d}t+\int_{0}^{1}\log^{2h}t\,\frac{1-t}{1+t^{3}}\ \mathrm{d}t,\]
and
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{y_{k}^{2 }+y_{k}+1}\ \mathrm{d}y_{k}=-\int_{0}^{1}\frac{\log^{2h+1}t}{t^{2}+t+1}\ \mathrm{d}t-\int_{0}^{1}\frac{\log^{2h+1}t}{t^{2}-t+1}\ \mathrm{d}t.\]
Using the following expansions
\[\frac{1+t}{1-t^{3}} =\frac{1}{3}\left(\frac{2}{1-t}+\frac{1}{t-\omega}+\frac{1}{t- \omega^{2}}\right),\] \[\frac{1-t}{1+t^{3}} =\frac{1}{3}\left(\frac{2}{t+1}-\frac{1}{t+\omega}-\frac{1}{t+ \omega^{2}}\right),\] \[\frac{1}{t^{2}+t+1} =\frac{1}{i\sqrt{3}}\left(\frac{1}{t-\omega}-\frac{1}{t-\omega^{2 }}\right),\] \[\frac{1}{t^{2}-t+1} =\frac{1}{i\sqrt{3}}\left(\frac{1}{t+\omega^{2}}-\frac{1}{t+ \omega}\right),\]
where \(\omega=e^{2\pi i/3}\) is a third root of unity, we may write the Mahler measure in terms of hyperlogarithms as follows. We can write
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^ {3}-1}\log^{2h-1}|y_{k}|\ \mathrm{d}y_{k}=\frac{1}{3}\int_{0}^{1}\log^{2h}t \left(-\frac{2}{t-1}+\frac{1}{t-\omega}+\frac{1}{t-\omega^{2}}\right)\ \mathrm{d}t\\ +\frac{1}{3}\int_{0}^{1}\log^{2h}t\left(\frac{2}{t+1}-\frac{1}{t +\omega}-\frac{1}{t+\omega^{2}}\right)\ \mathrm{d}t.\]
Using the definition of hyperlogarithms as given in Section 2 and identity (4), for \(a\in\mathbb{C}^{*}\), we have
\[\int_{0}^{1}\log^{2h}t\,\frac{1}{t-a}\ \mathrm{d}t =(-1)^{2h}(2h)!\int_{0}^{1}\frac{\mathrm{d}t}{t-a}\circ\overbrace{ \frac{\mathrm{d}t}{t}\circ\cdots\circ\frac{\mathrm{d}t}{t}}^{2h\ \mathrm{times}}\] \[=(2h)!\cdot\mathrm{I}_{2h+1}(a,1)=-(2h)!\operatorname{Li}_{2h+1}( 1/a).\]
This gives, for \(h\geq 1\),
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3} -1}\log^{2h-1}|y_{k}|\;\mathrm{d}y_{k}=-\frac{(2h)!}{3}\Big{(}-2\mathrm{Li}_{2h +1}(1)+\mathrm{Li}_{2h+1}(\omega^{2})+\mathrm{Li}_{2h+1}(\omega)\\ +2\mathrm{Li}_{2h+1}(-1)-\mathrm{Li}_{2h+1}(-\omega^{2})-\mathrm{ Li}_{2h+1}(-\omega)\Big{)}.\]
We have the following identities:
\[\mathrm{Li}_{2h+1}(1) =\zeta(2h+1),\] \[\mathrm{Li}_{2h+1}(-1) =-\zeta(2h+1)\left(1-\frac{1}{4^{h}}\right),\] \[\mathrm{Li}_{2h+1}(\omega)+\mathrm{Li}_{2h+1}(\omega^{2}) =-\zeta(2h+1)\left(1-\frac{1}{9^{h}}\right),\] \[\mathrm{Li}_{2h+1}(-\omega)+\mathrm{Li}_{2h+1}(-\omega^{2}) =\zeta(2h+1)\left(1-\frac{1}{9^{h}}\right)\left(1-\frac{1}{4^{h} }\right).\]
We will show the third identity, and the rest can be proved in a similar manner. Since \(2h+1\geq 3\), the sum
\[\mathrm{Li}_{2h+1}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{2h+1}}\]
converges absolutely for \(|z|\leq 1\) and we are free to change the order of the terms. We write
\[\mathrm{Li}_{2h+1}(\omega)+\mathrm{Li}_{2h+1}(\omega^{2}) =\sum_{j=1}^{\infty}\frac{\omega^{j}+\omega^{2j}}{j^{2h+1}}\] \[=\sum_{j=1}^{\infty}\frac{2\cos(2\pi j/3)}{j^{2h+1}}\] \[=-\frac{1}{1^{2h+1}}-\frac{1}{2^{2h+1}}+\frac{2}{3^{2h+1}}-\frac{ 1}{4^{2h+1}}-\cdots\] \[=-\left(\frac{1}{1^{2h+1}}+\frac{1}{2^{2h+1}}+\frac{1}{3^{2h+1}} +\cdots\right)+3\left(\frac{1}{3^{2h+1}}+\frac{1}{6^{2h+1}}+\frac{1}{9^{2h+1}} +\cdots\right)\] \[=-\zeta(2h+1)+\frac{1}{3^{2h}}\cdot\zeta(2h+1)\] \[=-\zeta(2h+1)\left(1-\frac{1}{9^{h}}\right),\]
as desired.
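Although these identities are elementary, they are also easy to test numerically; the following is a minimal sketch of such a check, assuming the Python library mpmath is available (its polylog and zeta routines are used).

```python
# Sketch: numerically check the polylogarithm identities above for h = 2.
from mpmath import mp, polylog, zeta, exp, pi, mpf

mp.dps = 25
h = 2
s = 2 * h + 1                 # an odd integer >= 3
w = exp(2j * pi / 3)          # primitive third root of unity

# Li_s(w) + Li_s(w^2) = -zeta(s) (1 - 1/9^h); imaginary parts cancel
print(polylog(s, w) + polylog(s, w * w), -zeta(s) * (1 - mpf(1) / 9 ** h))

# Li_s(-w) + Li_s(-w^2) = zeta(s) (1 - 1/9^h)(1 - 1/4^h)
print(polylog(s, -w) + polylog(s, -w * w),
      zeta(s) * (1 - mpf(1) / 9 ** h) * (1 - mpf(1) / 4 ** h))

# Li_s(-1) = -zeta(s) (1 - 1/4^h)
print(polylog(s, -1), -zeta(s) * (1 - mpf(1) / 4 ** h))
```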
Using these identities, we obtain
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3}-1}\log^{2 h-1}|y_{k}|\;\mathrm{d}y_{k}=2(2h)!\zeta(2h+1)\left(1-\frac{1}{3^{2h+1}} \right)\left(1-\frac{1}{2^{2h+1}}\right).\]
Similarly, we have for \(h\geq 0\),
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{ y_{k}^{2}+y_{k}+1}\;\mathrm{d}y_{k} =-\frac{1}{i\sqrt{3}}\int_{0}^{1}\log^{2h+1}t\left(\frac{1}{t- \omega}-\frac{1}{t-\omega^{2}}+\frac{1}{t+\omega^{2}}-\frac{1}{t+\omega}\right) \;\mathrm{d}t\] \[=-\frac{(2h+1)!}{i\sqrt{3}}\Big{(}\mathrm{Li}_{2h+2}(\omega^{2})- \mathrm{Li}_{2h+2}(\omega)+\mathrm{Li}_{2h+2}(-\omega)-\mathrm{Li}_{2h+2}(- \omega^{2})\Big{)},\]
where we use
\[\int_{0}^{1}\log^{2h+1}t\,\frac{1}{t-a}\;\mathrm{d}t =(-1)^{2h+1}(2h+1)!\int_{0}^{1}\frac{\mathrm{d}t}{t-a}\circ \overbrace{\frac{\mathrm{d}t}{t}\circ\cdots\circ\frac{\mathrm{d}t}{t}}^{2h+1 \;\mathrm{times}}\] \[=-(2h+1)!\,\mathrm{I}_{2h+2}(a,1)=(2h+1)!\,\mathrm{Li}_{2h+2}(1/ a).\]
In this case, we will use the following identities
\[\mathrm{Li}_{2h+2}(\omega^{2})-\mathrm{Li}_{2h+2}(\omega) =-\sqrt{3}iL(\chi_{-3},2h+2),\] \[\mathrm{Li}_{2h+2}(-\omega)-\mathrm{Li}_{2h+2}(-\omega^{2}) =-\sqrt{3}i\left(1+\frac{1}{2^{2h+1}}\right)L(\chi_{-3},2h+2).\]
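These follow by splitting the defining series according to the residue of \(n\) modulo \(3\) (and, for the second identity, the parity of \(n\)), in the same way as the computation above.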
This means
\[\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{y_{k}^{2}+ y_{k}+1}\;\mathrm{d}y_{k}=2(2h+1)!L(\chi_{-3},2h+2)\left(1+\frac{1}{2^{2h+2}} \right). \tag{39}\]
Therefore, plugging the above relations into equations (37) and (38) we obtain for the even case with \(n\geq 1\)
\[F(2n)=\sum_{h=1}^{n}a_{n,h-1}\;\left(\frac{\pi}{3}\right)^{2n-2h} 2(2h)!\zeta(2h+1)\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+1}}\right) \\ +\sum_{h=0}^{n-1}b_{n,h}\;\left(\frac{\pi}{3}\right)^{2n-2h-1}2(2h+ 1)!L(\chi_{-3},2h+2)\left(1+\frac{1}{2^{2h+2}}\right), \tag{40}\]
and for the odd case with \(n\geq 0\)
\[F(2n+1)=\sum_{h=1}^{n}c_{n,h-1}\;\left(\frac{\pi}{3}\right)^{2n- 2h+1}2(2h)!\zeta(2h+1)\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+ 1}}\right)\\ +\sum_{h=0}^{n}d_{n,h}\;\left(\frac{\pi}{3}\right)^{2n-2h}2(2h+1)!L(\chi_{-3},2h+2)\left(1+\frac{1}{2^{2h+2}}\right). \tag{41}\]
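For instance, taking \(n=0\) in (41) and using the initial value \(d_{0,0}=1\) from Theorem 1 gives \(F(1)=2\cdot\frac{5}{4}L(\chi_{-3},2)=\frac{5}{2}L(\chi_{-3},2)\); multiplying by the prefactor \(\frac{\sqrt{3}}{2\pi}\) from Section 3 recovers the first entry of the table following Theorem 1, namely \(\mathrm{m}(Q_{1})=\frac{5\sqrt{3}}{4\pi}L(\chi_{-3},2)\).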
### The recursive coefficients
What remains is to find a relation for the coefficients \(a_{r,s}\,,b_{r,s}\,,c_{r,s},d_{r,s}\) appearing in the formulae above. The idea is the following: we begin with the \((2n+2)\)-fold multiple integral \(F(2n+2)\) in (36). As seen in equation (37), after the final iteration one obtains sums of single integrals with coefficients involving \(a_{r,s}\,,b_{r,s}\). We may go back one step to the penultimate iteration, where we will have a sum of double integrals, this time with coefficients involving the \(c_{r,s},d_{r,s}\). Using the formulae for integrals obtained in the previous section, we can then compare coefficients and write \(a_{r,s}\) and \(b_{r,s}\) in terms of \(c_{r,s}\) and \(d_{r,s}\). In turn, we do the same with \(F(2n+1)\) to get expressions for \(c_{r,s}\) and \(d_{r,s}\) in terms of \(a_{r,s}\) and \(b_{r,s}\).
We need some notation for the polynomials \(R_{k},S_{k},P_{k},Q_{k},Y_{k}\) and \(Z_{k}\). Note that each of these polynomials contains either only odd powers of \(x\) or only even powers of \(x\). Moreover, the degree of \(R_{k},P_{k},Y_{k}\) and \(Z_{k}\) is \(k+1\) while it is \(k\) for \(S_{k}\) and \(Q_{k}\). For convenience, we record the coefficient notation in the following table, writing the polynomials which have only odd powers of \(x\) in the left column and those with only even powers of \(x\) in the right:
\[\begin{array}{|c|c|}
\hline
\text{odd powers of }x & \text{even powers of }x\\
\hline
R_{2l}(x)=\sum_{j=1}^{l+1}r_{2l,j}\,x^{2j-1} & R_{2l-1}(x)=\sum_{j=0}^{l}r_{2l-1,j}\,x^{2j}\\
S_{2l-1}(x)=\sum_{j=1}^{l}s_{2l-1,j}\,x^{2j-1} & S_{2l}(x)=\sum_{j=0}^{l}s_{2l,j}\,x^{2j}\\
P_{2l}(x)=\sum_{j=1}^{l+1}p_{2l,j}\,x^{2j-1} & P_{2l-1}(x)=\sum_{j=0}^{l}p_{2l-1,j}\,x^{2j}\\
Q_{2l-1}(x)=\sum_{j=1}^{l}q_{2l-1,j}\,x^{2j-1} & Q_{2l}(x)=\sum_{j=0}^{l}q_{2l,j}\,x^{2j}\\
Y_{2l}(x)=\sum_{j=1}^{l+1}y_{2l,j}\,x^{2j-1} & Y_{2l-1}(x)=\sum_{j=0}^{l}y_{2l-1,j}\,x^{2j}\\
Z_{2l}(x)=\sum_{j=1}^{l+1}z_{2l,j}\,x^{2j-1} & Z_{2l-1}(x)=\sum_{j=0}^{l}z_{2l-1,j}\,x^{2j}\\
\hline
\end{array}\]

That is, a lowercase letter with indices \(k,j\) denotes the coefficient of the corresponding power of \(x\) in the polynomial denoted by the matching uppercase letter with index \(k\). With this notation in place, we return to the \((2n+2)\)-fold integral \(F(2n+2)\) of (36), writing \(k=2n+2\). After the final iteration, as in (37), we can write

\[F(2n+2)=\sum_{h=1}^{n+1}a_{n+1,h-1}\;\left(\frac{\pi}{3}\right)^{2n+2-2h}\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{y_{k}+1}{y_{k}^{3}-1}\log^{2h-1}|y_{k}|\;\mathrm{d}y_{k}\\ +\sum_{h=0}^{n}b_{n+1,h}\;\left(\frac{\pi}{3}\right)^{2n+1-2h}\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{k}})\frac{\log^{2h}|y_{k}|}{y_{k}^{2}+y_{k}+1}\;\mathrm{d}y_{k}. \tag{42}\]
This is obtained by iteratively using the formulae from Section 3 along with the procedure explained in Claim 1 up to the last variable \(y_{2n+2}=y_{k}\). At the penultimate step, the integral can be written as
\[\begin{split} F(2n+2)&=\int_{y_{k}}^{*}\int_{y_{k-1}}^{*}\cdots\int_{y_{1}}^{*}\,\mathrm{m}(P_{y_{k}})\frac{y_{1}\;\mathrm{d}y_{1}}{(y_{1}^{2}+y_{1}+1)}\cdots\frac{\mathrm{d}y_{k}}{(y_{k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\\ &=\int_{y_{k}}^{*}\mathrm{m}(P_{y_{k}})\Bigg{(}\int_{y_{k-1}}^{*}\cdots\int_{y_{1}}^{*}\,\frac{y_{1}\;\mathrm{d}y_{1}}{(y_{1}^{2}+y_{1}+1)}\cdots\frac{y_{k-1}\;\mathrm{d}y_{k-1}}{(y_{k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\Bigg{)}\;\mathrm{d}y_{k}\\ &=\int_{y_{k}}^{*}\mathrm{m}(P_{y_{k}})\Bigg{(}\sum_{l=1}^{n}c_{n,l-1}\;\left(\frac{\pi}{3}\right)^{2n-2l+1}\int_{y_{k-1}}^{*}\frac{y_{k-1}(y_{k-1}+1)\log^{2l-1}|y_{k-1}|}{(y_{k-1}^{3}-1)(y_{k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\;\mathrm{d}y_{k-1}\\ &\qquad\qquad+\sum_{l=0}^{n}d_{n,l}\;\left(\frac{\pi}{3}\right)^{2n-2l}\int_{y_{k-1}}^{*}\frac{y_{k-1}\log^{2l}|y_{k-1}|}{(y_{k-1}^{2}+y_{k-1}+1)(y_{k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\;\mathrm{d}y_{k-1}\Bigg{)}\;\mathrm{d}y_{k}.\end{split} \tag{43}\]
Now
\[\int_{y_{k-1}}^{*}\frac{y_{k-1}(y_{k-1}+1)\log^{2l-1}|y_{k-1}|}{(y_{k-1}^{3}-1 )(y_{k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\;\mathrm{d}y_{k-1}\]
will be given by the sum of (24) and (28) with \(a=1\) and \(b=y_{k}\), that is, it will be given by \(g_{1}(2l-1)+g_{2}(2l-1)\). As discussed in the proof of Claim 1 in Section 4.1, this sum is enough to incorporate all intervals of \(y_{k}\) and \(y_{k-1}\). We get
\[\begin{split}&\frac{\theta^{2l}}{3(y_{k}^{2}+y_{k}+1)}\left[-R_{2l-1}(0)+3R_{2l-1}\left(\frac{\log|y_{k}|}{\theta}\right)+Y_{2l-1}(0)\right]\\ &+\frac{\theta^{2l}(y_{k}+1)}{\sqrt{3}(y_{k}^{3}-1)}\left[S_{2l-1}\left(\frac{\log|y_{k}|}{\theta}\right)-S_{2l-1}(0)\right]\\ &+\frac{\delta^{2l}}{3(y_{k}^{2}+y_{k}+1)}\left[-P_{2l-1}(0)+3P_{2l-1}\left(\frac{\log|y_{k}|}{\delta}\right)+Z_{2l-1}(0)\right]\\ &+\frac{\delta^{2l}(y_{k}+1)}{\sqrt{3}(y_{k}^{3}-1)}\left[Q_{2l-1}\left(\frac{\log|y_{k}|}{\delta}\right)-Q_{2l-1}(0)\right],\end{split}\]
where the first two lines correspond to \(g_{1}(2l-1)\) and the last two lines to \(g_{2}(2l-1)\). Using the notation for the polynomials \(R_{k},S_{k},P_{k},Q_{k},Y_{k}\) and \(Z_{k}\) defined above (and also noting that \(S_{2l-1}(x)\) and \(Q_{2l-1}(x)\) have no constant term), we can write this as
\[\frac{\theta^{2l}}{3(y_{k}^{2}+y_{k}+1)}\left[-r_{2l-1,0}+3\sum_{j=0 }^{l}r_{2l-1,j}\left(\frac{\log|y_{k}|}{\theta}\right)^{2j}+y_{2l-1,0}\right]\\ +\frac{\theta^{2l}(y_{k}+1)}{\sqrt{3}(y_{k}^{3}-1)}\left[\sum_{j =1}^{l}s_{2l-1,j}\left(\frac{\log|y_{k}|}{\theta}\right)^{2j-1}\right]\\ +\frac{\delta^{2l}}{3(y_{k}^{2}+y_{k}+1)}\left[-p_{2l-1,0}+3\sum_ {j=0}^{l}p_{2l-1,j}\left(\frac{\log|y_{k}|}{\delta}\right)^{2j}+z_{2l-1,0}\right] \\ +\frac{\delta^{2l}(y_{k}+1)}{\sqrt{3}(y_{k}^{3}-1)}\left[\sum_{j =1}^{l}q_{2l-1,j}\left(\frac{\log|y_{k}|}{\delta}\right)^{2j-1}\right].\]
Collecting the terms with \(\frac{1}{(y_{k}^{2}+y_{k}+1)}\) together and those with \(\frac{(y_{k}+1)}{(y_{k}^{3}-1)}\) and replacing \(\theta=2\pi/3\) by \(2\delta\) we have
\[\frac{\delta^{2l}}{3(y_{k}^{2}+y_{k}+1)}\left[2^{2l}(-r_{2l-1,0}+ y_{2l-1,0})-p_{2l-1,0}+z_{2l-1,0}+3\sum_{j=0}^{l}(2^{2l-2j}r_{2l-1,j}+p_{2l-1,j}) \left(\frac{\log|y_{k}|}{\delta}\right)^{2j}\right]\\ +\frac{\delta^{2l}(y_{k}+1)}{\sqrt{3}(y_{k}^{3}-1)}\left[\sum_{j =1}^{l}(2^{2l-2j+1}s_{2l-1,j}+q_{2l-1,j})\left(\frac{\log|y_{k}|}{\delta} \right)^{2j-1}\right]. \tag{44}\]
Next, the second inner integral in (43),
\[\int_{y_{k-1}}^{*}\frac{y_{k-1}\log^{2l}|y_{k-1}|}{(y_{k-1}^{2}+y_{k-1}+1)(y_ {k}^{2}+y_{k}y_{k-1}+y_{k-1}^{2})}\;\mathrm{d}y_{k-1}\]
is given this time by the sum of (14) and (20), that is, \(f_{1}(2l)+f_{2}(2l)\), again with \(a=1\) and \(b=y_{k}\):
\[\begin{split}&\frac{\theta^{2l+1}(y_{k}+1)}{(y_{k}^{3}-1)}\left[R_{2l}\left(\frac{\log|y_{k}|}{\theta}\right)-R_{2l}(0)\right]\\ &+\frac{\theta^{2l+1}}{\sqrt{3}(y_{k}^{2}+y_{k}+1)}\left[S_{2l}\left(\frac{\log|y_{k}|}{\theta}\right)+S_{2l}(0)\right]\\ &+\frac{\delta^{2l+1}(y_{k}+1)}{(y_{k}^{3}-1)}\left[P_{2l}\left(\frac{\log|y_{k}|}{\delta}\right)-P_{2l}(0)\right]\\ &+\frac{\delta^{2l+1}}{\sqrt{3}(y_{k}^{2}+y_{k}+1)}\left[Q_{2l}\left(\frac{\log|y_{k}|}{\delta}\right)+Q_{2l}(0)\right].\end{split}\]
Rewriting the polynomials and collecting terms together we obtain
\[\frac{\delta^{2l+1}(y_{k}+1)}{(y_{k}^{3}-1)}\left[\sum_{j=1}^{l+1}(2^{2l-2j+2}r_{2l,j}+p_{2l,j})\left(\frac{\log|y_{k}|}{\delta}\right)^{2j-1}\right]\\ +\frac{\delta^{2l+1}}{\sqrt{3}(y_{k}^{2}+y_{k}+1)}\left[2^{2l+1}s_{2l,0}+q_{2l,0}+\sum_{j=0}^{l}(2^{2l-2j+1}s_{2l,j}+q_{2l,j})\left(\frac{\log|y_{k}|}{\delta}\right)^{2j}\right]. \tag{45}\]
Finally, we plug the expressions (44) and (45) back into equation (43) and compare coefficients with the initial expression for \(F(2n+2)\) given by (42). Comparing coefficients of \(\frac{(y_{k}+1)}{(y_{k}^{3}-1)}\log^{2h-1}|y_{k}|\) on both sides gives us
\[a_{n+1,h-1}\delta^{2n+2-2h}=\sum_{l=h}^{n}c_{n,l-1}\delta^{2n-2 l+1}\cdot\left(\frac{\delta^{2l-2h+1}}{\sqrt{3}}(2^{2l-2h+1}s_{2l-1,h}+q_{2l-1,h})\right)\\ +\sum_{l=h-1}^{n}d_{n,l}\delta^{2n-2l}\cdot\left(\delta^{2l-2h+2 }(2^{2l-2h+2}r_{2l,h}+p_{2l,h})\right),\]
which means, for \(n\geq 0\),
\[a_{n+1,h-1}=\frac{1}{\sqrt{3}}\sum_{l=h}^{n}c_{n,l-1}(2^{2l-2h+1}s_{2l-1,h}+q_ {2l-1,h})+\sum_{l=h-1}^{n}d_{n,l}(2^{2l-2h+2}r_{2l,h}+p_{2l,h}). \tag{46}\]
Similarly, comparing coefficients of \(\frac{\log^{2h}|y_{k}|}{y_{k}^{2}+y_{k}+1}\), we obtain an expression for \(b_{n+1,h}\). Since equations (44) and (45) have some additional constant terms corresponding to the case when \(h=0\), we write them separately. When \(h\geq 1\), we have
\[b_{n+1,h}\delta^{2n-2h+1}=\sum_{l=h}^{n}c_{n,l-1}\delta^{2n-2l+1 }\cdot\left(\frac{\delta^{2l-2h}}{3}\cdot 3(2^{2l-2h}r_{2l-1,h}+p_{2l-1,h})\right)\\ +\sum_{l=h-1}^{n}d_{n,l}\delta^{2n-2l}\cdot\left(\frac{\delta^{2l -2h+1}}{\sqrt{3}}\cdot(2^{2l-2h+1}s_{2l,h}+q_{2l,h})\right),\]
giving us
\[b_{n+1,h}=\sum_{l=h}^{n}c_{n,l-1}(2^{2l-2h}r_{2l-1,h}+p_{2l-1,h})+\frac{1}{ \sqrt{3}}\sum_{l=h-1}^{n}d_{n,l}(2^{2l-2h+1}s_{2l,h}+q_{2l,h}). \tag{47}\]
When \(h=0\), we have
\[b_{n+1,0}=\frac{1}{3}\sum_{l=1}^{n}c_{n,l-1}\Big{(}2^{2l}(2r_{2l-1,0}+y_{2l-1,0 })+2p_{2l-1,0}+z_{2l-1,0}\Big{)}+\frac{2}{\sqrt{3}}\sum_{l=0}^{n}d_{n,l}(2^{2l +1}s_{2l,0}+q_{2l,0}). \tag{48}\]
We can use the same procedure to obtain expressions for \(c_{n,h-1}\) and \(d_{n,h}\) by starting with the expression for \(F(2n+1)\) for \(n\geq 1\) and expanding the integral from the penultimate iteration. This gives us the following for \(h\geq 1\) and \(n\geq 1\),
\[c_{n,h-1}=\frac{1}{\sqrt{3}}\sum_{l=h}^{n}a_{n,l-1}(2^{2l-2h+1}s_{2l-1,h}+q_{2 l-1,h})+\sum_{l=h-1}^{n-1}b_{n,l}(2^{2l-2h+2}r_{2l,h}+p_{2l,h}), \tag{49}\]
and
\[d_{n,h} =\sum_{l=h}^{n}a_{n,l-1}(2^{2l-2h}r_{2l-1,h}+p_{2l-1,h})+\frac{1}{\sqrt{3}}\sum_{l=h}^{n-1}b_{n,l}(2^{2l-2h+1}s_{2l,h}+q_{2l,h}), \tag{50}\]
\[d_{n,0} =\frac{1}{3}\sum_{l=1}^{n}a_{n,l-1}\Big{(}2^{2l}(2r_{2l-1,0}+y_{2l-1,0})+2p_{2l-1,0}+z_{2l-1,0}\Big{)}+\frac{2}{\sqrt{3}}\sum_{l=0}^{n-1}b_{n,l}(2^{2l+1}s_{2l,0}+q_{2l,0}). \tag{51}\]
Finally, to complete the recursive formula, we must calculate initial values. To do this, we evaluate the Mahler measure of the first polynomial in the family as follows:
\[\mathrm{m}(Q_{1})=\mathrm{m}\left(y+\left(\frac{\overline{\omega} z_{1}+\omega}{z_{1}+1}\right)\right) =\frac{\sqrt{3}}{2\pi}\cdot F(1)\] \[=\frac{\sqrt{3}}{2\pi}\int_{-\infty}^{\infty}\mathrm{m}(P_{y_{1} })\frac{\mathrm{d}y_{1}}{y_{1}^{2}+y_{1}+1}\] \[=\frac{\sqrt{3}}{2\pi}\cdot 2\cdot L(\chi_{-3},2)\left(1+\frac{1}{ 2^{2}}\right)=\frac{5\sqrt{3}}{4\pi}\,L(\chi_{-3},2),\]
using equation (39). This means that we can set \(d_{0,0}=1\). From this base value, we can obtain all subsequent values of \(a_{r,s}\,,b_{r,s}\,,c_{r,s},d_{r,s}\) using equations (46)-(51) above, with appropriate choices of \(n\) and \(h\). For example, with \(n=0,h=1\) in (46) and \(n=0\) in (48), we have \(a_{1,0}=2\) and \(b_{1,0}=\frac{2}{\sqrt{3}}\) respectively, giving us the Mahler measure of \(Q_{2}\):
\[\mathrm{m}(Q_{2})=\frac{91}{18\pi^{2}}\zeta(3)+\frac{5}{4\sqrt{3}\pi}\,L(\chi_{-3},2).\]
One can also confirm that these values of \(a_{1,0}\) and \(b_{1,0}\) agree with the results of Section 3.
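As a quick numerical sanity check (not part of the original derivation), the two Mahler measure values above can be evaluated directly from rapidly convergent series for \(\zeta(3)\) and \(L(\chi_{-3},2)\); the short Python sketch below simply plugs the numbers into the stated closed forms.

```python
import math

# Series evaluations of the two special values appearing in the formulas above.
def zeta3(terms=200_000):
    return sum(1.0 / n**3 for n in range(1, terms + 1))

def L_chi3(s=2, terms=200_000):
    # chi_{-3}(n) = +1, -1, 0 for n = 1, 2, 0 (mod 3).
    chi = (0, 1, -1)
    return sum(chi[n % 3] / n**s for n in range(1, terms + 1))

L2, z3 = L_chi3(), zeta3()
m_Q1 = 5 * math.sqrt(3) / (4 * math.pi) * L2
m_Q2 = 91 / (18 * math.pi**2) * z3 + 5 / (4 * math.sqrt(3) * math.pi) * L2
print(f"m(Q_1) ~ {m_Q1:.6f}")   # about 0.5384
print(f"m(Q_2) ~ {m_Q2:.6f}")   # about 0.7952
```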
### The final Mahler measure
We are now ready to combine all the above details and complete the proof of our result.
Proof of Theorem 1. Observe that from the definition of \(F(k)\) given in (36), the Mahler measure in (6) is given by
\[\mathrm{m}(Q_{n})=\left(\frac{\sqrt{3}}{2\pi}\right)^{n}\cdot F(n).\]
Thus, plugging equations (40) and (41) into the above and simplifying, we obtain, for \(n\geq 1\),
\[\mathrm{m}(Q_{2n})=\frac{2}{12^{n}}\Bigg{(}\sum_{h=1}^{n}a_{n,h-1 }\;9^{h}(2h)!\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+1}}\right) \frac{\zeta(2h+1)}{\pi^{2h}}\\ +3\sum_{h=0}^{n-1}b_{n,h}\;9^{h}(2h+1)!\left(1+\frac{1}{2^{2h+2} }\right)\frac{L(\chi_{-3},2h+2)}{\pi^{2h+1}}\Bigg{)},\]
and for \(n\geq 0\),
\[\mathrm{m}(Q_{2n+1})=\frac{1}{12^{n}\sqrt{3}}\Bigg{(}\sum_{h=1}^{ n}c_{n,h-1}\;9^{h}(2h)!\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+1} }\right)\frac{\zeta(2h+1)}{\pi^{2h}}\\ +3\sum_{h=0}^{n}d_{n,h}\;9^{h}(2h+1)!\left(1+\frac{1}{2^{2h+2}} \right)\frac{L(\chi_{-3},2h+2)}{\pi^{2h+1}}\Bigg{)},\]
where the coefficients \(a_{r,s}\,,b_{r,s}\,,c_{r,s},d_{r,s}\) are real numbers given recursively by (46)-(51) starting from the initial value \(d_{0,0}=1\). Rearranging the powers, we can write this compactly as done in Theorem 1 as follows:
\[\mathrm{m}(Q_{2n})=\frac{2}{12^{n}}\Bigg{(}\sum_{h=1}^{n}a_{n,h-1}\left(\frac{3} {\pi}\right)^{2h}\mathcal{A}(h)\ +\ \sum_{h=0}^{n-1}b_{n,h}\left(\frac{3}{\pi}\right)^{2h+1} \mathcal{B}(h)\Bigg{)},\]
and for \(n\geq 0\) we have
\[\mathrm{m}(Q_{2n+1})=\frac{1}{12^{n}\sqrt{3}}\Bigg{(}\sum_{h=1}^{n}c_{n,h-1} \left(\frac{3}{\pi}\right)^{2h}\mathcal{A}(h)\ +\ \sum_{h=0}^{n}d_{n,h}\left(\frac{3}{\pi}\right)^{2h+1} \mathcal{B}(h)\Bigg{)},\]
where
\[\mathcal{A}(h)=(2h)!\left(1-\frac{1}{3^{2h+1}}\right)\left(1-\frac{1}{2^{2h+ 1}}\right)\zeta(2h+1),\]
and
\[\mathcal{B}(h)=(2h+1)!\left(1+\frac{1}{2^{2h+2}}\right)L(\chi_{-3},2h+2).\]
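The closed form of Theorem 1 translates directly into code. The sketch below is an illustrative, hypothetical implementation rather than anything from the paper: it assumes the coefficient tables \(a, b, c, d\) have already been produced from the recursions (46)-(51) (starting from \(d_{0,0}=1\)) and are supplied as dictionaries keyed by \((n, h)\); \(\mathcal{A}(h)\) and \(\mathcal{B}(h)\) are evaluated by direct series summation.

```python
import math

def A(h, terms=100_000):
    """A(h) = (2h)! (1 - 3^{-(2h+1)}) (1 - 2^{-(2h+1)}) zeta(2h+1)."""
    zeta_val = sum(1.0 / n**(2 * h + 1) for n in range(1, terms + 1))
    return math.factorial(2 * h) * (1 - 3.0**-(2 * h + 1)) * (1 - 2.0**-(2 * h + 1)) * zeta_val

def B(h, terms=100_000):
    """B(h) = (2h+1)! (1 + 2^{-(2h+2)}) L(chi_{-3}, 2h+2)."""
    chi = (0, 1, -1)
    L_val = sum(chi[n % 3] / n**(2 * h + 2) for n in range(1, terms + 1))
    return math.factorial(2 * h + 1) * (1 + 2.0**-(2 * h + 2)) * L_val

def m_Q_even(n, a, b):
    """m(Q_{2n}) for n >= 1, given coefficient tables a[(n, h)] and b[(n, h)]."""
    total = sum(a[(n, h - 1)] * (3 / math.pi)**(2 * h) * A(h) for h in range(1, n + 1))
    total += sum(b[(n, h)] * (3 / math.pi)**(2 * h + 1) * B(h) for h in range(n))
    return 2 / 12**n * total

def m_Q_odd(n, c, d):
    """m(Q_{2n+1}) for n >= 0, given coefficient tables c[(n, h)] and d[(n, h)]."""
    total = sum(c[(n, h - 1)] * (3 / math.pi)**(2 * h) * A(h) for h in range(1, n + 1))
    total += sum(d[(n, h)] * (3 / math.pi)**(2 * h + 1) * B(h) for h in range(n + 1))
    return total / (12**n * math.sqrt(3))

# Usage with the values derived in Section 4.2: d_{0,0}=1, a_{1,0}=2, b_{1,0}=2/sqrt(3).
print(m_Q_odd(0, c={}, d={(0, 0): 1.0}))                            # m(Q_1)
print(m_Q_even(1, a={(1, 0): 2.0}, b={(1, 0): 2 / math.sqrt(3)}))   # m(Q_2)
```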
## 5. Conclusion
These results show that the techniques employed in [7] can be extended to a more general family of polynomials. It is interesting to note that our formulae have a combination of zeta-values and \(L\)-values corresponding to the Dirichlet character of conductor 3 in both the odd and even cases. The appearance of \(L(\chi_{-3},s)\) is due to the evaluation of polylogarithms at the third and sixth roots of unity, as opposed to the fourth root of unity in [7], which yields values of \(L(\chi_{-4},s)\). This motivates the question of whether these techniques can be further generalized to higher roots of unity to obtain \(L\)-values corresponding to higher conductors. The integrals involving powers of the logarithm seen in Section 3 and the limits of the integrals in the iteration step discussed in Section 4 give rise to some complicated calculations. It would be interesting to see if these calculations can be captured in a more general setting. Another intriguing question would be to investigate whether the recursive coefficients appearing in the formulae have an elegant closed formula, like the coefficients that appear in [7].
In Section 3, we used the fact that the polynomial \(P_{\gamma}(y)\) has Mahler measure \(\log^{+}|\gamma|\), and is thus dependent only on the absolute value of \(\gamma\). In [7], other families of polynomials were also studied, namely those that stemmed from simpler polynomials (like \(P_{\gamma}(y)\)) whose Mahler measures only depended on the absolute value of a parameter. For instance, we have the following identities:
\[\mathrm{m}(1+\gamma x+(1-\gamma)y)=|\mathrm{arg}\gamma|\log|1-\gamma|+|\mathrm{arg}(1-\gamma)|\cdot\log|\gamma|+\left\{\begin{array}{ll}D(\gamma)&\mathrm{if}\ \mathrm{Im}(\gamma)\geq 0,\\ \\ D(\overline{\gamma})&\mathrm{if}\ \mathrm{Im}(\gamma)<0,\end{array}\right.\]
where \(D(z)\) is the Bloch-Wigner dilogarithm (see [15]); and
\[\mathrm{m}(1+x+\gamma(1+y)z)=\left\{\begin{array}{ll}\frac{2}{\pi^{2}}\mathcal{L} _{3}(|\gamma|)&\text{for }|\gamma|\leq 1,\\ \\ \log|\gamma|+\frac{2}{\pi^{2}}\mathcal{L}_{3}(|\gamma|^{-1})&\text{for }|\gamma|>1, \end{array}\right.\]
where
\[\mathcal{L}_{3}(\gamma)=-2\int_{0}^{\gamma}\frac{\mathrm{d}s}{s^{2}-1}\circ \frac{\mathrm{d}s}{s}\circ\frac{\mathrm{d}s}{s}.\]
It would be worth exploring whether similar families can be constructed from such \(P_{\gamma}\) and their Mahler measures studied in the context of our results.
Finally, we note that in [9], certain transformations are described which, when applied to a polynomial, preserve the Mahler measure. In Section 4 of the same article, these transformations are applied to the polynomials appearing in [7] to obtain infinitely many new rational functions with the same Mahler measure. We remark that one can also apply these transformations to the family of polynomials considered in Theorem 1, and obtain many more polynomials with the same Mahler measure.
|
2309.14181 | Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level
Vision | The rapid evolution of Multi-modality Large Language Models (MLLMs) has
catalyzed a shift in computer vision from specialized models to general-purpose
foundation models. Nevertheless, there is still an inadequacy in assessing the
abilities of MLLMs on low-level visual perception and understanding. To address
this gap, we present Q-Bench, a holistic benchmark crafted to systematically
evaluate potential abilities of MLLMs on three realms: low-level visual
perception, low-level visual description, and overall visual quality
assessment. a) To evaluate the low-level perception ability, we construct the
LLVisionQA dataset, consisting of 2,990 diverse-sourced images, each equipped
with a human-asked question focusing on its low-level attributes. We then
measure the correctness of MLLMs on answering these questions. b) To examine
the description ability of MLLMs on low-level information, we propose the
LLDescribe dataset consisting of long expert-labelled golden low-level text
descriptions on 499 images, and a GPT-involved comparison pipeline between
outputs of MLLMs and the golden descriptions. c) Besides these two tasks, we
further measure their visual quality assessment ability to align with human
opinion scores. Specifically, we design a softmax-based strategy that enables
MLLMs to predict quantifiable quality scores, and evaluate them on various
existing image quality assessment (IQA) datasets. Our evaluation across the
three abilities confirms that MLLMs possess preliminary low-level visual
skills. However, these skills are still unstable and relatively imprecise,
indicating the need for specific enhancements on MLLMs towards these abilities.
We hope that our benchmark can encourage the research community to delve deeper
to discover and enhance these untapped potentials of MLLMs. Project Page:
https://q-future.github.io/Q-Bench. | Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin | 2023-09-25T14:43:43Z | http://arxiv.org/abs/2309.14181v3 | # Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-Level Vision
###### Abstract
The rapid evolution of Multi-modality Large Language Models (MLLMs) has catalyzed a shift in computer vision from specialized models to general-purpose foundation models. Nevertheless, there is still an inadequacy in assessing the abilities of MLLMs on **low-level visual perception and understanding**. To address this gap, we present **Q-Bench**, a holistic benchmark crafted to systematically evaluate potential abilities of MLLMs on three realms: low-level visual perception, low-level visual description, and overall visual quality assessment. _a)_ To evaluate the low-level _perception_ ability, we construct the **LLVisionQA** dataset, consisting of 2,990 diverse-sourced images, each equipped with a human-asked question focusing on its low-level attributes. We then measure the correctness of MLLMs on answering these questions. _b)_ To examine the _description_ ability of MLLMs on low-level information, we propose the **LLD**escribe dataset consisting of long expert-labelled _golden_ low-level text descriptions on 499 images, and a GPT-involved comparison pipeline between outputs of MLLMs and the _golden_ descriptions. _c)_ Besides these two tasks, we further measure their visual quality _assessment_ ability to align with human opinion scores. Specifically, we design a softmax-based strategy that enables MLLMs to predict _quantifiable_ quality scores, and evaluate them on various existing image quality assessment (IQA) datasets. Our evaluation across the three abilities confirms that MLLMs possess preliminary low-level visual skills. However, these skills are still unstable and relatively imprecise, indicating the need for specific enhancements on MLLMs towards these abilities. We hope that our benchmark can encourage the research community to delve deeper to discover and enhance these untapped potentials of MLLMs.
## 1 Introduction
The emergent large language models (LLMs) such as ChatGPT and Bard, as well as their excellent open-source counterparts (_e.g._, LLaMA (Touvron et al., 2023), MPT (Team, 2023)), have served as powerful general-purpose assistants, which opens a new era for artificial intelligence (AI) from targeting specific tasks towards general intelligence. Following the advancements of LLMs, multi-modality large language models (MLLMs), as represented by LLaVA (Liu et al., 2023a), MiniGPT-4 (Zhu et al., 2023), InstructBLIP (Dai et al., 2023), and Otter (Li et al., 2023a), have brought exciting progresses on the vision field as well. They are capable of providing robust general-level abilities on visual perception/understanding and can even seamlessly dialog and interact with humans through natural language. While such abilities of MLLMs have been explored and validated on several vision-language tasks such as image captioning (Chen et al., 2015), visual question answering (Antol et al., 2015), cross-modality grounding (Peng et al., 2023), and traditional vision tasks such as image classification or segmentation (Lai et al., 2023), most attention is paid to the high-level perception and understanding of visual contents. Meanwhile, the ability of MLLMs remains not clear on **low-level visual perception and understanding**, which play significant roles in image quality assessment (IQA) (Hosu et al., 2020; Fang et al., 2020) and its associated tasks on perceiving visual distortions (_noises, blurs_) (Su et al., 2021; Wu et al., 2023c) and other low-level attributes (_color, lighting, composition, style, etc_) (Kong et al., 2016) that may relate to aesthetics and emotions of natural photos (Murray et al., 2012) and human preferences on emerging computer
graphics generated (Zhang et al., 2023a) or AI-generated images (Li et al., 2023c; Xu et al., 2023). These low-level visual abilities are strongly associated with a wide range of applications, such as recommendation (Wu et al., 2023b), guidance on camera systems (Zhang et al., 2022), or visual quality enhancement (Zhang et al., 2018). Henceforth, it is crucial to evaluate the current abilities of these general-purpose foundation models in low-level visual perception and understanding, to ideally relieve extensive human resources to give feedback on every specific low-level task.
In our work, we propose the first systematic benchmark to measure the low-level visual perception and understanding abilities of MLLMs. Our benchmark is constructed around a key question:
_How do MLLMs emulate human ability related to low-level visual perception and understanding?_
A simple answer is **language**, which is the fundamental property of MLLMs. Specifically, we define two emerging language abilities of MLLMs on low-level vision as follows:
* _Ability 1 (A1): **Perception** of Low-level Attributes._ As shown in Fig. 1(a), like a human, an MLLM should be able to respond accurately to simple questions related to low-level attributes, _e.g._, answering _'No'_ for a blurry image when queried with _'Is this image clear?'_
* _Ability 2 (A2): **Description** via Natural Language._ As shown in Fig. 1(b), like a human, an MLLM should be able to describe the quality and other low-level information for an image with natural language. The descriptions should be both complete and accurate.
To systematically evaluate the low-level **perception** ability (**A1**) on various low-level attributes under diverse circumstances, we construct the **LLVisionQA** dataset, including 2,990 images from 10 diverse sources. Aligned with existing practices (Liu et al., 2023; Lu et al., 2023), each image in LLVisionQA is equipped with a question, alongside a correct answer and false candidate answers. In LLVisionQA, we design three diverse types of questions: _Yes-or-No_ questions, _What_ questions, and _How_ questions. Moreover, we divide low-level concerns into four quadrants, via two axes: (**1**) distortions (_blur, noises, etc_) _vs_ other low-level attributes (_color, lighting, composition, etc_) (Guha et al.,
Figure 1: In the proposed **Q-Bench**, we build the first benchmark on emerging abilities of MLLMs on low-level vision, including **perception** of low-level attributes (_by correctly answering diverse queries_) and **description** of low-level quality-related information via natural language. Furthermore, the Q-bench also evaluates the quantitative **assessment** ability of MLLMs on traditional IQA tasks.
2020). **(2)** global perception (_e.g., sharpness of the whole picture_) _vs_ local content-related in-context perception (_e.g., whether the red flower is in focus_) (Li et al., 2019). With three types of questions and four quadrants of concerns, the proposed **LLVisionQA** dataset provides a holistic, diverse, and balanced benchmark for the **perception** ability on low-level visual attributes of MLLMs.
For the **description** ability (**A2**), given that the output description is expected to be complex (without fixed formats), we propose the **LLDescribe** dataset by inviting experts to write long _golden_ low-level descriptions (_average **58** words per description_) for 499 images, which serve as the reference texts for the single-modal GPT to evaluate MLLM output descriptions. The quality of MLLM descriptions is evaluated through three dimensions: completeness (_punish missing information_), preciseness (_punish outputs controversial with reference_), as well as relevance (_punish outputs irrelevant to low-level attributes_). With _golden_ descriptions and the multi-dimensional evaluation process participated by GPT, we comprehensively evaluate the low-level description ability of MLLMs.
Besides the two emerging language abilities, we also evaluate MLLMs on the traditional IQA task, a more abstract task that requires understanding on human opinions of low-level attributes, as follows:
* _Ability 3 (A3): Precise **Assessment** Aligned with Human Opinions._ As depicted in Fig. 1(c), an MLLM should be able to predict quantifiable quality scores for images, which can be aligned with the human-rated mean opinion scores (MOS) on low-level visual appearances.
For the **assessment** ability (**A3**), we utilize plenty of existing IQA databases (Hosu et al., 2020; Lin et al., 2019; Li et al., 2023c) that focus on various low-level appearances of images, to benchmark MLLMs within conventional IQA settings. Specifically, we notice that MLLMs encounter difficulties in providing sufficiently quantifiable outputs, whether instructed to directly rate with texts or provide numerical outputs. To solve this challenge, we propose to extract the softmax pooling result on the logits of the two most frequent tokens (_good_ and _poor_) under the response template of MLLMs (Fig 1(c)) as their quality predictions. Our studies prove that the proposed softmax-based strategy is generally better correlated with human perception than direct token outputs of MLLMs (via argmax), which bridges between these emergent MLLMs and the traditional IQA task settings. Under this strategy, we evaluate all MLLMs on their precise **assessment** ability by measuring the correlations between their predictions and human opinion scores in various IQA databases.
In summary, we systematically explore the potential of MLLMs on three low-level visual abilities: perception, description, and assessment. The three realms compose into the proposed **Q-Bench**, a MLLM benchmark on low-level visual tasks. Our contributions can be summarized as three-fold:
* We build a benchmark for MLLMs on low-level **perception** ability. To achieve this, we construct a first-of-its-kind balanced and comprehensive **LLVisionQA** dataset with 2,990 images, each equipped with one low-level-related question-answer pair. LLVisionQA includes three question types and four quadrants of low-level concerns to ensure diversity.
* We define a benchmark process to evaluate the low-level **description** ability of MLLMs, including an **LLDescribe** dataset of 499 images with expert-labelled long _golden_ quality descriptions, and a GPT-assisted evaluation that rates MLLM descriptions in terms of completeness, preciseness, and relevance compared with the _golden_ descriptions.
* To evaluate precise quality **assessment** ability, we propose a unified **softmax-based** quality prediction strategy for all MLLMs based on their probability outputs. With its effectiveness validated in our experiments, the proposed strategy builds a bridge between general-purpose MLLMs and traditional IQA tasks that require _quantifiable_ scores as outputs.
## 2 Constructing the Q-Bench
### General Principles
**Focusing on Low-level Visual Abilities of MLLMs.** Unlike existing MLLM benchmarks (Li et al., 2023b; Liu et al., 2023b; Lu et al., 2023) that aim at all-round abilities, the tasks in **Q-Bench** are constrained with two basic principles: **(1)** Requiring perception and/or understanding on low-level attributes of images; **(2)** Not requiring reasoning (_i.e. why_) or **outside** knowledge (Marino et al., 2019). We adhere to the principles in designing the **perception**, **description**, and **assessment** tasks, making the proposed **Q-bench** a focused reflection on the low-level visual abilities of MLLMs.
**Covering Diverse Low-level Appearances.** To cover diverse low-level appearances, we collect multi-sourced images for each task, as depicted in Tab. 1. Among all images in the **perception** and **description** tasks, _two-thirds_ are in-the-wild images directly collected from social media posts, smartphones or professional photography. The rest _one-third_ images are collected after various artificial distortions, or via generative processes (CGI, AIGC). Furthermore, we employ k-means clustering for the low-level attribute indicators to certify that the sub-sampled images retain high diversity. In the **assessment** task, full images of 7 IQA datasets within all three source types are evaluated through traditional IQA metrics. The diverse and multiple sources of images morph the **Q-bench** into a holistic and balanced benchmark to fairly evaluate low-level-related abilities.
### Benchmark on Low-level **Perception** Ability
In the first task of Q-Bench, we evaluate the low-level **perception** ability of MLLMs to examine whether they can answer simple natural queries related to low-level attributes. For this purpose, we first collect 2,990 images (I) from multiple sources (see Table 1) with diverse low-level concerns. Then, we collect one low-level-related question (Q), one correct answer to the question (C), and 1-3 candidate false answers (F) for each image. The 2,990 (I,Q,C,F) tuples compose into the **LLVisionQA** dataset (as illustrated in Fig. 2), the first visual question answering (VQA) dataset in the low-level computer vision field. Specifically, the questions in **LLVisionQA** cover four quadrants of distinct low-level concerns (in Sec. 2.2.1) and three question types (in Sec. 2.2.2). After constructing the dataset, the (I,Q,C,F) tuples are fed together into MLLMs for evaluation, while their outputs are further examined by GPT to judge correctness (in Sec. 2.2.3). The details are elaborated as follows.
#### 2.2.1 Quadrants for Low-level Visual Concerns
**Axis 1: Distortions _vs_ Other Low-level Attributes.** The primary axis differentiates two categories of low-level perceptual attributes: **1)** technical **distortions** (Su et al., 2021), seen as the low-level characteristics that directly degrade the quality of images (Ying et al., 2020), and **2)** aesthetic-related **other low-level attributes** (Kong et al., 2016; Hou et al., 2023) which are discernible to human perception and evoke varied emotions. Several studies (Talebi and Milanfar, 2018; Ying et al., 2020; Guha et al., 2020) follow this paradigm and categorize them through a relative golden standard: whether the attributes _directly improve or degrade picture quality (Yes\(\rightarrow\)Distortions; No\(\rightarrow\)Others)_. Besides this standard, we also enumerate common types of **distortions** _vs_ **other low-level attributes** as extra guidance for constructing the LLVisionQA dataset, as listed in Sec. A.1.2.
**Axis 2: Global Perception _vs_ Local In-context Perception.** In recent research on low-level vision, it is observed that human perceptions of low-level visual often intertwine with higher-level contextual comprehension (Li et al., 2019; Wang et al., 2021; Wu et al., 2022b). For instance, a **clear sky** might lack complex textures yet display exceptional clarity. Furthermore, localized low-level appearances can deviate from their overall counterparts, as observed by Wu et al. (2022a); Ying et al. (2021). Acknowledging these differences, we curate **local in-context perception** (Fig. 2_right_) questions, that require MLLMs to grasp the content or other context to answer correctly, while other questions are categorized as **global perception** (Fig. 2_left_). (More analysis in Sec. A.1.2.)
| **Type** | **Image Source Dataset** | Sampled Size in **LLVisionQA** | Sampled Size in **LLDescribe** | Full Dataset Size for **Assessment** Task |
| --- | --- | --- | --- | --- |
| In-the-wild | KonIQ-10K (Hosu et al., 2020) | 600 | 100 | 10,073 |
| In-the-wild | SPAQ (Fang et al., 2020) | 800 | 130 | 11,125 |
| In-the-wild | LIVE-FB (Ying et al., 2020) | 300 | 50 | 39,810 |
| In-the-wild | LIVE-itw (Ghadiyaram and Bovik, 2016) | 300 | 50 | 1,169 |
| Generated | CGIQA-6K (Zhang et al., 2023a) | 200 | 30 | 6,000 |
| Generated | AGIQA-3K (Li et al., 2023c) | 198 | 30 | 2,982 |
| Generated | ImageRewardDB (Xu et al., 2023) | 194 | 29 | _not included in_ (A3) |
| Artificial | KADID-10K (Lin et al., 2019) | 81 | 20 | 10,125 |
| Artificial | LIVE Multiple Distortions (Jayaraman et al., 2012) | 15 | 10 | _not included in_ (A3) |
| Artificial | _Corrupted_ COCO (Chen et al., 2015) | 302 | 50 | _not included in_ (A3) |
| | Corresponding Ability/Task in **Q-Bench** | (A1) **Perception** | (A2) **Description** | (A3) **Assessment** |
| | Total Benchmark Size for Respective Task | 2,990 | 499 | 81,284 |
Table 1: Overview of the 10 diverse image source datasets in the **Q-Bench**, and the respective benchmark dataset size for each low-level ability among **perception**, **description** and **assessment**. The _Corrupted_ COCO denotes COCO-Captions images corrupted by Michaelis et al. (2019).
#### 2.2.2 Question Types
In the **LLVisionQA** dataset, we curate three question types, _Yes-or-No_, _What_, and _How_ to simulate multiple query forms from humans. The details of the three question types are defined as follows.
**Type 1: _Yes-or-No_ Questions.** The fundamental type of questions is _Yes-or-No_, _i.e._, judgments. Specifically, we notice that some MLLMs especially prefer to respond with _yes_ rather than _no_. To reduce such biases in our benchmark, though it is easier to design questions whose answer is _yes_, we ensure that around 40% of all judgments have _no_ as the correct answer, by querying **contrastive** low-level attributes or **non-existing** low-level attributes. We further measure the bias levels of different MLLMs and present a further de-biased evaluation among them, as discussed in Sec. A.3.1.
**Type 2: _What_ Questions.** Besides _Yes-or-No_ judgments, _What_ questions are also a common type of query in recent MLLM benchmarks such as Lu et al. (2023). In Q-bench, they classify low-level attributes in pictures (_e.g., What distortion occurs in the image?_), or associated context given specific low-level appearances (for in-context perception questions, _e.g., Which object in the image is under-exposed?_). Unlike _Yes-or-No_ questions, the _What_ questions examine more comprehensive low-level attribute understanding of MLLMs, by requiring correct perception of **multiple** attributes.
**Type 3: _How_ Questions.** Besides the two common types, we also include a special type, the _How_ questions, to bring non-extreme appearances (Wu et al., 2023c) of low-level attribute dimensions into our benchmark, as an extension of _Yes-or-No_ questions. As shown in Fig. 2, we can ask _How is the clarity of the image?_ for an image with both clear and blurry areas, and answer with **Medium**. With this special question type, we extend Q-bench to **finer-grained** low-level perception.
#### 2.2.3 GPT-assisted Evaluation Process
After constructing the LLVisionQA dataset, we feed it to multiple MLLMs to evaluate their abilities on low-level visual **perception**. The input format to query MLLMs is exemplified as follows:
_#User: How is the clarity of the image?_ (Question) _[IMAGE_TOKEN]_ (Image) _Choose between one of the following options: A. High_ (Correct) _B. Medium_ (Wrong) _C. Low_ (Wrong)
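As an illustration, the sketch below shows how such a multiple-choice query could be assembled from an (I, Q, C, F) tuple; the function name and its details are illustrative assumptions, not code from the Q-Bench release.

```python
import random
import string

def build_llvisionqa_prompt(question, correct, false_candidates, image_token="[IMAGE_TOKEN]", rng=random):
    """Assembles a multiple-choice query in the format shown above and returns
    (prompt, letter_of_correct_option); options are shuffled so that the correct
    answer does not always appear as 'A'."""
    options = [correct] + list(false_candidates)
    rng.shuffle(options)
    letters = string.ascii_uppercase[:len(options)]
    lines = [f"#User: {question} {image_token}",
             "Choose between one of the following options:"]
    lines += [f"{letter}. {text}" for letter, text in zip(letters, options)]
    return "\n".join(lines), letters[options.index(correct)]

prompt, gold_letter = build_llvisionqa_prompt(
    "How is the clarity of the image?", "High", ["Medium", "Low"])
print(prompt)  # gold_letter is kept aside for the GPT-assisted scoring step
```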
The correct and wrong answers are shuffled during the actual evaluation. Moreover, while traditional visual question answering (Antol et al., 2015; Marino et al., 2019) tasks typically employ traditional language metrics (BLEU-4, CIDEr) to compare performance, as observed by recent studies (Ye et al., 2023) and validated by us, most MLLMs cannot consistently provide outputs on **instructed formats**. Given the question above, different MLLMs may reply _"A."_, _"High"_, _"The clarity of the image is high."_, _"The image is of high clarity."_ (all correct), which are difficult to be exhaustively
Figure 2: A dataset card of **LLVisionQA** that evaluates the low-level **perception** ability of MLLMs. 2,990 (I,Q,C,F) tuples are collected to cover three question types and four quadrants of low-level visual concerns, providing an all-around evaluation of low-level visual perception for MLLMs.
included under traditional metrics. To solve this problem, we design, validate, and employ a **5-round** GPT-assisted evaluation process inspired by Liu et al. (2023b). Under this process, the question, correct answers, and MLLM replies are fed into GPT for evaluation (See Sec. A.2.1 for its details).
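A minimal sketch of such a multi-round judging loop is given below; the judge instruction shown is a simplified stand-in for the actual prompt in Sec. A.2.1, `ask_gpt` is assumed to be any callable that queries a text-only language model, and the majority-style aggregation is an assumption for illustration.

```python
def judge_mllm_answer(ask_gpt, question, correct_answer, mllm_reply, rounds=5):
    """Asks a text-only GPT judge several times and aggregates the verdicts.

    `ask_gpt` maps a prompt string to a reply string; the instruction below is
    a simplified placeholder, not the exact prompt used in Q-Bench.
    """
    judge_prompt = (
        "You are judging an answer to a multiple-choice question about "
        "low-level image attributes.\n"
        f"Question: {question}\n"
        f"Correct answer: {correct_answer}\n"
        f"Model answer: {mllm_reply}\n"
        "Reply with 1 if the model answer matches the correct answer in "
        "meaning, otherwise reply with 0."
    )
    votes = [1 if ask_gpt(judge_prompt).strip().startswith("1") else 0
             for _ in range(rounds)]
    return sum(votes) / rounds >= 0.5  # treated as correct if most rounds agree
```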
### Benchmark on Low-level **Description** Ability
In the second task of Q-Bench, we evaluate the language **description** ability of MLLMs on low-level information. This task is a sibling task of image captioning (Chen et al., 2015; Young et al., 2014; Agrawal et al., 2019), which describes image content with natural language, but with a specific concern for the low-level appearance of images. To evaluate this ability automatically, we first derive a _golden_ low-level description dataset, denoted as **LLDescribe** (Sec. 2.3.1), including one long (_average **58** words_) _golden_ description provided by experts for each of 499 images. With these _golden_ text descriptions, we are able to measure the quality of output low-level descriptions from MLLMs with a single-modal GPT, under three dimensions: **completeness**, **preciseness**, as well as **relevance** (Sec. 2.3.2). The discussions of the _golden_ descriptions and the evaluation process are as follows.
#### 2.3.1 Defining _Golden_ Low-level Descriptions for Images
For the description ability, MLLMs should accurately and completely describe low-level visual information of images. Thus, the _ground truths_ for these MLLMs are also built under a basic principle: to cover as many low-level concerns as possible, so long as they are enumerated in Sec. 2.2.1 and occur in the images. The resulting _golden_ descriptions in **LLDescribe** have an average length of **58** words, notably longer than in common high-level image caption datasets (**11** for Agrawal et al. (2019), **10** for Chen et al. (2015)). Similar to the **LLVisionQA** dataset for the perception task, the 499 images in the **LLDescribe** dataset also include all 10 sources (as in Tab. 1) to cover images with diverse low-level appearances. The _golden_ descriptions for different sources of images are depicted in Fig. 3.
#### 2.3.2 Evaluation with Single-modal GPT
Recent studies (Zheng et al., 2023) have proved single-modal GPT (OpenAI, 2023) to be a reliable evaluation tool for pure language tasks. Via the **LLDescribe** dataset, we convert the multi-modality problem into a text-only setting, by matching the MLLM outputs against the _golden_ descriptions with single-modal GPT under three dimensions: **(1) Completeness**. More information matched with the _golden_ description is encouraged. **(2) Preciseness**. Information that contradicts the _golden_ description is punished. **(3) Relevance**. A larger proportion of the MLLM output should relate to low-level information, instead of other topics. Each dimension is scored in {0, 1, 2}. Similar to Sec. 2.2.3, we repeat **5 rounds** for each single evaluation and collect the weighted average as the final score. The detailed settings for GPT to evaluate the three dimensions are in Sec. A.2.2.
Figure 3: A dataset card of **LLDescribe** that evaluates the low-level **description** ability of MLLMs. 499 images from 10 diverse sources are labeled with _golden_ descriptions, to serve as **text** references for single-modal GPT to evaluate the completeness, preciseness, and relevance of MLLM outputs.
### Benchmark on Precise Quality Assessment Ability
In the third task, we benchmark the ability of MLLMs to provide _quantitative_ **assessment** on the overall low-level appearance of images. Unlike the two tasks above, we utilize existing IQA datasets that are collected across a variety of low-level appearances to evaluate how MLLMs can predict quantitative quality scores **aligned with human opinions**. All the three types of IQA datasets (_in-the-wild_, _generated_, _artificially-distorted_) as mentioned in Sec. 2.1 are evaluated, to provide a broad range measurement of the assessment ability of MLLMs. Nevertheless, how to collect _quantifiable_ quality scores from MLLMs remains challenging as their outputs only have weak measurability (Sec. 2.4.1). Noticing that MLLMs can provide probabilities of tokens, we employ softmax pooling on the logits of _good_ and _poor_ under a simple and direct prompt template, deriving into _quantifiable_ quality score predicted by MLLMs (Sec. 2.4.2), as illustrated in Fig. 4. Details are as follows.
#### 2.4.1 Weak Measurability of MLLM Outputs
In Q-Bench, we aim to fairly compare the **assessment** ability of different MLLMs on diverse low-level appearances. Hence, our principle is to define a unified, **simplest** instruction that is applicable to all MLLMs on all IQA datasets. Under this principle, we conduct toy experiments on LLVisionQA with Shikra and LLaVA-v1, using two simple instruction strategies: **(A) Direct Instruction,** in which the prompt is as simple as _"Rate the quality of the image"_. The top-frequency answers are _good_ (78%) and _poor_ (20%), with other outputs almost negligible. **(B) Numerical Instruction,** in which we specifically instruct numerical ratings, with the prompt: _"Score the quality of the image from 1 to 5, with 1 as lowest and 5 as highest."_. Under the numerical strategy, the top-frequency answers are **5** (84%), **1** (9%), and **3** (5%); though within the score range, the frequencies of scores **2** and **4** are both less than 1%. The toy experiments imply the weak measurability of MLLM outputs, given that the answers are statistically **1)** biased towards _positive_, **2)** biased towards _extreme_, and **3)** with _only two_ effective scales. Therefore, it is necessary to explore extended strategies for MLLMs to provide truly _quantifiable_ outputs for low-level **assessment**.
#### 2.4.2 A Softmax-based Evaluation Strategy
Given the above observations, we design the softmax-based evaluation strategy (Fig. 4) to reduce the negative impacts of these biases and the lack of scales. To start with, we design our strategy within the **Direct Instruction**, which is more general and less biased than the **Numerical Instruction**. The strategy is based on the observation that the two top-frequency outputs, _good_ and _poor_, can be considered as anchors for better and worse human perception, and that the **Direct Instruction** strategy can be approximated as a binary classification problem on the _[SCORE_TOKEN]_ position, or technically, an argmax between the logits of _good_ (\(x_{SCORE\_TOKEN}^{\textbf{good}}\)) and _poor_ (\(x_{SCORE\_TOKEN}^{\textbf{poor}}\)) at this position. In our revised strategy, we modify the argmax into a softmax to collect better _quantifiable_ scores:
\[q_{\text{pred}}=\frac{e^{x_{SCORE\_TOKEN}^{\textbf{good}}}}{e^{x_{SCORE\_TOKEN}^{\textbf{good}}}+e^{x_{SCORE\_TOKEN}^{\textbf{poor}}}} \tag{1}\]
This simple and generally-applicable strategy enables us to collect _quantifiable_ outputs (\(q_{\text{pred}}\)) from MLLMs with higher correlation to human ratings, as verified in our experimental analysis (Tab. 8).
Figure 4: The proposed softmax-based quality **assessment** strategy for MLLMs. Instead of directly decoding tokens from the _[SCORE_TOKEN] position_, the strategy extracts log probabilities (logits) of _good_ and _poor_, and predicts _quantifiable_ score via a softmax pooling between the two logits.
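In code, the strategy amounts to reading the logit vector at the _[SCORE_TOKEN]_ position and applying Eq. (1) to the two anchor tokens. The PyTorch sketch below is a generic illustration; the actual token ids for _good_ and _poor_ depend on each MLLM's tokenizer, so the ids used in the example are assumptions.

```python
import torch

def softmax_quality_score(score_position_logits, good_token_id, poor_token_id):
    """Implements Eq. (1): a softmax over the logits of the two anchor tokens
    at the [SCORE_TOKEN] position of the response template."""
    pair = torch.stack([score_position_logits[good_token_id],
                        score_position_logits[poor_token_id]])
    return torch.softmax(pair, dim=0)[0].item()

# Toy example: an 8-token vocabulary where ids 3 and 5 stand for "good"/"poor".
logits = torch.tensor([0.1, -1.2, 0.0, 2.3, 0.4, 1.1, -0.5, 0.2])
print(softmax_quality_score(logits, good_token_id=3, poor_token_id=5))  # ~0.77
```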
## 3 Results on Q-Bench
### Experimental Settings
In Q-Bench, we evaluate **10** variants of **9** up-to-date, popular, and competitive open-source MLLMs. As compared in Tab. 2, these MLLMs have varying vision and language architectures, as well as different alignment strategies between the two modalities. All MLLMs are evaluated under **zero-shot** settings without tuning on any datasets in the Q-Bench. More evaluation results are shown in Sec. A.3.
### Results and Observations on Perception
For a holistic examination of the **perception** ability of MLLMs, we evaluate the multi-choice correctness of MLLMs on different sub-categories of the **LLVisionQA** dataset. We are glad that the majority of MLLMs can significantly outperform a _random guess_ on all sub-categories. Considering that none of the participating MLLMs received any explicit training on low-level visual attributes, these results show strong potential for these general-purpose models when further fine-tuned with respective low-level datasets. Among all methods, _Flan-T5-based_ InstructBLIP reaches the best accuracy on this question-answering task (5% better than its _Vicuna-based_ counterpart), inheriting the strong instruction-following ability of the _encoder-decoder_-based Flan-T5 (Chung et al., 2022). Another key observation is that almost all methods **perceive worse on distortions** than on other low-level attributes. One exception is LLaMA-Adapter-V2, which is the only MLLM that adopts **multi-scale** features as visual inputs. We also notice that all MLLMs prefer _yes_ over _no_ among **Yes-or-No** questions, as analyzed in Tab. 7; qualitative comparisons are illustrated in Fig. 10. In summary, we comprehensively evaluate MLLMs on their strengths and weaknesses in low-level **perception**.
score; on the contrary, almost all MLLMs reach an acceptable standard (0.8/2.0). In terms of the relevance dimension, some MLLMs can achieve very good capabilities (_e.g._ Kosmos-2, Otter-v1), but on the other hand, these models still suffer from **unsatisfactory precision**. In general, all MLLMs at present have only a relatively limited and preliminary ability to provide low-level visual descriptions. We also conduct a qualitative comparison for MLLM descriptions in Sec. A.3.2.
### Results and Observations on Assessment
To measure the **assessment** ability, we evaluate the performance of 10 MLLMs on 7 IQA datasets with at least **1,000** images and **15** human ratings per image (itu, 2000). Primarily, we notice that the majority of MLLMs are more robust than NIQE in **non-natural** circumstances (CGI, AIGC, artificial distortions), showing their potential as general-purpose evaluators for a broader range of low-level appearances. Moreover, without explicit alignment with human opinions during training, some approaches (_e.g._, mPLUG-Owl) can already achieve results better than or similar to those of CLIP-ViT-Large-14, the visual backbone of most MLLMs. Nevertheless, current MLLMs are still not stable enough (_e.g._, Otter-v1 on _LIVE-itw_) and remain weaker in finer-grained situations (_LIVE-FB_, _CGIQA-6K_) for visual quality **assessment** tasks, which could be enhanced in the future.
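The paired numbers reported per dataset are correlations between the predicted scores and the human MOS; assuming the usual IQA convention of reporting Spearman (SRCC) and Pearson (PLCC) coefficients, they can be computed as in the sketch below (standard practice, not code from the benchmark).

```python
from scipy.stats import pearsonr, spearmanr

def iqa_correlations(predicted_scores, mos):
    """Returns (SRCC, PLCC) between predicted quality scores and MOS."""
    srcc, _ = spearmanr(predicted_scores, mos)
    plcc, _ = pearsonr(predicted_scores, mos)
    return srcc, plcc

# Toy example with five images.
preds = [0.62, 0.55, 0.71, 0.40, 0.90]
mos = [3.1, 2.8, 3.5, 2.0, 4.2]
print(iqa_correlations(preds, mos))
```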
## 4 Conclusion
In this study, we construct the **Q-Bench**, a benchmark to examine the progress of MLLMs on low-level visual abilities. Anticipating these large foundation models to become general-purpose intelligence that can ultimately relieve human efforts, we propose that MLLMs should achieve three important and distinct abilities: accurate **perception** of low-level visual attributes, precise and complete language **description** of low-level visual information, as well as quantitative **assessment** of image quality. To evaluate these abilities, we collect two multi-modality benchmark datasets for low-level vision, and propose a unified softmax-based quantitative IQA strategy for MLLMs. Our evaluation proves that even without any low-level-specific training, several extraordinary MLLMs still have decent low-level abilities. Nevertheless, there is still a long way to go for MLLMs to become truly reliable general low-level visual assistants. We sincerely hope that the observations found in the Q-Bench can inspire future MLLMs to enhance their low-level perception and understanding abilities.
\begin{table}
\begin{tabular}{l|c c c c c c|c c c|c c c c|c} \hline \hline
**Dimensions** & \multicolumn{4}{c|}{**Completeness**} & \multicolumn{4}{c|}{**Precision**} & \multicolumn{4}{c|}{**Reference**} & \multicolumn{4}{c|}{**Sum.\(\uparrow\)} \\
**Model (uniform)** & \(7_{b}\) & \(-\) & \(P_{1}^{*}\) & \(P_{2}^{*}\) & \(\sim\) & \(\sim\) & \(P_{1}^{*}\) & \(P_{2}^{*}\) & \(\sim\) & \(\sim\) & \(\sim\) & \(\sim\) & \(\sim\) & \(\sim\) & \(\sim\) & \(\sim\) \\ \hline Shink (_Mean-7B_) & 21.16 & 68.39 & 10.58 & 0.89 & 6 & 30.34 & 28.33 & 41.46 & 1.11 & 6 & 154.76 & 72.12 & 2.95 & 0.97 & 9 & 2.97 \\ LiLvA-V1 (_Mean-13B_) & 34.14 & 40.59 & 20.47 & 0.91 & 5 & 30.06 & 15.52 & 54.58 & 1.25 & 3 & 1.18 & 30.86 & 69.09 & **1.60** & 3 & 3.76 \\ MiniCP+I (_Mean-13B_) & 34.09 & 33.26 & 33.83 & **1.00** & 3 & 29.25 & 15.35 & 55.55 & **1.26** & 2 & 6.99 & 45.66 & 47.59 & 1.41 & 6 & 3.67 \\ Komsoms-2 & 8.88 & 70.99 & 20.36 & **1.12** & 1 & 29.54 & 34.74 & 35.89 & 1.06 & 1.7 & 0.22 & 14.88 & 50.98 & 1.84 & 1 & **4.02** \\ LLMa-Adapter-V2 & 30.44 & 50.49 & 15.65 & 0.85 & 8 & 29.79 & 26.66 & 39.36 & 1.14 & 5 & 1.59 & 52.84 & 45.75 & 1.44 & 1.5 & 3.43 \\ InstructBLIP (_Fine-TSXL_) & 24.56 & 63.22 & 13.98 & 0.87 & 7 & 34.98 & 26.06 & 39.19 & 1.04 & 1.8 & 14.74 & 59.99 & 25.45 & 1.11 & 1.5 & 3.03 \\ InstructBLIP (_Fine-TSXL_) & 29.76 & 61.58 & 8.86 & 7 & 19.70 & 28.53 & 23.58 & 48.69 & 1.21 & 47.24 & 61.39 & 11.39 & 0.84 & 1.02 & 2.84 \\ More-v1 (_MPT-7B_) & 22.44 & 59.94 & 18.28 & 0.96 & 14 & 40.78 & 36.69 & 23.39 & 0.82 & 1.10 & 1.22 & 13.82 & **1.83** & 1 & 3.61 \\ IDEFCS-Instruct (_LLMaM-7B_) & 28.99 & 59.25 & 11.99 & 0.82 & 9 & 34.74 & 27.99 & 37.44 & 1.03 & 1.9 & 3.99 & 59.76 & 36.49 & 1.33 & 7 & 3.18 \\ nPLUG-Owl (_LLMaM-7B_) & 28.37 & 37.76 & 34.09 & **1.06** & 2 & 26.79 & 18.25 & 55.19 & **1.28** & 1 & 3.09 & 33.88 & 63.24 & **1.60** & 3 & **3.94** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on the low-level **Description** ability of MLLMs. \(P_{i}\) denotes frequency for score \(i\).
\begin{table}
\begin{tabular}{l|c c c c c|c c c} \hline \hline
**Dataset Type** & \multicolumn{4}{c|}{In-the-wild} & \multicolumn{4}{c|}{Generated} & Artificial \\
**Model / Dataset** & _RNG-10K_ & _-SRQ_ & _-LVF-FB_ & _-LVF-No_ & _CGIQA-6K_ & _AG-6K_ & _AG-6K_ & \(K\) & \(K\) & \(K\) & \(K\) \\ \hline NIQE (Mittal et al., 2013) & 0.316/0.377 & _0.693/0.699_ & _0.211/0.288 & _0.480/0.451 & 0.075/0.056 & 0.562/0.517 & 0.374/0.428 \\ CLIP-ViT-Large-14 & **0.468/0.50** & **0.350/0.389** & 0.281/0.327 & 0.307/0.308 & **0.285/0.290** & 0.436/0.458 & 0.376/0.388 \\ Shink (_Mean-7B_) & 0.314/0.307 & 0.327/0.337 & 0.237/0.241 & 0.52/0.326 & 0.316/0.201 & **0.64**/0.661 & 0.324/0.332 \\ LLaV-v1 (_Mean-13B_) & **0.462/0.457** & 0.442/0.426 & **0.264/0.280** & 0.404/0.417 & 0.208/0.237 & 0.626/**0.684** & 0.349/0.372 \\ MiniGPt (_Mean-13B_) & 0.299/0.257 & 0.238/0.253 & 0.170/0.183 & 0.339/0.340 & 0.250/0.246 & 0.572/0.591 & 0.299/0.233 \\ Kosmos-2 & 0.255/0.281 & 0.644/0.614 & 0.169/0.195 & 0.195/0.350 & 0.386/0.386 & 0.21/0.225 & 0.489/0.491 & 0.359/0.365 \\ LLaMA-Adapter-V2 & 0.354/0.363 & 0.464/0.506 & **0.275/0.329** & 0.298/0.360 & **0.257/0.271** & 0.604/**0.666** & **0.412/0.25** \\ InstructBLIP (_Fine-TSXL_) & 0.334/0.362 & 0.582/0.599 & **0.248/0.267 & 0.113/0.113 & 0.167/0.188 & 0.378/0.400 & 0.211/0.179 \\ InstructBLIP (_Mean-7B_) & 0.359/**0.437** & **0.68/0.68/0.69** & 20.00/0.283 & 0.253/0.367 & **0.263/0.304** & **0.629/0.663** & 0.337/0.382 \\ Otter-v1 (_MPT-7B_) & 0.406/0.406 & 0.436/0.441 & 0.413/0.402 &
## Author Contributions
We will reveal the contributions of authors in the final edition.
### Acknowledgments
100% of the annotated labels in the LLVisionQA and LLDescribe datasets (_question-answers_ and long _golden_ descriptions) were produced by human experts. We sincerely thank them for their efforts.
|
2310.20347 | Automatic Generators for a Family of Matrix Multiplication Routines with
Apache TVM | We explore the utilization of the Apache TVM open source framework to
automatically generate a family of algorithms that follow the approach taken by
popular linear algebra libraries, such as GotoBLAS2, BLIS and OpenBLAS, in
order to obtain high-performance blocked formulations of the general matrix
multiplication (GEMM). % In addition, we fully automatize the generation
process, by also leveraging the Apache TVM framework to derive a complete
variety of the processor-specific micro-kernels for GEMM. This is in contrast
with the convention in high performance libraries, which hand-encode a single
micro-kernel per architecture using Assembly code. % In global, the combination
of our TVM-generated blocked algorithms and micro-kernels for GEMM 1)~improves
portability, maintainability and, globally, streamlines the software life
cycle; 2)~provides high flexibility to easily tailor and optimize the solution
to different data types, processor architectures, and matrix operand shapes,
yielding performance on a par (or even superior for specific matrix shapes)
with that of hand-tuned libraries; and 3)~features a small memory footprint. | Guillermo Alaejos, Adrián Castelló, Pedro Alonso-Jordá, Francisco D. Igual, Héctor Martínez, Enrique S. Quintana-Ortí | 2023-10-31T10:36:26Z | http://arxiv.org/abs/2310.20347v1 | # Algorithm XXX: Automatic Generators for a Family of Matrix Multiplication Routines with Apache TVM
###### Abstract.
We explore the utilization of the Apache TVM open source framework to automatically generate a family of algorithms that follow the approach taken by popular linear algebra libraries, such as GotoBLAS2, BLIS and OpenBLAS, in order to obtain high-performance blocked formulations of the general matrix multiplication (gemm). In addition, we fully automatize the generation process, by also leveraging the Apache TVM framework to derive a complete variety of the processor-specific micro-kernels for gemm. This is in contrast with the convention in high performance libraries, which hand-encode a single micro-kernel per architecture using Assembly code. In global, the combination of our TVM-generated blocked algorithms and micro-kernels for gemm 1) improves portability, maintainability and, globally, streamlines the software life cycle; 2) provides high flexibility to easily tailor and optimize the solution to different data types, processor architectures, and matrix operand shapes, yielding performance on a par (or even superior for specific matrix shapes) with that of hand-tuned libraries; and 3) features a small memory footprint.
Portability and maintainability, software lifecycle, matrix multiplication, BLIS framework, Apache TVM, blocking, SIMD vectorization, high performance
effort has come from major hardware vendors, with some relevant products being Intel MKL, AMD AOCL, IBM ESSL, ARMPL and NVIDIA cuBLAS, as well as from the academic side, with software packages such as GotoBLAS2 [15], OpenBLAS [33], BLIS [30] and ATLAS [13].
The general matrix multiplication (gemm) is a crucial computational kernel upon which these LA libraries are built. In addition, gemm is also a key operation for deep learning applications that leverage transformers for natural language processing or convolutional deep neural networks for signal processing and computer vision [2; 29]. Unfortunately, these LA libraries present a few obstacles:
1. The optimized routines in these libraries are hardware-specific. This is the case for Intel, IBM, ARM and NVIDIA's products. To a lesser extent, it also applies to GotoBLAS2, OpenBLAS and BLIS,1 which leverage a collection of processor-customized micro-kernels [31]. Footnote 1: The same comment applies to AMD’s library, which is simply a customized version of BLIS.
2. Developing highly optimized micro-kernels for gemm requires a deep knowledge of high performance computing and computer architecture.
3. The code in these libraries is very large. Besides, it is hard to master due to the abundant use of productivity-enhancing macros, templates and high level programming techniques. Maintaining the libraries thus mostly lies in the hands of the original developers.
4. The software misses some relevant cases such as, for example, support for half (i.e., 16-bit) floating point precision or integer arithmetic.
5. The implementation of gemm in these libraries is sub-optimal under certain circumstances as the code is usually tuned for "squarish", large-scale cases.
6. The memory footprint of the libraries is often in the order of Mbytes.
In this paper we address the limitations of LA libraries by demonstrating that it is possible to automatically generate a family of blocked algorithms for gemm, together with a collection of micro-kernels for gemm, using Apache TVM (Tensor Virtual Machine) [8]. This alternative solution offers the following advantages:
1. At a high level, the library is "replaced" by a collection of TVM generators (in the form of Python routines), reducing the maintainability effort to a bare minimum and largely improving the portability of the solution.
2. Using the appropriate backend compiler, the generation/optimization can be easily specialized for distinct data types, further enhancing the portability and maintainability of the solution.
3. By adjusting the algorithm and micro-kernel to the problem, it is possible to outperform high performance realizations of gemm in commercial products as well as academic libraries.
4. The optimization process for each problem dimension is largely seamless, boiling down to the evaluation of a reduced number of micro-kernels. In other words, the optimization search space is limited.
5. The memory footprint of the resulting realization of gemm is very small, as is that of the entire framework.
Our paper makes two significant contributions. Firstly, it offers a comprehensive and pedagogical tutorial on building a high-performance implementation of gemm using TVM, complete with extensive code examples and detailed explanations. Secondly, and perhaps more importantly, it provides compelling evidence for the advantages of automating the development process. These advantages include enhanced portability, reduced maintenance costs, insulation from hardware-specific optimization details, and ultimately, improved performance. This performance boost arises from the ability to explore multiple micro-kernels, which outperforms libraries relying on a single general-purpose micro-kernel.
The rest of the paper is structured as follows. Section 2 provides an in-depth review of the state-of-the-art in automatic realizations of high performance libraries in the fields of dense linear algebra in general, and deep learning in particular, and discusses different alternatives towards automatic code generation. In Sections 3 and 4 we respectively review the modern realization of algorithms and micro-kernels for gemm. This is then followed, in Sections 5 and 6, by the introduction of the automatic generation of a baseline gemm algorithm and a variety of micro-kernels, respectively; and in Section 7, by the extension of the automation techniques to cover a complete family of gemm algorithms. In Section 8 we evaluate the performance of the resulting solution in a platform equipped with ARM cores, and demonstrate its portability to other modern architectures from different vendors. Finally, in Section 9 we summarize the main results from this work.
## 2. Related Work
Automatic code generation has been gaining interest in recent years as a means to attain performance portability across existing and new architectures, for deep learning (DL) models, with a minimal intervention from the programmer (Han et al., 2017). Recent languages and compiler frameworks, such as Halide (Halide, 2018) or TVM (Han et al., 2018), propose a clear separation of concerns between the definition of the operation and its optimization, in order to ease the development of operators for a plethora of target architectures, including general-purpose processors, GPUs, digital signal processors (DSPs), and specific-purpose accelerators (Han et al., 2019). Starting from a _computational graph_ which defines the operator and the _data flow_, optimization techniques for performance portability are applied at the graph level, via operator fusions and transformations; and at the operator level, with hardware-specific optimizations. Some of these optimizations are framework-specific (e.g., TensorFlow XLA (Krizhevsky et al., 2015)) while others, developed by experts, are realized within specialized libraries (e.g., NVIDIA cuDNN (Han et al., 2017)).
From a technical perspective, DL compilers can be broadly classified as JIT (_Just-in-time_) or AOT (_Ahead-of-time_). JIT compilers generate executable code on-the-fly. Hence, they can exploit extended runtime knowledge to fine-tune the final executable, at the cost of a certain overhead due to the code generation logic. Many common DL frameworks and compilers rely on or support JIT, including TFLite and its Micro variant2, XLA, MLIR (Krizhevsky et al., 2015) and TVM. In contrast, AOT compilers generate executable code a priori, and execute the precompiled versions at runtime without further modification. The advantage of the AOT approach is two-fold: first, it can extend the analysis scope, hence accommodating more thorough optimizations; second, it eases the development of cross-compilation schemes for embedded architectures (Krizhevsky et al., 2015) or remote execution-only architectures (Krizhevsky et al., 2015). The use of external libraries for DL primitives (mainly convolutions and linear algebra primitives) can also be considered AOT, since in general they are implemented statically and optimized prior to execution time.
Footnote 2: [http://www.tensorflow.org/lite](http://www.tensorflow.org/lite).
Armed with hardware-agnostic IRs (Intermediate Representations), these compiler frameworks effectively decouple schedule from computation, and enable the automatic exploration of the scheduling and configuration spaces by means of auto-tuning techniques. A clear example is AutoTVM (Han et al., 2017), integrated within TVM, which performs a complete exploration of the search space; at each iteration, the framework tests the generated code on the target device, and stores some feedback which is eventually leveraged to find the optimal combination of configuration parameters. This _brute force_ optimization scheme guarantees finding the optimal solution (configuration setup), but the number of tested configurations grows exponentially with the dimension of the design space. Hence, the use of these naive schemes is limited to problems with reduced search spaces, or where online testing is time-inexpensive. (Unfortunately, this is not the case of the architecture-aware adaptation of DL models to a specific hardware setup.) In order to alleviate the expensive search-and-test procedure, automatic learning schemes (such as XGBoost) have
been enhanced (Steintein and Krapivsky, 2017) with Random Search (Krapivsky and Krapivsky, 2017), Bayesian optimization (Krapivsky and Krapivsky, 2017) and Genetic algorithms (Krapivsky and Krapivsky, 2017). More recently, Deep Reinforcement Learning techniques have been proposed to autonomously guide and extract policies for optimal execution configuration (Krapivsky and Krapivsky, 2017; Krapivsky and Krapivsky, 2017; Krapivsky et al., 2017; Steintein and Krapivsky, 2017). While all these efforts alleviate the cost of the hyperparameter search, they are still computationally expensive and, in many cases, yield solutions that are difficult to explain or reproduce for developers, or to port to embedded architectures or systems with reduced compute capabilities.
Auto-tuning techniques within DL compiler frameworks follow similar ideas to those of auto-tuning in dense LA routines. ATLAS (Alees et al., 2017) selects the cache configuration parameters and the micro-kernel at installation time via a comprehensive search procedure. Recently, in the framework of the BLIS project, the work in (Krapivsky and Krapivsky, 2017) demonstrated that deriving analytical models for optimal configuration parameters selection is an effective way to attain high performance without the necessity of an auto-tuning scheme for configuration parameter exploration. Replacing the auto-tuning scheme with model-based solutions was also successfully explored in (Krapivsky and Krapivsky, 2017; Krapivsky and Krapivsky, 2017).
Our approach combines the advantages of existing JIT compiler frameworks for easily deriving high performance codes for gemm-based DL primitives, with analytical models in order to avoid expensive auto-tuning. Our work differs from (Blees et al., 2017), which uses MLIR to describe early experiences exclusively with gemm, as well as from (Steintein and Krapivsky, 2017), which proposes advanced auto-tuning schemes for this primitive. Concretely, we leverage TVM to extend and further analyze the automatic generation of gemm-based primitives for DL, integrating analytical models to ease the optimization process without the need for expensive auto-tuning procedures. In addition, we apply our ideas to gemm (Sections 5 and 6).
The use of analytical models to determine optimal execution parameters eliminates the time penalty introduced by auto-tuning, and avoids its lack of accuracy and reproducibility. While not discussed in this work, two further benefits of our proposal are 1) extending the analytical models to address other algorithmic variants for gemm, which favor certain cache hierarchy setups (Blees et al., 2017), as additional configuration parameters; and 2) supporting optimal parallelization schemes (Steintein and Krapivsky, 2017) for gemm-based operators. These additional degrees of freedom can be supported within the compilation stage whereas, in comparison, they would introduce an unaffordable complexity in auto-tuning schemes.
## 3. A Family of Blocked Algorithms for gemm
In this section we review the modern realization of gemm in current high performance libraries, and generalize the ideas to discuss a full family of algorithms for this operation.
### The baseline algorithm
Consider the matrix multiplication \(C=C+AB\), abbreviated as \(C \mathrel{+}= AB\), where the matrix operands present the following dimensions: \(A\to m\times k\), \(B\to k\times n\), and \(C\to m\times n\). The realization of this kernel for general-purpose processor architectures with hierarchically-layered memories, in libraries such as OpenBLAS, BLIS, AMD AOCL and, possibly, Intel MKL/oneMKL, follows the basic ideas of GotoBLAS2 (GotoBLAS, 2018) to decompose the computation into five nested loops, traversing the \(m,n,k\) dimensions of the problem in a specific order. Inside these loops, two packing routines copy (and re-arrange) certain blocks of the input operands \(A,\ B\) into two buffers, \(A_{c}\to m_{c}\times k_{c}\), \(B_{c}\to k_{c}\times n_{c}\), in order to favor an efficient utilization of the cache memories. (For simplicity, in the following we assume that \(m\), \(n\), and \(k\) are integer multiples of \(m_{c}\), \(n_{c}\), and \(k_{c}\), respectively.) Furthermore, the fifth loop comprises a micro-kernel that is often vectorized to exploit the SIMD units in most current general-purpose processors (Stein and Krapivsky, 2017). In short, the _baseline algorithm for gemm_ aims at maintaining a block \(B_{c}\) in the L3 cache and a block \(A_{c}\) in the L2 cache, while streaming \(C\) directly from the main memory into the processor registers. Furthermore, a small micro-panel of \(B_{c}\) is to reside in the L1 cache.
The orchestration of the data movements across the memory hierarchy in the baseline algorithm for gemm is determined by the specific nesting of the algorithm loops, in combination with a careful choice of the loop strides/sizes of the buffers (Kumar et al., 2017). The operands' partitioning induced by the loops as well as the target level of the memory hierarchy for each operand block are illustrated in Figure 1 (left). We will also refer to the baseline algorithm as B3A2C0, where each letter denotes one of the three matrix operands, and the subsequent number specifies the cache level where a part of the operand is to reside, with 0 referring to the processor registers. As \(B\) is to reside both in the L1 and L3 caches, we do not specify the former in the notation.
### Other members of the gemm family
We continue the discussion on the high performance realization of gemm by noting that there exist five other algorithmic variants which can be obtained by re-organizing the loops of the baseline algorithm in a different manner (Birsch et al., 2016; Kumar et al., 2017; Kumar et al., 2018). For example, a "twin" version is directly obtained by swapping the roles of \(A\) and \(B\) in the baseline algorithm, yielding the A3B2C0 variant, where two blocks of \(A\) respectively occupy the L1 and L3 caches and a block of \(B\) resides in the L2 cache; see Figure 1 (right). In the same line, Figure 2 displays two additional variants of the gemm family: B3C2A0 (left) and A3C2B0 (right) that maintain a block of \(C\) in the L2 cache. The two missing variants, where \(C\) resides in the L3 cache, (C3B2A0 and C3A2B0, omitted for brevity,) are derived by swapping the roles of \(A/B\) with \(C\) in the two algorithms given in that figure; see (Birsch et al., 2016).
## 4. High performance micro-kernels for gemm
In this section, we connect the six blocked algorithms for gemm with three types of micro-kernels that differ in the matrix operand that resides in the processor registers (Birsch et al., 2016; Kumar et al., 2017; Kumar et al., 2018). For simplicity, we assume that the cache configuration parameters \(m_{c}\), \(n_{c}\), \(k_{c}\) are respectively integer multiples of the micro-kernel parameters \(m_{r}\), \(n_{r}\), \(k_{r}\) where, depending on the type of micro-kernel, two of the latter parameters specify the dimension of the micro-tile that resides in the processor registers.
### Operand Resident \(C\) (in the processor registers)
The baseline algorithm for gemm and its "twin" A3B2C0 (both in Figure 1) cast the innermost computation in terms of a micro-kernel that computes a smaller gemm, \(C_{r} \mathrel{+}= A_{r}B_{r}\), where \(A_{r}\to m_{r}\times k_{c}\), \(B_{r}\to k_{c}\times n_{r}\) respectively denote two micro-panels of the buffers \(A_{c},B_{c}\); while \(C_{r}\to m_{r}\times n_{r}\) is a small micro-tile of \(C\) that resides in the processor registers during the execution of the micro-kernel. This corresponds to the operation performed inside loop L5 of the baseline algorithm B3A2C0 (see Figure 1), with
\[
\begin{aligned}
A_{r} &= \texttt{Ac(ir:ir+mr-1, 0:kc-1)},\\
B_{r} &= \texttt{Bc(0:kc-1, jr:jr+nr-1)},\\
C_{r} &= \texttt{C(ic+ir:ic+ir+mr-1, jc+jr:jc+jr+nr-1)}.
\end{aligned}
\]
The realization of this micro-kernel iterates across the \(k_{c}\) dimension of the problem (as part of an additional loop, labeled as L6), at each step performing an outer product involving a single column of \(A_{r}\) and a single row of \(B_{r}\) to update the entire micro-tile \(C_{r}\); see Figure 3 (top).
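For illustration, the following is a minimal NumPy sketch of this Resident \(C\) micro-kernel (the function name and structure are ours, not the library code): the micro-tile \(C_{r}\) is updated by \(k_{c}\) rank-1 (outer product) updates, one per iteration of loop L6.

```python
import numpy as np

def micro_kernel_resident_c(Cr, Ar, Br):
    """Update the mr x nr micro-tile Cr += Ar * Br via kc rank-1 updates."""
    mr, kc = Ar.shape
    kc_b, nr = Br.shape
    assert kc == kc_b and Cr.shape == (mr, nr)
    for p in range(kc):                      # loop L6
        Cr += np.outer(Ar[:, p], Br[p, :])   # one column of Ar times one row of Br
    return Cr
```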
For high performance, the data in \(A_{c},B_{c}\) are carefully packed as illustrated in Figure 4, in order to ensure access with unit stride to the columns of \(A_{r}\) and the rows of \(B_{r}\) from within the micro-kernel. This reduces the number of cache evictions during these accesses as well as accommodates the use of efficient SIMD instructions to load these elements into vector registers and operate with them.
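As a hedged illustration of this layout (one possible realization of the packing in Figure 4; the buffers in the actual libraries may be arranged differently), the following NumPy sketch packs a block of \(A\) into micro-panels of \(m_{r}\) rows, stored column by column so that the micro-kernel can read each column of \(A_{r}\) with unit stride:

```python
import numpy as np

def pack_A_block(A_block, mr):
    """Pack an mc x kc block of A into (mc/mr) micro-panels of mr x kc,
    each stored column by column (assumes mc is a multiple of mr)."""
    mc, kc = A_block.shape
    Ac = np.empty((mc // mr, kc, mr), dtype=A_block.dtype)
    for i in range(mc // mr):
        # Ac[i, j, :] holds column j of the i-th micro-panel A_r
        Ac[i] = A_block[i * mr:(i + 1) * mr, :].T
    return Ac
```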
### Other types of micro-kernels: Operand Resident \(A\) or Resident \(B\)
The four remaining variants of gemm (see, e.g., Figure 2) leverage two other types of micro-kernels: The first one maintains a micro-tile of \(A\) in the processor registers while performing an \(m_{r}\times k_{r}\) matrix-vector product per iteration of loop L6. The second one keeps a micro-tile of Resident \(B\) in the processor registers, and carries out a \(k_{r}\times n_{r}\) vector-matrix product per iteration of loop L6. These two types of micro-kernels are illustrated in Figure 3 (middle and bottom).

Figure 1. Baseline (B3A2C0) and A3B2C0 algorithms (left and right, respectively) for gemm.

Figure 2. B3C2A0 and A3C2B0 algorithms (left and right, respectively) for gemm.
To enable SIMD loads/stores, each type of micro-kernel requires a specialized packing scheme for two of the matrix operands. For the micro-kernel with Resident \(A\) (in the processor registers), \(C_{c}\) and \(B_{c}\) are packed following the same pattern as \(A_{c}\) in Figure 4 (with the entries of \(B_{c}\) arranged into micro-panels of \(k_{r}\) rows). For the micro-kernel with Resident \(B\), the buffers for \(C_{c}\) and \(A_{c}\) are packed as \(B_{c}\) in the same figure (with the entries of \(A_{c}\) arranged into micro-panels of \(k_{r}\) columns).
Figure 3: Micro-kernels with Resident \(C\), Resident \(A\) or Resident \(B\) in the processor registers (top, middle and bottom, respectively).

Figure 4: Packing in the baseline algorithm.
### High performance
A few rules of thumb guide the design of a high performance micro-kernel for a processor architecture with a multi-layered memory hierarchy (Krishnan et al., 2017). We discuss them for a micro-kernel with Resident \(C\), but they are easy to derive for the two other types of micro-kernel:
* Considering the \(k_{c}\) successive updates of the micro-tile \(C_{r}\) occurring in loop L6, the micro-kernel parameters \(m_{r},n_{r}\) should be chosen sufficiently large so as to avoid stalls due to the latency between the issuance of two instructions that update the same entry of \(C_{r}\).
* Ideally, \(m_{r}\) should be equal to \(n_{r}\) as this maximizes the ratio of computation to data movement during the update of \(C_{r}\) in loop L6.
These two principles suggest maximizing the values for \(m_{r},n_{r}\) as part of a "large" micro-kernel. In practice though, the limited number of vector registers constrains the practical values of \(m_{r},n_{r}\) within a couple of dozens.
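As a hedged back-of-the-envelope example (assuming an ARMv8a NEON core with 32 128-bit vector registers and FP32 data), an \(m_{r}\times n_{r}=8\times 12\) micro-kernel with Resident \(C\) keeps

\[
\frac{m_{r}\,n_{r}}{4}=\frac{8\cdot 12}{4}=24
\]

vector registers busy with \(C_{r}\), plus \(m_{r}/4=2\) registers for a column of \(A_{r}\) and \(n_{r}/4=3\) registers for a row of \(B_{r}\), that is, 29 of the 32 available registers; a substantially larger micro-tile would therefore spill registers.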
A comparison of the micro-kernel with Resident \(C\) and those with Resident \(A/B\) reveals some differences:
* The variant with Resident \(C\) presents a higher arithmetic intensity since, while all types of micro-kernels perform the same number of flops and the same number of loads from memory, the variants with Resident \(A/B\) have to write a column or row of \(C\) back into the memory at each iteration of the loop.
* Unlike the micro-kernel with Resident \(C\), the variants with Resident \(A/B\) do not present a RAW (read-after-write) dependency between consecutive iterations of the micro-kernel.
The implementation of the micro-kernels in general-purpose processor architectures equipped with SIMD arithmetic units is in practice done in Assembly code; vectorized using architecture-specific SIMD instructions (e.g., Intel SSE/AVX, ARM NEON, etc.); and enhanced with high performance computing techniques such as loop unrolling, software pipelining, data prefetching, etc. (Krishnan et al., 2017).
We close this section by noting that for large, squarish gemm problems, the optimal values for the micro-kernel parameters \(m_{r},n_{r},k_{r}\) can be determined, for a given processor architecture, via a few experimental tests. However, for problems with one dimension much larger than the others, the optimal values of these parameters may vary considerably, and they can only be determined experimentally by first implementing them. Unfortunately, developing a high performance micro-kernel is mostly a manual process, which requires an expert with a deep knowledge of high performance computing and computer architecture to attain optimal performance.
## 5. Automatic generation of the baseline algorithm for gemm
Apache TVM is an open source compiler framework that allows generating, optimizing, and executing machine learning routines for (general-purpose) multicore processors, GPUs (graphics processing units), and other accelerator backends (Bergmann et al., 2017). In our effort toward automatically generating high performance algorithms for gemm, we start from the TVM tutorial on how to optimize gemm on a general-purpose processor by blocking (tiling) and packing the matrix operands.3 Concretely, we build upon these instructions to assemble a TVM generator that automatically produces the baseline algorithm for gemm. Our approach extends the tutorial in that we apply the optimizations in a BLIS-like manner instead of following a general optimization approach. More concretely we introduce the cache-aware packings as well as software prefetching. In addition, we provide guidelines to derive different algorithmic variants for gemm.
### Basic gemm with TVM
Consider a basic realization of gemm consisting of three loops that compute each element of the output matrix by performing a reduction (i.e., an inner or dot product) across the \(k\) dimension of the problem:
```
for (i = 0; i < m; i++)
  for (j = 0; j < n; j++)
    for (p = 0; p < k; p++)
      C(i,j) += A(i,p) * B(p,j);
```
This specific realization of gemm, with the loops traversing the dimensions of the problem in the order \(i\Rightarrow j\Rightarrow p\), can be obtained using the (Python-based) TVM generator in Figure 5. We distinguish nine fundamental parts in the code (in this basic example, some of the parts are empty, but they will appear in the refined versions of the generator described in subsequent sections):
_Part P0. Parameter list:_ The function receives the matrix dimensions m, n, k, and the data type of the operands (dtype) as parameters.

_Part P1. Declaration of the input operands:_ Lines 3 and 4 define two operands, or placeholders, respectively for \(A\) and \(B\), of the appropriate dimensions and data type.

_Part P2. Definition of the operation:_ Lines 7-10 specify the operation to be computed in terms of the two placeholders. In particular, line 7 defines the computation in terms of a reduction (sum) across the dimension k.

_Part P3. Preparation of the schedule:_ Line 13 creates a schedule. In TVM, this corresponds to the order in which the loops within the program are executed and, in this particular case, to the three nested loops induced from the application of the lambda function for each \((i,j)\) entry across the reduction axis \(p\). By default, the schedule processes a tensor serially following a row-major order.

_Part P4. Specification of the loop ordering:_ The order in which the loops are generated with the previous schedule may not match that of the baseline algorithm. Lines 16-18 extract the desired axes induced from the computation loops defined in Part P2, and ensure that the loops follow the scheme in the basic gemm: \(i\Rightarrow j\Rightarrow p\).

_Part P5. Placement of the packings:_ No packing occurs in the basic gemm and, therefore, this part is empty.

_Part P6. Application of fine-grain optimizations:_ Fine-grain optimizations such as unrolling and SIMD vectorization are not included in this initial version.

_Part P7. Loop-level parallelization:_ Selection of the specific loop to parallelize, if necessary.

_Part P8. Generation of code:_ Finally, line 36 instructs TVM to generate the code, in this case, for an LLVM backend.
Other loop orderings are easily obtained using TVM by simply re-arranging differently the loop variables in line 18. (However, this is also straightforward to do in the basic triple nested loop code written in C.) More interestingly, this example also illustrates that code for distinct backends can be obtained by simply changing the target in line 36. Concretely, the generic llvm target there generates code for the machine in which the generator is executed, but the commented lines in Part P8 offer several examples which generate code for other architectures. To close the discussion of this first TVM generator, note that it is independent of the matrix operands' data types.
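As a point of reference, the following is a minimal sketch of a generator with the structure just described, written against TVM's tensor expression (te) API; the function name and line structure are ours, and the actual routine in Figure 5 may differ in its details.

```python
import tvm
from tvm import te

def basic_gemm_sketch(m, n, k, dtype="float32"):
    # P1: input placeholders for A (m x k) and B (k x n)
    A = te.placeholder((m, k), name="A", dtype=dtype)
    B = te.placeholder((k, n), name="B", dtype=dtype)
    # P2: C(i, j) defined as a reduction (sum) across dimension k
    p = te.reduce_axis((0, k), name="p")
    C = te.compute((m, n),
                   lambda i, j: te.sum(A[i, p] * B[p, j], axis=p),
                   name="C")
    # P3: default schedule
    s = te.create_schedule(C.op)
    # P4: enforce the loop ordering i => j => p
    i, j = s[C].op.axis
    (pr,) = s[C].op.reduce_axis
    s[C].reorder(i, j, pr)
    # P8: code generation for a generic LLVM backend
    return tvm.build(s, [A, B, C], target="llvm")
```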
### Blocking for the baseline algorithm with TVM
Our next goal is to build a blocked algorithm that partitions the matrix operands mimicking the three outermost loops of the baseline algorithm (B3A2C0), labeled as L1, L2 and L3. For this purpose, remember that these loops divide \(A\) into blocks of dimension \(m_{c}\times k_{c}\), \(B\) into blocks of dimension
\(k_{c}\times n_{c}\), and \(C\) into blocks of dimension \(m_{c}\times n_{c}\); see Figure 1 (left). For clarity, consider the following blocked algorithm, which realizes the partitioning scheme in the baseline algorithm for gemm in order to decompose the matrix multiplication into a collection of finer-grain computations:
```
for (jc = 0; jc < n; jc += nc)
  for (pc = 0; pc < k; pc += kc)
    for (ic = 0; ic < m; ic += mc)
      for (jr = 0; jr < nc; jr++)
        for (ir = 0; ir < mc; ir++)
          for (pr = 0; pr < kc; pr++)
            C(ic+ir, jc+jr) += A(ic+ir, pc+pr) * B(pc+pr, jc+jr);
```
Figure 6 displays a TVM generator that produces a code for gemm that will perform the computation adopting the same blocking scheme. Compared with the TVM generator for the basic gemm in Figure 5, the routine presents the following differences:
``` PartP0.Parameterlist:In addition to the parameters from the basic gemm, the function receives the blocking parameters mc, nc, kc. PartP3.Preparation of the schedule (with blocking):In preparation for the operands' tiling, lines 9-13 prompt the sought-after splittings of the problem dimensions \(m\), \(n\), \(k\).
Figure 5: TVM generator for the basic gemm.
_Part P4. Specification of the loop reordering:_ Line 16 specifies a loop ordering which matches that of the baseline algorithm for gemm: \(j_{c}\Rightarrow p_{c}\Rightarrow i_{c}\Rightarrow j_{r}\Rightarrow i_{r} \Rightarrow p_{r}\).
Here and in the following, for brevity, we will omit the parts that remain the same with respect to the prior generators.
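As a hedged sketch of the scheduling changes just described (Parts P0, P3 and P4), the blocked generator of Figure 6 can be approximated as follows with TVM's te API; names and details are ours and may differ from the actual code.

```python
import tvm
from tvm import te

def blocked_gemm_b3a2c0_sketch(m, n, k, mc, nc, kc, dtype="float32"):
    A = te.placeholder((m, k), name="A", dtype=dtype)
    B = te.placeholder((k, n), name="B", dtype=dtype)
    p = te.reduce_axis((0, k), name="p")
    C = te.compute((m, n),
                   lambda i, j: te.sum(A[i, p] * B[p, j], axis=p), name="C")
    s = te.create_schedule(C.op)
    # P3: split the m, n, k dimensions with strides mc, nc, kc
    i, j = s[C].op.axis
    (p_,) = s[C].op.reduce_axis
    ic, ir = s[C].split(i, factor=mc)
    jc, jr = s[C].split(j, factor=nc)
    pc, pr = s[C].split(p_, factor=kc)
    # P4: loop ordering jc => pc => ic => jr => ir => pr (Figure 1, left)
    s[C].reorder(jc, pc, ic, jr, ir, pr)
    return tvm.build(s, [A, B, C], target="llvm")
```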
The code for gemm produced by TVM using the algorithm in Figure 6 differs from the high performance realizations of gemm in libraries such as GotoBLAS, BLIS and OpenBLAS, in two important aspects:
1. There are no packing routines that re-arrange the contents of the input matrices \(A,B\) into buffers in order to streamline the operation of the micro-kernel.
2. The innermost loop does not formulate the computation in terms of an outer product that updates a small \(m_{r}\times n_{r}\) micro-tile \(C_{r}\). Instead, the TVM generator produces a code which decomposes this last computation into fine-grain multiply-add operations involving individual elements of \(A,B,C\).
In the next subsection we deal with the first issue while the second one is discussed in Section 6.
### Packing for the baseline algorithm with TVM
Figure 7 refines the TVM generator for the blocked algorithm to include hints to TVM in order to produce a specialized version that packs \(A\) and \(B\) into two buffers, \(A_{c}\) and \(B_{c}\), with the same layout as that present in the baseline algorithm; see Figure 4. We note the following differences with respect to the TVM generator for the blocked algorithm:
_Part P0. Parameter list:_ The new routine receives two additional parameters: mr, nr.
Figure 6. TVM generator for gemm mimicking the blocking scheme of the baseline algorithm.

_Part P2. Definition of the operation and packing schemes for \(A\) and \(B\):_ Lines 8-12 declare a 4D TVM tensor Ac which will hold a packed version of the complete operand \(A\). Ac is instantiated as a (2D) matrix of small (packed) matrices (_micro-panels_ of \(A\)), each of dimension \(m_{r}\times k_{c}\); see the red micro-panel labeled as \(A_{r}\) in Figure 4. Ac will potentially store _all_ packed micro-panels of \(A\) that will be generated during the iterations of loop L3 in Figure 1, and hence its global dimension initially matches that of \(A\). However, depending on the placement of the packing specified by the user, TVM will evaluate the potential reuse level of the buffer(s) and will accordingly adapt the buffer dimensions. (This will be refined in the following discussion of part P5.) The lambda function in lines 10-11 specifies the desired packing scheme by mapping the elements of Ac to their counterparts in \(A\). Concretely, the four arguments (i, j, q, r) of the function determine how the element Ac[i,j,q,r] is extracted from \(A\) conformally with the packing scheme for the baseline algorithm.
An analogous 4D tensor Bc is declared to hold a packed version of \(B\). The computation of \(C\) is then re-defined in terms of the packed tensors, with an indexing that accesses the entries of Ac with unit stride. Also the computation is defined in terms of a reduction (sum) across the dimension k.
_Part P5. Placement of the packings:_ Lines 38-39 place the computation of tensors Bc and Ac at the desired points of the loop nesting: Concretely, inside the loop indexed by pc for Bc and the loop indexed by ic for Ac; see the algorithm on the left of Figure 1. As a result of combining the correct placement of the tensor buffers, loop ordering, and access pattern in the blocked computation of C, the packed tensors Ac and Bc do not need to have the same size as A and B, respectively. Instead, they are computed at each iteration of loop L3 (indexed by ic) and loop L2 (indexed by pc). In the latter case, for example, the reuse pattern of elements for Ac guides the compiler to reduce the dimension of the packed buffer from \(m\times k\) to \(m_{c}\times k_{c}\) only (which matches the dimension of the buffer \(A_{c}\) in Figure 4). An analogous comment applies to matrix \(B\) and its packed tensor counterpart.
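To illustrate the 4D packed tensor of Part P2 in isolation, the following standalone sketch declares Ac with one hypothetical micro-panel layout (micro-panels of mr rows, stored column by column) and prints the loop nest that TVM infers for the packing; the actual indexing used in Figure 7 may differ.

```python
import tvm
from tvm import te

# Hypothetical sizes, assumed to be exact multiples for simplicity
m, k, mr, kc = 256, 256, 8, 64

A = te.placeholder((m, k), name="A", dtype="float32")
# Ac viewed as a matrix of micro-panels: Ac[i, j, q, r] holds the element of
# the i-th micro-panel (mr rows) and j-th k-block, at column q and row r
Ac = te.compute((m // mr, k // kc, kc, mr),
                lambda i, j, q, r: A[i * mr + r, j * kc + q],
                name="Ac")

s = te.create_schedule(Ac.op)
print(tvm.lower(s, [A, Ac], simple_mode=True))
```

In the full generator of Figure 7, these packed tensors are attached to the blocked schedule with compute_at (Part P5), which is what allows TVM to shrink them to a single \(m_{c}\times k_{c}\) (respectively \(k_{c}\times n_{c}\)) buffer.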
The purpose of the packing operations is to favor a higher number of cache hits when accessing the data in the L2 and L1 caches from the micro-kernel. The packing introduces a certain overhead, but this is usually low because the packed data is re-utilized multiple times [21]. As a side note, let us comment that, depending on the problem dimensions \((m,n,k)\) and the loop strides \((m_{c},n_{c},k_{c})\), in some cases the costs of the packing operations may exceed their benefits. Fortunately, with TVM we can easily eliminate any of the packing operations. For example, in the baseline algorithm in Figure 7, the two packing operations can be eliminated by 1) removing lines 8-19; changing the references from Ac and Bc to A and B in lines 23, 26; and 2) adapting accordingly the indexing on A and B to accommodate the dimensionality modification from Ac and Bc (from 4D to 2D).
The intermediate representation (IR) produced by TVM using the generator for the packed realization of gemm can be described as follows:
```
for (jc = 0; jc < n; jc += nc) {        // Loop L1
  for (pc = 0; pc < k; pc += kc) {      // L2
    for (i = 0; ...) {                  // Packing Bc
      for (j = 0; ...)
        for (q = 0; ...)
          for (r = 0; ...)
            Bc[...] = B[...];
    }
    for (ic = 0; ic < m; ic += mc) {    // L3
      for (i = 0; ...) {                // Packing Ac
        for (j = 0; ...)
          for (q = 0; ...)
            for (r = 0; ...)
              Ac[...] = A[...];
      }
      for (jr = 0; jr < nc; jr++) {     // L4
        for (ir = 0; ir < mc; ir++)     // L5
          for (pr = 0; pr < kc; pr++)   // L6
            C[...] += Ac[...] * Bc[...];
      }
    }
  }
}
```
(Here we omitted a few indexing details and introduced some formatting to ease the interpretation.)
This takes us one step forward toward automatically generating a high performance realization of gemm that mimics the baseline algorithm for gemm by removing one of the caveats that appeared in the previous blocking algorithm: the lack of packing routines. In the next section we address the second missing detail: _the formulation of the innermost loop of the algorithm as a micro-kernel that performs an outer product._
## 6. Automatic generation of high performance micro-kernels for gemm
The final stage in our journey to obtain a high performance realization of gemm has the goal of integrating the automatic generation of high performance micro-kernels within the TVM routine for gemm.
### Micro-kernel with Resident \(C\)
Starting from the TVM generator in Figure 7, we introduce two main changes with the result shown in Figure 8:
_Part P3. Preparation of the schedule (with additional blocking):_ The \(m_{c},n_{c}\) dimensions of the problem are respectively partitioned using factors \(m_{r}\), \(n_{r}\), in order to expose the loops that will operate with the individual elements of the micro-panel \(C_{r}\); see lines 13-14.
_Part P4. Specification of the loop ordering:_ Lines 17-18 place the newly created loops for the micro-kernel, it and jt, within the global schedule.
### Fine-grain optimizations for high performance and parallelization
The TVM generator can explore relevant optimizations at the micro-kernel level which pursue goals that are similar to those of the low-level optimizations applied by the expert Assembly programmer. By off-loading these optimizations to TVM, in exchange for less flexibility, the programming effort is considerably reduced and the performance portability across different processor architectures is higher.
Figure 8. TVM generator for gemm mimicking (the blocking and packing of) the baseline algorithm, and integrating the optimized micro-kernel with Resident \(C\).
Starting from the TVM generator in Figure 8, the enhanced alternative in Figure 9 illustrates the necessary additions to introduce some conventional optimization techniques:
_Part P6. Fine-grain optimizations:_ This invokes TVM methods to address three types of optimizations:
* _SIMD vectorization:_ The generator includes vectorization of three different stages of the blocked algorithm to leverage the SIMD units of the architecture: micro-kernel computation (lines 8-10); packing \(A\) into \(A_{\text{c}}\) (lines 13-16); and packing \(B\) into \(B_{\text{c}}\) (lines 19-22). The basic scheme remains the same for all of them: the innermost loop (\(jt\) for the former and \(l\) for the latter two) is split to expose an additional inner loop, using a split factor (lanesize) that matches the SIMD width of the target architecture. For example, for 32-bit floating point data, lanesize is set to 4 for ARMv8a NEON (128-bit SIMD registers) and 16 for Intel AVX512 (512-bit SIMD registers). In general, in our experiments we will employ the exact number of elements for a SIMD register, or an integer multiple of it. From the newly exposed loops, the outer one is unrolled and the inner one is vectorized (unroll and vectorize methods, respectively). This produces a code comprising vector instructions for the desired target processor.
* _Prefetching and vector loads within the micro-kernel:_ ac and bc are tensors defined as read-only copies of Ac and Bc. The purpose of creating these artifacts is two-fold: first, it helps the compiler to introduce software-prefetching instructions in order to pre-load the micro-panels of Ac and Bc that will be used throughout the computation of the micro-kernel; second, it instructs the compiler to use vector registers in order to store the active parts of ac and bc in the micro-kernel, and to use vector instructions to load them from Ac and Bc. Diving into the details, the construction, re-use pattern and computation point (loop indexed by pr) of ac and bc induce buffers that store the strictly necessary micro-panels of Ac and Bc used within the computation of each iteration of loop pr within the micro-kernel, that is, \(m_{r}\times 1\) for ac, and \(n_{r}\times 1\) for bc. Splitting the innermost loop (\(l\)) using a factor that matches lanesize, unrolling and vectorizing it yield vector instructions to load the micro-panels of Ac and Bc into \(m_{r}/lanesize\) and \(n_{r}/lanesize\) vectors.

_Part P7. Loop-level parallelization:_ TVM also allows exploiting loop parallelism via multi-threading. Line 41 shows how to instruct TVM to split the iteration space for a given loop across the active threads. The selection of the optimal loop to parallelize depends on the problem dimensions and target architecture specifications; see [27] for a detailed discussion.
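Putting the micro-kernel exposure of Section 6.1 and the optimizations of this section together, the following self-contained sketch illustrates the splits, reordering, unrolling, vectorization and parallelization steps with TVM's te API. It deliberately omits the packed buffers and the cache_read-based prefetching of Figure 9, so it is only a simplified approximation of that generator, with names of our own choosing.

```python
import tvm
from tvm import te

def opt_gemm_b3a2c0_sketch(m, n, k, mc, nc, kc, mr, nr, lanesize,
                           dtype="float32", target="llvm"):
    A = te.placeholder((m, k), name="A", dtype=dtype)
    B = te.placeholder((k, n), name="B", dtype=dtype)
    p = te.reduce_axis((0, k), name="p")
    C = te.compute((m, n),
                   lambda i, j: te.sum(A[i, p] * B[p, j], axis=p), name="C")
    s = te.create_schedule(C.op)
    i, j = s[C].op.axis
    (p_,) = s[C].op.reduce_axis
    # Blocking for the caches (Section 5) ...
    ic, ir = s[C].split(i, factor=mc)
    jc, jr = s[C].split(j, factor=nc)
    pc, pr = s[C].split(p_, factor=kc)
    # ... and for the mr x nr micro-tile C_r (Section 6.1)
    ir, it = s[C].split(ir, factor=mr)
    jr, jt = s[C].split(jr, factor=nr)
    s[C].reorder(jc, pc, ic, jr, ir, pr, it, jt)
    # P6 (sketch): split jt by the SIMD lane size, unroll the outer part
    # and vectorize the inner part of the micro-tile update
    jt_o, jt_i = s[C].split(jt, factor=lanesize)
    s[C].unroll(it)
    s[C].unroll(jt_o)
    s[C].vectorize(jt_i)
    # P7 (sketch): multi-threaded parallelization of loop ic, as in Figure 9
    s[C].parallel(ic)
    return tvm.build(s, [A, B, C], target=target)
```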
Figure 10 displays two examples of the assembly code generated by TVM: An \(m_{r}\times n_{r}=4\times 4\) micro-kernel for an ARMv8a ISA (instruction set architecture) with NEON vector instructions and unrolling factor of 4 on the left; and an \(m_{r}\times n_{r}=4\times 16\) micro-kernel for an x86 ISA with AVX512 vector instructions (on the right). In both cases, for brevity, we only include the instructions that are comprised by the reduction loop of the micro-kernel. The codes were obtained using the generator for the optimized B3A2C0 variant depicted in Figure 9, selecting the target as llvm -device=arm_cpu -mattr=+v8.2a,+fp-armv8,+neon for the ARMv8a architecture, and llvm -mcpu=icelake-server for the x86 architecture. The codes share a similar structure, but present subtle differences in terms of syntax and vectorization strategy: The ARMv8a codes rely on vector fused multiply-add instructions with a single element of \(B\) as a source (e.g. fmla v3.4s,v5.4s,v4.s[0]); in contrast, the x86 counterpart operates on a broadcast operation and vector fused multiply-add strategy (vbroadcastss %xmm4,%xmm8 followed by vfmadd213ps %xmm3,%xmm6,%xmm8 ).
## 7. Other members in the family of blocked algorithms for gemm
One of the advantages of TVM is that producing code for other blocked algorithms of the gemm family only requires small changes in the generator routines in order to accommodate the proper loop ordering, blocking and packing schemes. This is described in this section.
### Blocking and packing for variants with \(C\) in the L2 cache
Transforming the TVM generator for the baseline algorithm, with a micro-tile of Resident \(C\), (left column in Figure 1) into that with a block of \(C\) in the L2 cache and a micro-tile of Resident \(B\) (A3C2B0, right column in Figure 2) requires a number of changes in the TVM generator, shown in Figure 7, resulting in the variant displayed in Figure 11:
_Part P0. Parameter list:_ This generator includes \(\mathsf{kr}\) as a parameter (instead of \(\mathsf{mr}\)).
Figure 9. TVM generator for gemm mimicking (the blocking and packing of) the baseline algorithm, integrating the optimized micro-kernel with Resident \(C\), fine-grain optimizations, and a loop-level parallelization of loop \(\mathsf{ic}\).
_Part P2. Definition of the operation and packing schemes for \(A\) and \(B\):_ Lines 15-19 modify the dimensions of the \(\mathtt{Bc}\) tensor (this version moves \(B\) to registers) so that the size of each submatrix (micro-panel) in \(\mathtt{Bc}\) is \(k_{r}\times n_{r}\).
_Part P3. Preparation of the schedule:_ Lines 34-37 apply tiling over \(C\) in preparation for the placement of the packing into the buffer \(\mathtt{Cc}\). In addition, line 39 creates a buffer for reading and writing the matrix \(C\). From that point, the scheduler operates with the object \(\mathtt{Cc}\). This change is crucial for the algorithms where \(C\) resides in either the L2 or L3 caches. Concretely, these variants leverage \(\mathtt{cache\_write}\) to induce a copy from matrix \(C\) to the object \(\mathtt{Cc}\) as well as to restore the result of the micro-kernel, temporarily in \(\mathtt{Cc}\), back into \(C\).
_Part P4. Specification of the loop ordering:_ Line 48 specifies a loop order that matches that of the algorithm A3C2B0.
_Part P5. Placement of the packings:_ Lines 51-53 set the buffer for \(\mathtt{Cc}\) and the \(\mathtt{Ac}\) and \(\mathtt{Bc}\) tensors into the appropriate loops.
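The cache_write mechanism described in Part P3 can be sketched in isolation as follows (hypothetical problem and tile sizes; the generator in Figure 11 combines it with the loop ordering of A3C2B0 and with the packings of Ac and Bc):

```python
import tvm
from tvm import te

# Hypothetical problem and blocking sizes
m, n, k = 512, 512, 512
mc, nc = 128, 128

A = te.placeholder((m, k), name="A", dtype="float32")
B = te.placeholder((k, n), name="B", dtype="float32")
p = te.reduce_axis((0, k), name="p")
C = te.compute((m, n), lambda i, j: te.sum(A[i, p] * B[p, j], axis=p), name="C")

s = te.create_schedule(C.op)
# cache_write introduces a buffer Cc on which the actual computation is
# scheduled; the stage of C that remains is the copy restoring Cc into C
Cc = s.cache_write(C, "global")
ic, jc, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], mc, nc)
# compute each mc x nc block of Cc inside the blocked loops of C,
# right before it is copied out
s[Cc].compute_at(s[C], jc)
print(tvm.lower(s, [A, B, C], simple_mode=True))
```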
The TVM generator in Figure 11 does not include fine-grain optimizations, yet identical techniques as those introduced in Section 6.2 apply for this member of the family. Also, the TVM generator to obtain variant B3C2A0 can be easily derived following the same principles.
### Blocking and packing for variants with \(C\) in the L3 cache
The remaining two blocked algorithmic variants for gemm in the family store a block of matrix \(C\) in the L3 cache throughout the computation. Starting from the TVM generator for variant A3C2B0 in
Figure 10. Assembly codes for the reduction loop (traversing \(k_{c}\)) of micro-kernels with Resident \(C\) automatically generated by TVM for single precision. Left: ARMv8a assembly code with NEON vector instructions for a \(4\times 4\) micro-kernel with a loop unrolling technique with a factor of 4. Right: x86 assembly code with AVX512 vector instructions for a \(4\times 16\) micro-kernel.
Figure 11, the following changes are introduced to obtain the missing TVM generator, for variant C3A2B0, displayed in Figure 12:
Figure 11: TVM generator for gemm mimicking the blocking and packing schemes of the A3C2B0 algorithm.
_Part P4. Specification of the loop ordering:_ Line 12 enforces a loop ordering which mimics that of the algorithm in Figure 2 (right).

_Part P5. Placement of the packings:_ The buffer \(C_{\mathsf{c}}\) does not need to be bound to any loop because it belongs to the outer structure. The packings of Ac and Bc are placed in the same loops as in the version with \(C\) in the L2 cache (see lines 15-16).
Obtaining the last variant, C3B2A0, is direct and, therefore, its discussion is omitted for brevity.
## 8. Experimental results
In this section, we provide strong experimental evidence that the TVM-based approach to automatically generate routines for the matrix multiplication paves an almost effortless road toward experimenting with a rich variety of algorithms, micro-kernels and parallelization/optimization options that offer fair performance in a number of scenarios.
### General setup
Unless otherwise explicitly stated, the experiments in this section were carried out using a single core of the NVIDIA Carmel processor (ARM v8.2) embedded on an NVIDIA Jetson AGX Xavier board, using IEEE 32-bit floating point arithmetic (FP32). In order to reduce variability, the processor frequency was fixed to 2.3 GHz, the threads were bound to the hardware cores, and the experiments were repeated a large number of times reporting in the following the average results for each experiment. In general, performance is measured in terms of billions of arithmetic operations per second, abbreviated as GFLOPS when operating with floating point arithmetic and GIOPS for integer arithmetic.
Also, unless explicitly stated, we target the baseline algorithm for gemm with a micro-kernel that operates with Resident \(C\). For reference, when possible we include in the evaluation the results obtained from up-to-date realizations of the gemm kernel in libraries such as BLIS (version v0.8.1), OpenBLAS (version v0.3.19), and the ARM Performance Libraries (ARMPL, version v21.1).
Figure 12. TVM generator for gemm mimicking the blocking and packing schemes of the C3A2B0 algorithm.
The dataset for the experimentation includes two types of gemm problems: large square matrices versus highly "rectangular" problems. Given the current interest in DL inference, the dimensions of the latter are selected as those that result from applying the im2col transform (Becker et al., 2017) to cast the convolution layers in the ResNet50 v1.5 deep neural network (DNN) model in terms of a gemm. The "batch" size for the inference scenario is set to 128 samples. As some layers share the same parameters, resulting in gemm problems of the same dimensions, we report the results for those only once (per layer type); see Table 1. In the following, we will use "layer" to refer to "layer type", since we are only interested in the "layer dimensions".
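As a hedged illustration of how these dimensions arise (assuming the standard first convolution of ResNet50 v1.5: a \(7\times 7\) kernel over 3 input channels, 64 output channels, and a \(112\times 112\) output resolution), the im2col transform for layer type 1 yields

\[
m = 128\cdot 112\cdot 112 = 1{,}605{,}632,\qquad n = 64,\qquad k = 3\cdot 7\cdot 7 = 147,
\]

which matches the first row of Table 1.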
### Cache configuration parameters
The performance of the GotoBLAS2-like realizations of gemm as well as our routines obtained with TVM is strongly dictated by the optimization level of the micro-kernel and a proper selection of the cache configuration parameters \(m_{c},n_{c},k_{c}\). The optimal values for the latter three parameters depend on hardware features such as number of cache levels, size, set associativity, etc., and the specific dimensions of the micro-kernel (given by \(m_{r}\times n_{r}\) for a Resident \(C\) case). Determining these values via brute force experimentation involves an expensive search across a large 3D space, possibly for each micro-kernel dimension.
Alternatively, one can use the analytical model in (Kolmogorov, 1999) to select the optimal values for the cache configuration parameters. The advantage of this analytical approach in our particular case will be exposed in the next subsection, when we automatically generate, explore and evaluate a variety of micro-kernels, of different dimensions, using the TVM routine.
### Comparison of the TVM generators
We open the experimental evaluation by illustrating the performance boost that is obtained by incrementally integrating the blocking, packing, and fine-grained optimizations described with the successive TVM generators. For simplicity, in this initial analysis we only consider a square gemm problem with dimensions given by \(m=n=k=2{,}000\).
Figure 13 shows the results for this first experiment. The five blue bars there report the performance attained by the gemm routines automatically obtained with the five TVM generators described in Sections 5 and 6. Concretely, the labels Basic, Blocked, Packed, \(\mu\)kernel, and Optimized in the \(x\)-axis respectively refer to the TVM generators basic_GEMM, block_GEMM_B3A2C0, packed_GEMM_B3A2C0, packed_GEMM_B3A2C0_ukernel, and opt_GEMM_B3A2C0_ukernel in Figures 5, 6, 7, 8, and 9. For the latter, parallelization is disabled (line 41 of Figure 9). Hence, the reported performance results correspond to a sequential gemm execution. For reference, the figure also displays the performance attained with the realizations of the gemm kernel in BLIS (red line), OpenBLAS (yellow line), and ARMPL (green line) for this particular problem size (on the NVIDIA Carmel core).

Table 1. Dimensions of the gemm resulting from applying the im2col transform to the layers of the ResNet50 v1.5 DNN model with a batch size of 128 samples.

| Layer type id. | Layer numbers in ResNet50 v1.5 | \(m\) | \(n\) | \(k\) | Layer type id. | Layer numbers in ResNet50 v1.5 | \(m\) | \(n\) | \(k\) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 001 | 1,605,632 | 64 | 147 | 11 | 080 | 100,352 | 256 | 512 |
| 2 | 006 | 401,408 | 64 | 64 | 12 | 083/095/105/115/125/135 | 25,088 | 256 | 2,304 |
| 3 | 009/021/031 | 401,408 | 64 | 576 | 13 | 086/098/108/118/128/138 | 25,088 | 1,024 | 256 |
| 4 | 012/014/024/034 | 401,408 | 256 | 64 | 14 | 088 | 25,088 | 1,024 | 512 |
| 5 | 018/028 | 401,408 | 64 | 256 | 15 | 092/102/112/122/132 | 25,088 | 256 | 1,024 |
| 6 | 038 | 401,408 | 128 | 256 | 16 | 142 | 25,088 | 512 | 1,024 |
| 7 | 041/053/063/073 | 100,352 | 128 | 1,152 | 17 | 145/157/167 | 6,272 | 512 | 4,608 |
| 8 | 044/056/066/076 | 100,352 | 512 | 128 | 18 | 148/160/170 | 6,272 | 2,048 | 512 |
| 9 | 046 | 100,352 | 512 | 256 | 19 | 150 | 6,272 | 2,048 | 1,024 |
| 10 | 050/060/070 | 100,352 | 128 | 512 | 20 | 154/164 | 6,272 | 512 | 2,048 |
For this first experiment, we observe the notable performance rise attained with the integration of the micro-kernel and the fine-grain optimizations (rightmost two bars) as well as the fact that, for this problem dimension (and target processor core), the best TVM generator delivers a code for gemm whose performance exactly matches that of the BLIS kernel for this operation. This is not totally surprising since, for this initial experiment, we forced TVM to generate a routine with exactly the same cache configuration parameters and micro-kernel dimensions as those used by BLIS. Hereafter, all our TVM results correspond to the version of gemm obtained with the optimized TVM generator opt_GEMM_B3A2C0_ukernel.
Figure 13: Performance comparison of distinct TVM generators on a single NVIDIA Carmel core for square matrices.

Figure 14: Performance evaluation of the TVM generator on a single NVIDIA Carmel core for square matrices, without and with fixed lane size/prefetching (left and right, respectively).
### In search of the best micro-kernel
In this subsection we demonstrate the benefits of being able to automatically generate and subsequently evaluate a variety of micro-kernels for a particular problem dimension. (In comparison, a manual development is a costly process that requires a high level of expertise.)
#### 8.4.1. Effect of the micro-kernel
We unfold the analysis of the micro-kernels by assessing the performance of the TVM-generated routines for the baseline algorithm that integrate micro-kernels of different dimensions: For a square problem with \(m=n=k=2{,}000\) in Figure 14, and for the gemm operations arising in two layers of the ResNet model in Figure 15. We clarify here that the only difference between these micro-kernels is their dimensions. Thus, other optimization possibilities, such as varying the loop unrolling factor, selecting the loop to vectorize, etc., were not evaluated in this experiment and were kept constant across all experiments. With respect to the two plots in Figure 14, they evaluate the impact of two low-level optimizations included in the TVM generator in Figure 9:
* _Lane size:_ The innermost loop of gemm is further split using a factor that is an integer multiple of the number of elements that fit into one vector register (e.g., 8 for FP16, 4 for FP32, or 2 for FP64 in the NVIDIA Carmel processor, for which the vector registers are 128-bit wide).
* _Prefetching:_ The TVM method cache_read is invoked to induce the scheduler to include an automatic _prefetching_ when possible.
The left plot in Figure 14 was obtained by removing Part 6.4 (lines 24 to 38) in the TVM generator in Figure 9, while the right plot was obtained with these lines included. In both cases, the performance results correspond to sequential executions.
As a result from these additional optimizations, the performance of the TVM routine displayed in the right plot of Figure 14, when setting the micro-kernel dimension to \(m_{r}\times n_{r}=8\times 12\), improves from 16 to 18.7 GFLOPS with respect to its counterpart without these low-level optimizations. Therefore, from now on we will only report results for the TVM-generated routines with these two optimizations in place.
The right plot in Figure 14 demonstrates that a careful selection of the micro-kernel is critical to attain high performance. Concretely, the performance achieved by using micro-kernels of different dimensions varies between 10.3 GFLOPS (worst case, with \(m_{r}\times n_{r}=32\times 32\)) and 26.8 GFLOPS (best case, with \(m_{r}\times n_{r}=8\times 12\)). The two performance plots in Figure 15 also contribute to show that the best micro-kernel is largely dependent on the problem dimension. Specifically, the highest GFLOPS rates are observed for the micro-kernels \(m_{r}\times n_{r}=4\times 16\) and \(4\times 28\) for layer 2, compared with the micro-kernel \(m_{r}\times n_{r}=4\times 24\) for layer 8.

Figure 15. Performance evaluation of the TVM generator on a single NVIDIA Carmel core for layers 2 (left) and 8 (right) of ResNet50 v1.5.
A direct comparison between the realizations of gemm in "hand-encoded" libraries and the TVM routine _using the best micro-kernel_ for each problem case reveals that, for square matrices, the TVM routine delivers GFLOPS rates that are similar to those attained with BLIS (28.9 GFLOPS for the former versus 28.8 GFLOPS for the latter), and superior with respect to OpenBLAS (23.3 GFLOPS) and ARMPL (26.4 GFLOPS). The scenario is different for the ResNet50 problems: For layer 2, the TVM routine (with the best micro-kernel) delivers 22.4 GFLOPS versus 13.3 GFLOPS for BLIS, 17.1 GFLOPS for OpenBLAS, and 11.9 GFLOPS for ARMPL. In addition, for layer 8, the TVM routine (with the best micro-kernel) achieves 26.1 GFLOPS versus 22.0 GFLOPS for BLIS (best library-based option). From this point, we will always report the results for the TVM routine with the best micro-kernel for each problem dimension.
The evaluation of the gemm operations associated with all the layers in the ResNet50 model shows a variety of scenarios. Concretely, Figure 16 illustrates that the TVM routine outperforms the BLIS realization by a large margin for layers 1-12 and by a visible difference for layers 15-17 (40 cases out of the total 53 convolution layers in the model, see Table 1); it is competitive for layers 14 and 20 (3 cases); and it is suboptimal for layers 13, 18, and 19 (10 cases). Compared with OpenBLAS and ARMPL, the TVM routine is consistently better. An analysis of the results taking into account the operands' dimensions shows that the TVM routine delivers higher performance for "rectangular" cases, with \(m\) in the range 100,352-1,605,632, and it is competitive when \(m\)=25,088. In contrast, BLIS is a better choice for "square" problems, with \(m\) in the range of 6,000.4
Footnote 4: As a side note, the actual processing cost of the Resnet50 v1.5 model is concentrated in those cases where \(m\) is in the range 100,352–1,605,632 (47.8% of the total time), followed by \(m\)=25,088 (35.6% of the total time). In terms of absolute cost, this implies that the execution of all layers employing the TVM routine would require 39.1 s compared with 48.0 s when using BLIS (and higher for OpenBLAS and ARMPL).
#### 8.4.2. Why is TVM better?
The superiority of the TVM routine is rooted in the fact that, by (automatically) generating the micro-kernels of different dimensions, we can easily explore the
Figure 16. Performance evaluation of the TVM generator on a single NVIDIA Carmel core for ResNet50 v1.5. The number on top of each blue bar represents the dimension \(m_{r}\times n_{r}\) of the best micro-kernel for that layer.
space and select the one that is better suited to a particular problem dimension. This is illustrated in Figure 16, which reports the best micro-kernel for each problem dimension/DNN layer. Compared with that, BLIS, OpenBLAS and ARMPL each integrate a single, manually-encoded micro-kernel, which is therefore the only option for any problem dimension. The reason for this limitation of the libraries is that manually implementing different micro-kernels is a time-consuming task, requiring significant experience in high performance computing, computer architecture, and assembly coding. In addition, the logic of selecting the appropriate micro-kernel dimension based on problem dimensions is usually not supported in commercial or academic libraries.
Delving further into the matter, the theoretical reasons behind this behavior can be explained using the analytical model in [21]. According to that, for problems with a reduced dimension \(k\), which in practice limits the effective value for the cache parameter \(k_{c}\), the micro-panel of \(B_{c}\) that is stored in the L1 cache (\(B_{r}\), of dimensions \(k_{c}\times n_{r}\)) does not attain the optimal occupation of that level, explaining the performance penalty.
Let us illustrate the problem for layers 2 and 8 of ResNet v1.5, whose experimental performance for different micro-kernel dimensions was shown in Figure 15. We complement that figure with the values in Table 2, illustrating the cache parameters \(m_{c},n_{c},k_{c}\) and the theoretical occupation of the L1 cache by \(B_{r}\), for different micro-kernel dimensions, determined using the analytical model in [21] (see footnote 5). When executed on the Carmel processor, the analytical model indicates that, for the \(m_{r}\times n_{r}=8\times 12\) micro-kernel that is integrated in BLIS for that particular architecture, the micro-panel \(B_{r}\) targeting the L1 cache only occupies 4.69% of the L1 cache for layer 2; and 9.38% for layer 8. In contrast, this micro-kernel should have occupied up to 50% with \(B_{r}\) for both layers, reserving the rest of the L1 cache for entries from the \(A,C\) operands.
Footnote 5: In the table, we limit the study to those values of \(m_{r}\) and \(n_{r}\) that do not cause register spilling, as this effect would yield a performance penalty beyond that introduced by the negative effect of L1 under-occupation.
The only way to address this under-utilization of the L1 cache is by increasing \(n_{r}\), and hence developing a new micro-kernel. Let us support this observation with a specific example. Consider a micro-kernel of dimension \(m_{r}\times n_{r}=4\times 28\). In this case, the occupancy of the L1 cache by \(B_{r}\) grows to 10.90% for layer 2 and up to 21.90% for layer 8, which is clearly superior to that observed for the (BLIS) \(8\times 12\) micro-kernel. In summary, there is a clear theoretical benefit from adopting an \(m_{r}\times n_{r}=4\times 28\) micro-kernel for these particular layers, which is conformal with the experimental advantage that was reported in Figure 16.
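The L1 occupation figures quoted above can be reproduced with a few lines of arithmetic, assuming a 64 KiB L1 data cache per Carmel core and FP32 operands (the values match Table 2 up to rounding):

```python
# Occupation of the L1 cache by the micro-panel B_r (k_c x n_r), assuming a
# 64 KiB L1 data cache and 4-byte (FP32) elements.
L1_BYTES = 64 * 1024
ELEM_BYTES = 4

def l1_occupation(k_c: int, n_r: int) -> float:
    return 100.0 * k_c * n_r * ELEM_BYTES / L1_BYTES

for layer, k_c in (("layer 2", 64), ("layer 8", 128)):
    for n_r in (12, 28):
        print(f"{layer}, n_r = {n_r:2d}: {l1_occupation(k_c, n_r):5.2f} %")
# layer 2: 4.69 % (n_r=12) and 10.94 % (n_r=28)
# layer 8: 9.38 % (n_r=12) and 21.88 % (n_r=28)
```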
Figure 17 reports the utilization rates of the L1 and L2 cache levels by \(B_{r}\) and \(A_{c}\), respectively, for all the layers. The results show that, especially for the L1 cache, the difference in occupation with respect to BLIS explains the overall differences in performance between the BLIS micro-kernel and the optimal alternative automatically generated with TVM reported in Figure 16. For reference, the figure includes the theoretical maximum occupation rate for each cache memory (black line) for the corresponding operands, dictated by the analytical model.
To wrap up this discussion, the under-utilization of the L1 and L2 cache levels resulting from the small values of \(k\) and \(m\) in non-square problems forces the developer to trade it off with larger register block dimensions (\(m_{r}\) and/or \(n_{r}\)), and hence with the development of a family of micro-kernels that, in practice, are invoked intelligently depending on the problem dimensions. Using a tool like TVM to automatically generate micro-kernels, together with an analytical model for cache and register blocking parameters, alleviates this programmability burden, not only to obtain dimension-agnostic performance optimization on a single architecture, but also across different architectures.
#### 8.4.3. Optimization of the micro-kernel
To close this discussion on the impact of the micro-kernel, we note that, when searching for the optimal dimension of the micro-kernel, the exploratory space
is basically bi-dimensional, as we only need to select the values for \(m_{r},n_{r}\). Now, the optimal value for at least one of these two parameters is tied to the architecture lane size and, in addition, the hardware imposes strict limits on the number of vector registers that can be used from inside the micro-kernel, which in turn constrains the practical values for \(m_{r},n_{r}\). As a consequence, selecting the best option for the TVM-generated routines only requires a few dozen experiments, and the whole process can also be automated, providing significant benefits in terms of reduced programming and optimization efforts for the library developer.
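As a rough illustration of how small this search space is, the following sketch enumerates the \((m_{r},n_{r})\) candidates that respect the register budget of the Carmel core (32 128-bit NEON registers, FP32 lane of 4); the register-counting heuristic is an approximation, and each surviving candidate would then be generated with TVM and timed, which is the step omitted here.

```python
# Enumerate micro-kernel candidates that fit in the vector-register file:
# m_r * (n_r / LANE) registers hold the C micro-tile, plus a few registers to
# stream a row of B_r and to broadcast entries of A_r.
LANE = 4        # FP32 elements per 128-bit vector register
NUM_VREGS = 32  # NEON vector registers on an ARMv8 core

def fits_in_registers(m_r: int, n_r: int) -> bool:
    regs_c = m_r * (n_r // LANE)
    regs_ab = (n_r // LANE) + 1
    return n_r % LANE == 0 and regs_c + regs_ab <= NUM_VREGS

candidates = [(m_r, n_r)
              for m_r in range(2, 33, 2)
              for n_r in range(4, 41, 4)
              if fits_in_registers(m_r, n_r)]
print(f"{len(candidates)} micro-kernels to generate and benchmark")
```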
In the next subsections _we examine the programming benefits of the TVM-based approach, from the viewpoints of performance, maintainability and portability,_ by exposing how TVM allows us to easily generate routines for different data types, explore distinct packing schemes, evaluate alternative parallelization options and, finally, instantiate the full family of matrix multiplication algorithms. Finally, we close the experimental section with an evaluation on AMD and Intel architectures.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline Layer & \(m_{c}\) & \(n_{c}\) & \(k_{c}\) & \(m_{r}\) & \(n_{r}\) & L1 & Max & Layer & \(m_{c}\) & \(n_{c}\) & \(k_{c}\) & \(m_{r}\) & \(n_{r}\) & L1 & Max \\ type id. & & & & & & (\%) & (\%) & type id. & & & & & & (\%) & (\%) \\ \hline
2 & 7,168 & 64 & 64 & 4 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 4 & 4 & 3.12 & 25 \\
2 & 6,656 & 64 & 64 & 4 & 8 & 3.12 & 50 & 8 & 3,584 & 256 & 128 & 4 & 8 & 6.25 & 50 \\
2 & 6,144 & 64 & 64 & 4 & 12 & 4.69 & 50 & 8 & 3,328 & 275 & 128 & 4 & 12 & 9.38 & 50 \\
2 & 6,144 & 64 & 64 & 4 & 16 & 6.25 & 50 & 8 & 3,072 & 298 & 128 & 4 & 16 & 12.50 & 50 \\
2 & 5,632 & 64 & 64 & 4 & 20 & 7.81 & 50 & 8 & 3,072 & 298 & 128 & 4 & 20 & 15.60 & 50 \\
2 & 5,120 & 64 & 64 & 4 & 24 & 9.38 & 50 & 8 & 3,072 & 298 & 128 & 4 & 24 & 18.80 & 50 \\
2 & 7,168 & 64 & 64 & 8 & 4 & 28 & 10.90 & 50 & 8 & 3,072 & 298 & 128 & 4 & 28 & 21.90 & 50 \\
2 & 7,168 & 64 & 64 & 8 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 8 & 4 & 3.12 & 25 \\
2 & 7,168 & 64 & 64 & 8 & 12 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 8 & 12 & 9.38 & 50 \\
2 & 7,168 & 64 & 64 & 12 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 12 & 4 & 3.12 & 25 \\
2 & 7,168 & 64 & 64 & 16 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 16 & 4 & 3.12 & 25 \\
2 & 7,168 & 64 & 64 & 20 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 20 & 4 & 3.12 & 25 \\
2 & 7,168 & 64 & 64 & 24 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 24 & 4 & 3.12 & 25 \\
2 & 7,168 & 64 & 64 & 28 & 4 & 1.56 & 25 & 8 & 3,584 & 256 & 128 & 28 & 4 & 3.12 & 25 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Detailed L1 cache occupation for layers 2 and 8 of ResNet50 v1.5 for different micro-kernel dimensions.
Figure 17: Occupation for the L1 and L2 caches (left and right, resp.) on the Carmel platform for the best TVM-generated micro-kernel compared with the BLIS micro-kernel.
### Maintainability: Generating codes for different data types
Generating a routine for a specific data type with TVM only requires adjusting the dtype argument in the TVM generator, and modifying the lane size accordingly (see subsection 8.4.1). Compared with this, manually producing a GotoBLAS2-like routine for a particular data type requires a careful re-design of the micro-kernel, usually in assembly, as well as the adaptation of the packing functions.
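A small helper illustrates the only architecture-dependent adjustment involved, namely deriving the lane size from the dtype for a 128-bit SIMD unit; this is an illustrative sketch and not part of the generator itself:

```python
import numpy as np

VECTOR_BITS = 128  # NEON vector width on the NVIDIA Carmel core

def lane_size(dtype: str) -> int:
    """Elements of `dtype` that fit into one vector register."""
    return VECTOR_BITS // (8 * np.dtype(dtype).itemsize)

for dt in ("float16", "float32", "float64", "int16", "int32"):
    print(dt, lane_size(dt))  # 8, 4, 2, 8, 4
```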
In Figure 18 we report the performance of the routines generated with TVM for five data types: FP16 (IEEE 16-bit floating point), FP32 (IEEE 32-bit floating point), FP64 (IEEE 64-bit floating point), INT16 (integer 16-bit), and INT32 (integer 32-bit). This figure shows acceleration factors that are consistent with what can be expected from the different precision formats.
### Performance: Packing costs
For some problem dimensions, it may be beneficial to skip the packing of either of the matrix operands, or both of them, into the buffers \(A_{\mathrm{c}},B_{\mathrm{c}}\). As described in subsection 5.3, eliminating the packing scheme is straightforward with TVM. In contrast, introducing this modification into a conventional GotoBLAS2-like routine implies rewriting the micro-kernel, as this piece of code assumes that the matrix operands are arranged/packed into the buffers in a certain manner; see Figure 4. Given that the micro-kernel is in general encoded in assembly, this is a non-trivial task.
Figure 19 evaluates the packing possibilities using TVM to generate modified versions of the baseline algorithm. As shown in the top chart, for small square matrices, the cost of re-arranging the data due to packing is not compensated by a "sufficient" acceleration of the micro-kernel. For the problems associated with the convolutional layers in the ResNet-50 v1.5 model, the dimensions are large enough and this effect is not present. Nonetheless, the irregular sizes of these operands dictate that, for layers 1, 2, 5, 7 and 18, the performance of the baseline algorithm generated with TVM and a variant which packs only one of the matrix operands are close.
Figure 18. Performance of the TVM generator for distinct data types on a single NVIDIA Carmel core for ResNet-50 v1.5.
### Performance: Parallelization options
The TVM generator can be leveraged to assess distinct options to orchestrate a multi-threaded execution on a multicore processor. In Figure 20, we target the 8 cores in the NVIDIA Carmel processor, evaluating four parallelization options for the baseline algorithm (see Figure 1, left) that differ in the loop which is parallelized: \(j_{c},i_{c},j_{r}\), or \(i_{r}\). (Loop \(p_{c}\) cannot be parallelized as this would yield a race condition.) This shows that the best choice varies slightly between two of the options, depending on the problem dimensions. However, investigating the best parallelization option adds a dimension of complexity to the optimization effort that is out of scope for this work.
At this point we recognize that an analysis of the parallelization options for a conventional GotoBLAS2-like routine is also straightforward. Furthermore, in some cases, it may be more convenient to parallelize multiple loops to expose sufficient thread-level parallelism [31]. Unfortunately, when instructed to parallelize two or more loops, currently TVM extracts parallelism from the outermost loop only.
Figure 19: Performance of the TVM generator for four distinct packing configurations using a single NVIDIA Carmel core for small-square matrices and ResNet-50 v1.5 (top and bottom, respectively).
### Performance, maintainability and portability: The complete family of algorithms
In Section 7, we argued that producing code with TVM for other blocked algorithmic variants of the gemm family only requires small changes to the generator routines. In this subsection we analyze the practical impact of this on the sequential and parallel performance.
Figure 21 shows a clear superiority of the variants that maintain \(C\) in the processor registers over their counterparts that operate with \(B\) in the registers. (We omit the variants with Resident \(A\) because they present a symmetric role with respect to the variants with Resident \(B\).) The reason is that, in the variants with Resident \(C\) (i.e., B3A2C0 and A3B2C0), the elements of \(C\) are housed in the processor registers during the full execution of the micro-kernel loop and, therefore, there are no writes to memory as part of the innermost loop (L6) of the algorithm. In contrast, the two other variants, A3C2B0 and C3A2B0, integrate a micro-kernel that, at each iteration of the innermost loop (L6), performs several writes on \(C\) while this operand resides in a certain level of the cache hierarchy. This behavior lies at the core of the algorithms/variants, and is preserved by TVM, which simply follows the programmer's directives. With respect to the results, we observe small differences in performance between the two versions that maintain \(C\) in the processor registers, in favor of either one or another depending on the specific layer.
### Memory performance: footprint
We next investigate the memory requirements of the automatically-generated codes for gemm. Table 3 reports the size for a bare gemm test driver routine statically linked with each library (column labeled as "gemm"). The memory allocation for the matrices and the necessary packing buffers are not included, as their space requirements should be similar in all cases. Furthermore, the test driver is the same in all cases to allow a fair comparison.
The lowest memory footprint for the gemm executable is offered by OpenBLAS with close to 89 KiB, followed by TVM with 520 KiB. BLIS and ARMPL need a total amount of 1.3 MiB and 27.8 MiB, respectively.
Figure 20. Performance of the TVM generator for four distinct loop parallelization options on 8 NVIDIA Carmel cores for ResNet-50 v1.5.
### Portability: Experiments in other architectures
We close the experimental section by demonstrating the performance portability of the automatic-generation approach for gemm using a pair of AMD and Intel processor architectures. For this
Figure 21. Performance of the TVM generators for four variants of the gemm family of algorithms on a single core and 8 cores of the NVIDIA Carmel processor (top and bottom, respectively), for ResNet-50 v1.5. Note the different scales of the \(y\) axis in the two plots.
\begin{table}
\begin{tabular}{l r} \hline \hline Solution & gemm \\ \hline ARMPL & 29,145,688 \\ BLIS & 1,350,456 \\ OpenBLAS & 90,944 \\ TVM & 532,976 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Size (in bytes) of the gemm realization for each solution.
purpose, we compare the (sequential) routine generated by TVM, combined with the best micro-kernel, against that of the realization of this kernel in BLIS (version 0.9.0) on an AMD EPYC 7282 (Rome) processor (see footnote 6); and the tuned implementation of the same kernel in Intel MKL (version 2021.0) on an Intel Xeon Platinum 8358 (Icelake) processor. In order to generate code specifically tuned for these two architectures, we only had to set the appropriate target in Part P8 of the TVM generator. Concretely, lines 35-38 in Figure 5 specify the backends selected for these two processor architectures.
Footnote 6: We note that the realization of gemm in AMD’s native library (AOCL) is basically BLIS in disguise; see [https://developer.amd.com/amd-aocl/blas-library/](https://developer.amd.com/amd-aocl/blas-library/).
Figure 22 reports the results for this final experiment using the layers in the ResNet50 model as testbed. The two charts there show a similar trend, which was already present in the results for the NVIDIA Carmel core in Figure 16. Concretely, the TVM routine outperforms the library realizations
Figure 22. Performance of the TVM generator for ResNet-50 v1.5 for AMD and Intel (top and bottom, respectively).
for the "rectangular" cases, but it is suboptimal for "square" problems. In order to investigate this behavior in more detail, given that the library evaluated in the case of the AMD architecture is BLIS, we inspected the internals of the micro-kernel integrated in the library for that particular processor, in order to compare it with the micro-kernel generated by TVM.
Note that the BLIS and TVM solutions rely on the baseline algorithm for gemm and, therefore, on a micro-kernel with Resident \(C\). Specifically, for the AMD Rome, BLIS hand-encodes a micro-kernel of dimension \(m_{r}\times n_{r}=6\times 16\), unrolling loop L6 by a factor of 4. Given that the AMD Rome features 16 vector registers, and that the SIMD width is 256 bits (that is, 8 FP32 per vector register) for AVX2, the BLIS micro-kernel dedicates \(6\times 2=12\) vector registers to maintain the micro-tile of \(C\). Furthermore, at each iteration of loop L6, the micro-kernel utilizes two vector registers to load a row of the micro-panel \(B_{r}\) (of dimension \(k_{c}\times 16\)) plus a single vector register to broadcast one-by-one the six entries of a column of the micro-panel \(A_{r}\) (of dimension \(6\times k_{c}\)) prior to operating with each. In total, the micro-kernel thus occupies 15 out of the 16 available vector registers. In addition, the code for the micro-kernel features a notable number of (assembly) prefetching instructions.
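The register accounting above is easy to verify; a minimal check for AVX2 (256-bit vector registers, FP32) follows:

```python
# Register budget of the BLIS micro-kernel on AMD Rome (AVX2, 16 vector regs).
VEC_BITS, FP32_BITS, NUM_VREGS = 256, 32, 16
per_reg = VEC_BITS // FP32_BITS            # 8 FP32 elements per register

m_r, n_r = 6, 16
regs_c = m_r * (n_r // per_reg)            # 12 registers for the C micro-tile
regs_b = n_r // per_reg                    # 2 registers for a row of B_r
regs_a = 1                                 # 1 register to broadcast A_r entries
print(regs_c + regs_b + regs_a, "of", NUM_VREGS, "vector registers used")  # 15 of 16
```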
With the above-described configuration for the micro-kernel, the realization of gemm in BLIS delivers 95.3 GFLOPS for square gemm problems of dimension \(m=n=k=\)2,000 while, by disabling the prefetching instructions, the performance drops to 78.5 GFLOPS. Compared with that, the best micro-kernel generated with TVM for that problem dimension corresponds to \(m_{r}\times n_{r}=2\times 40\), which delivers 88.2 GFLOPS. The flop throughput rate for TVM is thus somewhere between the two rates for BLIS (i.e., with and without prefetching) which, on the one hand, is not totally surprising as TVM cannot exploit (assembly-level) prefetching instructions. On the other hand, the dimensions of the best micro-kernel selected by TVM are quite surprising. To further investigate this, we instructed TVM to generate a gemm routine using a BLIS-like micro-kernel, that is, with \(m_{r}\times n_{r}=6\times 16\) and an unrolling factor of 4. Interestingly, the TVM routine that integrated this micro-kernel reported only 39.60 GFLOPS. The reason is that, for this particular micro-kernel dimension and unrolling factor, TVM produced a micro-kernel that did not maintain the micro-tile of \(C\) in the processor registers, incurring register spilling during the execution of the micro-kernel loop and considerably degrading performance! Whether we can enforce TVM to avoid this effect is currently under investigation as, to a good extent, it depends on the internal behavior of TVM.
## 9. Concluding Remarks
We have presented an integral TVM-based solution to automatically obtain high performance realizations of gemm. On the one hand, our solution departs from conventional library-based realizations of gemm in that the full code is automatically generated, including both the blocked routines for the family of gemm algorithms extending the ideas of GotoBLAS2 as well as the processor-specific micro-kernels. On the other hand, compared with other JIT compilation frameworks, we mimic the techniques in the GotoBLAS2/BLIS/OpenBLAS algorithms for gemm to obtain blocked algorithms that attain an efficient utilization of the cache memories, considerably trimming the cost of exploring the optimization search space. TVM can also generate competitive code for GPUs that performs close to the NVIDIA cuBLAS library. However, the schedule for the GPU-specific gemm differs from that in the CPU version described in this work. Explaining the differences is out of scope for this work.
Our work exposes the programming advantages, from the points of view of performance, maintainability, and portability, of the TVM-automatized approach, which can be leveraged, among others, to seamlessly generate routines for different data types, explore distinct packing schemes, evaluate alternative parallelization options, and instantiate the entire family of matrix multiplication algorithms.
## Acknowledgments
This work was supported by the research projects PID2020-113656RB-C22 (MCIN/AEI/10.13039/501100011033), and PID2021-126576NB-I00; and CM via Multiannual Agreement with Complutense University in the line Program to Stimulate Research for Young Doctors in the context of the V PRICIT under projects PR65/19-22445 and CM S2018/TCS-4423. A. Castello is a FJC2019-039222-I fellow supported by MCIN/AEI/10.13039/501100011033. H. Martinez is a postdoctoral fellow supported by the _Consejeria de Transformacion Economica, Industria, Conocimiento y Universidades de la Junta de Andalucia_. This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955558 (eFlows4HPC project). The JU receives support from the European Union's Horizon 2020 research and innovation programme, and Spain, Germany, France, Italy, Poland, Switzerland, and Norway.
|
2309.06585 | Statistical analysis of stochastic magnetic fluctuations in space plasma
based on the MMS mission | Based on the Magnetospheric Multiscale (MMS) mission we look at magnetic
field fluctuations in the Earth's magnetosheath. We apply the statistical
analysis using a Fokker-Planck equation to investigate processes responsible
for stochastic fluctuations in space plasmas. As already known, turbulence in
the inertial range of hydromagnetic scales exhibits Markovian features. We have
extended the statistical approach to much smaller scales in space, where
kinetic theory should be applied. Here we study in detail and compare the
characteristics of magnetic fluctuations behind the bow shock, inside the
magnetosheath, and near the magnetopause. It appears that the first Kramers-
Moyal coefficient is linear and the second term is quadratic function of
magnetic increments, which describe drift and diffusion, correspondingly, in
the entire magnetosheath. This should correspond to a generalization of
Ornstein-Uhlenbeck process. We demonstrate that the second order approximation
of the Fokker-Planck equation leads to non-Gaussian kappa distributions of the
probability density functions. In all cases in the magnetosheath, the
approximate power-law distributions are recovered. For some moderate scales we
have the kappa distributions described by various peaked shapes with heavy
tails. In particular, for large values of the kappa parameter this shape is
reduced to the normal Gaussian distribution. It is worth noting that for
smaller kinetic scales the rescaled distributions exhibit a universal global
scale-invariance, consistently with the stationary solution of the
Fokker-Planck equation. These results, especially on kinetic scales, could be
important for a better understanding of the physical mechanism governing
turbulent systems in space and astrophysical plasmas. | Wiesław M. Macek, Dariusz Wójcik | 2023-09-12T20:17:34Z | http://arxiv.org/abs/2309.06585v1 | # Statistical analysis of stochastic magnetic fluctuations in space plasma based on the _Mms_ mission
###### Abstract
Based on the _Magnetospheric Multiscale_ (_MMS_) mission we look at magnetic field fluctuations in the Earth's magnetosheath. We apply the statistical analysis using a Fokker-Planck equation to investigate processes responsible for stochastic fluctuations in space plasmas. As already known, turbulence in the inertial range of hydromagnetic scales exhibits Markovian features. We have extended the statistical approach to much smaller scales in space, where kinetic theory should be applied. Here we study in detail and compare the characteristics of magnetic fluctuations behind the bow shock, inside the magnetosheath, and near the magnetopause. It appears that the first Kramers-Moyal coefficient is linear and the second term is quadratic function of magnetic increments, which describe drift and diffusion, correspondingly, in the entire magnetosheath. This should correspond to a generalization of Ornstein-Uhlenbeck process. We demonstrate that the second order approximation of the Fokker-Planck equation leads to non-Gaussian kappa distributions of the probability density functions. In all cases in the magnetosheath, the approximate power-law distributions are recovered. For some moderate scales we have the kappa distributions described by various peaked shapes with heavy tails. In particular, for large values of the kappa parameter this shape is reduced to the normal Gaussian distribution. It is worth noting that for smaller kinetic scales the rescaled distributions exhibit a universal global _scale-invariance_, consistently with the stationary solution of the Fokker-Planck equation. These results, especially on kinetic scales, could be important for a better understanding of the physical mechanism governing turbulent systems in space and astrophysical plasmas.
keywords: magnetic fields - turbulence - methods: data analysis - methods: statistical - Sun: heliosphere - solar wind
## 1 Introduction
Turbulence is a complex phenomenon that, notwithstanding progress in (magneto-)hydrodynamic simulations, is still a challenge for the natural sciences (Frisch, 1995), and the physical mechanisms responsible for the turbulence cascade are not clear (Biskamp, 2003). Fortunately, collisionless solar wind plasmas can be considered natural laboratories for investigating this complex dynamical system (Bruno and Carbone, 2016). Fluctuations of magnetic fields play an important role in space plasmas, leading also to a phenomenon known as magnetic reconnection (e.g., Burlaga, 1995; Treumann, 2009).
Turbulent magnetic reconnection is a process in which energy can proficiently be shifted from a magnetic field to the motion of charged particles. Therefore, this process, responsible for the redistribution of kinetic and magnetic energy in space plasmas, is pivotal to the Sun, Earth, as well as to other planets and generally across the whole Universe. Reconnection also impedes the effectiveness of fusion reactors and regulates geospace weather, which can affect contemporary technology such as the Global Positioning System (GPS) navigation and modern mobile phone networks, including electrical power grids. Electric fields responsible for reconnection in the Earth's magnetosphere have been observed on kinetic scales by the _Magnetospheric Multiscale_ (_MMS_) mission (Macek et al., 2019a,b). Certainly, reconnection in the magnetosphere is related to turbulence at small scales (Karimabadi et al., 2014).
Basically, in a Markov process, given an initial probability distribution function (PDF), the transition to the next stage can fully be determined. It is also interesting here that we can prove and demonstrate the existence of such a Markov process experimentally. Namely, without relying on any assumptions or models for the underlying stochastic process we are able to extract the differential equation of this Markov process directly from the collected experimental data. Hence this Markov approach appears to be a bridge between the statistical and dynamical analysis of complex physical systems. There is substantial evidence based on statistical analysis that stochastic fluctuations exhibit Markov properties (Pedrizzetti and Novikov, 1994; Renner et al., 2001). We have already proved that turbulence has Markovian features in the inertial
range of hydromagnetic scales (Strumik & Macek, 2008a,b). Admittedly, for turbulence inside the inertial region of magnetized plasma, the characteristic spectra should be close to the standard Kolmogorov (1941) power law with exponent \(-5/3\approx-1.67\) or the Kraichnan (1965) power-law spectrum with exponent \(-3/2\), but surprisingly, the absence of these classical spectra, especially on smaller scales, seems to be the rule.
Moreover, we have also confirmed clear breakpoints in the magnetic energy spectra in the Earth's magnetosheath, which occur near the ion gyrofrequencies just behind the bow shock, inside the magnetosheath, and before leaving the magnetosheath. Namely, we have observed that the spectrum steepens at these points to power exponents in the kinetic range from -5/2 to -11/2 for the magnetic field data of the highest resolution available within the _MMS_ mission (Macek et al., 2018). Therefore, we would like to investigate the Markov property of stochastic fluctuations outside this inertial region of magnetized plasma on small scales, where the slopes are consistent with kinetic theory.
It should also be noted that based on the measurements of magnetic field fluctuations in the Earth's magnetosheath gathered onboard the _MMS_ mission, we have recently extended these statistical results to much smaller scales, where kinetic theory should be applied (Macek et al., 2023). Here we compare the characteristics of stochastic fluctuations behind the bow shock, inside the magnetosheath, and near the magnetopause. In this paper, we therefore present the results of our comparative analysis, where we check whether the solutions of the Fokker-Planck (FP) equation are consistent with experimental PDFs in various regions of the magnetosheath.
In Section 2, a concise description of the _MMS_ mission and the analyzed data is provided, while Section 3 outlines the stochastic mathematical and statistical methods used. The vital results of our analysis are presented in Section 4, which demonstrates that the solutions of the FP equation are in good agreement with empirical PDFs. Finally, Section 5 emphasizes the significance of stochastic Markov processes in relation to turbulence in space plasmas, which exhibit a universal global _scale-invariance_ across the kinetic domain.
## 2 Data
The _MMS_ mission, which began on March 12, 2015 and is still in operation, delves into the connection and disconnection of the Sun's and Earth's magnetic fields. Four spacecraft (namely _MMS 1 - MMS 4_), which carry identical instrument suites, are orbiting near the equator to observe magnetic turbulence in progress. They are formed into a pyramid-like formation. Each satellite has an octagonal form that is around 3.5 meters in breadth and 1.2 meters in height. The satellites rotate at three revolutions per minute during scientific operations. Position data is supplied by ultra-precise GPS apparatus, while attitude is sustained by four stellar trackers, two accelerometers, and two solar sensors. All of the spacecraft have identical instruments to measure plasmas, magnetic and electric fields, and other particles with remarkably high (milliseconds) time resolution and accuracy. This allows us to figure out if reconnection takes place in an individual area, everywhere within a broader area simultaneously, or while traversing through space. The _MMS_ studies the reconnection of the solar and terrestrial magnetic fields in both the day and night sides of Earth's magnetosphere, which is the only place where it can be directly observed by spacecraft. In our previous studies we have observed reconnection in the Earth's magnetosphere involving small kinetic scales (Macek et al., 2019a,b).
We have further examined fluctuations of all components of the magnetic fields \(\mathbf{B}=(B_{x},B_{y},B_{z})\) in the Geocentric Solar Ecliptic (GSE) coordinates, with the magnitude strength \(B=|\mathbf{B}|\) (square root of the sum of the squares of the field components), which have been taken from the _MMS_ Satellite No. 1 (_MMS 1_), located just beyond the Earth's bow shock (BS). In this way, we have shown that magnetic fluctuations exhibit a Markov character also on very small kinetic scales (Macek et al., 2023). Moreover, we have noticed that in all components the Markovian features are quite similar. Here, we would like to further investigate statistical properties of magnetic fluctuations in various regions of the magnetosheath. The spacecraft trajectory in the magnetosheath crosses three different regions, namely:
* (a) just behind the bow shock (BS),
* (b) deep inside the magnetosheath (SH), and
* (c) near the magnetopause (MP),
as shown in Fig. 1 of Macek et al. (2018).
Therefore, we would like to look at the measurements of the magnetic field strength \(B=|\mathbf{B}|\), but now in various regions of the magnetosheath. To investigate magnetosheath stochastic fluctuations, we have now chosen the same three time interval samples, which correspond to Table 1 (List of Selected _MMS 1_ Interval Samples) of Ref. (Macek et al., 2018). In cases (a) and (c), with intervals of approximately 5 minutes and 1.8 minutes, respectively, we use burst-type observations from the FGM (FluxGate Magnetometer) sensor with the highest resolution (\(\Delta t_{B}\)) of 7.8 ms (128 samples per second), with 37,856 and 13,959 data points, correspondingly. However, in the other case (b), between the bow shock and the magnetopause, where only data of substantially lower resolution, 62.5-125 ms in survey mode (8-16 samples per second), are available, we have a much longer interval lasting 3.5 h with 198,717 data points and \(\Delta t_{B}=62.5\) ms. All of the data are publicly available on the website: [http://cdaweb.gsfc.nasa.gov](http://cdaweb.gsfc.nasa.gov), which is hosted by NASA.
Admittedly, the gaps in time series, which commonly appear in the data gathered from space missions, can have a considerable impact on the conclusions that can be derived from statistical analysis based on experimental data. One of the powerful but simple tools used to cope with this problem is a linear interpolation between points, which we have used, if necessary, to fill these gaps in the analyzed data sets. Therefore, in Fig. 1, on the upper side of each case (a) - (c) from left to right, we have depicted the time series of the magnetic field strength \(B=|\mathbf{B}|\), while on the bottom side of each case we have shown the respective power spectral density (PSD) obtained using the method proposed by Welch (1967).
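A minimal sketch of this PSD estimate, using `scipy.signal.welch` on a stand-in array in place of the measured \(B=|\mathbf{B}|\) series (the sampling frequency of 128 Hz corresponds to the burst-mode FGM data; `nperseg` is an arbitrary choice):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
B = np.cumsum(rng.standard_normal(37_856))   # stand-in for the |B| time series
fs = 128.0                                   # burst-mode sampling rate [Hz]

# Welch (1967) estimate of the power spectral density.
f, psd = welch(B, fs=fs, nperseg=4096)

# A crude power-law fit of the spectral exponent over the resolved range.
slope, _ = np.polyfit(np.log10(f[1:]), np.log10(psd[1:]), 1)
print(f"fitted spectral exponent: {slope:.2f}")
```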
The calculated average ion and electron gyrofrequencies are as follows: in case (a) \(f_{ci}=0.25\) Hz, and \(f_{ce}=528\) Hz; case (b) \(f_{ci}=0.24\) Hz and \(f_{ce}=510\) Hz; case (c) \(f_{ci}=0.29\) Hz and \(f_{ce}=609\) Hz (Macek et al., 2018). In addition, employing the hypothesis according to Taylor (1938), relating time and space scales in this way: \(l=v_{\rm sw}\cdot\tau\), where \(l\) is a spatial scale and \(v_{\rm sw}\) is the mean velocity of the solar wind flow in the magnetosheath, we estimate characteristic inertial frequencies for ions and electrons: in case (a) \(f_{di}=0.55\) Hz and \(f_{de}=24.5\) Hz; case (b) \(f_{di}=0.41\) Hz and \(f_{de}=18.1\) Hz; case (c) \(f_{di}=0.45\) Hz and \(f_{de}=20.1\) Hz. We have marked these values on each graph of power spectral density. In case (a) the obtained spectral exponent is about \(-2.60\pm 0.06\) before the \(f_{de}=24.5\) Hz threshold, which is undoubtedly steeper than the Kolmogorov (1941) (-5/3) or Kraichnan (1965) (-3/2) slopes.
On the other hand, outside the inertial range of scales, large spectral exponents have been reported from the _Cluster_ multi-spacecraft mission (Sahraoui et al., 2009) and the _WIND_ data (Bruno et al., 2014), including the proposed explanations of the nature of solar wind magnetic fluctuations on kinetic scales based on these missions (e.g., Lion et al., 2016; Roberts et al., 2016). Owing to the unprecedentedly high 7.8 ms time resolution of magnetometer data in the _MMS_ mission available in burst mode, we also see that in case (a) the slope of \(-2.60\pm 0.06\) (close to -5/2) holds up to \(f_{de}=24.5\) Hz. This is further followed by an even steeper spectrum with the slope of \(-5.59\pm 0.32\) (close to -11/2 or -16/3). Because of a substantially lower survey data resolution of 62.5 ms in case (b), the spectrum with \(-2.24\pm 0.09\) (\(\approx-7/3\)) is steeper than the Kolmogorov (1941) (\(-5/3\)) spectrum only after the visible breakpoint in the slope, which lies at \(f=0.12\) Hz, i.e. near the ion gyrofrequency \(f_{ci}=0.24\) Hz, while a more gentle slope of \(-0.77\pm 0.06\) is observed before this breakpoint. Finally, in case (c), similarly as in case (a) using burst data, the spectral exponent of \(-2.75\pm 0.05\) is again steep before, and even steeper, with the exponent \(-3.82\pm 0.06\) (close to -7/2), after the observed breakpoint that lies at around the electron Taylor's (1938) shifted frequency \(f_{de}=20\) Hz, as discussed by Macek et al. (2018). This shows that the observed stochastic nature of fluctuations in the sub-ion scale could be due to the interaction between coherent structures (Perrone et al., 2016, 2017), and a very high slope of -16/3 is possibly related to the dissipation of the kinetic Alfven waves (e.g., Schekochihin et al., 2009).
Figure 1: Time series of the magnetic field strength \(B=|\mathbf{B}|\) of the _MMS_ data with the corresponding spectra in the magnetosheath (a) near the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP), plotted with three different colors. The average ion gyrofrequency (\(f_{ci}\)), as well as the characteristic Taylor’s shifted frequencies for ions (\(f_{di}\)) and electrons (\(f_{de}\)), are shown by the dashed, dashed-dotted, and dotted lines, respectively; see Table 1 of Ref. (Macek et al., 2018).
## 3 Methods of data analysis
As usual, we use the fluctuations of the magnetic field \(B=|{\bf B}|\), which describe this turbulent system at each time \(t>0\). Therefore, with a given time scale \(\tau_{i}>0\) (for every \(i\)), one can typically define the increments of this quantity as follows:
\[b_{i}(t):=B(t+\tau_{i})-B(t), \tag{1}\]
and, assuming an arbitrary \(\tau_{i}>0\), it can be labeled as \(b_{\tau}\) or \(b\) for simplicity in the following sections.
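In practice, for a discretely sampled series the increments of Eq. (1) can be computed directly; a sketch with a toy series in place of the measurements:

```python
import numpy as np

def increments(B: np.ndarray, tau_samples: int) -> np.ndarray:
    """b(t) = B(t + tau) - B(t), with tau expressed in numbers of samples."""
    return B[tau_samples:] - B[:-tau_samples]

B = np.cumsum(np.random.default_rng(1).standard_normal(10_000))  # toy series
b_tau = increments(B, tau_samples=10)
```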
We assume that the fluctuations of increment \(b_{\tau}\) in a larger time scale \(\tau\) are transferred to smaller and smaller scales. Therefore, stochastic fluctuations may be regarded as a stochastic process in scale with the \(N\)-point joint (transition) conditional probability density function denoted by \(P(b_{1},\tau_{1}|b_{2},\tau_{2},\ldots,b_{N},\tau_{N})\). In this case, the conditional probability density function is defined by default as:
\[P(b_{i},\tau_{i}|b_{j},\tau_{j})=\frac{P(b_{i},\tau_{i};b_{j},\tau_{j})}{P(b_{ j},\tau_{j})}, \tag{2}\]
with the marginal (unconditional) probability density function, \(P(b_{j},\tau_{j})\), and the joint probability function, \(P(b_{i},\tau_{i};b_{j},\tau_{j})\), of finding the fluctuations \(b_{i}\) at a scale \(\tau_{i}\) and \(b_{j}\) at a scale \(\tau_{j}\), for \(0<\tau_{i}<\tau_{j}\). In the same way, we may construct the conditional probability densities for any longer sequences of increments \(b\).
The stochastic process is Markovian if the conditional probability function depends only on the initial values \(b_{1}\) and \(b_{2}\) at the time scales \(\tau_{1}\) and \(\tau_{2}\), but not on \(b_{3}\) at the next larger scale \(\tau_{3}\), and so on, i.e., for any \(i=1,\ldots,N\) we have:
\[P(b_{1},\tau_{1}|b_{2},\tau_{2})=P(b_{1},\tau_{1}|b_{2},\tau_{2},\ldots,b_{N}, \tau_{N}), \tag{3}\]
for \(0<\tau_{1}<\tau_{2}<\ldots<\tau_{N}\). Basically, the Markov process can be determined by the initial conditional probability function \(P(b_{1},\tau_{1}|b_{2},\tau_{2})\). Strictly speaking, the future states of the process are conditionally independent of past states. Because of this relation, the conditional probabilities are also called transition probabilities, while the property of Eq. (3) is known as _memorylessness_.
One of the generalizations of Eq. (3) is called the Chapman-Kolmogorov (CK) condition, which is given by the equation (Risken, 1996):
\[P(b_{1},\tau_{1}|b_{2},\tau_{2})=\int_{-\infty}^{+\infty}P(b_{1},\tau_{1}|b^{ \prime},\tau^{\prime})P(b^{\prime},\tau^{\prime}|b_{2},\tau_{2})db^{\prime}, \tag{4}\]
for \(\tau_{1}<\tau^{\prime}<\tau_{2}\). This equation can be interpreted in the following way: the transition probability from \(b_{2}\) at a time scale \(\tau_{2}\) to \(b_{1}\) at a time scale \(\tau_{1}\) is the same as the product of the transition probability from \(b_{2}\) at a time scale \(\tau_{2}\) to \(b^{\prime}\) at a time scale \(\tau^{\prime}\), and the transition probability from \(b^{\prime}\) at a time scale \(\tau^{\prime}\) to \(b_{1}\) at a time scale \(\tau_{1}\), integrated over all possible \(b^{\prime}\)'s. Let us emphasize here that such a generalization is a necessary condition for a stochastic process to be the Markov process.
Next, from the CK condition of Eq. (4), by using a standard series expansion, one can derive a corresponding Kramers-Moyal backward expansion with an infinite number of terms. Backward expansions are equations of evolution of probability \(P(b,\tau|b^{\prime},\tau^{\prime})\), where we differentiate with respect to \(b\). This equation has the following differential form (Risken, 1996, Section 4.2):
\[-\frac{\partial}{\partial\tau}P(b,\tau|b^{\prime},\tau^{\prime})=\sum_{k=1}^{ \infty}\left(-\frac{\partial}{\partial b}\right)^{k}D^{(k)}(b,\tau)P(b,\tau|b ^{\prime},\tau^{\prime}), \tag{5}\]
where it is important to note that the differential symbol acts on both the coefficients \(D^{(k)}(b,\tau)\) and the conditional probability \(P(b,\tau|b^{\prime},\tau^{\prime})\). Since the solutions of the forward and backward KM equations are equivalent, then without loss of generality, we can refer to it simply as the KM expansion. Formally, \(D^{(k)}(b,\tau)\) are called KM coefficients, which in this way are defined as the limit at \(\tau\to\tau^{\prime}\) of the \(k\)-th power of conditional moments (see Risken, 1996):
\[D^{(k)}(b,\tau) = \frac{1}{k!}\lim_{\tau\to\tau^{\prime}}\frac{1}{\tau-\tau^{ \prime}}M^{(k)}(b,\tau,\tau^{\prime}), \tag{6}\] \[M^{(k)}(b,\tau,\tau^{\prime}) = \int_{-\infty}^{+\infty}(b^{\prime}-b)^{k}P(b^{\prime},\tau^{ \prime}|b,\tau)db^{\prime}. \tag{7}\]
Ideally, using the conditional moments \(M^{(k)}(b,\tau,\tau^{\prime})\), the KM coefficients can be evaluated, though they cannot be obtained directly from the analyzed data. While these conditional moments can be calculated from the empirical observations, the \(D^{(k)}(b,\tau)\) coefficients can only be obtained by extrapolation in the limit \(\tau\to\tau^{\prime}\) according to Eqs. (6) and (7), but these formulae can not be applied explicitly.
One of the popular extrapolation methods for this problem is the use of a piecewise linear regression model with breakpoints. This is a type of regression model which allows multiple linear models to be fitted to the analyzed data. The crucial objective of this method is an accurate estimation of the number of breakpoints. First, in order to estimate the best breakpoint position, we have evaluated every value within a specified interval and looked at the value of the logarithmic transformation of the likelihood function (also known as the _log-likelihood_ function) of each adjusted model. Naturally, the highest value of this function provides the optimal breakpoint. Further, to select (and estimate) the best possible number of breakpoints of the segmented relationship, we have used the standard Akaike (1973) Information Criterion (AIC) and Bayesian Information Criterion (BIC) (Schwarz, 1978). Nonetheless, very similar results are obtained when the lowest time resolution is taken. Thus, in our case, we have a simple approximation of the KM coefficients, which is given by:
\[D^{(k)}(b,\tau)=\frac{1}{k!}\frac{1}{\Delta t}M^{(k)}(b,\tau,\tau^{\prime}), \tag{8}\]
where \(\Delta t\) is the given lowest time resolution of the time series. It is also interesting to note that the \(D^{(k)}(b,\tau)\) coefficients show the same dependence on \(b\) as \(M^{(k)}(b,\tau,\tau^{\prime})\). This simplification substantially decreases the time required to obtain the results numerically.
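A sketch of this estimator: increments at the two neighbouring scales are binned in the conditioning variable, the conditional moments of Eq. (7) are averaged within each bin, and the scaling of Eq. (8) is applied; the bin count and the minimum statistics per bin are arbitrary choices, and a toy series stands in for the data.

```python
import numpy as np
from math import factorial

def km_coefficient(b_small, b_large, k, dt, nbins=50, min_counts=10):
    """Estimate D^(k)(b, tau) on bin centres of the larger-scale increment b."""
    edges = np.linspace(b_large.min(), b_large.max(), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(b_large, edges) - 1
    D = np.full(nbins, np.nan)
    for i in range(nbins):
        mask = idx == i
        if mask.sum() >= min_counts:
            moment = np.mean((b_small[mask] - b_large[mask]) ** k)  # Eq. (7)
            D[i] = moment / (factorial(k) * dt)                     # Eq. (8)
    return centres, D

# Toy usage: increments at scale tau and at tau' = tau - dt, same start times.
rng = np.random.default_rng(2)
B = np.cumsum(rng.standard_normal(100_000))
dt, n_tau = 0.0078, 20
b_large = B[n_tau:] - B[:-n_tau]          # b  at scale tau
b_small = B[n_tau - 1:-1] - B[:-n_tau]    # b' at scale tau' = tau - dt
centres, D1 = km_coefficient(b_small, b_large, k=1, dt=dt)
```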
Now, in order to find the solution of Eq. (5), it is necessary to determine the number of terms of the right hand side (RHS) of this equation that need to be considered. According to Pawula's theorem, the KM expansion of a positive transition probability \(P(b,\tau|b^{\prime},\tau^{\prime})\) may end after the first or second term (e.g., Risken, 1996, Section 4.3). If it does not end after the second term, then the expansion must contain an infinite number of terms. On the other hand, if the second term is the last one, namely \(D^{(k)}(b,\tau)=0\) for \(k\geqslant 3\), then the KM expansion of Eq. (5) leads to the following particular formula:
\[-\frac{\partial}{\partial\tau}P(b,\tau|b^{\prime},\tau^{\prime})=\left[{}- \frac{\partial}{\partial b}D^{(1)}(b,\tau)+\frac{\partial^{2}}{\partial b^{2 }}D^{(2)}(b,\tau)\right]P(b,\tau|b^{\prime},\tau^{\prime}), \tag{9}\]
with the well-known FP operator \(\mathcal{L}_{\rm FP}(b,\tau)\) in the square brackets (e.g., Risken, 1996, Eqs. 5.1 and 5.2), governing the evolution of the probability density function \(P(b,\tau|b^{\prime},\tau^{\prime})\); this is called the FP equation (also known as the forward Kolmogorov equation). It has been primarily used for the Brownian motion of particles, but now Eq. (9) defines a generalized Ornstein-Uhlenbeck process. Strictly speaking, this is a linear second-order partial differential equation of a parabolic type. By solving the FP equation, it is possible to find distribution functions from which any averages (expected values) of macroscopic variables can be determined by integration. If the relevant time-dependent solution is provided, this equation can be used to describe not only stationary features, but also the dynamics of systems.
The first term \(D^{(1)}(b,\tau)\) and the second term \(D^{(2)}(b,\tau)>0\) determining the FP Equation (9) are responsible for the drift and diffusion processes, respectively. The former accounts for the deterministic evolution of the stochastic process (as a function of \(b\) and \(\tau\)). The latter modulates the amplitude of the \(\delta\)-correlated Gaussian noise \(\Gamma(\tau)\) (which is known as the Langevin force - the fluctuating force \(F_{f}(\tau)\) per unit mass \(m\)), which fulfills the normalization conditions: \(\langle\Gamma(\tau)\Gamma(\tau^{\prime})\rangle=2\delta(\tau-\tau^{\prime})\), where \(\delta\) is a Dirac delta function, and \(\langle\Gamma(\tau)\rangle=0\) (see Risken, 1996). Thus, in the equivalent approach another complementary equation arises:
\[-\frac{\partial b}{\partial\tau}=D^{(1)}(b,\tau)+\sqrt{D^{(2)}(b,\tau)}\cdot \Gamma(\tau), \tag{10}\]
which is formally called the Langevin equation. Here we have used the Ito (1944) definition, which does not include a spurious drift (e.g., Risken, 1996, Section 3.3.3); hence the drift coefficient \(D^{(1)}\) occurs directly, unlike in the Stratonovich (1968) definition. Admittedly, the Ito (1944) definition is more difficult to interpret and analyze, because of the new rules for integration and differentiation that must be used. However, owing to the powerful apparatus of the Ito Lemma, it allows us to deal with stochastic processes analytically. Anyway, here again, all higher KM coefficients \(D^{(k)}\) for \(k\geqslant 3\) are equal to zero. Note that the negative signs on the left hand side (LHS) of Eqs. (9) and (10) show that the corresponding transitions proceed backward to smaller and smaller scales.
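For illustration, Eq. (10) can be integrated with a simple Euler-Maruyama scheme; the linear drift and quadratic diffusion below anticipate the functional forms recovered in Section 4, while the numerical values of the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
a, beta, gamma = 1.0, 0.1, 0.5             # illustrative parameters only

def drift(b):      return -a * b                    # D^(1)(b): linear
def diffusion(b):  return beta + gamma * b ** 2     # D^(2)(b): quadratic, > 0

# Step towards smaller scales; d_tau is the (positive) decrement of the scale,
# so the minus sign on the LHS of Eq. (10) is absorbed into the step direction.
n_steps, d_tau, b = 10_000, 1.0e-3, 1.0
trajectory = np.empty(n_steps)
for n in range(n_steps):
    dW = np.sqrt(2.0 * d_tau) * rng.standard_normal()  # <Gamma(t)Gamma(t')> = 2 delta
    b += drift(b) * d_tau + np.sqrt(diffusion(b)) * dW
    trajectory[n] = b
```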
Next, because the differentiation in the FP operator in Eq. (9) should act on both the KM coefficients and the conditional probability density \(P(b,\tau|b^{\prime},\tau^{\prime})\), by performing relatively simple transformations it can be rewritten in the following expanded form (Risken, 1996, Eq. 45.3a):
\[-\frac{\partial}{\partial\tau}P(b,\tau|b^{\prime},\tau^{\prime})=D^{(2)}(b,\tau)\frac{\partial^{2}}{\partial b^{2}}P(b,\tau|b^{\prime},\tau^{\prime})+\] \[+\Big{[}2\frac{\partial}{\partial b}D^{(2)}(b,\tau)-D^{(1)}(b,\tau)\Big{]}\frac{\partial}{\partial b}P(b,\tau|b^{\prime},\tau^{\prime})+\] \[+\Big{[}\frac{\partial^{2}}{\partial b^{2}}D^{(2)}(b,\tau)-\frac{\partial}{\partial b}D^{(1)}(b,\tau)\Big{]}P(b,\tau|b^{\prime},\tau^{\prime}). \tag{11}\]
Formally, Eq. (11), resulting from the FP Equation (9), is a second-order parabolic partial differential equation.
It is also worth mentioning that this equation is a generalization of the thermal-conduction (diffusion) equation, which can be solved with the initial and boundary conditions \(P(b,\tau=0|b^{\prime},\tau^{\prime}=0)=p(b,b^{\prime})\) and \(P(b=0,\tau|b^{\prime}=0,\tau^{\prime})=0\), respectively, using the method of separation of variables. The solution of the nonstationary FP Equation (11) can well be approximated numerically, i.e. by the difference method. The master curve for the probability density function \(P(b,\tau)\) of Eq. (11) can readily be evaluated from the stationary solution \(p_{s}(b,\tau)\) of Eq. (9), which is given by
\[\frac{\partial}{\partial b}\Big{[}D^{(2)}(b,\tau)p_{s}(b,\tau)\Big{]}=D^{(1)}(b,\tau)p_{s}(b,\tau) \tag{12}\]
which results from setting the LHS of Eq. (9) to zero.
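For illustration (the specific parametrization is an assumption here, anticipating the empirical forms reported in Section 4), if the drift is linear, \(D^{(1)}(b)=-ab\), and the diffusion is quadratic, \(D^{(2)}(b)=\beta+\gamma b^{2}\), Eq. (12) can be integrated directly:

\[p_{s}(b)\propto\frac{1}{D^{(2)}(b)}\exp\!\left(\int^{b}\frac{D^{(1)}(x)}{D^{(2)}(x)}\,dx\right)=\frac{1}{\beta+\gamma b^{2}}\exp\!\left(-\frac{a}{2\gamma}\ln\left(\beta+\gamma b^{2}\right)\right)\propto\left(1+\frac{\gamma}{\beta}b^{2}\right)^{-\left(1+\frac{a}{2\gamma}\right)},\]

which is a kappa-like power law with heavy tails; in the limit \(\gamma\to 0\) it reduces to the Gaussian \(\propto\exp[-ab^{2}/(2\beta)]\), recovering in this limit the normal Gaussian distribution, as noted in the Abstract.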
## 4 Results
In order to inspect processes responsible for stochastic fluctuations in space plasma, we have applied the methods described in Section 3 to the small-scale (cases (a) and (c)) and medium-scale (case (b)) fluctuations of the magnetic field \(B=|{\bf B}|\) in the Earth's magnetosheath. In general, the approach presented in this paper can be applied under a few important conditions that should be tested as preliminary procedures (see Rinn et al., 2016). The first condition is that the time series data must be stationary. If they were non-stationary, then the conditional moments given by Eq. (7) would not be meaningful. The second condition is that the process should be Markovian, i.e., the present state should only depend on the preceding state. The third condition is that Pawula's theorem must hold, as discussed in Section 3.
Having this in mind, we have started with the brief analysis and description of the relevant time series and the corresponding graphs of power spectral densities (PSD). Next, we have checked stationarity of all analyzed time series (see, e.g., Macek, 1998). To show that a Markov-process approach is suitable in our situation, we have moved forward to the verification of the necessary CK condition, through estimation
of the KM coefficients, and then have checked the validity of Pawula's theorem. This lets us apply the reduced formula of the FP Equation (9), which describes the evolution of the probability density function \(P(b,\tau)\).
Following our initial discussion, we must now verify whether the data time series under study are stationary. Generally, if a time series exhibits no trend, has a constant variance over time, and a consistent autocorrelation function over time, then it is classified as stationary. Such time series are also much easier to model. There are a variety of ways to evaluate this feature of any time series. One such method is the Augmented Dickey & Fuller (1979) test. This test uses the following null and alternative hypotheses: \(\mathbf{H_{0}}\): The time series is non-stationary, vs. \(\mathbf{H_{1}}\): The time series is stationary. When the \(p\)-value is less than 0.05, then the null hypothesis can be rejected and it can be concluded that the time series is stationary. In fact, after performing such a statistical test, we have determined that in cases (a) and (b) the respective \(p\)-values are \(<0.01\), indicating that the null hypothesis can be rejected. Thus, these magnetic field strength \(B=|\mathbf{B}|\) time series are stationary. However, in case (c), where a much smaller data sample is available, the \(p\)-value is equal to 0.154, hence we have failed to reject the null hypothesis. The result suggests that the time series is non-stationary and has some time-dependent structure with varying variance over time.
Once again, there are various methods of eliminating trends and seasonality, which define non-stationary time series. Trends can cause the mean to fluctuate over time, while seasonality can lead to changes in the variance over time. The most straightforward approach to address this issue is the differencing technique, a common and frequently used data transformation that is applied for making time series data stationary. Differencing is achieved by subtracting the previous observation from the current one. Following notation in Eq. (1), this can simply be written as \(b(t)=B(t)-B(t-1)\). To reverse this process, the prior time step's observation must be added to the difference value. The practice of computing the difference between successive observations is referred to as a lag-1 difference. The number of times that differencing is carried out is referred to as the order of differentiation. Fortunately, in our case (c), applying the lag-1 (order 1) difference operation has been sufficient to get rid of non-stationarity. The augmented Dickey-Fuller test has yielded a \(p\)-value of less than 0.01, thus the null hypothesis could be rejected, indicating that the analyzed \(B=|\mathbf{B}|\) time series is stationary.
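A sketch of this preprocessing step, with a stand-in array in place of the measured series (`adfuller` from `statsmodels` implements the augmented Dickey-Fuller test; the 0.05 threshold follows the text):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
B = np.cumsum(rng.standard_normal(13_959))   # toy non-stationary series

p_value = adfuller(B)[1]                     # p-value of the ADF test
if p_value >= 0.05:                          # fail to reject H0: non-stationary
    B = np.diff(B, n=1)                      # lag-1 (order-1) differencing
    p_value = adfuller(B)[1]
print(f"ADF p-value after preprocessing: {p_value:.3f}")
```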
We have used one of the exploratory data analysis approaches, called the unsupervised binning method (comparable with a normalized histogram method), to make bins (histogram's boxes) and to obtain the empirical conditional probability density functions \(P(b_{1},\tau_{1}|b_{2},\tau_{2})\), for \(0<\tau_{1}<\tau_{2}\), directly from the analyzed data. First, we have estimated the empirical joint PDF \(P(b_{1},\tau_{1};b_{2},\tau_{2})\) by counting the number of different pairs \((b_{1},b_{2})\) on a 2-dimensional grid of equal-width data bins (small intervals). This _bins_ integer should be neither too large, such that each bin no longer contains a significant quantity of points, nor too small, such that any dependency of the drift and diffusion coefficients on the state variable cannot be detected. Next, we have performed the normalization such that the integral over all bins is equal to 1 (note that the sum will not be equal to 1 unless bins of unity width are chosen). Similarly, the empirical one-dimensional PDF \(P(b_{2},\tau_{2})\) can be estimated with the use of a one-dimensional grid of bins (and carrying out the normalization), and the empirical conditional PDFs are obtained using Eq. (2) directly (in a numerical sense).
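A minimal sketch of this binning procedure, estimating \(P(b_{1},\tau_{1}|b_{2},\tau_{2})\) on a grid of equal-width bins (the number of bins is an arbitrary choice):

```python
import numpy as np

def conditional_pdf(b1: np.ndarray, b2: np.ndarray, nbins: int = 60):
    """P(b1 | b2) on a 2-D grid; b1, b2 are increments at scales tau_1 < tau_2."""
    joint, e1, e2 = np.histogram2d(b1, b2, bins=nbins, density=True)  # P(b1; b2)
    db1 = e1[1] - e1[0]
    marginal_b2 = joint.sum(axis=0) * db1                             # P(b2)
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = joint / marginal_b2                                    # Eq. (2)
    return cond, e1, e2
```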
In such a way, we have found the empirical conditional probability density functions from the analyzed data, which are shown by red continuous contours in Fig. 2. They are compared here with the theoretical conditional PDFs that are solutions of the CK condition of Eq. (4), displayed by blue dashed contours, which are 2-dimensional representations of 3-dimensional data. Such a comparison is seen in Fig. 2 for the magnetic field increments \(b\), at the various scales: in cases (a) and (c) \(\tau_{1}=0.02\) s, \(\tau^{\prime}=\tau_{1}+\Delta t_{B}=0.0278\) s, \(\tau_{2}=\tau_{1}+2\Delta t_{B}=0.0356\) s, where \(\Delta t_{B}=0.0078\) s, and in case (b) \(\tau_{1}=0.2\) s, \(\tau^{\prime}=\tau_{1}+\Delta t_{B}=0.2625\) s, \(\tau_{2}=\tau_{1}+2\Delta t_{B}=0.325\) s, where \(\Delta t_{B}=0.0625\) s. The depicted
Figure 2: Comparison of observed contours (red solid curves) of conditional probabilities at various time scales \(\tau\), with reconstructed contours (blue dashed curves) according to the Chapman–Kolmogorov (CK) condition, recovered by the use of _MMS_ Magnetic Field total magnitude \(B=|\mathbf{B}|\) in the magnetosheath: (a) just behind the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP), corresponding to the spectra in Fig. 1.
subsequent isolines correspond to the following decreasing levels of the conditional PDFs, from the middle of the plots, for the following magnetic field increments \(b\): case (a) 2, 1.1, 0.5, 0.3, 0.05, 0.01; case (b) 5, 1, 0.7, 0.45, 0.3, 0.22, 0.15, 0.1, 0.05; and case (c) 7, 3.3, 1.3, 0.3, 0.08, 0.06. It is rather evident that the contour lines corresponding to these two empirical and theoretical probability distributions are nearly matching for all three cases. Thus, it appears that the CK condition of Eq. (4) is sufficiently well satisfied.
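The corresponding numerical check of the CK condition amounts to chaining the two transition matrices over the intermediate scale and comparing with the directly estimated conditional PDF; a sketch, with matrices such as those returned by `conditional_pdf` above, all on a common grid of bin width `db`:

```python
import numpy as np

def ck_reconstruction(P_1p: np.ndarray, P_p2: np.ndarray, db: float) -> np.ndarray:
    """Right-hand side of Eq. (4): sum_b' P(b1|b', tau') P(b'|b2, tau2) db'."""
    return P_1p @ P_p2 * db

# Agreement with the directly estimated P_12 can then be quantified, e.g.:
# err = np.nanmean(np.abs(ck_reconstruction(P_1p, P_p2, db) - P_12))
```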
Next, in the corresponding Fig. 3, we have verified again the CK condition of Eq. (4). Intuitively speaking (and somewhat informally), what we see in Fig. 2 is just a view 'from the top' of the 3-dimensional shape, while in Fig. 3 the orthogonal cuts are depicted. Again, we have compared these cuts through the conditional probability density functions for particular chosen values of the parameter \(b_{2}\), which can be seen at the top of each plot. As is evident, the cuts through the empirical probability density functions coincide rather well with the cuts through the theoretical probability density functions, providing good fits in all of the analyzed cases. Admittedly, only in case (b) for \(b_{2}=0\) [nT] the cuts
Figure 3: Comparison of cuts through \(P(b_{1},\,\tau_{1}|b_{2},\,\tau_{2})\) for the fixed value of the strength of the magnetic field total magnitude \(B=|\mathbf{B}|\) in the magnetosheath: (a) just behind the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP), with increments \(b_{2}\) with \(\tau_{1}=0.02\) s, \(\tau^{\prime}=0.0278\) s, and \(\tau_{2}=0.0356\) s in cases (a) and (c), and with \(\tau_{1}=0.2\) s, \(\tau^{\prime}=0.2625\) and \(\tau_{2}=0.325\) s in case (b).
points deviate from the lines in the tails, but this seems to be caused by data outliers, which can eventually be eliminated. It is necessary to mention that, after such a comparison for different values of \((\tau_{1},\tau^{\prime},\tau_{2})\), we have found that the CK condition of Eq. (4) is satisfied for \(b\) up to a scale of approximately \(100\Delta_{B}=0.78\) s in case (a), to about \(150\Delta_{B}=9.375\) s in case (b), and around \(40\Delta_{B}=0.312\) s in case (c), thus indicating that the stochastic fluctuations have Markov properties.
To verify Pawula's theorem, which states that if the fourth-order coefficient is equal to zero, then \(D^{(k)}(b,\tau)=0,\ k\geq 3\), it is necessary to estimate the \(D^{(1)}(b,\tau)\), \(D^{(2)}(b,\tau)\) and \(D^{(4)}(b,\tau)\) coefficients using our experimental data. The standard procedure for calculating these values is to use an extrapolation method, such as a piecewise linear regression, to estimate the respective limits in Eq. (6). However, as already mentioned in Section 3, similar results are obtained by simplifying the problem of finding these coefficients and using Eq. (8), which enables us to estimate these values from the adequately scaled \(M^{(k)}(b,\tau,\tau^{\prime})\) coefficients. In our situation, the time resolution \(\Delta_{B}\) is equal to \(7.8\) ms in cases (a) and (c), while in case (b) it is \(62.5\) ms. Thus, given the conditional probabilities \(P(b_{1},\tau_{1}|b_{2},\tau_{2})\), for \(0<\tau_{1}<\tau_{2}\), we have calculated these central moments directly from Eq. (7), using the obtained empirical data by counting the numbers \(N(b^{\prime},b)\) of occurrences of two fluctuations \(b^{\prime}\) and \(b\). Given that the errors of \(N(b^{\prime},b)\) can simply be estimated by \(\sqrt{N(b^{\prime},b)}\), it is possible, in a similar way, to calculate the errors for the conditional moments \(M^{(k)}(b,\tau,\tau^{\prime})\). Consequently, scaling these values according to Eq. (8), we have obtained the empirical KM coefficients. By examination of the \(M^{(k)}(b,\tau,\tau^{\prime})\) and \(D^{(k)}(b,\tau)\) coefficients, we can observe that they both exhibit the same dependence on \(b\).
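A minimal sketch of this estimation step is given below, assuming the standard definitions behind Eqs. (7) and (8): the conditional moments are averages of powers of the increment differences, conditioned on the value at the larger scale, and the KM coefficients follow from the \(1/(k!\,\Delta\tau)\) rescaling at the smallest available scale separation. The exact prefactors and the error propagation via \(\sqrt{N(b^{\prime},b)}\) should be taken from those equations; the bin count and sample-size cut below are illustrative choices only.

```python
import numpy as np
from math import factorial

def km_coefficients(b_small, b_large, d_tau, bins=40, k_max=4, min_count=10):
    """Conditional moments M^(k) and finite-scale KM coefficients D^(k).

    b_large: increments at the larger scale tau; b_small: the paired increments
    at the neighbouring scale separated by d_tau (conditioning is on b_large).
    """
    edges = np.linspace(b_large.min(), b_large.max(), bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which = np.clip(np.digitize(b_large, edges) - 1, 0, bins - 1)
    M = np.full((k_max + 1, bins), np.nan)
    for j in range(bins):
        sel = which == j
        if sel.sum() < min_count:          # skip poorly populated bins
            continue
        diff = b_small[sel] - b_large[sel]
        for k in range(1, k_max + 1):
            M[k, j] = np.mean(diff ** k)   # conditional moment in this bin
    # Finite-scale approximation of the limit in Eq. (6), as in Eq. (8):
    D = {k: M[k] / (factorial(k) * d_tau) for k in (1, 2, 4)}
    return centers, D
```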
The results of this analysis are shown in Fig. 4, where in the upper part we have depicted the first-order coefficient as a function of \(b\), while at the bottom we have shown the second- and fourth-order coefficients as functions of \(b\), for all three cases (a), (b), and (c). Moreover, for each case, we have provided the calculated confidence intervals (error bars). It is demonstrated that the fit for the \(D^{(1)}(b,\tau)\) coefficient is a linear function of \(b\) and that for \(D^{(2)}(b,\tau)\) is a quadratic function of \(b\), for \(\Delta_{B}=0.0078\) s in cases (a) and (c), and \(\Delta_{B}=0.0625\) s in case (b). In fact, we have checked that the same fits are reasonable up to even \(150\Delta_{B}\) for all three analyzed cases. This means that in this instance, there should be no difficulties with fitting the polynomials for different \(\Delta_{B}\).
As seen in the bottom part of Fig. 4 for cases (a) and (c), it is evident that Pawula's theorem is clearly satisfied. On the other hand, in case (b) it might not be so obvious. For instance, for \(b\approx-6.2\) nT, we can see that the value of \(D^{(4)}(b,\tau)\) is somewhat greater than zero. In this case, we can use a somewhat weaker version of this theorem, which states that it is sufficient to check whether \(D^{(4)}(b,\tau)\ll\left[D^{(2)}(b,\tau)\right]^{2}\), for all \(b\) (see Risken, 1996; Rinn et al., 2016). Thus, in this situation, we have \(\left[D^{(2)}(b,\tau)\right]^{2}\approx 1225\), which is significantly larger than \(D^{(4)}(b,\tau)\approx 1\), for \(b\approx-6.2\) nT. Therefore, it is reasonable to conclude that Pawula's theorem is sufficiently well fulfilled in all of the analyzed cases. Hence we can assume that the Markov process is described by the FP Equation (9).
In order to find the analytical solution of the FP Equation (9), we have proposed certain approximations of the lowest order KM coefficients.
Figure 4: The first and second limited-size Kramers–Moyal coefficients determined by the magnetic field increments \(b\) for a total strength of magnetic field \(B=|\mathbf{B}|\) in the magnetosheath: (a) just behind the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP). The dashed red lines show the best choice fits to the calculated values of \(D^{(1)}(b,\tau)\) and \(D^{(2)}(b,\tau)\) with \(D^{(4)}(b,\tau)=0\), according to the Pawula’s theorem.
As previously discussed (see Fig. 4), it is straightforward that \(D^{(1)}(b,\tau)\) exhibits a linear dependence, whereas \(D^{(2)}(b,\tau)\) displays a quadratic dependence on \(b\). Consequently, it is reasonable to assume the following parametrization:
\[\begin{cases}D^{(1)}(b,\tau)=-a_{1}(\tau)b,\\ D^{(2)}(b,\tau)=a_{2}(\tau)+b_{2}(\tau)b^{2}.\end{cases} \tag{13}\]
where the relevant parameters \(a_{1}>0,a_{2}>0\), and \(b_{2}>0\) depend on temporal scale \(\tau>0\). Moreover, it appears that all of these parameters exhibit a power-law dependence on temporal scale \(\tau\):
\[\begin{cases}a_{1}(\tau)=A\tau^{\alpha};\\ a_{2}(\tau)=B\tau^{\beta};\\ b_{2}(\tau)=C\tau^{\gamma},\end{cases} \tag{14}\]
where the values for all of the logarithmized parameters \(A,B,C\in\mathbb{R}\), as well as the \(\alpha,\beta,\gamma\in\mathbb{R}\) are given in Table 1.
It is important to emphasize that the functional dependencies of the coefficients \(a_{1}(\tau)\), \(a_{2}(\tau)\), and \(b_{2}(\tau)\) on \(\tau\) given by Eq. (14) are merely parametrizations of the empirical results. In fact, power-laws have been selected here because they describe the observed values with sufficient accuracy. Nevertheless, some alternative theoretical analyses may lead to a slightly different functional dependence (see Renner et al., 2001). Admittedly, it turned out that the values of the fitted parameters can be slightly different from those that fit exactly the solution of the FP Equation (9). Renner et al. (2001) have also highlighted the asymmetry of the fit of \(D^{(2)}(b,\tau)\) in \(b\), which is also present in our analysis (especially in case (c), and to a lesser degree in case (a)).
The obtained fits to the _MMS_ observations in the magnetosheath are depicted in Fig. 5, for each case (a), (b), and (c), showing the dependence of the KM coefficient parameters on scale \(\tau>0\). Since our data contain a multitude of relatively low values and a few exceedingly large values, which would render a linear graph rather unreadable, instead of using a standard linear graph we have decided to employ logarithmic scales for both the vertical and horizontal axes (a so-called \(\log-\log\) plot). Such a procedure is rather straightforward. For example, for the first row of Eq. (14), taking the logarithm of both sides one obtains \(\log(a_{1}(\tau))=\alpha\log(\tau)+\log(A)\), which is a special case of a linear function, with the exponent \(\alpha\) corresponding to the slope of the line. The value of \(\log(A)\) corresponds to the intercept of the \(\log(a_{1}(\tau))\)-axis, while the \(\log(\tau)\)-axis is intercepted at \(\log A/(-\alpha)\). We have opted for this approach to enhance the clarity of the presentation. Therefore, since we have used logarithmic scales on both axes, the respective power-laws appear as straight lines in Fig. 5. Similarly, the graphical representations that we have provided for all the parameters \(a_{1}\), \(a_{2}\), and \(b_{2}\) of Eqs. (13) and (14) are helpful for identifying correlations and determining the respective constants \(A\), \(B\), \(C\) and \(\alpha<0\), \(\beta>0\), \(\gamma<0\) in Table 1 (cf. Macek et al., 2023).
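Because each power law of Eq. (14) becomes a straight line in the \(\log-\log\) representation, the exponents and prefactors of Table 1 can be obtained by ordinary linear regression; a minimal sketch follows (the array names are placeholders, not taken from the paper's code).

```python
import numpy as np

def fit_power_law(tau, y):
    """Fit y = K * tau**s by linear regression of log10(y) on log10(tau)."""
    s, log10_K = np.polyfit(np.log10(tau), np.log10(y), 1)
    return s, log10_K    # slope -> exponent, intercept -> log10 of the prefactor

# e.g. alpha, log10_A = fit_power_law(taus, a1_values), and analogously for
# (beta, log10_B) from a2(tau) and (gamma, log10_C) from b2(tau), cf. Table 1.
```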
After performing a careful analysis of the _MMS_ magnetic field magnitude \(B\) data, our findings indicate that the power-law dependence is applicable for the values of: \(\tau\lesssim 100\Delta t_{B}=0.78\) s in case (a); \(\tau\lesssim 150\Delta t_{B}=9.375\) s in case (b); \(\tau\lesssim 50\Delta t_{B}=0.39\) s in case (c), and for some larger scales, say \(\tau\gtrsim\tau_{G}\), the shapes of the probability density functions appear to be closer to Gaussian. However, despite the satisfactory results obtained at these small kinetic scales, a more intricate functional dependence (possibly polynomial fits) is characteristic for much higher scales, in particular, in the inertial domain (Strumik & Macek, 2008a,b).
As a result of our investigations, we are able to obtain the analytical stationary solutions \(p_{s}(b)\) given by Eq. (12) following from the FP Equation (9), which can be expressed by a continuous kappa distribution (also known as Pearson's type VII distribution), which exhibits a deviation from the normal Gaussian distribution. The probability density function (PDF) of this distribution is of the form:
\[p_{s}(b)=\frac{N_{0}}{\left[1+\frac{1}{\kappa}\left(\frac{b}{b_{0}}\right)^{2}\right]^{\kappa}}, \tag{15}\]
where, for \(a_{2}(\tau)\neq 0,\ b_{2}(\tau)\neq 0\), we have a shape parameter \(\kappa=1+a_{1}(\tau)/\big{[}2b_{2}(\tau)\big{]}\) and \(b_{0}^{2}=a_{2}(\tau)/\big{[}b_{2}(\tau)+a_{1}(\tau)/2\big{]}\), while \(N_{0}=p_{s}(0)\) satisfies the normalization \(\int_{-\infty}^{\infty}p_{s}(b)db=1\). By substituting \(p_{s}(b)\) into this integral we find that:
\[N_{0}=\frac{\Gamma(\kappa)}{\Gamma(\kappa-\frac{1}{2})b_{0}\sqrt{\pi\kappa}}, \tag{16}\]
where, this time, \(\Gamma(\kappa)=\int_{0}^{\infty}b^{\kappa-1}\mathrm{e}^{-b}db\), \(\mathrm{Re}(\kappa)>0\), is the gamma function (Euler integral of the second kind), defined for all complex numbers with a positive real part.
It is worth noting that the kappa distribution, as represented by Eq. (15), approaches the normal Gaussian (Maxwellian) distribution for large values of the shape parameter \(\kappa\). To be precise, as \(\kappa\to\infty\), the following well-known formula is approximately satisfied:
\[\lim_{\kappa\to\infty}p_{s}(b)=N_{0}\mathrm{exp}\Big{(}-\frac{b^{2}}{2\sigma^ {2}}\Big{)} \tag{17}\]
| Case | \(\log_{10}(A)\) | \(\alpha\) | \(\log_{10}(B)\) | \(\beta\) | \(\log_{10}(C)\) | \(\gamma\) |
| --- | --- | --- | --- | --- | --- | --- |
| (a) | \(0.6989\pm 0.0225\) | \(-1.1191\pm 0.0089\) | \(-0.4946\pm 0.1259\) | \(1.1631\pm 0.0498\) | \(0.5854\pm 0.0706\) | \(-1.7325\pm 0.0279\) |
| (b) | \(0.1837\pm 0.0139\) | \(-1.0417\pm 0.0100\) | \(-0.4666\pm 0.0160\) | \(0.5425\pm 0.0116\) | \(0.4183\pm 0.0163\) | \(-1.2233\pm 0.0118\) |
| (c) | \(0.7791\pm 0.0079\) | \(-1.1055\pm 0.0057\) | \(-0.5893\pm 0.0126\) | \(1.0002\pm 0.0091\) | \(0.5011\pm 0.0274\) | \(-1.7646\pm 0.0199\) |

Table 1: The fitted parameters for power-law dependence of the first- and second-order Kramers–Moyal (KM) coefficients of Eqs. (13) and (14) as functions of scale \(\tau\)
with the scaling parameter \(b_{0}\) related to the standard deviation \(\sigma=b_{0}/\sqrt{2}\). This time the parameter \(N_{0}=p_{s}(0)\) satisfies the elementary normalization \(N_{0}=\frac{1}{\sigma\sqrt{2\pi}}\).
The numerical results of fitting the empirical _MMS_ data to the given distributions and determining the relevant parameters of Eq. (15) are as follows: \(\kappa=1.5179\), \(b_{0}=1.9745\), and \(N_{0}=0.68438\) for \(B\) in case (a); \(\kappa=1.3758\), \(b_{0}=2.6955\), and \(N_{0}=0.34375\) in case (b); with \(\kappa=3.5215\), \(b_{0}=1.7313\), and \(N_{0}=1.1866\) in case (c). These values of \(\kappa\) would correspond to the nonextensivity parameter of the generalized (Tsallis) entropy \(q=1-1/\kappa\) (e.g., Burlaga & Viñas, 2005). In our case this is given by \(q=\frac{a_{1}(\tau)}{a_{1}(\tau)+2b_{2}(\tau)}\) and is equal to \(0.341\) in case (a), \(0.273\) in case (b), and \(0.716\) in case (c). The extracted values of the \(\kappa\) and \(q\) parameters provide robust measures of the departure of the system from equilibrium. We see that these values are similar to \(q\sim 0.5\) for \(\kappa\sim 2\) reported for the Parker Solar Probe (_PSP_) data by Benella et al. (2022).
Now, by using the system of Eqs. (13) with Eq. (9), we have arrived at the following formula (Macek et al., 2023):
\[\Big(a_{2}(\tau)+b_{2}(\tau)b^{2}\Big)\frac{\partial^{2}P(b,\tau)}{\partial b^{2}}+\Big(a_{1}(\tau)+4b_{2}(\tau)\Big)b\frac{\partial P(b,\tau)}{\partial b}+\frac{\partial P(b,\tau)}{\partial\tau}+\Big(a_{1}(\tau)+2b_{2}(\tau)\Big)P(b,\tau)=0. \tag{18}\]
This implies that the FP Equations (11) and (18) are second-order parabolic partial differential equations. Thus, through the implementation of a numerical Euler integration scheme, which has been verified against the stationary solution \(\frac{\partial P(b,\tau)}{\partial\tau}=0\), we are able to
Figure 5: Linear dependence of the parameters \(a_{1},a_{2},b_{2}\) (see Eq. (14)) on the double logarithmic scale \(\tau\) (log-log plot), for the magnetic field overall intensity \(B=|\mathbf{B}|\) in the magnetosheath: (a) just behind the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP). The dashed red lines, with the standard error of the estimate illustrated by gray shade, show the best choice fits to the calculated values.
successfully solve the non-stationary FP equation numerically. Our results are in line with those obtained by Rinn et al. (2016) using the statistical modeling package in programming language R.
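A minimal sketch of such an explicit Euler step for Eq. (18) is shown below; it is only illustrative (the production scheme may differ in discretisation details), the grid spacing and \(d\tau\) must be small enough for the explicit scheme to remain stable, and the coefficients \(a_{1}(\tau)\), \(a_{2}(\tau)\), \(b_{2}(\tau)\) are to be evaluated from the power laws of Eq. (14) at every step. The stationary case \(\partial P/\partial\tau=0\) provides the consistency check mentioned above.

```python
import numpy as np

def euler_step(P, b, a1, a2, b2, d_tau):
    """One explicit Euler step of Eq. (18): evolve P(b, tau) from tau to tau - d_tau."""
    db = b[1] - b[0]
    P_b = np.gradient(P, db)      # dP/db
    P_bb = np.gradient(P_b, db)   # d^2P/db^2
    rhs = ((a2 + b2 * b**2) * P_bb
           + (a1 + 4.0 * b2) * b * P_b
           + (a1 + 2.0 * b2) * P)
    # Eq. (18) reads dP/dtau = -rhs, so marching towards smaller tau adds +rhs.
    return P + d_tau * rhs
```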
Figure 6 shows the findings resulting from our analysis based on the _MMS_ data. Here we compare the solutions of the FP Equation (9) with the empirical probability density functions of \(P(b,\tau)\): (a) near the bow shock (BS), (b) inside the magnetosheath (SH), and (c) near the magnetopause (MP) for various scales \(\tau\) (not greater than \(\tau_{G}\)). The displayed plotted curves, in each case, are as follows: the stationary solution (denoted by open circles), the non-stationary solutions (marked with dashed lines), and the empirical PDFs (indicated with various colored continuous lines).
Further, in cases (a) and (c) the corresponding time scales are \(\tau=0.0078,0.04,0.078,0.12,0.2,0.39\), and \(0.78\) s, whereas in case (b) these scales are \(\tau=0.0625,0.3125,0.625,0.9375,1.5625,3.125\), and \(9.375\) s. The corresponding curves are shifted in the vertical direction from bottom to top for better clarity of presentation. It is also worth noting that we have used a semi-logarithmic scale, which is useful when dealing with data that cover a broad range of values. On this scale, the vertical axis is logarithmic (base 10), which means that the separation between the ticks on the graph is proportional to the logarithm of the PDF, while the horizontal \(b\)-axis is a standard linear scale, and the ticks are evenly spaced.
What is important to note from this picture are the peaked leptokurtic shapes of the PDFs and the corresponding stationary solutions. Namely, in case (a) the peak (with large kurtosis) is present for scales up to \(\sim 0.5\) s; in case (b) up to about \(\sim 3\) s; and in case (c) up to \(\sim 0.25\) s. Above these scales, in each case, the PDF becomes closer to Gaussian (i.e., of approximately parabolic shape on the graph with semi-logarithmic scales), as expected for large values of \(\kappa\). In case (c) we can see more jumps in the fluctuations, i.e., the curves are not so smooth. Fluctuations are quite evident in both the empirical curves and the theoretical solutions, so it seems that some numerical noise is present in the tails of the PDFs. Admittedly, reducing noise is a tricky issue, although the easiest way is to smooth artificially using a simple moving average. We have tried this procedure for \(n=1,2,3\) steps, and it appeared that the choice \(n=3\) is sufficient.
Figure 7 depicts finally the probability density functions (PDFs) of fluctuations of the strength of the magnetic field \(b_{\tau}\) rescaled by the standard deviations \(\sigma(b_{\tau})\) in the following way:
\[b_{\tau} \longrightarrow\frac{b_{\tau}}{\sigma(b_{\tau})}, \tag{19}\] \[\mathrm{PDF}(b_{\tau}) \longrightarrow\sigma(b_{\tau})\cdot\mathrm{PDF}(b_{\tau}). \tag{20}\]
In this way, we can define a master curve for the shape of the PDFs. Again, we have used the logarithmic scale on the vertical axis. We also see that the rescaled curves are consistent with the stationary solutions of Eq. (15), as marked with open circles in Fig. 6. It should be noted that all the curves in Fig. 7 are very close to each other for small scales. However, for larger \(\tau=50\) or \(100\Delta t_{B}\) these shapes deviate from the master curve and naturally tend to the well known Gaussian shape. We see that the shape of the PDFs obtained from the _MMS_ data exhibits a global _scale-invariance_ in the magnetosheath up to scales of \(\sim 2\) s. A similar collapse has also been reported with the _PSP_ data at subproton scales
Figure 6: The empirical probability density functions (various continuous colored lines) for a total strength of magnetic field \(B=|\mathbf{B}|\), which correspond to spectra in Fig. 1, compared with the non-stationary (dashed lines) and the stationary (open circles) solutions of the FP equation, for various time scales (shifted from bottom to top) \(\tau=0.0078,0.04,0.078,0.12,0.2,0.39\), and \(0.78\) s in cases (a) and (c), and \(\tau=0.0625,0.3125,0.625,0.9375,1.5625,3.125,9.375\) s in case (b).
(Benella et al., 2022). Figure 7 shows that fluctuations in the magnetosheath are described by a stochastic process. Admittedly, the mechanism generating these magnetic fluctuations at small kinetic scales is not known, but these results suggest some universal characteristics of the underlying processes. An alternative point of view has recently been proposed by Carbone et al. (2022).
## 5 Conclusions
Following our studies of space plasmas at large inertial scales (Strumik & Macek, 2008a,b), we have examined time series of the strength of magnetic fields in different regions of the Earth's magnetosheath, where the spectrum steepens at subproton scales (Macek et al., 2018). With the highest resolution available on the _MMS_, the data samples just after the bow shock and near the magnetopause are stationary, and for the somewhat lower resolution deep inside the magnetosheath the deviations from stationarity are small and could well be eliminated. Basically, in all these cases the stochastic fluctuations exhibit Markovian features. We have verified that the necessary Chapman-Kolmogorov condition is well satisfied, and the probability density functions are consistent with the solutions of this condition.
In addition, Pawula's theorem is also well satisfied, resulting in a Fokker-Planck equation reduced to drift and diffusion terms. Hence, this corresponds to a generalization of the Ornstein-Uhlenbeck process. Further, the lowest-order Kramers-Moyal coefficients have linear and quadratic dependence as functions of the magnetic field increments. In this way, the power-law distributions are well recovered throughout the entire magnetosheath. For some moderate scales we have the kappa distributions described by various peaked shapes with heavy tails. In particular, for large values of the kappa parameter these distributions reduce to the normal Gaussian distribution.
Similarly as for the _PSP_ data, the probability density functions of the magnetic fields measured onboard the _MMS_ rescaled by the respective standard deviations exhibit a universal global _scale-invariance_ on kinetic scales, which is consistent with the stationary solution of the Fokker-Planck equation. We hope that all these results, especially those reported at small scales, are important for a better understanding of the physical mechanism governing turbulent systems in space and laboratory.
## Acknowledgements
We thank Marek Strumik for discussions on the theory of Markov processes. We are grateful for the efforts of the entire _MMS_ mission, including development, science operations, and the Science Data Center at the University of Colorado. We benefited from the efforts of T. E. Moore as Project Scientist, and of C. T. Russell and the magnetometer team. We acknowledge B. L. Giles, Project Scientist, for information about the magnetic field instrument, and also D. G. Sibeck and M. V. D. Silveira for discussions during previous visits by W. M. M. to the NASA Goddard Space Flight Center.
This work has been supported by the National Science Centre, Poland (NCN), through grant No. 2021/41/B/ST10/00823.
Figure 7: A collapse of probability density functions of \(b_{\tau}\) (compare Fig. 6), which are scaled by the corresponding standard deviations (see Eqs. (19) and (20)), for the small time scales \(\tau\) stopping at approximately: \(\tau\sim 0.4\) s. in case (a), \(\tau\sim 2.0\) s in case (b), and \(\tau\sim 0.25\) s in case (c).
## Data Availability
The data supporting the results in this article are available through the MMS Science Data Center at the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder: [https://lasp.colorado.edu/mms/sdc/public/](https://lasp.colorado.edu/mms/sdc/public/). The magnetic field data from the magnetometer are available online from [http://cdaweb.gsfc.nasa.gov](http://cdaweb.gsfc.nasa.gov). The data have been processed using the statistical programming language R.
## ORCID iDs
W. M. Macek [https://orcid.org/0000-0002-8190-4620](https://orcid.org/0000-0002-8190-4620)
[http://www.ckb.waw.pl/~macek](http://www.ckb.waw.pl/~macek)
D. Wojcik [https://orcid.org/0000-0002-2658-6068](https://orcid.org/0000-0002-2658-6068)
|
2305.00547 | Toward Constructing a Continuous Logical Operator for Error-Corrected
Quantum Sensing | Error correction has long been suggested to extend the sensitivity of quantum
sensors into the Heisenberg Limit. However, operations on logical qubits are
only performed through universal gate sets consisting of finite-sized gates
such as Clifford+T. Although these logical gate sets allow for universal
quantum computation, the finite gate sizes present a problem for quantum
sensing, since in sensing protocols, such as the Ramsey measurement protocol,
the signal must act continuously. The difficulty in constructing a continuous
logical operator comes from the Eastin-Knill theorem, which prevents a
continuous signal from being both fault tolerant to local errors and
transverse. Since error correction is needed to approach the Heisenberg Limit
in a noisy environment, it is important to explore how to construct
fault-tolerant continuous operators. In this paper, a protocol to design
continuous logical z-rotations is proposed and applied to the Steane Code. The
fault tolerance of the designed operator is investigated using the
Knill-Laflamme conditions. The Knill-Laflamme conditions indicate that the
diagonal unitary operator constructed cannot be fault tolerant solely due to
the possibilities of X errors on the middle qubit. The approach demonstrated
throughout this paper may, however, find success in codes with more qubits such
as the Shor code, distance 3 surface code, [15,1,3] code, or codes with a
larger distance such as the [11,1,5] code. | Cameron Cianci | 2023-04-30T18:22:34Z | http://arxiv.org/abs/2305.00547v2 | # Toward Constructing a Continuous Logical Operator for Error-Corrected Quantum Sensing
###### Abstract
Error correction has long been suggested to extend the sensitivity of quantum sensors into the Heisenberg Limit. However, operations on logical qubits are only performed through universal gate sets consisting of finite-sized gates such as Clifford+T. Although these logical gate sets allow for universal quantum computation, the finite gate sizes present a problem for quantum sensing, since in sensing protocols, such as the Ramsey measurement protocol, the signal must act continuously. The difficulty in constructing a continuous logical operator comes from the Eastin-Knill theorem, which prevents a continuous signal from being both fault tolerant to local errors and transverse. Since error correction is needed to approach the Heisenberg Limit in a noisy environment, it is important to explore how to construct fault-tolerant continuous operators. In this paper, a protocol to design continuous logical z-rotations is proposed and applied to the Steane Code. The fault tolerance of the designed operator is investigated using the Knill-Laflamme conditions. The Knill-Laflamme conditions indicate that the diagonal unitary operator constructed cannot be fault tolerant solely due to the possibilities of X errors on the middle qubit. The approach demonstrated throughout this paper may, however, find success in codes with more qubits such as the Shor code, distance 3 surface code, [15,1,3] code, or codes with a larger distance such as the [11,1,5] code.
## 1 Introduction
Quantum sensors have found utility in a variety of fields, including commercial applications such as geoscience and mining [1]. There have been many recent studies examining the potential utility of error correction to improve the sensitivity of quantum sensors in noisy environments [2, 3, 4, 5, 6, 7, 8]. Error correction in quantum sensors promises to surpass the Standard Quantum Limit, where sensitivity scales as \(\frac{1}{\sqrt{t}}\), and instead approach the Heisenberg Limit, scaling as \(\frac{1}{t}\)[2]. This scaling is the best allowed by the laws of quantum mechanics.
Current studies into quantum error-corrected sensors propose codes which can correct the most prevalent type of noise in a system but are still vulnerable to other local errors [9]. For example, [6] utilized a code to correct relaxation in a quantum magnetometer, but the sensor designed is still vulnerable to single qubit phase errors. Although the paper proposes mitigating these phase errors by leveraging dynamical decoupling [10; 11], the designed sensor will realistically still accumulate uncorrected errors over time from random environmental fluctuations in the magnetic field. Therefore, this design will be reduced to the Standard Quantum Limit on time scales dictated by the strength of this environmental noise [2]. This can be addressed by using stronger error-correcting codes such as a distance 3 code, which has the ability to correct single qubit errors.
A common quantum sensing protocol is the Ramsey measurement protocol described below [12].
1. A sensor qubit begins in the state \(\left|0\right\rangle\).
2. A Hadamard gate is applied bringing the state to, \(H\left|0\right\rangle=\left|+\right\rangle\).
3. The signal is applied to the qubit, giving it a signal dependent phase, \(P_{L}(\phi)\left|+\right\rangle=P_{L}(\phi)\times\frac{1}{\sqrt{2}}(\left|0 \right\rangle+\left|1\right\rangle)=\frac{1}{\sqrt{2}}(\left|0\right\rangle+e^ {i\phi}\left|1\right\rangle)\).
4. A Hadamard gate is applied again, bringing the state to \(H\frac{1}{\sqrt{2}}(\left|0\right\rangle+e^{i\phi}\left|1\right\rangle)=\frac{ 1+e^{i\phi}}{2}\left|0\right\rangle+\frac{1-e^{i\phi}}{2}\left|1\right\rangle\).
5. Measuring in the z-basis, the probability of obtaining \(\left|1\right\rangle\) is \(|\frac{1-e^{i\phi}}{2}|^{2}\), from which \(\phi\) can be inferred.
The continuous phase gate \(P_{L}(\phi)\) acts on the computational basis states as,
\[P_{L}(\phi)=\begin{bmatrix}1&0\\ 0&e^{i\phi}\end{bmatrix} \tag{1}\]
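For orientation, the ideal (noise-free, single-qubit) protocol of steps 1-5 can be simulated classically; the sketch below is only illustrative, the number of shots is an arbitrary choice, and the simple inversion recovers \(\phi\) only within \([0,\pi]\) (up to the usual phase ambiguities).

```python
import numpy as np

rng = np.random.default_rng(0)

def ramsey_estimate(phi_true, shots=10_000):
    """Simulate the ideal Ramsey protocol on one qubit and infer phi."""
    p1 = np.abs((1 - np.exp(1j * phi_true)) / 2) ** 2   # step 5: P(|1>) = sin^2(phi/2)
    ones = rng.binomial(shots, p1)                        # z-basis measurement record
    return 2 * np.arcsin(np.sqrt(ones / shots))           # invert p1 = sin^2(phi/2)

print(ramsey_estimate(0.3))   # close to 0.3 for a large number of shots
```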
The Ramsey measurement protocol requires that there is a continuous symmetry around the z-axis of the qubit for \(P_{L}(\phi)\) to be fault tolerantly applied. As logical gate sets do not typically include any continuous gates, it is not straightforward to apply this protocol directly to an error-corrected logical qubit. Instead, current error-corrected sensing protocols leave the logical qubit vulnerable to certain local errors, only correcting the most prominent types of error. For example, these sensors often employ codes such as the bit flip or amplitude damping codes [6; 7; 8]. This design choice ultimately allows for transverse operators to generate the signal, for example, magnetic fields in flux tunable superconducting qubits [6].
The reason for this difficulty in designing error-corrected quantum sensors fault tolerant to single qubit errors comes from the Eastin-Knill theorem. This theorem states that no quantum error-correcting code that can correct local
errors can also have a continuous symmetry which acts transversely on the qubits [9]. This is proven by demonstrating that the set of fault tolerant gates on any local error-correcting code is finite and cannot have any continuous symmetries as a continuous symmetry would imply an infinite number of fault tolerant gates. Since a continuous symmetry is required in many sensing protocols such as Ramsey measurement shown above, the Eastin-Knill theorem complicates the design of error-corrected quantum sensors. This is the reason why current error-corrected quantum sensors leave a degree of freedom uncorrected and therefore preserve a continuous symmetry for the signal. However, as was proven in [2], the presence of any noise along this symmetry will make these sensors revert to the Standard Quantum Limit as they will no longer satisfy the HNLS criterion.
The Eastin-Knill theorem uncovers an interesting question in quantum sensing, is it possible to realize continuous logical operators for error-corrected sensing on a logical qubit? Therefore, our goal in Sections 3 and 4 will be to construct a non-transverse logical phase operator, \(P_{L}(\phi)\), acting on the logical subspace for the purpose of creating quantum error-corrected sensors. The difficulty constructing this operator is likely what has prevented prior exploration into correcting local errors in quantum sensors.
## 2 Arbitrary Diagonal Unitary Gate
One problem in creating a fault tolerant phase operator is that errors may occur between the gates constructing this operator. This would increase the number of possible errors, requiring a larger code which can recognize these new error syndromes. To circumvent this problem, we will consider diagonal unitary operators and demonstrate that they can be built from commuting gates which could be applied simultaneously. Additionally, restricting the operator to be diagonal greatly reduces its complexity from \((2^{N})^{2}\) to \(2^{N}\) degrees of freedom. We will also find that the requirements for creating a logical phase gate are simpler when restricted to a diagonal unitary.
This operator can be constructed from a single qubit z-axis rotation gate, \(R_{Z}(\phi)\), controlled by \(n\) qubits, where \(n\in\{0,1,...,N-1\}\) and \(N\) is the total number of qubits (ex. \(R_{Z_{1}}(\phi)\), \(C_{1}R_{Z_{2}}(\phi)\), \(C_{1}C_{3}R_{Z_{4}}(\phi)\)...). As these controlled-phase gates are all diagonal and therefore commute, it may be possible to realize them simultaneously and prevent errors from occurring between these gates. Whether these multi-qubit gates can be created in superconducting quantum circuits is a topic for future investigation, and we will focus only on constructing the logical operators.
Alternatively, other ways to efficiently create diagonal unitaries have been previously explored [13]. However, these diagonal unitaries are built from non-commuting single and two-qubit gates, increasing the number of distinct errors which can occur.
Next, we will programmatically construct an arbitrary diagonal unitary operator from controlled-phase gates. We begin by noting that, for any given \(N\), there are \(2^{N}\) degrees of freedom in a diagonal unitary, and \(2^{N}-1\) different
controlled-phase gates. We will eliminate the first diagonal entry through the application of a global phase, without any loss of generality.
We can now construct an arbitrary diagonal unitary operator through the following protocol.
1. Initialize an array to the Identity, in which we will store the constructed operator, \(U_{C}=\mathbb{1}\).
2. Let the index \(i\) loop through each diagonal entry of the desired operator \(U_{d}\).
    1. Convert the current index \(i\) into binary. This binary representation shows the basis state on which this entry will act (ex. \(i=9\implies\left|1001\right\rangle\)).
    2. Apply an \(R_{Z}(a)\) gate to one of the qubits in a \(\left|1\right\rangle\) state, controlled by all other qubits that are in a \(\left|1\right\rangle\) state. The angle \(a\) is chosen such that, when the gate is applied to the constructed operator, the current diagonal element obtains the desired phase, \((CC...R_{Z}(a)\times U_{C})[i][i]=U_{d}[i][i]\).
    3. Update the constructed operator with this new gate, \(U_{C}^{\prime}=CC...R_{Z}(a)\times U_{C}\).
3. Return the constructed operator \(U_{C}\), and the phases applied at each index.
This protocol will construct any desired diagonal unitary from commuting controlled-phase gates, as the list of applied phases at each index can be used to determine the gates applied. The correctness of this protocol can be proven through induction, as each gate affects only the current and later diagonal entries in the constructed operator. Each diagonal entry has a unique operator which can tune its phase without affecting previously considered entries, except for the first entry which can be removed through an application of a global phase. This unique operator is found by applying a phase gate controlled by the binary representation of the state corresponding to the index of the entry. Therefore, we can simply construct the desired operator by making greedy decisions at each entry. Now that the construction of diagonal unitary operators from commuting gates has been stated, we will put forward a simple example to clarify.
### Constructing a Simple Diagonal Unitary Operator
Here is an example of this protocol used to construct this unitary \(U_{d}\),
\[\left|00\right\rangle\quad\left|01\right\rangle\quad\left|10\right\rangle\quad \left|11\right\rangle\]
\[\begin{pmatrix}1&0&0&0\\ 0&e^{i\phi}&0&0\\ 0&0&e^{i2\phi}&0\\ 0&0&0&1\end{pmatrix}\]
First we loop through each diagonal entry. For each entry, the value can be changed by the phase gate controlled by the values of \(1\) in the binary representation of the index.
\[U_{d}\ket{01}=e^{i\phi}\ket{01}\implies R_{Z_{1}}(\phi) \tag{2}\]
which in turn gives our constructed operator (previously initialized to \(\mathbb{1}\)) as,
\[U_{C}=R_{Z_{1}}(\phi)\times\mathbb{1}=\begin{pmatrix}1&0&0&0\\ 0&e^{i\phi}&0&0\\ 0&0&1&0\\ 0&0&0&e^{i\phi}\end{pmatrix} \tag{3}\]
Next we find,
\[U_{d}\ket{10}=e^{i2\phi}\ket{10}\implies R_{Z_{2}}(2\phi) \tag{4}\]
This makes our constructed operator,
\[U_{C}=R_{Z_{1}}(\phi)\times R_{Z_{2}}(2\phi)\times\mathbb{1}=\begin{pmatrix}1& 0&0&0\\ 0&e^{i\phi}&0&0\\ 0&0&e^{i2\phi}&0\\ 0&0&0&e^{i3\phi}\end{pmatrix} \tag{5}\]
Lastly, viewing the final entry,
\[U_{d}\ket{11}=\ket{11} \tag{6}\]
Currently, our constructed operator gives the value,
\[U_{C}\ket{11}=e^{i3\phi}\ket{11} \tag{7}\]
This indicates that we must apply \(C_{1}R_{Z_{2}}(-3\phi)\), giving us the final operator,
\[U_{C}=R_{Z_{1}}(\phi)\times R_{Z_{2}}(2\phi)\times C_{1}R_{Z_{2}}(-3\phi)\times \mathbb{1}=\begin{pmatrix}1&0&0&0\\ 0&e^{i\phi}&0&0\\ 0&0&e^{i2\phi}&0\\ 0&0&0&1\end{pmatrix} \tag{8}\]
Which is exactly the desired operator, decomposed into two phase gates and one controlled-phase gate. As was discussed previously, these gates commute and could therefore be applied simultaneously to prevent errors from occurring between them.
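The greedy protocol of Section 2 can also be written down compactly in code; the sketch below is a minimal illustrative implementation (indices are taken with qubit 1 as the least significant bit, consistent with the example above), and it reproduces the gates \(R_{Z_{1}}(\phi)\), \(R_{Z_{2}}(2\phi)\), and \(C_{1}R_{Z_{2}}(-3\phi)\) just derived.

```python
import numpy as np

def decompose_diagonal(diag):
    """Greedy decomposition of a diagonal unitary into (controlled-)phase gates.

    Returns a list of (mask, angle): the gate applies e^{i*angle} to every basis
    state whose index j satisfies (j & mask) == mask, i.e. a phase rotation on one
    qubit controlled by the remaining qubits set in `mask`.
    """
    dim = len(diag)
    target = np.asarray(diag, dtype=complex) / diag[0]   # remove a global phase
    constructed = np.ones(dim, dtype=complex)
    gates = []
    for i in range(1, dim):
        angle = np.angle(target[i] / constructed[i])     # phase still missing here
        if not np.isclose(angle, 0.0):
            gates.append((i, angle))
            mask = (np.arange(dim) & i) == i
            constructed[mask] *= np.exp(1j * angle)
    return gates, constructed

phi = 0.7
gates, U = decompose_diagonal([1, np.exp(1j * phi), np.exp(2j * phi), 1])
# gates == [(1, phi), (2, 2*phi), (3, -3*phi)]  (angles up to multiples of 2*pi)
```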
## 3 Creating a Logical Phase Gate
Next we want to create a logical phase gate from a diagonal operator. To start, we must consider the code words of the chosen code on which we will be acting. Since our operator must function as a logical z-rotation gate, it must satisfy the two conditions,
\[P_{L}(\phi)\left|0\right\rangle_{L}=\left|0\right\rangle_{L} \tag{9}\]
\[P_{L}(\phi)\left|1\right\rangle_{L}=e^{i\phi}\left|1\right\rangle_{L} \tag{10}\]
These constraints are straightforward when applied to a diagonal operator, as we simply must ensure that all diagonal elements which are multiplied by the nonzero basis states in \(\left|1\right\rangle_{L}\) have a value of \(e^{i\phi}\) while all diagonal elements which are multiplied by the nonzero basis states in \(\left|0\right\rangle_{L}\) have a value of 1.
For an example, if the logical eigenstates of a desired code were,
\[\left|0\right\rangle_{L}=\left|000\right\rangle+\left|001\right\rangle+\left| 010\right\rangle+\left|100\right\rangle=\begin{bmatrix}1&1&1&0&1&0&0&0\end{bmatrix}\]
\[\left|1\right\rangle_{L}=\left|111\right\rangle+\left|110\right\rangle+\left| 101\right\rangle+\left|011\right\rangle=\begin{bmatrix}0&0&0&1&0&1&1&1\end{bmatrix}\]
These code words would restrict our diagonal logical phase operator to
\[P_{L}(\phi)=\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&e^{i\phi}&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&e^{i\phi}&0&0\\ 0&0&0&0&0&0&e^{i\phi}&0\\ 0&0&0&0&0&0&0&e^{i\phi}\end{pmatrix}\]
as this diagonal unitary uniquely results in,
\[P_{L}(\phi)\left|0\right\rangle_{L}=\left|0\right\rangle_{L} \tag{11}\]
\[P_{L}(\phi)\left|1\right\rangle_{L}=e^{i\phi}\left|1\right\rangle_{L} \tag{12}\]
Through application of the protocol from Section 2, we find this operator can be realized by the simultaneous application of the gates \(C_{1}R_{Z_{2}}(\phi)\), \(C_{1}R_{Z_{3}}(\phi)\), \(C_{2}R_{Z_{3}}(\phi)\), and \(C_{1}C_{2}R_{Z_{3}}(-2\phi)\).
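This decomposition can be checked numerically; the short sketch below (a minimal illustration, with qubit 1 taken as the least significant bit, consistent with Section 2.1) builds the diagonal fixed by the code words and compares it against the four quoted gates.

```python
import numpy as np

phi = 0.4
zero_L = [0b000, 0b001, 0b010, 0b100]   # support of |0>_L
one_L  = [0b111, 0b110, 0b101, 0b011]   # support of |1>_L

# Diagonal P_L(phi) fixed by Eqs. (11)-(12): 1 on supp(|0>_L), e^{i phi} on supp(|1>_L).
P = np.ones(8, dtype=complex)
P[one_L] = np.exp(1j * phi)

def cphase(mask, angle, dim=8):
    """Diagonal of a phase rotation on the qubits selected by `mask`."""
    d = np.ones(dim, dtype=complex)
    d[(np.arange(dim) & mask) == mask] *= np.exp(1j * angle)
    return d

G = (cphase(0b011, phi) * cphase(0b101, phi) *       # C1 R_Z2, C1 R_Z3
     cphase(0b110, phi) * cphase(0b111, -2 * phi))   # C2 R_Z3, C1 C2 R_Z3
print(np.allclose(G, P))   # True
```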
### Ambiguous Entries
In most codes, however, code words do not include a superposition of every basis state. This leaves ambiguous or unconstrained degrees of freedom in the constructed operator. For example, consider a code with the code words \(\ket{0}_{L}=\ket{00}\) and \(\ket{1}_{L}=\ket{11}\). This leaves an ambiguous logical phase operator,
\[P_{L}(\phi)=\begin{pmatrix}1&0&0&0\\ 0&a&0&0\\ 0&0&b&0\\ 0&0&0&e^{i\phi}\end{pmatrix} \tag{13}\]
We can therefore tune these variables \(a\) and \(b\) as needed. For another more applicable example, the Steane code logical states include a superposition of 8 states out of 128 basis states.
\[\ket{0}_{L}=\frac{1}{\sqrt{8}}(\ket{0000000}+\ket{1010101}+\ket{01 10011}+\ket{1100110}\\ +\ket{0001111}+\ket{1011010}+\ket{0111100}+\ket{1101001}) \tag{14}\]
This logical code word restricts only 8 of the 128 diagonal values of the logical operator. The \(\ket{1}_{L}\) state similarly restricts 8 more leaving 112 tunable values. In the next section, we will consider constraining these values in the Steane code in an attempt to make our logical phase operator fault tolerant through satisfying the Knill-Laflamme conditions.
## 4 The Fault Tolerance of Designed Logical Phase Gates
Now that we can construct a diagonal logical phase operator for any error-correcting code given its logical code words, we may now test if the constructed logical operator is fault tolerant. This can be done by satisfying the Knill-Laflamme conditions, which are both sufficient and necessary for error correction [14].
The Knill-Laflamme conditions for a code with code words \(\ket{W_{\sigma}}\), fault tolerant to errors \(K_{i}\in\{K_{1},K_{2},...,K_{n}\}\), are
\[\bra{W_{\sigma}}K_{l}^{\dagger}K_{k}\ket{W_{\sigma^{\prime}}}=\alpha_{lk} \delta_{\sigma\sigma^{\prime}} \tag{15}\]
The coefficients \(\alpha_{lk}\) must have no dependence on \(\sigma\) or \(\sigma^{\prime}\). When considering local errors, the Kraus Operators \(K_{i}\) are the single qubit Pauli gates (X, Y, Z, and I).
Assuming we want to make our logical phase operator fault tolerant, we need to expand the Kraus operators such that we account for errors taking place both before and after the application of the logical operator. Since we can construct this operator from simultaneously applied commuting gates as shown
in Section 2, we will not consider errors occurring between gates constructing the logical operator. With \(P_{L}(\phi)\) as a logical operator, we need to expand the Knill-Laflamme conditions to the following four equations.
\[\left\langle W_{\sigma}\right|P_{L}^{\dagger}(\phi)K_{l}^{\dagger}K_{k}P_{L}( \phi)\left|W_{\sigma^{\prime}}\right\rangle=\alpha_{lk}\delta_{\sigma\sigma^{ \prime}} \tag{16}\]
\[\left\langle W_{\sigma}\right|K_{l}^{\dagger}P_{L}^{\dagger}(\phi)K_{k}P_{L}( \phi)\left|W_{\sigma^{\prime}}\right\rangle=\beta_{lk}\delta_{\sigma\sigma^{ \prime}} \tag{17}\]
\[\left\langle W_{\sigma}\right|P_{L}^{\dagger}(\phi)K_{l}^{\dagger}P_{L}(\phi)K _{k}\left|W_{\sigma^{\prime}}\right\rangle=\beta_{lk}\delta_{\sigma\sigma^{ \prime}} \tag{18}\]
\[\left\langle W_{\sigma}\right|K_{l}^{\dagger}P_{L}^{\dagger}(\phi)P_{L}(\phi) K_{k}\left|W_{\sigma^{\prime}}\right\rangle=\gamma_{lk}\delta_{\sigma\sigma^{ \prime}} \tag{19}\]
A logical phase gate which satisfies these conditions will additionally be fault tolerant to single qubit errors. The error detection and correction operators can then be derived from the Knill-Laflamme equations [14].
We must now return our attention to the tunable elements of the logical phase unitary noted in Section 3.1. We will attempt to satisfy the Knill-Laflamme conditions shown in equations 16, 17, 18, and 19 by using these values.
Consider the simple example explored in section 3.1 with code words \(\left|0\right\rangle_{L}=\left|00\right\rangle\) and \(\left|1\right\rangle_{L}=\left|11\right\rangle\), in the presence of \(X_{1}\) errors, \(K_{i}=\left\{X_{1}\right\}\). Equations 16, 17, 18, and 19 require \(a=1\) and \(b=e^{i\phi}\), making the following logical phase gate tolerant to these \(X_{1}\) errors,
\[P_{L}(\phi)=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&e^{i\phi}&0\\ 0&0&0&e^{i\phi}\end{pmatrix} \tag{20}\]
As can be seen by applying the protocol from section 2, this is simply a z-axis rotation gate on the second qubit \(R_{Z_{2}}(\phi)\). Now we will attempt to apply this approach to a more powerful error-correcting code, the Steane Code.
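Before turning to the Steane code, the four extended conditions (Eqs. 16-19) for this two-qubit example can be verified numerically. The sketch below is purely illustrative (qubit 1 is again taken as the least significant bit); it checks that, for the error set \(\{I,X_{1}\}\), every condition matrix is proportional to the identity on the logical subspace.

```python
import numpy as np

phi = 0.9
I2, X = np.eye(2), np.array([[0, 1], [1, 0]])
X1 = np.kron(I2, X)                                        # X on qubit 1 (LSB)
P = np.diag([1, 1, np.exp(1j * phi), np.exp(1j * phi)])    # Eq. (20)
W = [np.eye(4)[0], np.eye(4)[3]]                           # |0>_L = |00>, |1>_L = |11>
Pd = P.conj().T

def logical_block(A):
    """<W_sigma| A |W_sigma'> as a 2x2 matrix."""
    return np.array([[wl.conj() @ A @ wr for wr in W] for wl in W])

ops = {"I": np.eye(4), "X1": X1}
for Kl in ops.values():
    for Kk in ops.values():
        for A in (Pd @ Kl.conj().T @ Kk @ P,    # Eq. (16)
                  Kl.conj().T @ Pd @ Kk @ P,    # Eq. (17)
                  Pd @ Kl.conj().T @ P @ Kk,    # Eq. (18)
                  Kl.conj().T @ Pd @ P @ Kk):   # Eq. (19)
            M = logical_block(A)
            assert np.allclose(M[0, 1], 0) and np.allclose(M[1, 0], 0)
            assert np.allclose(M[0, 0], M[1, 1])
print("Extended Knill-Laflamme conditions hold for {I, X1}.")
```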
## 5 Results in the Steane Code
The Steane Code is one of the simplest and most well-studied error-correcting codes of distance 3, meaning it can correct any single local error [15]. Since this code can correct local errors, the Eastin-Knill theorem requires the signal to be non-transverse. However, the approach designed in Sections 2, 3, and 4 is not restricted to transverse operators, and therefore is not forbidden from creating a fault tolerant continuous operator. We will now apply our process for making logical phase gates to the Steane Code.
Using the code words of the Steane code (shown below), we restrict the corresponding diagonal elements of our operator as shown in Section 3.
\[\left|0\right\rangle_{L}=\frac{1}{\sqrt{8}}(\left|0000000\right\rangle +\left|1010101\right\rangle+\left|0110011\right\rangle+\left|1100110\right\rangle\\ +\left|0001111\right\rangle+\left|1011010\right\rangle+\left|0111 100\right\rangle+\left|1101001\right\rangle) \tag{21}\]
\[\left|1\right\rangle_{L}=\frac{1}{\sqrt{8}}(\left|1111111\right\rangle+\left|01 01010\right\rangle+\left|1001100\right\rangle+\left|0011001\right\rangle\\ +\left|1110000\right\rangle+\left|0100101\right\rangle+\left|1000 011\right\rangle+\left|0010110\right\rangle) \tag{22}\]
The diagonal elements of the desired logical operator \(U_{d}\) must behave as,
\[U_{d}\left|0\right\rangle_{L}=\left|0\right\rangle_{L} \tag{23}\]
\[U_{d}\left|1\right\rangle_{L}=e^{i\phi}\left|1\right\rangle_{L} \tag{24}\]
This leaves 112 unconstrained degrees of freedom in the logical phase operator. However, when applying the Knill-Laflamme conditions as shown in Section 4, it is found that the following condition is unsatisfiable.
\[\left\langle W_{0}\right|X_{3}P_{L}^{\dagger}(\phi)X_{3}P_{L}(\phi)\left|W_{0 }\right\rangle=\left\langle W_{1}\right|X_{3}P_{L}^{\dagger}(\phi)X_{3}P_{L}( \phi)\left|W_{1}\right\rangle \tag{25}\]
This can be interpreted as an error on the middle qubit of the Steane code before \(P_{L}(\phi)\), which is indistinguishable from an error after the application of \(P_{L}(\phi)\) when restricting \(P_{L}(\phi)\) to be diagonal.
More precisely, the value of \(\beta\) in \(\left\langle W_{\sigma}\right|X_{3}P_{L}^{\dagger}(\phi)X_{3}P_{L}(\phi)\left| W_{\sigma^{\prime}}\right\rangle=\beta\delta_{\sigma\sigma^{\prime}}\) changes sign based on the value of \(\sigma\) and \(\sigma^{\prime}\). This indicates that an \(X_{3}\) error before \(P_{L}(\phi)\) changes the state differently than an \(X_{3}\) error after \(P_{L}(\phi)\), but both errors are recognized by the same error syndromes and are indistinguishable. Therefore, this approach unfortunately does not succeed in creating a logical phase operator in the Steane code.
## 6 Future Directions
Although this approach does not succeed at creating a fault tolerant logical phase operator in the Steane code, it is possible that this approach may succeed when applied to larger codes.
The \([11,1,5]\) code is typically tolerant to two qubit errors and may still be tolerant to a single local error when designing a continuous logical operator. Due to the increased code distance, it is possible that even if an error propagates through the multi-qubit gates of the logical operator, the larger distance of this code may still be able to correct these errors.
Additionally, the Shor Code and the distance 3 surface code [16, 17] have a higher qubit count, which may be able to accommodate more error syndromes
and correct more errors than the Steane code. Therefore, diagonal operators in these codes may still be fault tolerant.
Furthermore, the diagonal restriction on the logical operator could be lifted to allow for more tunable degrees of freedom. However, when considering this, it is important to remember that the gates composing this logical operator may not commute, meaning gates may not be applied simultaneously and errors may occur between them. This will likely result in further issues for the fault tolerance of the logical operator.
Also, as the Solovay-Kitaev theorem allows for universal computation to be achieved from Clifford+T gates in \(O(m\log^{c}(m/\epsilon))\)[18], a logarithmic speedup may be possible using a continuous operator instead of the T gate. However, a logarithmic speedup is not often significant compared to the quadratic or exponential speedups commonly provided by quantum algorithms.
One of the most interesting outcomes may be that it is impossible to construct a fault tolerant continuous logical operator in error-correcting codes that can be decomposed into physically realizable gates. If this is the case, it would potentially indicate the presence of new and interesting theorems for error correction and quantum sensing.
|
2303.18086 | Differentially Private Stream Processing at Scale | We design, to the best of our knowledge, the first differentially private
(DP) stream aggregation processing system at scale. Our system -- Differential
Privacy SQL Pipelines (DP-SQLP) -- is built using a streaming framework similar
to Spark streaming, and is built on top of the Spanner database and the F1
query engine from Google.
Towards designing DP-SQLP we make both algorithmic and systemic advances,
namely, we (i) design a novel (user-level) DP key selection algorithm that can
operate on an unbounded set of possible keys, and can scale to one billion keys
that users have contributed, (ii) design a preemptive execution scheme for DP
key selection that avoids enumerating all the keys at each triggering time, and
(iii) use algorithmic techniques from DP continual observation to release a
continual DP histogram of user contributions to different keys over the stream
length. We empirically demonstrate the efficacy by obtaining at least
$16\times$ reduction in error over meaningful baselines we consider. We
implemented a streaming differentially private user impressions for Google
Shopping with DP-SQLP. The streaming DP algorithms are further applied to
Google Trends. | Bing Zhang, Vadym Doroshenko, Peter Kairouz, Thomas Steinke, Abhradeep Thakurta, Ziyin Ma, Eidan Cohen, Himani Apte, Jodi Spacek | 2023-03-31T14:23:48Z | http://arxiv.org/abs/2303.18086v3 | # Differentially Private Stream Processing at Scale
###### Abstract
We design, to the best of our knowledge, the first differentially private (DP) stream processing system at scale. Our system - _Differential Privacy SQL Pipelines (DP-SQLP)_ - is built using a streaming framework similar to Spark streaming, and is built on top of the Spanner database and the F1 query engine from Google.
Towards designing DP-SQLP we make both algorithmic and systemic advances, namely, we (i) design a novel DP key selection algorithm that can operate on an unbounded set of possible keys, and can scale to one billion keys that users have contributed, (ii) design a preemptive execution scheme for DP key selection that avoids enumerating all the keys at each triggering time, and (iii) use algorithmic techniques from DP continual observation to release a continual DP histogram of user contributions to different keys over the stream length. We empirically demonstrate the efficacy by obtaining at least \(16\times\) reduction in error over meaningful baselines we consider.
## 1 Introduction
Analysis of streaming data with differential privacy (DP) [15] has been studied from the initial days of the field [10, 16], and this has been followed up in a sequence of works that include computing simple statistics [36], to machine learning applications (a.k.a. online learning) [1, 25, 26, 39, 48]. While all of these works focus on the abstract algorithmic design for various artifacts of streaming data processing, to the best of our knowledge, none of them focus on designing a scalable stream processing system. In this work, we primarily focus on designing a scalable DP stream processing system, called Differential Privacy SQL Pipelines (DP-SQLP), and make algorithmic advances along the way to cater to the scalability needs of it. DP-SQLP is implemented using a streaming framework similar to Spark streaming [47], and is built on top of the Spanner database [12] and F1 query engine [38] from Google.
In this paper we consider a data stream to be an unbounded sequence of tuples of the form (key, value, timestamp, user_id) that gets generated continuously in time. We also have a discrete set of times (a.k.a. _triggering times_) \(\texttt{Tr}=[t_{1}^{\texttt{tr}},t_{2}^{\texttt{tr}},...,t_{T}^{\texttt{tr}}]\). The objective is to output the sum of all the values for each of the keys at each time \(t_{i}^{\texttt{tr}}\), while preserving \((\varepsilon,\delta)\)-DP [15] over the entire output stream with respect to all of the contributions with the same user_id. Although prior research has extended simple one-shot DP algorithms to the streaming setting [9, 10, 16], designing an at-scale DP-streaming system using off-the-shelf algorithms is challenging because of the following reasons:
1. **Unknown key space:** A data stream processing system can only process the data that has already arrived. For example, keys for a GROUP BY operation are not known in advance; instead we discover
new keys as they arrive. To ensure \((\varepsilon,\delta)\)-DP one has to ensure the set of keys for which the statistics are computed is _stable_ to change of an individual user's data. That is, we can only report statistics for a particular key when enough users have contributed to it; to ensure DP the threshold for reporting a key must be randomized.
2. **Synchronous execution:** The execution of the streaming system is driven by the data stream. That is, we must process the data as it arrives, and cannot run asynchronously at times when there is nothing to trigger execution. We refer to the times when our system runs as _triggering times_. Furthermore, typically, at each triggering time, only the keys that appeared since the last triggering time are processed. However, this is problematic for DP - if we only output a key when it has appeared in the most recent event time window, then this potentially leaks information. Naively, to avoid this, one has to process all the keys at each triggering time, which is computationally prohibitive.
3. **Large number of observed keys and events:** A fundamental challenge is scalability. The system should be able to handle millions of updates per second from a data stream with billions of distinct keys.
4. **Effective user contribution bounding:** In real applications, each person may contribute multiple records to the data stream at different times. Providing "event-level DP" (where a single action by a person/user is protected) does not provide sufficient privacy protection. In this work, we provide "user-level DP", where _all_ the actions by a person/user is protected simultaneously. To provide user-level DP, one has to bound the contribution of each user, and that eventually introduces bias on the statistics that get computed. But contribution bounding controls the variance of the noise we must add to ensure DP. A natural (and unavoidable [4, 28, 33]) challenge is to decide on the level of contribution bounding to balance this bias and variance.
5. **Streaming release of statistics:** One has to output statistics at every triggering time. If we treat each triggering time as an independent DP data release, then the privacy cost grows rapidly with the number of releases. Alternatively, to attain a fixed DP guarantee, the noise we add at each triggering time must grow polynomially with the number of triggering times. This is impractical when the number of triggering times is large. Thus the noise we add to ensure DP is not independent across triggering times. This helps in drastically reducing the total noise introduce for a fixed DP guarantee.
In our design of DP-SQLP we address these challenges, either by designing new algorithms, or by implementing existing specialized algorithms. This is our main contribution. To the best of our knowledge, _we provide the first at-scale differentially private distributed stream processing system._
**Motivation for DP-SQLP:** Data streams appear commonly in settings like Web logs, spatio-temporal GPS traces [5], mobile App usages [30], data generated by sensor networks and smart devices [32], live traffic in maps [45], cardiovascular monitoring [42], real-time content recommendation [37], and pandemic contact tracing [11]. Almost all of these applications touch sensitive user data on a continual basis. Calandrino et al. [7] demonstrated that continual release of statistics about individuals can act as a strong attack vector for detecting the presence/absence of a single user (in the context of collaborative recommendation systems). Hence, it is imperative for a streaming system to have rigorous privacy protections. In this work we adhere to differential privacy. For more discussion on the type of streams we consider, see a detailed survey in [24].
**Our Contributions:** As mentioned earlier, our main contribution is overcoming multiple challenges to build a distributed DP stream processing system that can handle large-scale industrial workloads.
* **Private key selection:** A priori, the set of possible keys is unbounded, so our system must identify a set of relevant keys to track. To protect privacy, we cannot identify a particular key based on the contributions of a single user. The streaming setting adds two additional complications: (a) The existence of each key is only known when it is observed in a data record, and (b) the privacy leakage due to continually releasing information about any particular key increases the DP cost due to
composition [18]. To address these challenges, we design a novel algorithm (Algorithm 2) that couples "binary tree aggregation" [10, 16, 23] (a standard tool for continual release of DP statistics that only accumulates privacy cost that is poly-logarithmic in the number of aggregate releases from the data stream) with a thresholding scheme that allows one to only operate on keys that appear in at least \(\mu>0\) user records. To further minimize privacy leakage, we employ a variance reduced implementation of the binary tree aggregation protocol [23]. (A minimal sketch of the basic tree-aggregation primitive is given after this list.)
* **Preemptive execution:** In a real system, since we may track a large number of keys, it is not scalable to scan through all of the keys each time the system is invoked. Thus we design a new algorithm (Algorithm 4) that only runs on the keys that have appeared between the current and previous triggering times. The idea is to _predict_ when a key will be released in advance, rather than checking at each triggering time whether it should be released now. That is, whenever we observe a given key, we simulate checking the release condition for the rest of triggering times assuming no further updates to key. In the future we only check for key at any triggering time if either of the two conditions happen: (i) key appears in a fresh microbatch in the data stream, or (ii) the earlier simulation predicted a release for that time. By doing so, we reduce the expensive I/O and memory cost, with little CPU overhead. This idea is motivated by the caching of pages in the operating systems literature.
* **Empirical evaluation:** We provide a thorough empirical evaluation of our system. We consider a few natural baselines that adapt one-shot DP algorithms to stream data processing (e.g., a repeated differential privacy query). At (\(\varepsilon=6,\delta=10^{-9}\))-DP, we observed up to 93.9% error reduction, and the number of retained keys is increased by 65 times when comparing DP-SQLP with the baselines. Furthermore, through our scalability experiments we show that DP-SQLP can handle billions of keys without incurring any significant performance hit. Obtaining the above results not only required the algorithmic advancements mentioned earlier, they also required appropriate contribution bounding for each user's keys, and the use of a variance-reduced version of the classic _tree aggregation mechanism_[10, 16, 23] both for key selection and statistic release. A surprising fact is that the degree to which a user's contribution needs to be bounded seemed to vary significantly based on the error measure we chose. We leave it as an open problem to exploit the recent progress in hyperparameter selection with DP [35] and choose the appropriate threshold for user contribution bounding in an automated, differentially private fashion.
In the following, we formally define the problem, and delve deeper into relevant related works.
### Problem Statement
Let \(D\) be a data stream defined by an unbounded sequence of records, i.e., \(D=[d_{1},d_{2},...)\), where each record \(d_{i}\) is a tuple (key, value, timestamp\(\,t_{i}\), user_id). A common query pattern in data analytics is the unknown domain histogram query. For example, consider a web service that logs user activities: each record is a URL click that contains (URL, user_id, timestamp). An analyst wants to know the number of clicks on each URL for each day up to today. An SQL query to generate this histogram is presented in Listing 1.
```
SELECT URL, TO_DATE(timestamp) AS date, COUNT(*) AS count
FROM web_logs
GROUP BY URL, TO_DATE(timestamp)
```
Listing 1: Single histogram query
In Listing 1, the keys are (URL, date) denoted by key1, and the count is an aggregation column denoted by \(m\).
Footnote 1: Throughout the paper, key and k are used interchangeably.
When querying a growing database or data stream, the above-mentioned query only shows a snapshot at a certain date. In stream data processing, we use event-time windows to determine which records in the _event time domain_ to process, and triggers to define when in the _processing time domain_ the results of groupings are emitted [2]. Let \(W\) denote the event-time windows, \(W=[w_{1},w_{2},...)\). Each event-time window is defined by a starting time and an end time, \(w_{i}=(t_{i,s},t_{i,e})\). \(D_{w_{i}}\subseteq D\) contains all records that can be assigned to window \(w_{i}\), i.e., the timestamp \(t\) of each record satisfies \(t_{i,s}\leq t<t_{i,e}\). Let Tr denote a set of triggering times in the processing time domain, \(\texttt{Tr}=[tr_{1},tr_{2},...)\). We assume the triggering times are predefined and independent of the dataset2. The streaming system incrementally processes \(D_{w_{i}}\) at the triggering times \(\texttt{Tr}_{w_{i}}=\{tr_{i,s},...,tr_{i,e}\}\subset\texttt{Tr}\). Due to the time domain skew [2], \(t_{i,s}\leq tr_{i,s}\) and \(t_{i,e}\leq tr_{i,e}\leq t_{i,e}+t\), where \(t\) is the maximum delay that the system allows for late-arriving records. Our goal is to release the histogram for every sub-stream \(D_{w_{i}}\), at every triggering time \(tr_{i}\in\texttt{Tr}_{w_{i}}\), in a differentially private (DP) manner (see Section 2.1 for a formal DP definition). For a pictorial representation of the various timing concepts, see Figure 1.
Footnote 2: In practical applications, the streaming system may choose triggers adaptively, at the cost of a more complicated implementation (Akidau et al. [2]).
**Privacy implication of input driven stream:** In terms of privacy, we want to ensure that the stream processing system ensures \((\varepsilon,\delta)\)-user level DP [14, 15, 18] over the complete stream. Since we are operating under the constraint that the data stream is an input driven stream (Definition A.1), the timings of the system (e.g., event time (Definition A.2), processing time (Definition A.3), and triggering time (Definition A.5)) can only be defined w.r.t. times at which the inputs have appeared. Thus it forces us to define the DP semantics which considers the _triggering times to be fixed_ across neighboring data sets (in the context of traditional DP semantics). For a given user, what we protect via DP is the actual data that is contributed to the data stream. We provide a formalism in Section 2.1.
Figure 1: Event time domain and processing time domain
### Related Work
Stream processing has been an active research field for more than 20 years [21]. It is now considered to be a mature technology with various streaming frameworks deployed at scale in industry, including Spark Streaming [47], Apache Beam [2] and Apache Flink [8]. However, none of these systems offer differentially private streaming queries.
Our work builds on a long line of DP research that focuses on extending one shot applications of DP mechanisms to the continual observation (streaming) setting for both analytics and learning applications [9, 10, 13, 16, 22, 23, 26, 31]. These mechanisms crucially leverage the tree aggregation protocol [10, 16, 23] or variants of it based on the matrix factorization mechanism [13, 22, 31]. All these approaches have the advantage of drastically reducing the error induced by repeated application of a DP mechanism, making the DP protocol itself stateful.
Of the works cited above, ours is most related to [9], which investigates the problem of computing DP histograms with unknown domains under continual observation. Similar to our work, they leverage (extensions of) the tree aggregation protocol to build an efficient DP protocol for the continual observation setting. However, our work departs from theirs in a few fundamental ways.
1. We consider user level DP whereas they consider event level DP. As outlined in the introduction, this introduces interesting algorithmic and system challenges.
2. We provide a concrete streaming system architecture for scalable production deployments whereas they focus on developing algorithms. More precisely, we develop and test an empty key prediction scheme that allows us to scale to millions of updates per second that contain billions of distinct keys.
3. We test our algorithms and architecture on a number of large-scale synthetic and real-life datasets to demonstrate the efficacy of our approach relative to meaningful baselines (experiments are completely absent from their work).
### Organization
The rest of the paper is organized as follows. In Section 2 we provide the necessary background on differential privacy and streaming systems; in Section 3 we describe the main algorithmic components of our DP streaming system; in Section 4 we provide details of the improvements needed to scale up the algorithms to large workloads; in Section 5 we provide a thorough experimental evaluation; and finally in Section 6 we provide some concluding remarks and outline a few interesting open directions. We also provide a glossary of terms from the streaming literature (used in this paper) in Appendix A.
## 2 Preliminaries
### Differential Privacy on Streams
Consider a data stream which is a collection of records \(D=[d_{1},d_{2},...)\), where each \(d_{i}\) is a tuple (key, value, timestamp, user_id) that can be assigned to certain event-time window \(w\in W\). The computation of records in \(w\) happens at trigger time \(tr_{i}\in\texttt{Tr}_{w}\).
Throughout this paper we will consider algorithms which take streams as input and produce output at the trigger times Tr of a certain window \(w\): at each trigger time \(tr_{i}\), the algorithm processes the data up to this time. _While the discussion of DP processing focuses on each window, the DP guarantee is for the entire data stream, because user contributions are bounded over the complete stream._
In the following, we provide a formal definition of differential privacy we adhere to in the paper (which includes the notion of neighboring data streams). Note that by Definition 2.1 we consider user level differential privacy with fixed triggering times. Triggering times are not protected.
**Definition 2.1**.: _Streams \(D,D^{\prime}\) are called neighbouring if they only differ by the absence or presence of the data of one user while keeping the trigger times Tr fixed between the two datasets._
**Definition 2.2** (Differential Privacy [14, 15]).: _An algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differentially private (DP) if for any neighboring pairs of data streams \(D\) and \(D^{\prime}\) the following is true for any \(S\subseteq\texttt{Range}(\mathcal{A})\):_
\[\operatorname{\mathbf{Pr}}[\mathcal{A}(D)\in S]\leq e^{\varepsilon} \operatorname{\mathbf{Pr}}[\mathcal{A}(D^{\prime})\in S]+\delta.\]
We can similarly define Concentrated DP (zCDP) [6, 19] with the neighbouring relation given by Definition 2.1. We use zCDP as an analysis tool3, as it has clean composition properties, but we convert the final guarantee into \((\varepsilon,\delta)\)-DP using standard tools. We use three facts:
Footnote 3: Throughout the paper, zCDP is used as a convenient tool to analyze DP-Binary Tree. The privacy guarantee is reported in \((\varepsilon,\delta)\)-DP.
* Given a query \(q\) with sensitivity \(\Delta_{2}:=\sup_{\genfrac{}{}{0.0pt}{}{x,x^{\prime}}{\text{neighboring}}}\|q(x)-q(x^{ \prime})\|_{2}\), releasing \(M(x)=\mathcal{N}(q(x),\sigma^{2}\mathbb{I})\) satisfies \(\frac{\Delta_{2}^{2}}{2\sigma^{2}}\)-zCDP.
* Composing an algorithm satisfying \(\rho_{1}\)-zCDP with an algorithm satisfying \(\rho_{2}\)-zCDP yields an algorithm satisfying \((\rho_{1}+\rho_{2})\)-zCDP.
* For any \(\rho,\delta>0\), \(\rho\)-zCDP implies \(\Big{(}\rho+2\sqrt{\rho\log(1/\delta)},\delta\Big{)}\)-DP.
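These three facts translate directly into a few lines of arithmetic. The following sketch is an illustration we provide for concreteness, not part of the DP-SQLP implementation; the numeric inputs at the bottom are hypothetical.

```python
import math

def gaussian_zcdp(l2_sensitivity: float, sigma: float) -> float:
    """rho for releasing q(x) + N(0, sigma^2 I) with the given L2 sensitivity."""
    return l2_sensitivity ** 2 / (2 * sigma ** 2)

def compose_zcdp(rhos) -> float:
    """zCDP composes additively."""
    return sum(rhos)

def zcdp_to_dp(rho: float, delta: float) -> float:
    """rho-zCDP implies (rho + 2*sqrt(rho*log(1/delta)), delta)-DP."""
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

# Example: compose the per-level costs of a binary tree of height h = ceil(lg n).
h, sensitivity, sigma, delta = 10, 1.0, 8.0, 1e-9
rho_total = compose_zcdp([gaussian_zcdp(sensitivity, sigma)] * h)
print(zcdp_to_dp(rho_total, delta))
```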
### Privacy Under Continual Observation and Binary Tree Aggregation
Consider a data stream \(D=x_{1},\ldots,x_{n}\) with each \(x_{i}\in[0,L]\). The objective is to output \(\left\{S_{j}=\sum\limits_{i=1}^{j}x_{i}\right\}\) while preserving DP (with the neighborhood relation defined w.r.t. changing any one of the \(x_{i}\)'s). We will use the so-called _binary tree aggregation algorithm_[10, 16, 23] stated below.
```
0: Data set: \(D=\{x_{1},\ldots,x_{n}\}\) with each \(x_{i}\in[0,L]\), noise standard deviation \(\sigma\).
1: Initialize a complete binary tree with \(2^{\lceil\lg(n)\rceil}\) leaf nodes, with each node being sampled from \(\mathcal{N}(0,\sigma^{2})\).
2:for\(i\in[n]\)do
3: Add \(x_{i}\) to all the nodes on the path from the \(i\)-th leaf node to the root of the tree.
4: Initialize \(S_{i}^{\text{priv}}\gets 0\), and convert \(i\) to the binary representation \([b_{1},\ldots,b_{h}]\) with \(b_{1}\) being the most significant bit, and \(h=\lceil\lg(n)\rceil\) being the height of the tree.
5: Let \([\texttt{node}_{1},\ldots,\texttt{node}_{h}]\) be the nodes from the root to the \(i\)-th leaf node of the tree.
6:for\(j\in[h]\)do
7: If\(b_{j}==1\), then add the value in the left sibling of \(\texttt{node}_{j}\) to \(S_{i}^{\text{priv}}\). Here, if \(\texttt{node}_{j}\) is the left child, then treat it as its own sibling.
8:endfor
9: Output \(S_{i}^{\text{priv}}\).
10:endfor
```
**Algorithm 1** DP-Binary Tree Aggregation (exposition from [26])
One example DP-Binary tree with nodes encoding is shown in Figure 2.
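To make the bookkeeping in Algorithm 1 concrete, the following Python sketch stores the tree in a 1-indexed array and answers each prefix-sum query by summing the (noisy) dyadic nodes that cover \([1,i]\). It is an illustrative rendering of Algorithm 1 under simplifying assumptions (array indexing, in-memory state), not the production implementation.

```python
import math
import numpy as np

class DPBinaryTree:
    """Array-backed complete binary tree for DP prefix sums (Algorithm 1 sketch)."""

    def __init__(self, n: int, sigma: float, rng=None):
        rng = rng or np.random.default_rng()
        self.num_leaves = 1 << max(1, math.ceil(math.log2(max(n, 2))))
        # Every node starts with fresh Gaussian noise N(0, sigma^2).
        self.tree = rng.normal(0.0, sigma, size=2 * self.num_leaves)

    def add(self, i: int, value: float) -> None:
        """Add value to all nodes on the path from leaf i (1-indexed) to the root."""
        node = self.num_leaves + (i - 1)
        while node >= 1:
            self.tree[node] += value
            node //= 2

    def prefix_sum(self, i: int) -> float:
        """Noisy estimate of x_1 + ... + x_i via its dyadic decomposition."""
        return self._query(1, 1, self.num_leaves, 1, i)

    def _query(self, node, lo, hi, qlo, qhi):
        if qhi < lo or hi < qlo:
            return 0.0
        if qlo <= lo and hi <= qhi:
            return self.tree[node]        # fully covered: use one noisy node
        mid = (lo + hi) // 2
        return (self._query(2 * node, lo, mid, qlo, qhi)
                + self._query(2 * node + 1, mid + 1, hi, qlo, qhi))

# Stream n values and release every prefix sum from a single noisy tree.
xs = [1.0, 0.0, 3.0, 2.0, 1.0]
tree = DPBinaryTree(n=len(xs), sigma=2.0)
for i, x in enumerate(xs, start=1):
    tree.add(i, x)
    print(i, tree.prefix_sum(i))
```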
**Theorem 2.3**.: _Algorithm 1 satisfies \(\frac{L^{2}\lceil\lg(n)\rceil}{2\sigma^{2}}\)-zCDP. This is equivalent to \((\varepsilon,\delta)\)-DP with \(\varepsilon=\frac{L^{2}\lceil\lg(n)\rceil}{2\sigma^{2}}+2\sqrt{\frac{L^{2} \lceil\lg(n)\rceil}{2\sigma^{2}}\lg(1/\delta)}\)._
The proof of Theorem 2.3 follows immediately from the zCDP guarantee of Gaussian mechanism, and composition over the levels of the binary tree. One can also show the following in terms of utility.
**Theorem 2.4**.: _With probability at least \(1-\beta\), the following is true for any \(i\in[k]\):_
\[\left|S_{i}^{\mathsf{priv}}-S_{i}\right|\leq\sqrt{\frac{2\ln(n/\beta)\lceil\lg(n )\rceil\sigma^{2}}{\pi}}.\]
The proof of Theorem 2.4 follows immediately from the tail properties of Gaussian distribution. One can improve the constants in the guarantee by using variance reduction techniques from [23], which we briefly describe below.
**Honaker estimation for variance reduction:** The idea of Honaker estimation [23] is to use multiple estimates of the information in the same node of the binary tree for variance reduction. Consider any node \(\mathsf{node}_{i}\), and the information that is inserted into it by Algorithm 1. Let \(\mathcal{T}_{\mathsf{node}_{i}}\) be the subtree rooted at node \(\mathsf{node}_{i}\). Notice that the sum of all the nodes at each level of \(\mathcal{T}_{\mathsf{node}_{i}}\) is an unbiased and independent estimate of the information contained in \(\mathsf{node}_{i}\). Furthermore, we also know the variance of such an estimate. For example, if a level of \(\mathcal{T}_{\mathsf{node}_{i}}\) has \(m\) nodes, then the variance of the estimate from that level is \(m\sigma^{2}\). We can use this information to reduce the variance in the estimate of \(\mathsf{node}_{i}\). The following is the formalization of this idea.
Let \(\mathsf{level}_{0},\ldots,\mathsf{level}_{\kappa}\) be the levels of the subtree rooted at \(\mathsf{node}_{i}\), with \(\mathsf{level}_{0}\) being the level of \(\mathsf{node}_{i}\). Notice that the sum of all the nodes at level \(\mathsf{level}_{j}\) is an unbiased estimate of the true value of node \(\mathsf{node}_{i}\) (called \(\mathsf{sum}(\mathsf{level}_{j})\)), and the variance is \(\sigma_{j}^{2}=2^{j}\sigma^{2}\) (since there are \(2^{j}\) nodes at level \(\mathsf{level}_{j}\)). The Honaker estimate is \(\sum\limits_{j=0}^{\kappa-1}c_{j}\cdot\mathsf{sum}(\mathsf{level}_{j})\), where \(c_{j}=\frac{1/2^{j}}{\sum\limits_{j=0}^{\kappa-1}(1/2^{j})}\). Therefore, the variance of the Honaker estimate is as follows:
\[\mathsf{Variance}(\mathsf{node}_{i})=\left(\sum\limits_{j=0}^{\kappa-1}c_{ j}^{2}\cdot 2^{j}\right)\cdot\sigma^{2}=\frac{1}{2\cdot(1-2^{-\kappa})}\cdot \sigma^{2}. \tag{1}\]
Now consider the nodes \(\mathsf{node}_{1},\mathsf{node}_{2},\ldots,\mathsf{node}_{m}\) used to compute the DP variant of \(S_{i}\) at time instance \(i\in[n]\) in Algorithm 1. Notice that the noise due to the Honaker estimate mentioned above is still independent across these nodes. Hence, the error \(S_{i}^{\mathsf{priv}}-S_{i}\) follows the distribution:
Figure 2: DP-Tree with nodes encoding
\[\left(S_{i}^{\mathsf{priv}}-S_{i}\right)\sim\mathcal{N}\left(0,\sum_{j=1}^{m} \mathsf{Variance}(\mathsf{node}_{j})\right). \tag{2}\]
Unlike Theorem 2.4, we write out the exact distribution of the error above because we will use this distribution to decide on the threshold for key selection in Section 3.3.
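As a concrete illustration of these weights, the snippet below computes \(c_{j}\), the variance-reduced node estimate, and its variance according to Equation (1). It is our own sketch and assumes the per-level subtree sums of a node are available; it is not part of the production code.

```python
import numpy as np

def honaker_weights(kappa: int) -> np.ndarray:
    """Weights c_j = (1/2^j) / sum_j (1/2^j) over levels 0..kappa-1."""
    raw = np.array([2.0 ** -j for j in range(kappa)])
    return raw / raw.sum()

def honaker_node_estimate(level_sums, sigma: float):
    """Variance-reduced estimate of one node from its kappa subtree level sums."""
    kappa = len(level_sums)
    c = honaker_weights(kappa)
    estimate = float(np.dot(c, level_sums))
    # Level j contains 2^j independent noise terms, hence variance 2^j * sigma^2.
    variance = float(np.sum(c ** 2 * 2.0 ** np.arange(kappa)) * sigma ** 2)
    return estimate, variance  # variance == sigma^2 / (2 * (1 - 2**-kappa)), as in (1)
```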
**Ensuring user-level DP:**
Although existing works commonly assume that each user corresponds to a single record, in practice, a single user can contribute multiple records. Thus, we must account for this in the DP analysis. The simplest approach - which we follow - is to limit the number of contributions per person to some constant \(C\) within the entire data stream \(D\). If we have a DP guarantee for a single record, we can apply group privacy to obtain a DP guarantee for \(C\) records. Specifically, if we have \((\varepsilon,\delta)\)-DP with respect to the addition or removal of one record, then this implies \(\left(C\cdot\varepsilon,\frac{e^{C\cdot\varepsilon}-1}{e^{\varepsilon}-1} \delta\right)\)-DP with respect to the addition or removal of \(C\) records [44]. The group privacy is used in privacy accounting for hierarchical perturbation (Section 4.4).
In the worst-case scenario, we have to go through group privacy to achieve user-level DP. In practice, however, depending on how we bound user contributions, if the sensitivity of any user is inflated by at most a factor of \(C\), then composition bounds are applicable instead of group privacy bounds, which gives better results. Later, in Section 3.3, we apply advanced composition to extend the privacy guarantee from a single user contribution (\(C=1\)) to the more general case (\(C\geq 1\)).
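For concreteness, the group-privacy conversion quoted above from [44] amounts to the following few lines (an illustrative sketch; the numeric inputs are hypothetical).

```python
import math

def group_privacy(eps: float, delta: float, C: int):
    """(eps, delta)-DP per record  ->  user-level DP for up to C records."""
    eps_user = C * eps
    delta_user = (math.expm1(C * eps) / math.expm1(eps)) * delta
    return eps_user, delta_user

# e.g. eps = 0.1 per record and C = 32 contributions per user
print(group_privacy(0.1, 1e-10, 32))
```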
### Streaming System Architecture
The streaming differential privacy mechanism described in this paper can be generally applied to various streaming frameworks, including Spark Streaming [47], Apache Beam [2] and Apache Flink [8]. The DP-SQLP system we develop is implemented using a streaming framework similar to Spark Streaming [47], as shown in Figure 3.
The input data stream contains unordered, unbounded event records. The streaming scheduler will first assign each record to the corresponding event-time window \(w\). Within each window, at every triggering timestamp \(tr\), records are bundled together to create an immutable, partitioned dataset, called a _micro-batch_. After a micro-batch is created, it will be dispatched to the DP-SQLP operator for processing.
When processing a micro-batch, the DP-SQLP operator will interact with the system state store for a state update. Once the differentially private histogram is generated, it will be materialized to the data sink4. Similar to Spark Streaming, our streaming framework provides consistent, "exactly-once" processing semantics across multiple data centers. In addition, the streaming framework also provides fault tolerance and recovery.
Footnote 4: Data sink is the storage system used to store and serve output data, including file systems and databases.
There are multiple ways to schedule micro-batches based on certain rules [2], such as processing-time-based triggers, data-arrival-based triggers, and combinations of multiple rules.
Figure 3: High-level overview for the DP-SQLP system
Based on the number of micro-batches received by the operator at each time instance, we can also classify scheduling methods into two categories, sequential scheduling and parallel scheduling, as shown in Figure 4. In sequential scheduling, the input data stream is divided into a sequence of micro-batches. The operator will process one micro-batch at a time. Parallel scheduling is able to further scale up the pipeline, by allowing multiple micro-batches to be processed at the same time. The streaming scheduler will partition records by predefined key tuples, and create one micro-batch per key range.
In the rest of the paper, we will assume sequential scheduling for the algorithm discussion. Given a data stream \(D_{w_{i}}\) for window \(w_{i}\), the sub-stream at triggering timestamp \(tr_{i}\in\texttt{Tr}_{w_{i}}\) can be denoted as \(D_{tr_{i}}\subseteq D_{w_{i}}\), and the sub-stream for each micro-batch can be represented as the incremental data stream \(\Delta D_{tr_{i}}=D_{tr_{i}}-D_{tr_{i-1}}\).
As Akidau et al. point out in the unified dataflow model [2], batch, micro-batch, and pure streaming are implementation details of the underlying execution engine. Although the streaming differential privacy mechanism discussed in this paper is executed by a streaming system based on micro-batches, the mechanism and algorithm can be widely applied to batch, micro-batch, and pure streaming systems.
It is worth noting that different execution modes (batch, micro-batch and pure streaming) result in different trade-offs between data utility and pipeline latency. In differential privacy, the more frequently we repeat the process, the noisier the results tend to be. Therefore, data utility is an additional factor when choosing execution modes.
## 3 Streaming Private Mechanism
Figure 4: Two types of streaming scheduling
Figure 5: Overview of streaming differential privacy mechanism
In this section we will discuss the overall mechanism for streaming differential privacy. Our target is to perform aggregation and release histogram at every triggering timestamp in Tr, while maintaining \((\varepsilon,\delta)\)-_differential privacy_. There are four main components within streaming differential privacy mechanism - user contribution bounding, partial aggregation, streaming private key selection and hierarchical perturbation.
To simplify the discussion, we will assume Sum is the aggregation function for GROUP BY. It is also possible to use other aggregation functions within the streaming differential privacy mechanism.
Let's declare the inputs, parameters and outputs that will be used in streaming differential privacy.
* Input: Data Stream \(D\), event-time windows \(W\), triggering timestamp per window Tr\({}_{w}\), privacy parameters \(\varepsilon,\delta>0\), accuracy parameter \(\beta>0\), per record clamping limit \(L\).
* System Parameters: Max. number of records per user \(C\).
* Output: Aggregated DP histogram at every triggering timestamp.
### Non-Private Streaming Aggregation
The traditional streaming aggregation operator without differential privacy is shown at the top of Figure 5. Records within the micro-batch are grouped by key and aggregated by the _reduce_ function [47]. After that, the partial aggregation result will be merged with the previous state [43] and the updated histogram is emitted. There is no differential privacy protection within this process, and user privacy can be leaked along multiple dimensions, including histogram values, aggregation keys, and the differences between two histogram updates.
### User Contribution Bounding
DP algorithms require that the sensitivity of each user's contributions be limited, for example that a user can contribute at most \(C\) times. However, in reality, each user may contribute to many records and many keys, especially heavy users. Therefore, we need to bound the maximum influence any user can have on the output in order to achieve a desired overall DP guarantee. This step is called "user contribution bounding".
Some one-shot mechanisms perform user contribution bounding by limiting contributed value per key and the number of contributed keys per user [3, 46]. However, this approach does not fit the streaming setting, since it requires three shuffle stages - shuffle by user, shuffle by (key, user) and shuffle by key.
User contribution bounding in streaming DP is performed on the user level for the entire data stream \(D\)5:
Footnote 5: In a production streaming system, the DP guarantee is commonly defined with the minimum privacy protection unit (e.g., [user, day]). The maximum number of record per user \(C\) needs to be enforced within each privacy unit.
* Each user can contribute to at most \(C\) records in the data stream \(D\).
* The value \(v\) for the aggregation column \(m\) in each record is clamped to \(L_{m}\) so that \(|v|<L_{m}\).
The maximum number of records per user \(C\) and per record clamping limit \(L_{m}\) together determine the per-user \(\ell_{1}\) sensitivity in data stream:
\[L_{1}=C\times L_{m}.\]
Choosing the right contribution bounding parameters \(C\) and \(L_{m}\) is critical for the privacy-utility trade-off. When the bounding limit is small, the noise is small, but the data loss may be significant (e.g. if most/all users have a lot of data to contribute). On the other side, when the bounding limit is large, the data loss is small, but the noise is large. It is possible to find a near-optimal point from a heuristic study, which we discuss in Section 5. Indeed, one approach to choosing a good contribution bound is to inspect the data distribution. For example, we can pick \(C\) at the \(99^{th}\) percentile of per-user records (i.e. \(<1\%\) of users have more than \(C\) records), which can be chosen in a DP way if computed on a fraction of the data stream, or in a non-DP way if it is based on proxy data.
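A minimal, non-streaming sketch of this bounding step is shown below. It assumes records are (key, value, user_id) tuples, keeps at most \(C\) records per user, clamps values to \([-L_{m},L_{m}]\), and picks \(C\) heuristically from the \(99^{th}\) percentile of per-user record counts on a sample; the production system performs the corresponding bookkeeping against the system state store described in Section 4.1.

```python
from collections import defaultdict
import numpy as np

def pick_contribution_bound(sample_records, percentile=99):
    """Heuristic C: the 99th percentile of per-user record counts on a sample."""
    counts = defaultdict(int)
    for _, _, user_id in sample_records:
        counts[user_id] += 1
    return int(np.percentile(list(counts.values()), percentile))

def bound_contributions(records, C, L_m):
    """Keep at most C records per user and clamp each value to [-L_m, L_m]."""
    seen = defaultdict(int)
    for key, value, user_id in records:
        if seen[user_id] >= C:
            continue                      # drop the user's excess records
        seen[user_id] += 1
        yield key, max(-L_m, min(L_m, value)), user_id
```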
In the following sections, we will introduce streaming private key selection and hierarchical perturbation, which are two main private operations in streaming differential privacy. Since the private operations are performed per window, we will focus on \(D_{w_{i}}\) during algorithm discussion. However, the same private operations should be applied to all windows.
### Streaming Private Key Selection
The main objective we consider in this section is to select the set of keys that exceed a certain threshold of user contributions \(\mu\geq 0\). (These are the keys that are used later for releasing the aggregation columns in Section 3.4 below.) Recall that our streaming system is input driven by a growing data stream, meaning, one only sees the existence of a key if it has at least one user contribution. This poses a significant privacy challenge since the detectable set of keys is highly dependent on the data set. As a result, we design a novel thresholding scheme coupled with binary tree aggregation [10, 16, 23] that allows one to only operate with keys that deterministically have at least \(\mu\) user contributions, and still preserve \((\varepsilon,\delta)\)-DP. In the following, we provide a description of the algorithm, along with the privacy analysis.
**Data preprocessing:** After user contribution bounding (discussed in Section 3.2), we first perform a regular key GROUP BY and aggregation for all records within the current micro-batch6. Then we aggregate the user data via GROUP BY on each key and merge results with a data buffer that is stored in a system state7. Beyond that we execute the key selection algorithm described in Algorithm 2.
Footnote 6: This is a system level operation without any implication to the privacy guarantee.
Footnote 7: For every key, we accumulate aggregation column values, as well as the number of unique users. This process is called data accumulation.
**Algorithm description:** As mentioned earlier, the emitted key space is not predefined. A key may emerge when there is at least one user record with the key (due to the nature of input driven stream). Therefore, the streaming DP system must determine _when_ and _what_ keys to release or update, in a private manner. This is different from the non-private streaming aggregation, where updates will be emitted at every triggering timestamp after processing each micro-batch.
**Remark:** In the description of Algorithm 2, we use the following primitives implicitly used in Algorithm 1:
i) InitializeTree\((T,\sigma)\): Initialize a complete binary tree \(\mathcal{T}\) with \(2^{\lceil\lg(T)\rceil}\) leaf nodes, with each node being sampled from \(\mathcal{N}(0,\sigma^{2})\), ii) AddToTree\((\mathcal{T},i,c_{i})\): Add \(c_{i}\) to all the nodes on the path from the \(i\)-th leaf node to the root of the tree, and iii) GetTotalSum\((\mathcal{T}_{\mathrm{key}},i)\): Prefix sum of all the inputs \(\{c_{1},\ldots,c_{i}\}\) to the binary tree computed via Algorithm 1.
Our approach is an extension of the thresholding algorithm in [29] to the streaming setting. In short, we carefully select a threshold and compute (with DP noise) the number of data records that contributed to each encountered key. If this noisy number is greater than or equal to the chosen threshold, the key is released. We describe the algorithm in full detail in Algorithm 2, and provide the formal privacy guarantee in Theorem 3.1.
A crucial component of the algorithm is the choice of the threshold \(\tau\) in Line 2 of Algorithm 2. One can instantiate \(\tau\) with the bound in Theorem 2.4. However, in our implementation (described in Section 4) we actually implement the tree aggregation via the "bottom-up Honaker" variance reduction described in Section 2.2. One can write the exact distribution of the difference between the DP-Tree aggregated count and the true count, which is \(\hat{q}_{tr_{i},\texttt{key}}-\texttt{count}_{\texttt{key}}(D_{tr_{i}})\) in Line 2 of Algorithm 2, via (2). This allows us to get a tighter bound on \(\tau\) based on the inverse CDF of the Gaussian distribution. Also, it should be clear from (2) that the variance of the Gaussian distribution depends on the time step at which we are evaluating the cumulative sum. Hence, to obtain a tighter estimate of the threshold, we actually use a time-dependent threshold \(\tau_{tr_{i}}\) (based on (2)) instead of a universal threshold in Line 2.
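To illustrate, a sketch of such a time-dependent threshold computation is given below. It assumes the caller supplies the variance of the tree-aggregated prefix-sum error at \(tr_{i}\) (obtained from (2) with the Honaker-reduced per-node variances) and splits \(\beta\) uniformly over the \(T\) triggering times via a union bound, which is one simple way to satisfy the simultaneous guarantee in Line 2 of Algorithm 2; it is not the exact production computation.

```python
from scipy.stats import norm

def time_dependent_threshold(prefix_variance_i: float, beta: float, T: int) -> float:
    """tau_{tr_i}: with probability >= 1 - beta/T, the tree-aggregated count at
    tr_i deviates from the true count by at most this amount.

    prefix_variance_i: variance of the Gaussian error of the prefix sum at tr_i,
    i.e. the sum of the (Honaker-reduced) variances of the nodes in its dyadic
    decomposition, as in Equation (2).
    """
    return norm.ppf(1.0 - (beta / T) / 2.0) * prefix_variance_i ** 0.5
```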
**Remark:** For brevity, in Theorems 3.1 and 3.2, we provide the guarantees assuming each user only contributes once, i.e., in the language of Section 3.2 we assume that the user contribution bound \(C=1\).
However, in our implementation we do allow \(C>1\). The idea is to use a tighter variant of advanced composition for \((\varepsilon,\delta)\)-DP [18] while ensuring that each user contributes _at most once to each key_ in any instance of Algorithm 2.
```
0: Data stream \(D_{w_{i}}=\{d_{1},\ldots,d_{n}\}\), where the event timestamp \(t_{i}\) of each \(d_{i}\) can map to event-time window \(w_{i}=(t_{s},t_{e})\), \(t_{s}<t_{i}<t_{e}\), triggering timestamps \(\texttt{Tr}_{w_{i}}=[tr_{1},tr_{2},...,tr_{T_{i}}]\). At each triggering time \(tr_{i}\in\mathbb{R}\), only a sub-stream \(D_{tr_{i}}\subseteq D_{w_{i}}\) is available. Threshold \(\mu\geq 0\), privacy parameters \(\varepsilon,\delta>0\), failure probability \(\beta>0\).
1: Compute the noise standard deviation \(\sigma\) for the tree aggregation based on \((T_{i}=|\texttt{Tr}_{w_{i}}|,\varepsilon,\delta)\). (See Appendix B for more details.)
2: Compute the accuracy threshold \(\tau\) of the tree aggregation such that for any fixed key and the corresponding binary tree \(\mathcal{T}_{\texttt{key}}\), \[\operatorname*{\mathbb{P}}_{\mathcal{T}_{\texttt{key}}}\left[\forall tr_{i} \in\texttt{Tr}_{w},|\hat{q}_{tr_{i},\texttt{key}}-\texttt{count}_{\texttt{ key}}\left(D_{tr_{i}}\right)|\leq\tau\right]\geq 1-\beta,\] which depends on \(\sigma\) and \(\beta\). (See Appendix B for more details.) Here, \(\texttt{count}_{\texttt{key}}\left(D_{tr_{i}}\right)\) denote the unique user count for key in \(D_{tr_{i}}\), and \(\hat{q}_{tr_{i},\texttt{key}}\) denote the private estimate of \(\texttt{count}_{\texttt{key}}\left(D_{tr_{i}}\right)\).
3:for\(i\in|\texttt{Tr}_{w_{i}}|\)do
4:\(\mathcal{S}^{(i)}\leftarrow\) Set of all keys in the stream \(D_{tr_{i}}\) with count \(>\mu\).
5: For all key\({}\in\mathcal{S}^{(i)}\backslash\mathcal{S}^{(i-1)}\), create a new tree \(\mathcal{T}_{\texttt{key}}\) using InitializeTree\((T_{i},\sigma)\), and execute Algorithm 1 till \((i-1)\)-th step with all zeros as input.
6:forkey\({}\in\mathcal{S}^{(i)}\)do
7:\(\mathcal{T}_{\texttt{key}}\leftarrow\) AddToTree \(\left(\mathcal{T}_{\texttt{key}}\,,i,\texttt{count}_{\texttt{key}}\left(D_{ tr_{i}}\right)-\texttt{count}_{\texttt{key}}\left(D_{tr_{i-1}}\right)\right)\), i.e., Add the count at time stamp \(tr_{i}\) to \(\mathcal{T}_{\texttt{key}}\).
8:\(\hat{q}_{tr_{i},\texttt{key}}\leftarrow\texttt{GetTotalSum}\left(\mathcal{T}_{ \texttt{key}},i\right)\).
9:if\(\hat{q}_{tr_{i},\texttt{key}}>\mu+\tau\), then output \((\texttt{key},\hat{q}_{tr_{i},\texttt{key}})\).
10:endfor
11:endfor
```
**Algorithm 2** Streaming Private Key Selection
**Theorem 3.1** (Privacy guarantee).: _Algorithm 2 is \((\varepsilon,\delta+(e^{\varepsilon}+1)\cdot\beta)\)-DP for addition or removal of one element of the dataset._
Proof.: Consider two datasets \(D\) and \(D^{\prime}\) which differ by the addition or removal of one element, whose key we denote \(k_{*}\). Let \(A(D)\) denote Algorithm 2 run on input \(D\). First we note that we can ignore all other keys \(k\neq k_{*}\) because the behaviour of Algorithm 2 on those keys is independent of its behaviour with respect to \(k_{*}\).
For the purposes of the analysis, we consider a different algorithm \(A^{\prime}(D)\) which does the following with respect to \(k_{*}\). It initializes the tree aggregation mechanism \(\mathcal{T}_{k_{*}}\) at the beginning. At each time \(t\in\texttt{Tr}\), the algorithm \(A^{\prime}(D)\) computes \(\hat{q}_{t,k_{*}}\leftarrow\texttt{GetTotalSum}(\mathcal{T}_{k_{*}})\). If \(\hat{q}_{t,k_{*}}>\mu+\tau\), then it outputs \((k_{*},\hat{q}_{t,k_{*}})\); otherwise it outputs nothing about \(k_{*}\) at time \(t\).
The algorithm \(A^{\prime}\) is \((\varepsilon,\delta)\)-DP. This is because it is simply a postprocessing of the tree aggregation mechanism, which is set to have this DP guarantee.
The only difference between the outputs of \(A(D)\) and \(A^{\prime}(D)\) is that, if \(\texttt{count}_{k_{*}}(D_{t_{i}})\leq\mu\) and \(\hat{q}_{t,k_{*}}>\mu+\tau\), then \(A^{\prime}(D)\) outputs \((k_{*},\hat{q}_{t,k_{*}})\), but \(A(D)\) outputs nothing regarding \(k_{*}\). Since \(\hat{q}_{t,k_{*}}\) is meant to be an approximation to \(\texttt{count}_{k_{*}}(D_{t_{i}})\), this means the outputs only differ when the tree aggregation mechanism \(\mathcal{T}_{k_{*}}\) has error \(>\tau\). The accuracy guarantee of \(\mathcal{T}_{k_{*}}\) ensures this happens with probability at most \(\beta\). That is, we can define a coupling such that \(\operatorname*{\mathbb{P}}\left[A(D)\neq A^{\prime}(D)\right]\leq\beta\) (and the same for \(D^{\prime}\) in place of \(D\)).
Thus we can obtain the DP guarantee for \(A\): Let \(E\) be an arbitrary measurable set of outputs of \(A\). We
have
\[\mathbb{P}\left[A(D)\in E\right] \leq\mathbb{P}\left[A^{\prime}(D)\in E\right]+\mathbb{P}\left[A(D) \neq A^{\prime}(D)\right]\] \[\leq e^{\varepsilon}\cdot\mathbb{P}\left[A^{\prime}(D^{\prime}) \in E\right]+\delta+\mathbb{P}\left[A(D)\neq A^{\prime}(D)\right]\] \[\leq e^{\varepsilon}\cdot(\mathbb{P}\left[A(D^{\prime})\in E \right]+\mathbb{P}\left[A(D)\neq A^{\prime}(D)\right])+\delta\] \[\quad+\mathbb{P}\left[A(D)\neq A^{\prime}(D)\right]\] \[=e^{\varepsilon}\cdot\mathbb{P}\left[A(D^{\prime})\in E\right]+ \delta+(e^{\varepsilon}+1)\cdot\mathbb{P}\left[A(D)\neq A^{\prime}(D)\right]\] \[\leq e^{\varepsilon}\cdot\mathbb{P}\left[A(D^{\prime})\in E \right]+\delta+(e^{\varepsilon}+1)\cdot\beta.\]
**Theorem 3.2** (Utility guarantee).: _For any fixed key \(k\), there exists a threshold \(\tau=\mathcal{O}\left(\frac{\sqrt{\log(T/\beta)\log^{2}(T)\log(1/\delta)}}{ \varepsilon}\right)\) such that w.p. at least \(1-\beta\), Algorithm 2 outputs \(k\) if at any one of the triggering times (in \(\texttt{Tr}=[tr_{1},tr_{2},...,tr_{T}]\)) the true count is at least \(\mu+\tau\)._
Proof.: By using Theorem 2.4, and using the notation in Line 2 the following is immediate, when the noise added to each node of the binary tree in Algorithm 2 is \(\mathcal{N}(0,\sigma^{2})\). For any fixed key \(k\),
\[\mathbb{P}_{\mathcal{T}_{k}}\left[\forall tr_{i}\in\texttt{Tr}\ \ |\hat{q}_{tr_{i},k}-\texttt{count}_{k}(D_{tr_{i}})|\leq\sqrt{\frac{2\ln(T/\beta) \lceil\lg(T)\rceil\sigma^{2}}{\pi}}\right] \tag{3}\] \[\geq 1-\beta.\]
Setting \(\sigma^{2}=\frac{2\lceil\lg(T)\rceil\ln(1.25/\delta)}{\varepsilon^{2}}\), and using the translation of zCDP to \((\varepsilon,\delta)\)-DP (based on the privacy statement in Theorem 2.3) completes the proof.
### Hierarchical Perturbation
```
0: Data stream: \(D_{w_{i}}=\{d_{1},\ldots,d_{n}\}\), where each \(d_{i}\) arrives at time \(t_{i}\) within event-time window \(w_{i}=(t_{s},t_{e})\), \(t_{s}<t_{i}<t_{e}\). Triggering timestamps \(\texttt{Tr}_{w_{i}}=[tr_{1},tr_{2},...,tr_{T_{i}}]\). At each triggering timestamp \(tr_{i}\in\mathbb{R}\), only a sub-stream \(D_{tr_{i}}\subseteq D_{w_{i}}\) is available. Privacy parameters \(\varepsilon,\delta>0\), number of DP-Tree leaf nodes \(n\).
1: Compute the noise standard deviation \(\sigma\) for the tree aggregation based on \((n,\varepsilon,\delta)\). (See Appendix B for more details.)
2:for\(i\in|\texttt{Tr}_{w_{i}}|\)do
3:\(\mathcal{S}_{i}\leftarrow\) Set of keys output by the key selection algorithm (Algorithm 2) at \(tr_{i}\).
4:forkey\(\in\mathcal{S}_{i}\)do
5: Let \(D_{tr_{i}}\leftarrow\) data stream available at time stamp \(tr_{i}\).
6: Last Release Time \(\texttt{LRT}_{\texttt{key}}\leftarrow\) triggering timestamp of the previous statistic release.
7:\(\Delta V_{\texttt{key}}\leftarrow\) Aggregated value for key in the sub-stream \(D_{tr_{i}}-D_{\texttt{LRT}_{\texttt{key}}}\).
8:\(\mathcal{T}_{\texttt{key}}\leftarrow\texttt{AddToTree}\left(\mathcal{T}_{ \texttt{key}}\,,i,\Delta V_{\texttt{key}}\,\right)\).
9: Output GetTotalSum\(\left(\mathcal{T}_{\texttt{key}}\,,i\right)\).
10:endfor
11:endfor
```
**Algorithm 3** Hierarchical Perturbation with DP-Tree
Once sufficient records for a certain key are accumulated (i.e., following the notation from the previous section, there are at least \(\mu\) unique user contributions), the objective is to select the key for _statistic release_. Statistic release corresponds to adding the value from user contributions to a main histogram that estimates
the distribution of records across all the keys. Notice that in this histogram, a single user can contribute multiple times to the same key. In this section we discuss how to create this histogram while preserving DP.
The crux of the algorithm is that we maintain a DP-Tree (an instantiation of Algorithm 1) for every key detected during the key selection phase via Algorithm 2. We provide a \(\rho\)-zCDP guarantee for each of the trees for each of the keys. Since each user can contribute \(C\) records in this phase, and in the worst case all these contributions can go to the same node of a single binary tree, we scale up the sensitivity corresponding to any single node in the tree to \(L_{1}=C\cdot L\) (analogous to that in Section 3.2), and ensure that each tree still satisfies \(\rho\)-zCDP. We provide the details of the algorithm in Algorithm 3. The privacy guarantee follows immediately from Theorem 2.3, and the translation from the \(\rho\)-zCDP to the \((\varepsilon,\delta)\)-DP guarantee. In Algorithm 3, we use many of the binary tree aggregation primitives from Section 3.3.
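The per-key bookkeeping can be sketched as follows. This is illustrative only: it reuses the DPBinaryTree class from the sketch accompanying Algorithm 1, keeps all state in memory rather than in the system state store, and omits the sensitivity scaling to \(L_{1}=C\cdot L\).

```python
class HierarchicalPerturbation:
    """One DP tree per selected key; release noisy running sums (Algorithm 3 sketch)."""

    def __init__(self, num_triggers: int, sigma: float):
        self.num_triggers = num_triggers
        self.sigma = sigma
        self.trees = {}          # key -> DPBinaryTree (sketch from Section 2.2)
        self.last_total = {}     # key -> aggregated value already pushed into the tree

    def release(self, key, aggregated_value, trigger_index):
        """aggregated_value: total for `key` over the window up to this trigger.
        The increment since the key's last release is added as a new leaf."""
        if key not in self.trees:
            self.trees[key] = DPBinaryTree(self.num_triggers, self.sigma)
            self.last_total[key] = 0.0
        delta = aggregated_value - self.last_total[key]
        self.last_total[key] = aggregated_value
        self.trees[key].add(trigger_index, delta)
        return self.trees[key].prefix_sum(trigger_index)
```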
## 4 System Implementation and Optimization
When implementing the streaming differential privacy algorithms described in Section 3, one must take the system constraints into practical considerations. There are three main challenges:
* The streaming framework described in Section 2.3 discretizes the data stream into micro-batches. Therefore, streaming key selection and hierarchical perturbation, whose algorithms are defined based on the data stream \(D_{tr_{i}}\), must be implemented using the micro-batch \(\Delta D_{tr_{i}}\) (defined in Section 2.3) and the system state. In Section 4.1 we detail the complete state management of DP-SQLP.
* DP-SQLP is input data stream driven. State loading and updating require the existence of a key in the current micro-batch. However, Algorithm 2 requires testing all keys that have appeared at least once. In Section 4.3 we discuss a new algorithm that avoids testing all such keys at every triggering time.
* There are multiple components of the DP-SQLP system which are individually \((\varepsilon,\delta)\)-DP. It is necessary to use an appropriate form of composition to account for the total privacy cost. In Section 4.4, we detail the complete privacy accounting for DP-SQLP.
### State Management
As mentioned above, both streaming key selection and hierarchical perturbation are defined based on the data stream \(D_{tr_{i}}\). Therefore, they both require stateful operations. Furthermore, the global user contribution bounding that tracks the number of records per user in the data stream \(D\) is also stateful.
In DP-SQLP, the system state store is a persistent storage system, backed by Spanner database [12] that provides high availability and fault tolerance. The system state store is co-managed by DP-SQLP operator and the streaming framework for state update and maintenance. All the state information required by streaming differential privacy is stored in the system state store. For a pictorial depiction, see Figure 3.
Figure 6: Execution of streaming differential privacy mechanism
**State Store Structure:** There are two main state tables in the system state store, which are managed by the same streaming framework. Each state table is a key-value store containing a _state key_ and a _state object_.
The first state table is keyed by user id to track per-user contribution within the data stream. The state object simply stores the count value.
The second state table is keyed by GROUP BY keys. The state object contains the data buffer, DP-Trees for key selection and DP-Trees for aggregation columns. The data buffer is a data structure that temporarily stores unreleased, aggregated data from new records that failed the thresholding test (line 9, Algorithm 2). One DP-Tree is used by each round of Algorithm 2 execution, and one DP-Tree is used for hierarchical perturbation per aggregation column.
**Execution Procedures:** The execution of the streaming differential privacy mechanism is shown in Figure 6. Different shapes represent different users and different colors represent different keys. Each step is described as follows.
1. **User contribution bounding**: Records in one _micro-batch_ are grouped by user id. A map in _system state store_ is maintained to track the number of records each user contributes. Once the number of contributions for a user reaches \(C\), all the remaining records for that user in the data stream will be discarded. Furthermore, we clamp the value \(v\) of each _aggregation column_\(m\), so that \(|v|\leq L_{m}\).
2. **Cross-user aggregation**: Records in one _micro-batch_ are grouped by key and aggregated, forming a delta result [20]. After that, the delta result will be merged into the data buffer that is loaded from the system state store.
3. **Streaming key selection**: The DP-Trees for streaming key selection are loaded from the system state store. Then we will perform Algorithm 2, which adds the incremental user count from the current micro-batch into DP-Tree, as a leaf node.
4. **Hierarchical perturbation**: Once a key is selected, the DP-Tree for hierarchical perturbation is loaded from the system state store. After that, we will use Algorithm 3 to get DP aggregation results, and output the results.
The execution engine used to implement user contribution bounding and hierarchical perturbation will be discussed in section 4.2.
As mentioned in Section 3.3, the DP-Tree estimator is implemented with the "bottom-up Honaker" variance reduction to get the DP sum. The estimated sum for the DP-Tree rooted at \(\textsf{node}_{i}\) equals

\[\texttt{Sum}(\textsf{node}_{i})=\sum_{j=0}^{\kappa-1}c_{j}\cdot\textsf{sum}( \textsf{level}_{j}).\]

In case more than one DP-Tree is used in key selection or hierarchical perturbation, we further sum the Honaker estimates from each tree.
### Parallel Execution
When building the DP-SQLP operator, we leverage the F1 query engine [38] for its wide range of data sources and distributed query execution (Figure 7). The user contribution bounding step is executed by the _user contribution bounding server_. The private key selection and hierarchical perturbation are executed by the _data perturbation server_. Both servers contain thousands of workers that are horizontally scalable. Input data is first read by the F1 query engine, partitioned, and then sent to the user contribution bounding server through Remote Procedure Calls (RPCs). After that, the bounded data streams back to F1, is re-partitioned, and is then sent to the data perturbation server for key selection and hierarchical perturbation.
### Empty Key Release Prediction
Since the data stream is unordered and unbounded, the existence of user contributions within each micro-batch can be arbitrary, as shown in Figure 8. It is possible that some keys do not have any user records in a micro-batch. In traditional streaming systems, state is not updated unless new records appear in the micro-batch [43]. Therefore, the system only needs to load state for keys from the current micro-batch.
However, the streaming key selection algorithm (Algorithm 2) requires us to perform the thresholding test for the entire key space, and it is possible for a key to be selected without new records. A naive approach is to load all the keys with their associated states from the system state store, and run Algorithm 2 directly. Unfortunately, when the key space is large, the I/O and memory costs of loading the entire state table are too high. Here, we propose the empty key release prediction algorithm, together with other operational strategies, to solve this scalability challenge.
**Algorithm description for empty key release prediction:** There are two scenarios that may trigger a key being selected: a key is selected due to additional user contributions, or a key is selected due to noise addition without user contributions. The first scenario is naturally handled by the streaming system when processing a micro-batch with new records. The second scenario is handled by Algorithm 4.
When a micro-batch \(\Delta D_{tr_{j}}\) contains key \(k\), the streaming private key selection algorithm for key \(k\) is applied to the sub-stream \(D_{tr_{j}}\). In case \(k\) is not selected, we simulate the streaming private key selection algorithm's executions from \(tr_{j+1}\) to \(tr_{|\texttt{Tr}|}\), using the sub-stream \(D_{tr_{j}}\), and predict whether any future release is possible by adding leaf nodes with _zero_ counts. The predicted release time is \(tr_{p}\), and it is written to the system state store.

Figure 8: Some keys may not have records within a certain micro-batch

Figure 7: Parallel execution within the DP-SQLP operator
After making a release prediction, DP-SQLP will continue to process the next micro-batch. For a key \(k\) with predicted release time \(tr_{p}\), there are two cases:
1. \(k\) appears in another micro-batch \(\Delta D_{tr_{n}}\) before the predicted triggering timestamp (\(j<n<p\)). In this case, the prior prediction result is discarded. We perform the key selection algorithm for the micro-batch \(\Delta D_{tr_{n}}\). All the thresholding tests for micro-batches that do not have \(k\) between \(tr_{j+1}\) and \(tr_{n-1}\) have already been performed during the prediction phase for the prior micro-batch \(D_{tr_{j}}\). In addition, we make a new prediction for \(\Delta D_{tr_{n}}\).
2. \(k\) appears in another micro-batch \(\Delta D_{tr_{n}}\) after the predicted triggering timestamp (\(p<n\)). In this case, DP-SQLP has already loaded the system state for \(k\) at the predicted time \(tr_{p}\) and released the data from the data buffer. We start a new round of streaming key selection from micro-batch \(\Delta D_{tr_{n}}\).
Within these operations, some computations might be wasted (e.g., Case 1). However, when the key space is large, the reductions in I/O and memory cost outweigh the CPU overhead.
The prediction result is stored in the system state. The step to load the predicted result is shown in Figure 6. We also add a secondary index on the predicted timestamp to improve the state loading speed.
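A simplified sketch of the prediction step is shown below (our own illustration; the actual Algorithm 4 additionally persists the prediction and maintains the secondary index mentioned above). Because all noise in a key's DP-Binary tree is pre-sampled at initialization, simulating the remaining triggering times with zero-count leaves yields exactly the noisy counts the real execution would see if no further records for the key arrive.

```python
import copy

def predict_release_time(tree, current_index, num_triggers, mu, thresholds):
    """Return the first future trigger index at which the key would be released
    assuming no further user contributions, or None if no release is predicted.

    tree: the key's DP-Binary tree sketch (add / prefix_sum) after processing
          trigger `current_index`.
    thresholds: tau_{tr_i} for i = 1..num_triggers (time-dependent thresholds).
    """
    sim = copy.deepcopy(tree)             # simulate on a copy, real state unchanged
    for i in range(current_index + 1, num_triggers + 1):
        sim.add(i, 0.0)                   # zero-count leaf: only pre-sampled noise moves
        if sim.prefix_sum(i) > mu + thresholds[i - 1]:
            return i
    return None
```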
### Privacy Accounting for DP-SQLP
In DP-SQLP, the privacy costs occur in streaming key selection (Algorithm 2) and hierarchical perturbation (Algorithm 3). Because each user is allowed to contribute at most \(C\) records, we use a combination of composition and sensitivity scaling in privacy accounting.
* _Privacy accounting for streaming key selection:_ When executing Algorithm 2 in DP-SQLP, the _value_ added to each leaf node is the _unique user count_. Therefore, the per-user sensitivity for each DP-Tree is one. In addition, we restart Algorithm 2 once a key is selected, and the data accumulated in the data buffer is released immediately. As a result, each user may participate in at most \(C\) rounds of key selection. Given the \((\varepsilon,\delta)\) privacy budget for each round of Algorithm 2, the total privacy cost for streaming key selection is calculated using an empirically tighter variant of \(C\)-fold advanced composition [18].
* _Privacy accounting for hierarchical perturbation:_ For each user, in the worst case, all \(C\) contributions can go to the same node of a single DP-Tree. Therefore, we scale up the sensitivity corresponding to any single node in the tree to \(L_{1}=C\cdot L\), and ensure that each tree still satisfies \(\rho\)-zCDP. After that, we use the conversion from [34, Proposition 3] to translate the privacy cost from \(\rho\)-zCDP to an \((\varepsilon,\delta)\)-DP guarantee 8.
Footnote 8: The exact computation is from [https://github.com/IBM/discrete-gaussian-differential-privacy/blob/master/cdp2adp.py#1123](https://github.com/IBM/discrete-gaussian-differential-privacy/blob/master/cdp2adp.py#1123)
Finally, the privacy costs of key selection and hierarchical perturbation are combined via advanced composition.
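For orientation, the following sketch shows the shape of this accounting. It uses the textbook advanced composition bound (DP-SQLP uses an empirically tighter variant of [18], so the formula and numbers below are only illustrative) together with the zCDP-to-DP conversion stated in Section 2.1; all numeric inputs are hypothetical.

```python
import math

def advanced_composition(eps, delta, k, delta_prime):
    """Textbook k-fold advanced composition bound (illustrative only)."""
    eps_total = math.sqrt(2 * k * math.log(1 / delta_prime)) * eps \
                + k * eps * math.expm1(eps)
    return eps_total, k * delta + delta_prime

def zcdp_to_dp(rho, delta):
    """rho-zCDP implies (rho + 2*sqrt(rho*log(1/delta)), delta)-DP."""
    return rho + 2 * math.sqrt(rho * math.log(1 / delta)), delta

# Key selection: each user joins at most C rounds of Algorithm 2.
eps_sel, delta_sel = advanced_composition(eps=0.1, delta=1e-10, k=32,
                                          delta_prime=1e-10)
# Hierarchical perturbation: each per-key tree satisfies rho-zCDP with the
# sensitivity scaled to C * L; convert, then compose the two costs.
eps_agg, delta_agg = zcdp_to_dp(rho=0.05, delta=1e-10)
```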
## 5 Experiments
The experiments are performed using both synthetic and real-world data to demonstrate the data utility and scalability. The streaming DP mechanism is implemented in DP-SQLP operator, as described in section 4.
**Baselines:** We compare DP-SQLP with two baseline approaches for data utility.
* Repeated differential privacy query. Most existing DP mechanisms do not have the capacity to track user contributions across multiple queries. Therefore, when handling data streams, one common workaround is to repeatedly apply the one-shot DP query to the growing data set in order to get histogram updates. Thus, the overall privacy budget usage is the composition of all queries.
* Incremental differential privacy processing. The one-shot differential privacy algorithm is applied separately to each micro-batch, and we can get the final result by aggregating the outputs from each micro-batch. Compared with baseline 1, baseline 2 requires a similar global user contribution bounding system as DP-SQLP.
For both baseline 1 and baseline 2, the one-shot differential privacy mechanism is executed by Plume [3] with the Gaussian mechanism. Each one-shot differential privacy execution guarantees \((\varepsilon,\delta)\)-differential privacy. We also adopt the optimal composition theorem for DP [27] to maximize the baseline performance.
**Metrics:** The data utility is evaluated based on 4 metrics calculated between the ground truth histogram and the differentially private histogram.
* Number of retained keys, which reflects how many keys are discovered during key selection process. It is also known as the \(\ell_{0}\) norm.
* \(\ell_{\infty}\) norm, which reflects the worst case error \[\ell_{\infty}=\max_{k\in\texttt{key space}}(|\hat{M}_{k}-M_{k}|).\]
* \(\ell_{1}\) norm, which is an aggregated error \[\ell_{1}=\sum_{k\in\texttt{key space}}(|\hat{M}_{k}-M_{k}|).\]
* \(\ell_{2}\) norm, also known as the euclidean norm \[\ell_{2}=\sqrt{\sum_{k\in\texttt{key space}}(|\hat{M}_{k}-M_{k}|^{2})}.\]
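When both histograms are represented as key-to-value maps, these four metrics reduce to a few lines of NumPy; the sketch below is our own illustration (keys missing from the DP histogram are counted as zero, i.e., not retained).

```python
import numpy as np

def utility_metrics(true_hist: dict, dp_hist: dict) -> dict:
    """Compare a DP histogram to the ground truth over the union of keys."""
    keys = set(true_hist) | set(dp_hist)
    err = np.array([dp_hist.get(k, 0.0) - true_hist.get(k, 0.0) for k in keys])
    return {
        "retained_keys": len(dp_hist),
        "linf": float(np.max(np.abs(err))) if len(err) else 0.0,
        "l1": float(np.sum(np.abs(err))),
        "l2": float(np.sqrt(np.sum(err ** 2))),
    }
```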
We choose \(\varepsilon=6\) and \(\delta=10^{-9}\) as the overall privacy budget for all experiments. Within DP-SQLP, the privacy budget used by the aggregation column is \(\varepsilon_{m}=\varepsilon/2\), \(\delta_{m}=\delta/3\), and the one used by key selection is \(\varepsilon_{k}=\varepsilon/2\), \(\delta_{k}=\delta\times 2/3\). The parameter \(C\) is chosen based on the dataset's properties. In this experiment, we sampled 10% of one day's data and set \(C\) according to the \(99^{th}\) percentile of the per-user number of records. There is more discussion on choosing \(C\) in Section 5.4.
For both the synthetic data and the real-world data, we assume the dataset represents a data stream within one day. We also shuffle users' records so that they are randomly distributed within the day. In the experiments, the event-time window is also fixed to one day.
In addition to data utility, we also report the performance latency under various micro-batch sizes and number of workers.
### Synthetic Data
The synthetic data is generated to capture the long-tailed nature of real data. There are 10 million unique users in the synthetic dataset. Each user draws a number of contributed records from a distribution with range \([1,10^{5}]\) and mean 10 according to a Zipf-Mandelbrot distribution. The parameters9 are chosen so that roughly 15% of users contribute more than 10 records. The key in each record is also sampled from a set of size \(10^{6}\), following a Zipf-Mandelbrot distribution10. This implies that roughly 1/3 of the records fall into the first \(10^{3}\) keys.
Footnote 9: In Zipf-Mandelbrot, the sampling probability is proportional to \((x+q)^{-s}\), where \(q=26\), \(s=6.738\).
Footnote 10: \(q=1000\), \(s=1.4\).
The histogram query task we perform is a simple count query.
The per-record clamping limit is \(L=1\) since the aggregation function is _COUNT_. We set \(C=32\).
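For illustration, a scaled-down generator along these lines can be written as follows. This is our own reconstruction from the parameters in the footnotes; the exact generator used for the experiments may differ, and the user and key counts are reduced here so the snippet runs quickly.

```python
import numpy as np

def zipf_mandelbrot(rng, size, n_max, q, s):
    """Sample from P(x) proportional to (x + q)^(-s) on {1, ..., n_max}."""
    x = np.arange(1, n_max + 1)
    p = (x + q) ** (-s)
    return rng.choice(x, size=size, p=p / p.sum())

rng = np.random.default_rng(0)
num_users, num_keys = 10_000, 1_000      # scaled down from 10M users / 1M keys
records_per_user = zipf_mandelbrot(rng, num_users, 10**5, q=26, s=6.738)
records = [(key, 1, user)                # COUNT query: every record value is 1
           for user, n in enumerate(records_per_user)
           for key in zipf_mandelbrot(rng, n, num_keys, q=1000, s=1.4)]
```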
All measurements are averaged across 3 runs. The experiments are performed with 100 micro-batches and 1000 micro-batches. Within one day, 100 micro-batches correspond to roughly 15 minute triggering interval, and 1000 micro-batches correspond to roughly 1.5 min triggering interval.
The results are shown in Table 1. There are significant data utility improvements when comparing DP-SQLP with the two baselines. With 100 micro-batches, the number of retained keys is increased by **65** times; the worst case error is reduced by **92%**; the \(\ell_{1}\) norm is reduced by **65.1%** and the \(\ell_{2}\) norm is reduced by **88.4%**. The utility improvement is even more significant with 1000 micro-batches. The number of retained keys is increased from 0 to 22,280; the worst case error is reduced by **93.9%**; the \(\ell_{1}\) norm is reduced by **67.2%** and the \(\ell_{2}\) norm is reduced by **90.2%**.
| Metrics | DP-SQLP | Baseline 1 | Baseline 2 |
|---|---:|---:|---:|
| **100 Micro-batches** | | | |
| Keys | 28,338 | 435 | 191 |
| \(\ell_{\infty}\) Norm | 1,391 | 18,077 | 21,913 |
| \(\ell_{1}\) Norm | 17,741,225 | 50,835,203 | 58,551,587 |
| \(\ell_{2}\) Norm | 50,039 | 430,547 | 576,425 |
| **1000 Micro-batches** | | | |
| Keys | 22,280 | 0 | 0 |
| \(\ell_{\infty}\) Norm | 1,563 | 25,497 | 25,497 |
| \(\ell_{1}\) Norm | 19,395,721 | 59,052,062 | 59,052,062 |
| \(\ell_{2}\) Norm | 58,237 | 594,382 | 594,382 |

Table 1: Data utility measure with synthetic data (\(\varepsilon=6,\delta=10^{-9}\))

Another observation we have for DP-SQLP is its stability when the number of micro-batches increases. The utility of the one-shot differential privacy mechanisms in baseline 1 and baseline 2 degrades quickly, due to the privacy budget split (baseline 1) and the data stream split (baseline 2). This degradation is sometimes not linear. The number of retained keys in baselines 1 and 2 is reduced to 0 when the number of micro-batches grows from 100 to 1000. On the contrary, the utility degradation for DP-SQLP is not as significant. Indeed, when the number of micro-batches increases from 100 to 1000, for DP-SQLP, the number of retained keys is reduced by 21%, the worst case error is increased by 12%, the \(\ell_{1}\) norm is increased by 9%, and the \(\ell_{2}\) norm is increased by 16%.
In summary, DP-SQLP shows a significant utility improvement over one-shot differential privacy mechanisms when continuously generating DP histograms.
### Reddit Data
In the next step, we apply the same experiment to real-world data. Webis-ldr-17-corpus [41] is a popular dataset consisting of 3.8 million posts associated with 1.4 million users on the discussion website Reddit. Our task is to count the user participation per subreddit (specific interest group on Reddit).
We set \(C=17\) and the rest of the experiment settings are the same as for the synthetic data. All measurements are averaged across 3 runs.
The results are summarized in Table 2, and we have similar observations as in the synthetic data experiments. DP-SQLP demonstrates significant utility improvements in the number of retained keys, \(\ell_{1}\) norm and \(\ell_{2}\) norm, as well as the performance stability when the number of micro-batches grows from 100 to 1000.
### Execution Performance
The end-to-end latency of a record consists of framework latency and micro-batch execution latency. The former is determined by the streaming framework. The latter is a critical indicator for the system scalability. In this section, we report the execution latency for each micro-batch, under different micro-batch sizes and number of workers.
The results are shown in Figure 9. All measurements are averaged across 2 runs, with shaded regions representing standard error. The execution latency grows sub-linearly as the micro-batch size increases. For example, with 150 workers, the execution latency grows 1.7 times while the data size increases 5 times from 1 GB to 5 GB.
| Metrics | DP-SQLP | Baseline 1 | Baseline 2 |
|---|---:|---:|---:|
| **100 Micro-batches** | | | |
| Keys | 1,473 | 32 | 63 |
| \(\ell_{\infty}\) Norm | 102,250 | 267,147 | 103,546 |
| \(\ell_{1}\) Norm | 989,249 | 2,721,349 | 2,376,937 |
| \(\ell_{2}\) Norm | 127,721 | 322,739 | 156,472 |
| **1000 Micro-batches** | | | |
| Keys | 1,181 | 9 | 3 |
| \(\ell_{\infty}\) Norm | 102,218 | 266,391 | 108,124 |
| \(\ell_{1}\) Norm | 1,074,724 | 3,081,542 | 3,074,655 |
| \(\ell_{2}\) Norm | 127,830 | 341,557 | 242,482 |

Table 2: Data utility measure with the Reddit data (\(\varepsilon=6,\delta=10^{-9}\))
Figure 9 also demonstrates horizontal scalability by trading machine resources with latency. When the total number of workers increases from 150 to 600, the execution latency is reduced by 41%, 40%, and 39% respectively for 1, 2, and 5 GB micro-batches.
To further test the scalability in terms of the size of the key space, we generate another large synthetic dataset with 1 billion users. Each user draws the number of contributed records from a Zipf-Mandelbrot distribution\({}^{11}\), generating 6 billion records in total. The key in each record is sampled uniformly from 1 billion keys. When the micro-batch size is set to 1 GB and 5500 workers are used, the average execution latency is 306 seconds. DP-SQLP easily handles the large load without incurring any significant performance hit.
Footnote 11: \(q=26\), \(s=6.738\).
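For readers who want to reproduce a similar load test at smaller scale, the sketch below generates a synthetic stream with per-user record counts drawn from a Zipf-Mandelbrot law with the parameters of Footnote 11. The truncation of the count distribution and the small user/key counts are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_mandelbrot_counts(n_users, q=26.0, s=6.738, max_count=1000):
    """Draw one record count per user from a truncated Zipf-Mandelbrot law,
    P(k) proportional to 1 / (k + q)^s for k = 1..max_count."""
    k = np.arange(1, max_count + 1)
    p = 1.0 / (k + q) ** s
    p /= p.sum()
    return rng.choice(k, size=n_users, p=p)

# Small-scale stand-in for the load test (the paper uses 1e9 users and 1e9 keys).
n_users, n_keys = 10_000, 1_000_000
counts = zipf_mandelbrot_counts(n_users)
records = [(user, int(rng.integers(n_keys)))   # (user_id, key) pairs
           for user, c in enumerate(counts) for _ in range(int(c))]
print(f"{len(records)} records from {n_users} users")
```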
### Parameter Tuning
Tuning user contribution bounding is critical to achieving good data utility. If \(C\) is too small, many user records may be dropped due to user contribution bounding, which leads to a large histogram error. However, if \(C\) is very large, the noise and the key selection threshold are scaled up accordingly. Choosing the right \(C\) is therefore an optimization task.
In this section, we run DP-SQLP with synthetic data, varying \(C\) from 1 to 50. Figure 10 shows how the change of \(C\) affects the number of retained keys, the \(\ell_{1}\) norm, the \(\ell_{2}\) norm, and the \(\ell_{\infty}\) norm. The optimal value for \(C\) (naturally) varies under different metrics. For example, the optimal \(C\) for the \(\ell_{1}\) norm is around 25, whereas the optimal \(C\) for the \(\ell_{2}\) norm is around 30. In comparison, the optimal \(C\) for the number of retained keys is around 5. Therefore, the optimal value of \(C\) should be chosen according to the metric we care most about (e.g., the \(\ell_{2}\) norm). In real applications, we could use the \(P99\) percentile value or the DP \(P99\) percentile value from a data sample as the starting point and perform a few rounds of tests to search for the optimal point.
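A sweep over \(C\) can be prototyped in a few lines. The sketch below bounds each user's contribution by keeping the first \(C\) records in stream order (the selection rule is our assumption) and indicates where the DP pipeline and utility metrics would be plugged in.

```python
from collections import defaultdict

def bound_user_contributions(records, C):
    """Keep at most C records per user (here: the first C in stream order)."""
    seen = defaultdict(int)
    bounded = []
    for user, key in records:
        if seen[user] < C:
            seen[user] += 1
            bounded.append((user, key))
    return bounded

records = [("u1", "k1"), ("u1", "k2"), ("u1", "k3"), ("u2", "k1")]
for C in (1, 2, 3):
    bounded = bound_user_contributions(records, C)
    # In a real sweep, `bounded` would be fed through the DP histogram pipeline
    # and scored with utility metrics (see the sketch above) to pick C for the
    # metric of interest.
    print(C, len(bounded))
```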
## 6 Conclusion and Future Work
In this paper, we presented a streaming differentially private system (DP-SQLP) that is designed to continuously release DP histograms using existing distributed stream processing systems. With user contribution bounding, streaming key selection, and hierarchical perturbation, we provide a formal \((\varepsilon,\delta)\)-user level DP guarantee for arbitrary data streams. In addition to the algorithmic design, we implemented our system using a streaming framework similar to Spark streaming, Spanner database, and F1 query engine from Google. The experiments were conducted using both synthetic data and Reddit data. We compared DP-SQLP with two baselines that use one-shot differential privacy algorithms (and privacy accumulation was composed over
Figure 10: Metrics under different contribution limit (\(\varepsilon=6,\delta=10^{-9}\))
time). The experiment results demonstrated a significant performance improvement in terms of data utility.
There are three main ways in which our system can be further extended. First, due to the nature of the underlying streaming system (i.e., an input-driven stream), we cannot protect the event times of a user. It would be interesting to redesign stream processing systems to be compatible with DP even in the context of protecting the event timestamps. Second, our algorithms are primarily designed to provide a centralized DP guarantee, where the final outcome of the system is guaranteed to be DP. It is worth exploring DP streaming system designs that allow stronger privacy guarantees like pan-privacy [17]. Third, we bound the contribution of each user globally by \(C\). However, for higher fidelity, it is important to explore approaches that perform per-key contribution bounding. Naive approaches to this issue can get complicated because we are dealing with an input-driven stream.
## Acknowledgement
We would like to thank Olaf Bachmann, Wei Hong, Jason Peasgood, Algis Rudys, Daniel Simmons-Marengo and Yurii Sushko for the discussions and support to this project.
|
2309.13188 | Masked Discriminators for Content-Consistent Unpaired Image-to-Image
Translation | A common goal of unpaired image-to-image translation is to preserve content
consistency between source images and translated images while mimicking the
style of the target domain. Due to biases between the datasets of both domains,
many methods suffer from inconsistencies caused by the translation process.
Most approaches introduced to mitigate these inconsistencies do not constrain
the discriminator, leading to an even more ill-posed training setup. Moreover,
none of these approaches is designed for larger crop sizes. In this work, we
show that masking the inputs of a global discriminator for both domains with a
content-based mask is sufficient to reduce content inconsistencies
significantly. However, this strategy leads to artifacts that can be traced
back to the masking process. To reduce these artifacts, we introduce a local
discriminator that operates on pairs of small crops selected with a similarity
sampling strategy. Furthermore, we apply this sampling strategy to sample
global input crops from the source and target dataset. In addition, we propose
feature-attentive denormalization to selectively incorporate content-based
statistics into the generator stream. In our experiments, we show that our
method achieves state-of-the-art performance in photorealistic sim-to-real
translation and weather translation and also performs well in day-to-night
translation. Additionally, we propose the cKVD metric, which builds on the sKVD
metric and enables the examination of translation quality at the class or
category level. | Bonifaz Stuhr, Jürgen Brauer, Bernhard Schick, Jordi Gonzàlez | 2023-09-22T21:32:07Z | http://arxiv.org/abs/2309.13188v1 | # Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation
###### Abstract
A common goal of unpaired image-to-image translation is to preserve content consistency between source images and translated images while mimicking the style of the target domain. Due to biases between the datasets of both domains, many methods suffer from inconsistencies caused by the translation process. Most approaches introduced to mitigate these inconsistencies do not constrain the discriminator, leading to an even more ill-posed training setup. Moreover, none of these approaches is designed for larger crop sizes. In this work, we show that masking the inputs of a global discriminator for both domains with a content-based mask is sufficient to reduce content inconsistencies significantly. However, this strategy leads to artifacts that can be traced back to the masking process. To reduce these artifacts, we introduce a local discriminator that operates on pairs of small crops selected with a similarity sampling strategy. Furthermore, we apply this sampling strategy to sample global input crops from the source and target dataset. In addition, we propose feature-attentive denormalization to selectively incorporate content-based statistics into the generator stream. In our experiments, we show that our method achieves state-of-the-art performance in photorealistic sim-to-real translation and weather translation and also performs well in day-to-night translation. Additionally, we propose the cKVD metric, which builds on the sKVD metric and enables the examination of translation quality at the class or category level.
masked discriminators, feature-attentive denormalization, generative adversarial networks (GANs), content-consistent, unpaired image-to-image translation
## I Introduction
Unpaired image-to-image translation aims at transferring images from a source domain to a target domain when no paired examples are given. Recently, this field has attracted increasing interest and has advanced several use cases, such as photorealism [1, 2, 3], neural rendering [4], domain adaptation [5, 6], the translation of seasons or daytime [1, 7, 8], and artistic style transfer [9, 10, 11]. Current work has primarily focused on improving translation quality [8, 12], efficiency [13, 14], multi-modality [15, 16], and content consistency [2, 3]. Due to the ill-posed nature of the unpaired image-to-image translation task and biases between datasets, content consistency is difficult to achieve. To mitigate content inconsistencies, several methods have been proposed that constrain the generator of GANs [15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. However, only constraining the generator leads to an unfair setup, as biases in the datasets can be detected by the discriminator: The generator tries to achieve content consistency by avoiding biases in the output, while the discriminator is still able to detect biases between both datasets and, therefore, forces the generator to include these biases in the output, for example, through hallucinations. Constraining the discriminator [2, 25, 26] or improving the sampling of training pairs [2, 27] is currently underexplored, especially for content consistency on a global level, where the discriminator has a global view on larger image crops instead of a local view on small crops. In this work, we propose _masked conditional discriminators_, which operate on masked global crops of the inputs to mitigate content inconsistencies. We combine these discriminators with an efficient sampling strategy based on a pre-trained robust segmentation model to sample similar global crops. Furthermore, we argue that when transferring feature statistics from the content stream of the source image to the
Fig. 1: Results of our method.
generator stream, content-unrelated feature statistics from the content stream could affect image quality if the generator is unable to ignore this information since the output image should mimic the target domain. Therefore, we propose a _feature-attentive denormalization (FATE)_ block that extends feature-adaptive denormalization (FADE) [7] with an attention mechanism. This block allows the generator to selectively incorporate statistical features from the content stream into the generator stream. In our experiments, we find that our method achieves state-of-the-art performance on most of the benchmarks shown in Figure 1.
Our contributions can be summarized as follows:
* We propose an efficient sampling strategy that utilizes robust semantic segmentations to sample similar global crops. This reduces biases between both datasets induced by semantic class misalignment.
* We combine this strategy with masked conditional discriminators to achieve content consistency while maintaining a more global field of view.
* We extend our method with an unmasked local discriminator. This discriminator operates on local, partially class-aligned patches to minimize the underrepresentation of frequently masked classes and associated artifacts.
* We propose a feature-attentive denormalization (FATE) block, which selectively fuses statistical features from the content stream into the generator stream.
* We propose the class-specific Kernel VGG Distance (cKVD) that builds upon the semantically aligned Kernel VGG Distance (sKVD) [2] and uses robust segmentations to incorporate class-specific content inconsistencies in the perceptual image quality measurement.
* In our experiments, we show that our method achieves state-of-the-art performance on photo-realistic sim-to-real transfer and the translation of weather and performs well for daytime translation.
## II Related Work
**Unpaired image-to-image translation.** Following the success of GANs [28], the conditional GAN framework [29] enables image generation based on an input condition. Pix2Pix [30] uses images from a source domain as a condition for the generator and discriminator to translate them to a target domain. Since Pix2Pix relies on a regression loss between generated and target images, translation can only be performed between domains where paired images are available. To achieve unpaired image-to-image translation, methods like CycleGAN [17], UNIT [22], and MUNIT [15] utilize a second GAN to perform the translation in the opposite direction and impose a cycle-consistency constraint or weight-sharing constraint between both GANs. However, these methods require additional parameters for the second GAN, which are used to learn the unpaired translation and are omitted when inferring a one-sided translation. In works such as TSIT [7] and CUT [31], these additional parameters are completely omitted at training time by either utilizing a perceptual loss [32] between the input image of the generator and the image to be translated or by patchwise contrastive learning. Recently, additional techniques have achieved promising results, like pseudo-labeling [4] or a conditional discriminator based on segmentations created with a robust segmentation model for both domains [2]. Furthermore, there are recent efforts to adapt diffusion models to unpaired image-to-image translation [33, 34, 35].
**Content consistency in unpaired image-to-image translation.** Due to biases between unpaired datasets, the content of translated samples can not be trivially preserved [2]. There are ongoing efforts to preserve the content of an image when it is translated to another domain by improving various parts of the training pipeline: Several consistency constraints have been proposed for the generator, which operate directly on the translated image [16, 17, 18], on a transformation of the translated image [19, 20, 24, 36, 37], or on distributions of multi-modal translated images [21]. The use of a perceptual loss [32] or LPIPS loss [38] between input images and translated images, as in [7] and [2], can also be considered a consistency constraint between transformed images. In [39] content consistency is enforced with self-supervised in-domain and cross-domain patch position prediction. There are works that enforce consistency by constraining the latent space of the generator [15, 22, 23]. Semantic scene inconsistencies can be mitigated with a separate segmentation model [16, 24]. To avoid inconsistency arising from style transfer, features from the generator stream are masked before AdaIN [9, 40]. Another work exploits small perturbations in the input feature space to improve semantic robustness [3]. However, if the datasets of both domains are unbalanced, discriminators can use dataset biases as learning shortcuts, which leads to content inconsistencies. Therefore, only constraining the generator for content consistency still results in an ill-posed unpaired image-to-image translation setup. Constraining discriminators to achieve content consistency is currently underexplored, but recent work has proposed promising directions. There are semantic-aware discriminator architectures [2, 4, 25, 41] that enforce discriminators to base their predictions on semantic classes, or VGG discriminators [2], which additionally operate on abstract features of a frozen VGG model instead of the input images. Training discriminators with small patches [2] is another way to improve content consistency. To mitigate dataset biases during training for the whole model, sampling strategies can be applied to sample similar patches from both domains [2, 27]. Furthermore, in [26], a model is trained to generate a hyper-vector mapping between source and target images with an adversarial loss and a cyclic loss for content consistency. In contrast, our work utilizes a robust semantic mask to mask global discriminators with a large field of view, which provide the generator with the gradients of the unmasked regions. This leads to a content-consistent translation while preserving the global context. We combine this discriminator with an efficient sampling method that uses robust semantic segmentations to sample similar crops from both domains.
**Attention in image-to-image translation.** Previous work has utilized attention for different parts of the GAN framework. A common technique is to create attention mechanisms that allow the generator or discriminator to focus on important regions of the input [42, 43, 44, 45, 11] or to capture the relationship between regions of the input(s) [46, 47, 10]. Other works guide a pixel loss with uncertainty maps computed from attention maps [48], exploit correlations between channel maps with scale-wise channel attention [46], disentangle content and style with diagonal attention [49], or merge features from multiple sources with an attentional block before integrating them into the generator stream [50]. In [51], an attention-based discriminator is introduced to guide the training of the generator with attention maps. Furthermore, ViTs [52] are adapted for unpaired image-to-image translation [53, 54], and the computational complexity of their self-attention mechanism is reduced for high-resolution translation [54]. In contrast, our work proposes an attention mechanism to selectively integrate statistics from the content stream of the source image into the generator stream. This allows the model to focus on statistical features from the content stream that are useful for the target domain.
## III Method
We propose an end-to-end framework for unpaired image-to-image translation that transfers an image \(I_{a}\in\mathbb{R}^{3\times h\times w}\) from a source domain \(a\) to an image \(F_{b}\in\mathbb{R}^{3\times h\times w}\) from a target domain \(b\). Our goal is to design a method for content-consistent translations that utilizes a simple masking strategy for the global crops seen by the discriminators. We achieve this by combining an efficient segmentation-based sampling method that samples large crops from the input image with a masked discriminator that operates on these global crops. This is in contrast to EPE [2], which achieves content-consistent translation at the local level by sampling small, similar image crops from both domains. To further improve image quality, we use a local discriminator that operates on a batch of small image patches sampled from the global input crops utilizing our sampling method. An overview of our method is shown in Figure 2. Furthermore, we propose a feature-attentive denormalization (FATE) block that extends feature-adaptive denormalization (FADE) [7] with an attention mechanism, allowing the generator to selectively incorporate statistical features from the content stream of the source image into the generator stream.
### _Content-based Similarity Sampling_
To minimize the bias between both datasets in the early stage of our method, we sample similar image crops with an efficient sampling procedure. This procedure uses the one-hot encoded semantic segmentations \(C_{a}\in\mathbb{R}^{d\times h\times w}\) and \(C_{b}\in\mathbb{R}^{d\times h\times w}\) of both domains, where \(d\) is the channel dimension of the one-hot encoding. In our case, these segmentations are created with the robust pre-trained MSeg model [55]. First, a mask \(M_{ab}\in\mathbb{R}^{1\times h\times w}\) is computed from the segmentations:
\[M_{ab}=\max_{d}(C_{a}\circ C_{b}), \tag{1}\]
where \(\circ\) denotes the Hadamard product. We can now sample semantically aligned image crops \(i_{a}\) and \(i_{b}\) from the images \(I_{a}\) and \(I_{b}\) with the crop \(m_{ab}\) from mask \(M_{ab}\). Thereby, we calculate the percentage of overlap of semantic classes between both image crops as follows:
\[\mathcal{P}_{match}(i_{a})=\{i_{b}\mid\mathrm{mean}(m_{ab})>t\}, \tag{2}\]
where \(t\) is the similarity sampling threshold. In our case, we sample crops where more than \(50\%\) of the semantic classes align (\(t>0.5\)). We use this procedure to sample crops \(c_{a}\)
Fig. 2: **Method overview. In our method, similar image crops from both domains (\(i_{a}\), \(i_{b}\)) and their corresponding conditions (\(c_{a}\), \(c_{b}\), \(z_{a}\)) are selected via a sampling procedure. In this sampling procedure, a mask \(M_{ab}\) is created from the conditions \(C_{a}\) and \(C_{b}\). This mask is used to sample crops from both datasets for which the semantic classes align by at least 50%. The cropped mask \(m_{ab}\) is also used to mask the generated fake image \(f_{b}\), the real images \(i_{b}\), and the corresponding conditions for the global conditional discriminators. Through the mask, these discriminators can only see the parts of the crop where the semantic classes align. To further improve image quality, a local discriminator is introduced that works on a batch of small patches selected from the crop using our sampling technique. This discriminator is not masked and works on patches where the semantic classes do not fully align.**
\(c_{b}\), and \(z_{a}\) from the discriminator conditions \(C_{a}\), \(C_{b}\), and the generator condition \(Z_{a}\) as well. The cropped mask \(m_{ab}\) is also used for our masked conditional discriminator.
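For illustration, Eq. (1) and Eq. (2) translate into a few lines of PyTorch. The sketch below assumes that images and one-hot segmentations of both domains have already been resized to a common resolution; the rejection-sampling loop with a bounded number of proposals is our own simplification, not necessarily how the released implementation selects crops.

```python
import torch

def overlap_mask(seg_a, seg_b):
    """Eq. (1): M_ab = max_d(C_a * C_b) for one-hot segmentations of shape (d, h, w)."""
    return (seg_a * seg_b).amax(dim=0, keepdim=True)

def sample_similar_crop(img_a, img_b, seg_a, seg_b, crop=352, t=0.5, max_tries=100):
    """Propose random crop positions until Eq. (2) is satisfied, i.e. until the
    semantic classes of both domains align on more than t of the pixels."""
    m = overlap_mask(seg_a, seg_b)
    _, h, w = m.shape
    for _ in range(max_tries):
        y = int(torch.randint(0, h - crop + 1, (1,)))
        x = int(torch.randint(0, w - crop + 1, (1,)))
        m_ab = m[:, y:y + crop, x:x + crop]
        if m_ab.float().mean() > t:
            i_a = img_a[:, y:y + crop, x:x + crop]
            i_b = img_b[:, y:y + crop, x:x + crop]
            return i_a, i_b, m_ab
    return None  # no sufficiently aligned crop found for this image pair
```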
### _Content-based Discriminator Masking_
To train a discriminator with a global field of view that facilitates the usage of global properties of the scene, while simultaneously maintaining content consistency, we mask the discriminator input from both domains with a content-based mask \(m_{ab}\). This mask erases all pixels from the discriminator input where the semantic classes do not align. This removes the bias between both datasets caused by the underlying semantic class distribution of the two domains without directly restricting the generator. The objective function of a conditional GAN with a masked discriminator that transfers image crops \(i_{a}\) to domain \(b\) can be then defined as follows:
\[\mathcal{L}_{madv} =\ \mathbb{E}_{i_{b},c_{b},m_{ab}}[\log D(i_{b}\circ m_{ab}|c_{b} \circ m_{ab})] \tag{3}\] \[+\ \mathbb{E}_{i_{a},z_{a},c_{a},m_{ab}}[\log(1-D(G(i_{a}|z_{a}) \circ m_{ab}|c_{a}\circ m_{ab}))].\]
To ensure that the discriminator does not use the segmentation maps as learning shortcuts, we follow [2] and create the segmentations of both datasets using a robust segmentation model such as MSeg [55]. With this setting, we are able to train discriminators with large crop sizes with significantly reduced hallucinations in the translated image.
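Eq. (3) is written in the saturating log formulation; in practice we train with a hinge loss (Section III-E). The following sketch shows a masked conditional hinge loss; the channel-wise concatenation of image and condition is an assumption about the conditioning mechanism, not a description of our exact discriminator interface.

```python
import torch
import torch.nn.functional as F

def masked_hinge_d_loss(D, real_b, fake_b, cond_b, cond_a, mask):
    """Hinge-style version of Eq. (3). real_b/fake_b: (B, 3, H, W) crops,
    cond_b/cond_a: (B, d, H, W) robust segmentations, mask: (B, 1, H, W) m_ab.
    Pixels where the classes of both domains disagree are erased before D sees them."""
    real_in = torch.cat([real_b, cond_b], dim=1) * mask
    fake_in = torch.cat([fake_b.detach(), cond_a], dim=1) * mask
    return F.relu(1.0 - D(real_in)).mean() + F.relu(1.0 + D(fake_in)).mean()

def masked_g_loss(D, fake_b, cond_a, mask):
    """Generator side of the masked adversarial objective."""
    return -D(torch.cat([fake_b, cond_a], dim=1) * mask).mean()
```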
### _Local Discriminator_
Masking the input of the discriminator may lead to the underrepresentation of some semantic classes. Therefore, we additionally train a local discriminator that operates on a batch of small patches sampled from the global crop. Our local discriminator is not masked but only sees patches where a certain amount of the semantic classes align. In our case, we sample patches with 1/8th the size of the global input crop where more than \(50\%\) of the semantic classes align. We use our sampling procedure from Section III-A to sample these patches. Using small, partially aligned patches ensures that semantic classes are less underrepresented while maintaining content consistency.
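The local discriminator reuses the sampling idea from Section III-A within a single global crop, as in the sketch below. The batch layout, the number of patches per crop, and the bounded number of proposals are assumptions made for illustration.

```python
import torch

def sample_local_patches(fake_b, real_b, cond_a, cond_b, m_ab, n_patches=8, t=0.5):
    """Sample small patches (1/8 of the global crop side) whose semantic classes
    align on more than t of the pixels; they feed the unmasked local discriminator."""
    h, w = m_ab.shape[-2:]
    p = h // 8
    patches, tries = [], 0
    while len(patches) < n_patches and tries < 100 * n_patches:
        tries += 1
        y = int(torch.randint(0, h - p + 1, (1,)))
        x = int(torch.randint(0, w - p + 1, (1,)))
        if m_ab[..., y:y + p, x:x + p].float().mean() > t:
            sl = (..., slice(y, y + p), slice(x, x + p))
            patches.append((fake_b[sl], real_b[sl], cond_a[sl], cond_b[sl]))
    return patches
```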
### _Feature-attentive Denormalization (FATE)_
Spatially adaptive denormalization (SPADE) [56] fuses resized semantic segmentation maps as content into the generator stream. Feature-adaptive denormalization (FADE) [7] generalizes SPADE to features learned through a content stream. As shown in Figure 3, the normalized features \(N(h)\) of the generator are modulated with the features \(f\) of the content stream using the learned functions \(\gamma\) and \(\beta\) as follows:
\[\mathrm{FADE}(h,f)=N(h)\circ\gamma(f)+\beta(f), \tag{4}\]
where \(\gamma\) and \(\beta\) are one-layer convolutions. This denormalization is applied in several layers of the generator. However, we argue that denormalization with content features is not always appropriate for transferring images to another domain because, as shown in [57, 58, 9], image feature statistics contain not only content information but also style information. When transferring feature statistics from the content stream of the source image to the generator stream, style information from the source image could affect the final image quality if the generator cannot ignore this information since the output image should mimic the style of the target domain. Therefore, we propose an additional attention mechanism to selectively incorporate statistics from the content stream into the generator stream. This allows the model to fuse only those statistical features from the source image into the generator stream that are useful for the target domain. As shown in Figure 3, this attention mechanism relies on the features of the content stream and the features of the generator stream and attends to the statistics \(\gamma\) and \(\beta\). With this attention mechanism, we can extend FADE to feature-attentive denormalization (FATE) as follows:
\[\mathrm{FATE}(h,f)=N(h)\circ A(h,f)\circ\gamma(f)+A(h,f)\circ\beta(f), \tag{5}\]
where \(A\) is the attention mechanism and \(A(h,f)\) is the attention map for the statistics. We use a lightweight two-layer CNN with sigmoid activation in the last layer as the attention mechanism. More details can be found in Appendix A.
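A compact PyTorch sketch of the block is given below; setting the attention map to one recovers FADE (Eq. (4)). The kernel sizes of the attention convolutions, the parameter-free normalization, and the assumption that \(h\) and \(f\) share the same spatial resolution are ours; the exact layer configuration is given in Appendix A.

```python
import torch
import torch.nn as nn

class FATE(nn.Module):
    """Feature-attentive denormalization, Eq. (5)."""

    def __init__(self, gen_channels, content_channels):
        super().__init__()
        self.norm = nn.BatchNorm2d(gen_channels, affine=False)        # N(h)
        self.gamma = nn.Conv2d(content_channels, gen_channels, 3, padding=1)
        self.beta = nn.Conv2d(content_channels, gen_channels, 3, padding=1)
        # Lightweight two-layer attention CNN with a sigmoid in the last layer.
        self.attn = nn.Sequential(
            nn.Conv2d(gen_channels + content_channels, gen_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(gen_channels, gen_channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, h, f):
        a = self.attn(torch.cat([h, f], dim=1))                        # A(h, f)
        return self.norm(h) * a * self.gamma(f) + a * self.beta(f)
```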
### _Training Objective_
Our training objective consists of three losses: a global masked adversarial loss \(\mathcal{L}^{global}_{madv}\), a local adversarial loss \(\mathcal{L}^{local}_{adv}\), and the perceptual loss \(\mathcal{L}_{perc}\) used in [7]. We define the final training objective as follows:
\[\mathcal{L}=\lambda^{global}_{madv}\mathcal{L}^{global}_{madv}+\lambda^{local} _{adv}\mathcal{L}^{local}_{adv}+\lambda_{perc}\mathcal{L}_{perc}, \tag{6}\]
where we use a hinge loss to formulate the adversarial losses, and \(\lambda^{global}_{madv}\), \(\lambda^{local}_{adv}\), and \(\lambda_{perc}\) are the corresponding loss weights.
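Put together, the generator side of Eq. (6) can be sketched as follows. The argument layout and the way the local discriminator consumes patch batches are illustrative assumptions; any perceptual loss between source and translated image can be plugged in for \(\mathcal{L}_{perc}\).

```python
import torch

def generator_objective(d_global, d_local, fake_b, i_a, cond_a, m_ab,
                        fake_patches, patch_conds, perceptual,
                        w_global=1.0, w_local=1.0, w_perc=1.0):
    """Hinge formulation of Eq. (6), generator side."""
    adv_global = -d_global(torch.cat([fake_b, cond_a], dim=1) * m_ab).mean()
    adv_local = -d_local(torch.cat([fake_patches, patch_conds], dim=1)).mean()
    perc = perceptual(fake_b, i_a)
    return w_global * adv_global + w_local * adv_local + w_perc * perc
```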
## IV Experiments
### _Experimental Settings_
**Implementation details.** Our method is implemented in PyTorch 1.10.0 and trained on an A100 GPU (40 GB) with batch size \(1\). For training, we initialize all weights with the Xavier normal distribution [60] with a gain of \(0.02\) and use an Adam optimizer [61] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The initial learning rates of the generator and discriminators are set to \(0.0001\) and halved every \(d_{e}\) epochs. Learning rate
Fig. 3: **FADE and FATE.**
decay is stopped after reaching a learning rate of \(0.0000125\). We formulate our adversarial objective with a hinge loss [62] and weight the individual parts of our loss function as follows: \(\lambda_{madv}^{global}=1.0\), \(\lambda_{adv}^{local}=1.0\), \(\lambda_{perc}=1.0\). In addition, we use a gradient penalty on target images [63, 64] with \(\lambda_{rp}=0.03\). The images of both domains are resized and cropped to the same size and randomly flipped before the sampling strategy is applied. In our experiments, we show that we achieve the best performance by cropping global patches of size 352\(\times\)352. We crop local patches with 1/8th the size of the global crop (i.e., 44\(\times\)44). The global discriminators are used on two scales. Crops are scaled down by a factor of two for the second scale. We train all our models for \(\sim 400\)K iterations. Training a model takes 4-8 days, depending on the dataset, model, and crop size. We report all results as an average across five different runs. We refer to Appendix A for more details regarding the training and model. Our implementation is publicly available at [https://github.com/BonifazStuhr/feamgan](https://github.com/BonifazStuhr/feamgan).
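The optimizer setup described above can be reproduced with a standard PyTorch scheduler; the following sketch halves the learning rate every \(d_{e}\) epochs and clamps it at the stated floor. It mirrors the description in this section, not the repository code.

```python
import torch

def make_optimizer_and_scheduler(params, decay_every_epochs):
    """Adam with beta1=0.9, beta2=0.999; lr starts at 1e-4, halves every
    `decay_every_epochs` epochs, and is clamped at 1.25e-5."""
    opt = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))
    lr_lambda = lambda epoch: max(0.5 ** (epoch // decay_every_epochs), 1.25e-5 / 1e-4)
    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```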
**Memory usage.** Our best model requires \(\sim\)25 GB of VRAM at training time and performs inference using \(\sim\)12 GB for an image of size 957\(\times\)526. Our small model, with a slight performance decrease, runs on consumer graphic cards with \(\sim\)9 GB of VRAM at training time and performs inference using \(\sim\)8 GB for an image of size 957\(\times\)526.
**Datasets.** We conduct experiments on four translation tasks across four datasets. For all datasets, we compute semantic segmentations with MSeg [55], which we use as a condition for our discriminator and to calculate the discriminator masks.
(1) _PFD_[65] consists of images of realistic virtual world gameplay. Each frame is annotated with pixel-wise semantic labels, which we use as additional input for our generator. We use the same subset as [2] to compare with recent work.
(2) _Viper_[66] consists of sequences of realistic virtual world gameplay. Each frame is annotated with different labels, where we use the pixel-wise semantic segmentations as additional input for our generator. Since Cityscapes does not contain night sequences, we remove them from the dataset.
(3) _Cityscapes_[67] consists of sequences of real street scenes from 50 different German cities. We use the sequences of the entire training set to train our models.
We use datasets (1-3) for the sim-to-real translation tasks _PFD\(\rightarrow\)Cityscapes_ and _Viper\(\rightarrow\)Cityscapes_.
(4) _BDD100K_[68] is a large-scale driving dataset. We use subsets of the training and validation data for the following translation tasks: _Day\(\rightarrow\)Night_, _Clear\(\rightarrow\)Snowy_.
**Compared methods.** We compare our work with the following methods.
* Color Transfer (CT) [69] performs color correction by transferring statistical features in \(l\alpha\beta\) color space from the target to the source image.
* MUNIT [15] achieves multimodal translation by recombining the content code of an image with a style code sampled from the style space of the target domain. It is an extension of CycleGAN [17] and UNIT [22].
* CUT [31] uses a patchwise contrastive loss to achieve one-sided unsupervised image-to-image translation.
* TSIT [7] achieves one-sided translation by fusing features from the content stream into the generator on multiple scales using FADE and utilizing a perceptual loss between the translated and source images.
* QS-Attn [47] builds upon CUT [31] with an attention module that selects significant anchors for the contrastive loss instead of features from random locations of the image.
* EPE [2] relies on a variety of gbuffers as input. Techniques such as similarity cropping, utilizing segmentations for both domains generated by a robust segmentation model as input to the conditional discriminators, and small patch training are used to achieve content consistency.
Since EPE [2] provides inferred images of size 957\(\times\)526 for the _PFD\(\rightarrow\)Cityscapes_ task, comparisons are performed on this resolution. For the _Viper\(\rightarrow\)Cityscapes_, _Day\(\rightarrow\)Night_, and _Clear\(\rightarrow\)Snowy_ tasks, we train the models using their official implementations. Furthermore, we retrain models as additional baselines for the _PFD\(\rightarrow\)Cityscapes_ task.
**Evaluation metrics.** Following prior work [2], we use the Frechet Inception Distance (FID) [70], the Kernel Inception Distance (KID) [71], and the semantically aligned Kernel VGG Distance (sKVD) [2] to evaluate image translation quality quantitatively. The sKVD metric was introduced in [2] and improved over previous metrics for mismatched layouts in source and target data. In addition, we propose the class-specific Kernel VGG Distance (cKVD), where a robust segmentation model is used before the sKVD calculation to mask input crops by class (or category). Thereby, for each given class, all source and target image crops are filtered using their segmentations by erasing the pixels of all other classes. We select crops where more than \(5\)% of the pixels belong to the respective class. Then, the sKVD is calculated class-wise on the filtered crops. Afterward, we can report the cKVD as an average over all classes or separately for each class to achieve a more fine-grained measurement. We follow [2] and use a crop size of \(1/8\) and sample source and target crop pairs with a similarity threshold of \(0.5\) between unmasked source and target segmentation crops. More information on the classes used in the cKVD metric can be found in Table IV of Appendix A. For the KID, sKVD, and cKVD metrics, we multiply the measurements by \(1000\) to improve the readability of results.
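The class-wise masking step of the cKVD metric can be sketched as follows. The sketch assumes integer label maps (rather than one-hot segmentations) and takes an existing sKVD implementation as a callable; the preceding crop sampling with the \(0.5\) similarity threshold is identical to the sKVD setup.

```python
import torch

def class_masked_crops(crops, segs, class_id, min_frac=0.05):
    """Keep crops where more than 5% of the pixels belong to `class_id` and erase
    the pixels of all other classes. crops: (N, 3, h, w), segs: (N, h, w)."""
    masked = []
    for crop, seg in zip(crops, segs):
        m = (seg == class_id).float()
        if m.mean() > min_frac:
            masked.append(crop * m.unsqueeze(0))
    return torch.stack(masked) if masked else None

def ckvd(source_crops, source_segs, target_crops, target_segs, class_ids, skvd_fn):
    """Per-class sKVD on class-masked crops; report per class or as an average."""
    scores = {}
    for c in class_ids:
        s = class_masked_crops(source_crops, source_segs, c)
        t = class_masked_crops(target_crops, target_segs, c)
        if s is not None and t is not None:
            scores[c] = skvd_fn(s, t)
    return scores
```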
### _Comparison to the State of the Art_
We compare our models quantitatively and qualitatively with different baselines. First, we compare our results with EPE and the baselines provided by EPE [2]. Then, we train our own baselines on the four translation tasks for further comparison.
**Comparison to EPE.** A set of inferred images is provided for EPE and each of the baselines [2]. Therefore, we train our models on the same training set and use the inferred images from our best models for this comparison. We select our best models based on scores of various visual metrics and visual inspections of translated images. As shown in Figure 4 a) and b), our model relies solely on segmentation maps as additional input compared to EPE, which uses a variety of gbuffers. In addition, our model is trained with significantly fewer steps (\(\sim 400\)K iterations) compared to EPE and the baselines (1M iterations). As shown in Table I, our model outperforms the baselines and EPE in all commonly used metrics (FID and KID) and the sKVD metric. More surprisingly, our small model, which can be trained on consumer GPUs, outperforms all baselines and EPE as well.
However, our cKVD metric shows that our models have difficulty with the person and sky classes. Therefore, the average cKVD values are high and become low when we remove both classes from the average calculation (AVG\({}_{sp}\)). A possible reason for the weaker performance on the person class is our masking procedure. Since the masking procedure requires overlapping samples in both domains, the person class is not seen frequently during training. This can lead to inconsistencies (a glow) around the person class, as seen in Figure 11 of our limitations. The masking procedure also leads to a drop in performance in the sky class, as seen in Table III of our ablation study. As shown in the first row of Figure 4 and the results of Figures 18 and 19 of Appendix A, our model translates larger structures, such as lane markings, more consistently, but fails to preserve some in-class characteristics from the source dataset. This is evident, for example, in the structure of translated streets and the corresponding cKVD value (road). As shown in the second row and Appendix A, EPE achieves visually superior modeling of the reflective properties of materials (e.g., the car) but suffers from inconsistencies (erased objects) regarding the vegetation, which can be seen in the palm trees and the corresponding cKVD value (vegetation). The superior modeling of reflective properties can be attributed to the availability of gbuffers (i.a., glossiness) in EPE's input.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{FID} & \multirow{2}{*}{KID} & \multirow{2}{*}{sKVD} & \multicolumn{6}{c}{cKVD} \\ \cline{3-14} & & & AVG & & AVG\({}_{sp}\) & sky & ground & road & terrain & vegetation & building & roadside-obj. & person & vehicle & rest \\ \hline ColorTransfer & 84.34 & 88.17 & 16.65 & 36.01 & 33.12 & 32.40 & **12.97** & 16.13 & 20.94 & 19.24 & 29.92 & 74.79 & 62.78 & 41.79 & **49.16** \\ MUNIT & 45.00 & 35.05 & 16.51 & 38.57 & 34.81 & 29.80 & 16.93 & 17.62 & 29.52 & 19.29 & **24.28** & 79.14 & 77.34 & 40.13 & 51.61 \\ CUT & 47.71 & 42.01 & 18.03 & 35.31 & 33.26 & **25.96** & 15.32 & 17.87 & **20.09** & 22.72 & 25.00 & 74.02 & **60.99** & 41.71 & 49.37 \\ EPE & 44.06 & 33.66 & 13.87 & **35.22** & **30.21** & 27.14 & 13.54 & **13.56** & 24.77 & 20.77 & 26.75 & **50.58** & 83.34 & 41.29 & 50.45 \\ \hline FeaMGAN-S (ours) & **43.27** & **32.59** & **12.98** & 40.23 & 32.69 & 38.10 & 13.29 & 15.34 & 26.29 & 20.17 & 27.32 & 61.57 & 102.65 & 42.83 & 54.73 \\ FeaMGAN (ours) & **40.32** & **28.59** & **12.94** & 40.02 & 31.78 & 46.70 & 13.72 & 15.60 & 23.23 & **17.69** & 25.57 & 66.65 & 99.24 & **39.38** & 52.40 \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Quantitative comparison to the baselines provided by EPE.** We calculate all metrics on the provided inferred images of EPE and its baselines [2].
Fig. 4: **Qualitative comparison to EPE.** We compare our method with the provided inferred images of EPE [2].
By surpassing EPE in all commonly used quantitative metrics while maintaining content consistency, we are able to show that our model improves overall quantitative translation performance. However, our method has specific drawbacks that we discussed with the help of the cKVD metric and visual comparisons.
**Comparisons to retrained baselines.** We find that retraining the baselines with their original training setup for the PFD\(\rightarrow\)Cityscapes task significantly improves their performance on commonly used metrics compared to the baselines provided by EPE, as can be seen in Table II. However, as shown in Figure 5 and the random results of Figure 20 of Appendix A, content-consistency problems remain. This indicates again that simply relying on commonly used metrics does not provide a complete picture if content consistency is taken into account. When qualitatively comparing our model to the baselines for the PFD\(\rightarrow\)Cityscapes and Viper\(\rightarrow\)Cityscapes tasks in Figure 5, we observe that our method significantly reduces content inconsistencies. However, a limitation of our masking strategy is class boundary artifacts, which are particularly evident in the Day\(\rightarrow\)Night translation task (Figure 11). Since masking allows our method to focus on specific classes, we achieve state-of-the-art performance for the Clear\(\rightarrow\)Snowy translation task.
### _Ablation Study_
**Effectiveness of masked discriminator.** As shown in Figure 6 and the random samples in Figure 22 of Appendix A, our masking strategy for the discriminator positively impacts content consistency. Without masking, inconsistencies occur that correlate with biases between the class distributions of the source and target domains. As shown in [2], the distributions of certain classes in the spatial image dimension vary greatly between the PFD dataset and the Cityscapes dataset. For example, trees in Cityscapes appear more frequently in the top half of the image, resulting in hallucinated trees when the images are translated without accounting for biases. In the
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{PFD\(\rightarrow\)Cityscapes} & \multicolumn{3}{c}{Viper\(\rightarrow\)Cityscapes} & \multicolumn{3}{c}{Day\(\rightarrow\)Night} & \multicolumn{3}{c}{Clear\(\rightarrow\)Snowy} \\ \cline{2-13} & FID & KID & sKVD & cKVD & FID & KID & sKVD & cKVD & FID & KID & kKVD & cKVD & FID & KID & sKVD & cKVD \\ \hline Color Transfer & 91.01 & 94.82 & 18.16 & 50.87 & 89.30 & 83.51 & 20.20 & 51.23 & 125.90 & 140.60 & 32.58 & 56.52 & 46.85 & 19.44 & 14.91 & 42.89 \\ MUNIT & 40.36 & 29.98 & 14.99 & 43.24 & 47.96 & 30.35 & 14.14 & 59.62 & 42.53 & 31.83 & 15.02 & 50.83 & **44.74** & 17.48 & 11.65 & 48.10 \\ CUT & 49.55 & 44.25 & 16.85 & **37.53** & 60.35 & 49.48 & 16.80 & 51.02 & **34.36** & **20.54** & 10.16 & 53.55 & 46.03 & 15.70 & 14.71 & 43.91 \\ TSIT & **38.70** & **28.70** & **10.80** & 42.35 & **45.26** & **28.40** & **8.47** & 50.03 & 54.96 & 33.21 & 12.71 & 57.91 & 79.28 & 40.02 & 12.97 & 41.52 \\ QS-Attn & 49.41 & 42.87 & 14.01 & 38.57 & 55.62 & 39.31 & 12.99 & 63.22 & 46.67 & 21.47 & **7.58** & 52.02 & 60.91 & 18.85 & 14.19 & 44.00 \\ \hline FeaMGAN-S (ours) & 45.16 & 34.93 & 13.87 & 40.50 & 52.79 & 35.92 & 14.34 & **45.38** & 70.40 & 51.30 & 14.68 & **46.66** & 57.93 & 16.24 & 11.88 & **38.28** \\ FeaMGAN (ours) & 46.12 & 36.56 & 13.69 & 41.19 & 51.56 & 34.63 & 14.01 & **47.21** & 66.39 & 46.96 & 13.14 & **46.88** & 56.78 & **14.77** & **11.36** & 41.72 \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Quantitative comparison to prior work.** Models were trained using their official implementations. Results are reported as the average across five runs. We refer to Table VII of Appendix A for an extended version of this table.
Fig. 5: **Qualitative comparison to prior work.** Models were trained using their official implementations. Randomly sampled results can be found in Figure 20 of Appendix A.
first and second row of Figure 6, we show that our masking strategy (Full) prevents these inconsistencies in contrast to our model trained without masking (w/o Dis. Mask). However, as shown in Table III, this comes with a quantitative tradeoff in performance on commonly used metrics.
**Effectiveness of local discriminator.** We compare our model trained with a local discriminator (Full) to the model trained without a local discriminator (w/o Local Dis. 352x352). As shown in Table III, the local discriminator leads to an increase in quantitative performance. Furthermore, we show the qualitative effects of the local discriminator in Figure 6, where we observe a decrease in glowing objects and a significant decrease in erased objects in the translation. An example of a glowing object is the palm tree in row two of Figure 6. An example of erased objects is the missing houses in the background of the images from row three. In addition, small inconsistencies near object boundaries are reduced, as shown by the randomly sampled results in Figure 22 of Appendix A (e.g., the wheels of the car in rows one and three). Overall, we can conclude that local discriminators can reduce local inconsistencies, which might arise from the robust but not flawless segmentation maps used for masking.
**Effectiveness of segmentation-based sampling.** We compare our segmentation-based sampling method with random sampling and sampling based on VGG features. For the sampling strategy based on VGG features, we follow EPE [2] to calculate scores for 352\(\times\)352 crops of the input images. Crops with a similarity score higher than \(0.5\) are selected for training. As shown in Table III, our segmentation-based sampling strategy (Full) slightly outperforms the other sampling strategies in overall translation performance.
**Effectiveness of FATE.** For each spatial point ("pixel") in the input feature map, our feature-attentive denormalization block selects the features in the feature dimension to be incorporated into the output stream of the generator by de-normalization. We show the attention values of our feature-attentive denormalization block in Figure 9 by visualizing all attention values for a single feature across the entire feature map. Since a single feature represents a property of the input, a spatial pattern should emerge. This is expected especially in earlier layers, where the spatiality of the convolutional model's
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{FID} & \multirow{2}{*}{KID} & \multirow{2}{*}{sKVD} & \multicolumn{6}{c}{cKVD} \\ \cline{5-12} & & & & AVG & sky & ground & road & terrain & vegetation & building & roadside-obj. & person & vehicle & rest \\ \hline FeaMgan (Full) & 46.12 & 36.56 & 13.69 & 41.19 & 42.69 & 14.97 & 17.35 & 26.51 & **20.25** & 26.34 & 64.64 & 102.23 & **42.38** & 54.52 \\ w/o Dis. Mask & **37.10** & **25.58** & 14.73 & **39.65** & **26.70** & 15.81 & 16.65 & 31.02 & 22.97 & **25.39** & 67.01 & **93.78** & 44.23 & **52.91** \\ w/ FADE w/o FATE & 45.46 & 35.73 & **13.17** & 40.90 & 41.49 & 13.78 & 16.78 & 25.30 & 20.58 & 27.21 & **63.12** & 104.43 & 42.44 & 53.83 \\ w/ Random Crop & 47.88 & 38.48 & 13.37 & 40.18 & 39.88 & **12.90** & **14.65** & **25.09** & 21.89 & 27.32 & 64.32 & 98.81 & 43.08 & 53.86 \\ w/ VGG Crop & 51.23 & 42.46 & 13.56 & 40.62 & 40.32 & 13.38 & 15.67 & 26.47 & 21.09 & 27.28 & 65.23 & 99.61 & 43.19 & 53.94 \\ \hline w/o Local Dis. & & & & & & & & & & & & & & \\ - w/ 256x256 Crop & 48.57 & 38.89 & **12.89** & 41.26 & 42.31 & 13.57 & 15.98 & **25.28** & 22.18 & **26.56** & **61.13** & 107.48 & 42.44 & 55.62 \\ w/ 352x352 Crop & 47.26 & 37.75 & 14.38 & 39.30 & 34.44 & **13.09** & 15.84 & 25.83 & 21.50 & 27.20 & 61.24 & 98.25 & 42.24 & 53.38 \\ - w/ 464x464 Crop & **46.61** & **37.25** & 15.04 & **38.62** & **31.60** & 13.13 & **15.38** & 27.06 & 22.23 & 29.67 & 63.38 & **87.51** & 44.41 & 51.77 \\ - w/ 512x512 Crop & 55.89 & 49.12 & 15.94 & 39.35 & 36.48 & 14.68 & 16.06 & 26.87 & **19.61** & 27.37 & 62.40 & 98.90 & **40.32** & **50.86** \\ \hline \hline \end{tabular}
\end{table} TABLE III: **Quantitative evaluation for ablation study.** Results are reported as the average across five runs. We refer to Table VIII of Appendix A for an extended version of this table.
Fig. 6: **Qualitative ablations.** Results are selected from the best model. Randomly sampled results can be found in Figure 22 of Appendix A.
feature map is best preserved. As shown in Figure 9, our attention mechanism learns to attend to features that correlate with a property. Examples are the shadows of a scene (row 1), cars and their lighting (row 2), and vegetation (row 3). In addition, we find increasingly more white feature maps in deeper layers. This can be interpreted positively as an indication that the learned content (source) features in deeper layers are important for the translation task and that more shallow content features of earlier layers are increasingly ignored. However, this can also be interpreted negatively and could indicate that our simple attention mechanism is not able to separate deeper features properly.
Comparing FATE to FADE, we find that FATE leads to a subtle increase in training instability, resulting in slightly worse average performance over the five runs per model. However, FATE also leads to our best models. Therefore, we select the FATE block as the standard configuration for our model. The deviations from the average values for all runs can be found in Table VIII of Appendix A. The slightly increased instability suggests that the attention mechanism of FATE can be further improved.
**Effect of global crop size.** We successively increase the global crop size of the generator and discriminators from 256\(\times\)256 to 512\(\times\)512 and examine the effects on translation performance. As shown in Figure 6, increasing the global crop size results in a better approximation of the target domain style. However, increasing the global crop size also leads to an increasing number of artifacts in the translated image. In Figure 8, we report the score of various metrics with respect to the global crop size. The commonly used metrics for measuring translation quality (IS, FID, and KID) show that translation quality increases steadily up to a global crop size of 464\(\times\)464, after which the results become unstable. The cKVD metric also shows an increase in average performance up to a crop size of 464\(\times\)464, mainly because translation quality for the underrepresented person class increases. This is intuitive since a larger crop size leads to a more frequent appearance of underrepresented classes during training. Furthermore, the sKVD metric shows a steady decline in consistency as the global crop size increases. Therefore, we choose a tradeoff between approximation of the target domain style, artifacts, and computational cost, and select 352\(\times\)352 as the global crop size for our model.
## V Conclusion
In this work, we have shown that content-based masking of the discriminator is sufficient to significantly reduce content inconsistencies that arise in unpaired image-to-image
Fig. 8: **Quantitative ablation of crop sizes.**
Fig. 7: **Qualitative ablation of crop sizes.** For each crop size, results are selected from the best model. Randomly sampled results can be found in Figure 21 of Appendix A.
translation. Furthermore, artifacts caused by the masking procedure can be significantly reduced by introducing a local discriminator that utilizes a segmentation-based similarity sampling technique. Moreover, our similarity sampling technique leads to a further increase in performance when applied to global input crops. We have also shown that our feature-based denormalization block is able to attend to specific content features, such as features of shadows, but can slightly increase training instability. In addition, we have proposed the cKVD metric to examine translation quality at the class or category level. In our experiments, we have found that these techniques lead to state-of-the-art performance on photo-realistic sim-to-real transfer and the translation of weather. Although our method performs well in Day\(\rightarrow\)Night translation, the remaining limitations of our approach are especially evident in this task.
**Limitations.** We remark on limitations regarding the dataset, sampling, method, and implementation. Probably the most significant limitations are the complex public datasets currently available and in use, as they are not specifically designed for unpaired translation. Collection strategies and datasets that mitigate biases between source and target domains would be beneficial. Furthermore, our sampling strategy only works on an image basis and could be extended across the entire dataset to sample more significant pairs for training. Although our method works for large crops, there is still a crop size limit that must be taken into account when tuning the hyperparameters. In addition, our method for mitigating content inconsistencies depends on the segmentation model. In theory, the number of classes could be used to control how fine-grained the content consistency should be, which leads to flexibility but allows for errors depending on the segmentation quality. This can result in artifacts such as glowing objects, as shown in Figure 11. Intra-class inconsistencies that may arise from intra-class biases ignored by the loss, such as small textures, represent another problem. Intra-class inconsistencies are currently underexplored in unpaired image-to-image translation and are an interesting direction for future research. Finally, we would like to point out that the efficiency of our implementation could be further improved. Apart from these limitations, our method achieves state-of-the-art performance in complex translation tasks while mitigating inconsistencies through a masking strategy that works by applying few tricks. Simple masking strategies have proven to be very successful in other fields. Therefore, we believe that masking strategies for unpaired image-to-image translation represent a promising direction for further research.
**Ethical and responsible use.** Considering the limitations of current methods, unpaired image-to-image translation methods should be trained and tested with care, especially for safety-critical domains like autonomous driving. A major concern is that it is often unclear or untested whether the transferred content can still be considered consistent for subsequent tasks in the target domain. Even though measures exist for content-consistent translation, they do not allow for the explainability of what exactly is being transferred and changed by the model on a fine-grained level. With our proposed cKVD metric we contribute to this field by allowing class-specific translation measurements - a direction that we hope is the right one. However, even if the content is categorically consistent at a high (class) level, subcategories (like parts of textures) may still be interchanged. At a lower level, content consistency and style consistency are intertwined (e.g., a yellow stop sign). Another privacy and security question is whether translation methods are (or will be) able to (indirectly) project sensitive information from the target domain to the translated images (e.g., exchange faces from simulation with faces of existing persons during the translation). A controllable (class-level and in-class-level) consistency method could help to resolve such issues.
## Acknowledgments
The authors would like to sincerely thank all reviewers for their helpful feedback, which contributed to the quality of this paper. The authors would also like to sincerely thank Markus Klenk for proofreading this work.
Fig. 9: **FATE attention maps. Results are selected from the best model.**
Figure 10: **Additional qualitative results.**
Figure 11: **Limitations.**
## References
* [1]M. Aliraz and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: SSII-A.
* [2]M. Aliraz and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: SSII-A.
* [3]M. Aliraz, H. Liu, D. Xu, P. H. Torr, and N. Sebe (2021) Attentiongan: unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems. Cited by: SSII-A.
* [4]M. Aliraz, M. Aliraz, and S. Osindero (2020) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: SSII-A.
* [5]M. Aliraz, M. Aliraz, and S. Osindero (2021) Dual diffusion implicit bridges for image-to-image translation. In International Conference on Learning Representations, Cited by: SSII-A.
[MISSING_PAGE_POST]
* [44] C. Yang, T. Kim, R. Wang, H. Peng, and C.-C. J. Kuo, "Show, attend, and translate: Unsupervised image translation with self-regularization and attention," _IEEE Transactions on Image Processing_, vol. 28, no. 10, pp. 4845-4856, 2019.
* [45] L. Zhang, X. Chen, R. Dong, and K. Ma, "Region-aware knowledge distillation for efficient image-to-image translation," _arXiv preprint arXiv:2205.12451_, 2022.
* [46] H. Tang, S. Bai, and N. Sebe, "Dual attention gans for semantic image synthesis," in _Proceedings of the 28th ACM International Conference on Multimedia_, 2020, pp. 1994-2002.
* [47] X. Hu, X. Zhou, Q. Huang, Z. Shi, L. Sun, and Q. Li, "Qs-attn: Query-selected attention for contrastive learning in i2i translation," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 18 291-18 300.
* [48] H. Tang, D. Xu, N. Sebe, Y. Wang, J. J. Corso, and Y. Yan, "Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 2417-2426.
* [49] G. Kwon and J. C. Ye, "Diagonal attention and style-based gan for content-style disentanglement in image generation and translation," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 13 980-13 989.
* [50] W. Liu, Z. Piao, Z. Tu, W. Luo, L. Ma, and S. Gao, "Liquid warping gan with attention: A unified framework for human image synthesis," _IEEE Transactions on Pattern Analysis and Machine Intelligence_, vol. 44, no. 9, pp. 5114-5132, 2021.
* [51] Y. Lin, Y. Wang, Y. Li, Y. Gao, Z. Wang, and L. Khan, "Attention-based spatial guidance for image-to-image translation," in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2021, pp. 816-825.
* [52] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly _et al._, "An image is worth 16x16 words: Transformers for image recognition at scale," _arXiv preprint arXiv:2010.11929_, 2020.
* [53] D. Torbunov, Y. Huang, H. Yu, J. Huang, S. Yoo, M. Lin, B. Viren, and Y. Ren, "Uvegan: Unet vision transformer cycle-consistent gan for unpaired image-to-image translation," in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, 2023, pp. 702-712.
* [54] W. Zheng, Q. Li, G. Zhang, P. Wan, and Z. Wang, "Itvr: Unpaired image-to-image translation with transformers," _arXiv preprint arXiv:2203.16015_, 2022.
* [55] J. Lambert, Z. Liu, O. Sener, J. Hays, and V. Koltun, "Mseg: A composite dataset for multi-domain semantic segmentation," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2879-2888.
* [56] T. Park, M.-Y. Liu, T.-C. Wang, and J.-Y. Zhu, "Semantic image synthesis with spatially-adaptive normalization," in _Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition_, 2019, pp. 2337-2346.
* [57] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 2414-2423.
* [58] C. Li and M. Wand, "Combining markov random fields and convolutional neural networks for image synthesis," in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 2479-2486.
* [59] Y. Li, N. Wang, J. Liu, and X. Hou, "Demystifying neural style transfer," _arXiv preprint arXiv:1701.01016_, 2017.
* [60] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in _Proceedings of the thirteenth international conference on artificial intelligence and statistics_. JMLR Workshop and Conference Proceedings, 2010, pp. 249-256.
* [61] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," _arXiv preprint arXiv:1412.6980_, 2014.
* [62] J. H. Lim and J. C. Ye, "Geometric gan," _arXiv preprint arXiv:1705.02894_, 2017.
* [63] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, "Improved training of wasserstein gans," _Advances in neural information processing systems_, vol. 30, 2017.
* [64] L. Mescheder, A. Geiger, and S. Nowozin, "Which training methods for gans do actually converge?" in _International conference on machine learning_. PMLR, 2018, pp. 3481-3490.
* [65] S. R. Richter, V. Vineet, S. Roth, and V. Koltun, "Playing for data: Ground truth from computer games," in _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14_. Springer, 2016, pp. 102-118.
* [66] S. R. Richter, Z. Hayder, and V. Koltun, "Playing for benchmarks," in _Proceedings of the IEEE International Conference on Computer Vision_, 2017, pp. 2213-2222.
* [67] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 3213-3223.
* [68] F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, "BDD100K: A diverse driving dataset for heterogeneous multitask learning," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2636-2645.
* [69] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," _IEEE Computer graphics and applications_, vol. 21, no. 5, pp. 34-41, 2001.
* [70] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, "Gans trained by a two time-scale update rule converge to a local nash equilibrium," _Advances in neural information processing systems_, vol. 30, 2017.
* [71] M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton, "Demystifying MMD GANs," _arXiv preprint arXiv:1801.01401_, 2018.
## Appendix
### The FeaMGAN Architecture
**Generator**. As shown in Figure 12, our generator consists of a content stream encoder, a content stream, a generator stream encoder, and a generator stream. The content stream encoder shown in Figure 15 is utilized to create the initial features of the source image and condition. These initial features are the input to the content stream, which creates features for multiple levels with residual blocks. The statistics of these features are then integrated into the generator at multiple levels utilizing the residual FATE blocks shown in Figure 13. The generator stream utilizes the encoder shown in Figure 14 to create the initial latent from which the target image is generated. To further enforce content consistency, we do not use a variational autoencoder and instead obtain a deterministic latent. In addition, we found that utilizing additional residual blocks in the last layers of the generator stream improves performance, likely due to further refinement of the preceding upsampled features. We use spectral instance normalization for the residual blocks in the content stream and spectral batch normalization for the residual blocks in the generator stream. The convolutional layers in the generator stream encoder have the following numbers of filters: \([256,512,1024]\). The residual blocks in the generator have the following numbers of filters: \([1024,1024,1024,512,256,128,64,64,64,64]\). The numbers of filters of the convolutional layers in the content stream encoder are \([64,64]\). The numbers of filters in the content stream match those of the output of the preceding residual block in the generator stream at the respective level: \([64,128,256,512,1024,1024,1024]\). For all residual blocks, we use \(3\times 3\) convolutions and \(1\times 1\) convolutions for the skip connections. \(\gamma\) and \(\beta\) in the FATE and FADE blocks are created with \(3\times 3\) convolutions. Throughout the generator, we use a padding of \(1\) for the convolutions; we only downsample with strides and downsampling layers. We utilize the "nearest" upsampling and downsampling from PyTorch. For our small model, we halve the number of filters.
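To make the modulation mechanism concrete, the following is a minimal PyTorch sketch of a FADE-style block in the spirit described above, where \(\gamma\) and \(\beta\) are predicted from content-stream features with \(3\times 3\) convolutions and used to modulate normalized generator features. The class name, channel sizes, and the choice of parameter-free normalization are illustrative assumptions, not the exact FeaMGAN implementation.

```python
import torch
import torch.nn as nn

class FADEModulation(nn.Module):
    """Sketch of a FADE-style modulation: gamma/beta are predicted from content
    features via 3x3 convolutions and applied to normalized generator features
    (illustrative, not the exact FeaMGAN code)."""

    def __init__(self, gen_channels: int, content_channels: int):
        super().__init__()
        # Parameter-free normalization of the generator features.
        self.norm = nn.InstanceNorm2d(gen_channels, affine=False)
        # 3x3 convolutions producing the spatial modulation parameters.
        self.to_gamma = nn.Conv2d(content_channels, gen_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(content_channels, gen_channels, kernel_size=3, padding=1)

    def forward(self, gen_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(content_feat)
        beta = self.to_beta(content_feat)
        return self.norm(gen_feat) * (1.0 + gamma) + beta

# Example usage with toy tensors of matching spatial size.
block = FADEModulation(gen_channels=64, content_channels=64)
out = block(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```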
## Additional Results
We show additional results of our experiments in Figures 18, 19, 20, 21, and 22. In Table VII, we report additional results from our cKVD metric and the stability of all results over five runs. Furthermore, we report the stability of all results from the ablation study in Table VIII. We note that the results for most baselines and for our method show non-negligible deviations in many tasks.
Fig. 16: The attention module used in the FATE block to attend to the statistics of the features.
Fig. 12: **Generator architecture**. Arrows with dashed lines indicate connections at multiple levels between the two streams.
Fig. 13: The FATE residual block used in the generator stream.
Fig. 14: The generator stream encoder used to encode the input image and condition for the generator stream.
Fig. 15: The content stream encoder used to encode the input image and condition for the content stream.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & Resolution & fps & Used Train/Val Data & Task & Input Resolution & Input Cropping \\ \hline PFD [65] & 1914\(\times\)1052 & - & all images & _PFD\(\rightarrow\)Cityscapes_ & 957\(\times\)526 & - \\ Viper [66] & 1920\(\times\)1080 & \(\sim\)15 & all train/val data, but no night sequences & _Viper\(\rightarrow\)Cityscapes_ & 935\(\times\)526 & - \\ Cityscapes [67] & 2048\(\times\)1024 & 17 & all sequences of the train/val data & _PFD\(\rightarrow\)Cityscapes_ & 1.052\(\times\)526 & 957\(\times\)526 \\ BDD100K [68] & 1280\(\times\)720 & 30 & train: first 100k, val: first 40k & _Day\(\rightarrow\)Night_ & 935\(\times\)526 & - \\ \hline \hline \end{tabular}
\end{table} TABLE V: **Additional details of the used datasets.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Task & Epochs & Schedule & Decay & Local Discriminator Batch Size \\ \hline _PFD\(\rightarrow\)Cityscapes_ & 20 & half learning rate stepwise, learning rate \(\geq 0.0000125\) & after each 3rd epoch & 32 \\ _Viper\(\rightarrow\)Cityscapes_ & 5 & half learning rate stepwise, learning rate \(\geq 0.0000125\) & after each epoch & 32 \\ _Day\(\rightarrow\)Night_ & 5 & half learning rate stepwise, learning rate \(\geq 0.0000125\) & after each epoch & 32 \\ _Clear\(\rightarrow\)Snowy_ & 10 & half learning rate stepwise, learning rate \(\geq 0.0000125\) & after each epoch & 32 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: **Additional training details.**
Fig. 17: **Discriminator architecture**. Arrows with dashed lines indicate connections at multiple levels between the two components.
Fig. 18: **Qualitative comparison to EPE.** We compare our method with the provided inferred images of EPE [2].
Fig. 19: **Qualitative comparison to EPE.** We compare our method with the provided inferred images of EPE [2]. Results are randomly sampled from the best model.
Figure 20: **Qualitative comparison to prior work.** Results are randomly sampled from the best model.
|
2310.20435 | Assessing the Sustainability and Trustworthiness of Federated Learning
Models | Artificial intelligence (AI) plays a pivotal role in various sectors,
influencing critical decision-making processes in our daily lives. Within the
AI landscape, novel AI paradigms, such as Federated Learning (FL), focus on
preserving data privacy while collaboratively training AI models. In such a
context, a group of experts from the European Commission (AI-HLEG) has
identified sustainable AI as one of the key elements that must be considered to
provide trustworthy AI. While existing literature offers several taxonomies and
solutions for assessing the trustworthiness of FL models, a significant gap
exists in considering sustainability and the carbon footprint associated with
FL. Thus, this work introduces the sustainability pillar to the most recent and
comprehensive trustworthy FL taxonomy, making this work the first to address
all AI-HLEG requirements. The sustainability pillar assesses the FL system
environmental impact, incorporating notions and metrics for hardware
efficiency, federation complexity, and energy grid carbon intensity. Then, this
work designs and implements an algorithm for evaluating the trustworthiness of
FL models by incorporating the sustainability pillar. Extensive evaluations
with the FederatedScope framework and various scenarios varying federation
participants, complexities, hardware, and energy grids demonstrate the
usefulness of the proposed solution. | Alberto Huertas Celdran, Chao Feng, Pedro Miguel Sanchez Sanchez, Lynn Zumtaugwald, Gerome Bovet, Burkhard Stiller | 2023-10-31T13:14:43Z | http://arxiv.org/abs/2310.20435v1 | # Assessing the Sustainability and Trustworthiness of Federated Learning Models
###### Abstract
Artificial intelligence (AI) plays a pivotal role in various sectors, influencing critical decision-making processes in our daily lives. Within the AI landscape, novel AI paradigms, such as Federated Learning (FL), focus on preserving data privacy while collaboratively training AI models. In such a context, a group of experts from the European Commission (AI-HLEG) has identified sustainable AI as one of the key elements that must be considered to provide trustworthy AI. While existing literature offers several taxonomies and solutions for assessing the trustworthiness of FL models, a significant gap exists in considering sustainability and the carbon footprint associated with FL. Thus, this work introduces the sustainability pillar to the most recent and comprehensive trustworthy FL taxonomy, making this work the first to address all AI-HLEG requirements. The sustainability pillar assesses the FL system environmental impact, incorporating notions and metrics for hardware efficiency, federation complexity, and energy grid carbon intensity. Then, this work designs and implements an algorithm for evaluating the trustworthiness of FL models by incorporating the sustainability pillar. Extensive evaluations with the FederatedScope framework and various scenarios varying federation participants, complexities, hardware, and energy grids demonstrate the usefulness of the proposed solution.
Sustainable AI, Carbon Footprint, Federated Learning.
## I Introduction
Over the past few decades, Artificial Intelligence (AI) has undergone pervasive integration into various facets of society, encompassing applications such as recreational gaming, disease diagnosis, text and art generation, or autonomous driving [1]. The relevance obtained by AI has amplified the necessity of sustainability, which traverses environmental, social, economic, and ethical dimensions. Delving into the specifics, utilizing Deep Learning (DL) models, predominantly characterized by resource-intensive computational demands during training and evaluation, leads to a significant carbon footprint. Simultaneously, DL systems heavily rely on massive data, and unsustainable data management methodologies incur superfluous energy consumption. Furthermore, ethical considerations assume paramount significance in sustainable AI, aiming to preclude negative repercussions, including bias, discrimination, and privacy infringements.
In conjunction with robustness, transparency, fairness, and accountability, sustainable AI assumes a central role in nurturing long-term societal acceptance and establishing trust in AI systems. As the adoption of AI technologies continues to proliferate across industries and impact many societal aspects, ensuring the trustworthiness of AI becomes paramount. In this direction, governing bodies and regulatory authorities worldwide recognize the necessity of addressing trustworthy AI [2]. For instance, the High-level Expert Group on Artificial Intelligence (AI-HLEG [3]) in Europe has played a pivotal role in formulating legal frameworks and guidelines designed to shape and oversee the development of trustworthy AI [4]. In a more granular context, the AI-HLEG defines seven prerequisites for trustworthy AI, which are: 1) human agency and oversight, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) fairness, 6) environmental well-being, and 7) accountability.
As highlighted by the AI-HLEG, data privacy is a challenging and active research topic within trustworthy AI. In 2016, Google introduced Federated Learning (FL) [5], an innovative paradigm that enables multiple clients to collaboratively train models without necessitating the exchange of private data. Nowadays, FL confronts multifaceted challenges, spanning scalability, single point of failure, architectural design, or privacy and security concerns, among others [6]. However, while FL inherently incorporates privacy-preserving features, trustworthy AI remains a pivotal dimension within FL systems.
In this context, prior works [7], [8] defined a baseline by formulating taxonomies for trustworthy ML, DL, and FL. Other works, such as [9], implemented algorithms and frameworks for assessing the trustworthiness of FL systems. However, environmental well-being is completely missing in those works. More in detail, Carbon dioxide equivalent (CO\({}_{2}\)eq), a unit based on the global warming potential (GWP) of different greenhouse gases, has not been considered while assessing the FL trustworthiness, as articulated by AI-HLEG. In this sense, hardware efficiency, federation complexity, or energy grid carbon intensity should be considered and studied while assessing the trust
worthiness of FL to raise awareness and design optimum federation configurations.
To improve the previous challenges, the main contributions of this work are:
* The review of the state of the art regarding sustainable and trustworthy AI. As a result, a novel trustworthy FL taxonomy has been designed, composed of seven pillars (privacy, robustness, fairness, accountability, federation, explainability, and sustainability). The sustainability pillar is novel, and it is composed of three notions (carbon intensity, hardware efficiency, and federation complexity) and ten metrics.
* The design and implementation of an algorithm to evaluate the sustainability and trustworthiness of FL models (source code available in [10]). The proposed algorithm improves on related work by implementing metrics that assess the sustainability of FL models. In particular, three notions and ten metrics have been proposed for FL sustainability computation, considering the CO\({}_{2}\)eq impact of heterogeneous FL models. The algorithm combines the ten sustainability metrics with 41 metrics already proposed in the literature for the remaining six trustworthy FL pillars to give an overall score of trustworthy AI.
* The deployment of the algorithm in a real FL framework, called FederatedScope [11], and the evaluation of its performance in different scenarios with several configurations in terms of hardware efficiency, federation complexity, and carbon-intensity of energy grids. The obtained results demonstrated the suitability of the framework while considering sustainability as another factor to measure the trustworthiness of FL.
The remainder of this paper is structured as follows. Section II contains findings from the literature review on trustworthiness and sustainability in FL. Section III presents a detailed analysis of the sustainability pillar and its metrics. Section IV presents the design and implementation details of the proposed algorithm. Section V validates the algorithm in a use case and presents the results of the performed experiments. Section VI discusses the current limitations in sustainability computation for FL. Finally, Section VII provides conclusions and future work.
## II Related Work
This section reviews recent and relevant work done in the literature regarding trustworthy FL evaluation and carbon emission estimation for AI/FL-based computing.
### _Trustworthy FL Evaluation_
Table I summarizes the existing trustworthy FL taxonomies and their coverage of trustworthy FL pillars defined by the AI-HLEG. The taxonomy from Shi et al. [12] reviewed the issue of fairness in FL and its evaluation mechanisms. This study only covers the pillar of fairness and partially the federation one since it discusses fair client selection. Liu et al. [13] provided a taxonomy covering the pillar of privacy, robustness, and partially the pillar federation. Tariq et al. [8] proposed an architecture for FL trustworthiness. Its taxonomy covers privacy, fairness, explainability, and robustness pillars and includes requirements two, three, and five defined by the AI-HLEG. Zhang et al. [14] also surveyed trustworthy FL, but focusing on the legal aspects of security, privacy, and robustness pillars. The taxonomy that covers the most pillars and requirements defined by the AI-HLEG is the trustworthy FL taxonomy from Sanchez et al. [9]. The taxonomy contains the pillars i) privacy, ii) robustness, iii) fairness, iv) explainability, v) accountability, and vi) federation. For each pillar, notions and metrics are defined. In total, 36 metrics are defined that can be used to evaluate the trustworthiness score of a given FL system.
After reviewing the literature, the most important limitation becomes apparent when comparing the existing taxonomies to the requirements defined by the AI-HLEG. The environmental impact of an FL system is not considered in any of them, even though environmental well-being has clearly been defined as one of the seven requirements for trustworthy AI by governing bodies [3]. Since [9] is the most advanced taxonomy, covering six of the seven requirements defined by the AI-HLEG, it is employed as the basis for an extension that considers the environmental impact of the system.
### _Estimating Emissions of AI/FL_
Most works focus on estimating the carbon emissions of specific models. Luccioni et al. [15] provided a survey on aspects that influence the CO\({}_{2}\)eq of ML. Strubell et al. [16] estimated the financial and environmental costs of large natural language processing (NLP) models by analyzing the training and fine-tuning process. Luccioni et al. [17] estimated the carbon emissions of the large language model BLOOM, which has 176 billion parameters, to be 50.5 tonnes of CO\({}_{2}\)eq. Patterson et al. [18] estimated the energy consumption and computed the carbon emissions of the language models T5, Meena, GShard, Switch Transformer, and GPT-3 and highlighted opportunities to improve energy efficiency and CO\({}_{2}\)eq emissions, such as sparsely activated DNNs and using energy grids with low carbon intensity. While the mentioned works focus mainly on energy consumption, George et al. [19] point out that the water consumption needed to cool large data and server centers also contributes heavily to the environmental impact of AI models and estimated the water consumption needed to run ChatGPT.
In the field of FL, Qiu et al. [20] provided a first look into the carbon footprint of FL models by incorporating parameters that are specific to FL and comparing the emissions produced by FL models with the emissions produced by centralized ML models. They concluded that FL models could emit up to two orders of magnitude more CO\({}_{2}\)eq than centralized training if the data is not identically distributed, which is often the case in FL. Similarly to estimating the carbon emissions of AI/FL models, tools to track carbon emissions and
apply standardized measurements for better comparison of model emissions have been developed [17]. CodeCarbon [21] and the Experimental Emissions Tracker [22] can be used to track emissions during the training process, while the ML CO\({}_{2}\)eq Calculator [23] can be used to calculate the emissions after training.
Despite the effort and work done in this research field, to the best of our knowledge, no work directly considered the carbon emissions related to FL setups. Besides, none have incorporated the emissions produced by FL models into trustworthy FL despite environmental well-being clearly being defined as one of the seven key requirements for trustworthy AI/FL by the AI-HLEG [3].
## III The Sustainability Pillar of Trustworthy FL
This section describes the notions and metrics that make up the sustainability pillar of trustworthy FL. This pillar includes the carbon intensity of the energy grid, the efficiency of the underlying hardware, and the complexity of the federation. Besides, this section describes the complete taxonomy generated after adding the sustainability pillar to the most recent and complete existing trustworthy FL taxonomy.
### _Carbon Intensity_
The carbon intensity of electricity varies in different parts of the world depending on the energy mix used to produce electricity. The United Nations Intergovernmental Panel on Climate Change (IPCC) [24] has provided a median value of grams of CO\({}_{2}\)eq per kWh for different energy fuels. Wind and nuclear emit the least CO\({}_{2}\)eq, with 12g and 11g of CO\({}_{2}\)eq per kWh, respectively, and coal the most, with 820g of CO\({}_{2}\)eq per kWh. Thus, an FL system that has used 500 kWh of energy to be trained would have emitted 5.5 kg of CO\({}_{2}\)eq if it were trained on electricity produced by nuclear power and 410 kg of CO\({}_{2}\)eq if it were trained on electricity produced by coal only. This showcases that the energy grid used to train FL systems plays a huge role in the carbon emissions produced. Similarly, the carbon intensity of the energy grid of countries varies by a remarkable factor. British Petroleum has published in their annual review of world energy statistics [25] that, in 2022, the least carbon-intensive energy grid was used by the African country Lesotho with 20g of CO\({}_{2}\)eq per kWh, and the most carbon-intensive energy grid by the Southern African country Botswana with 795g of CO\({}_{2}\)eq per kWh.
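Spelled out, the 500 kWh example above follows directly from multiplying the consumed energy by the grid's carbon intensity: \[500\,\text{kWh}\times 11\,\text{g/kWh}=5{,}500\,\text{g}=5.5\,\text{kg of CO}_{2}\text{eq},\qquad 500\,\text{kWh}\times 820\,\text{g/kWh}=410\,\text{kg of CO}_{2}\text{eq}\]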
Therefore, this notion seeks to measure the carbon impact of FL according to the following two metrics.
* **Client/Server Carbon Intensity**. These two metrics measure the carbon intensity of the energy grid utilized in the FL process from the perspectives of both the clients and the server. The value of these two metrics ranges from 20g to 795g of CO\({}_{2}\)eq per kWh when looking at countries' energy grids [25]. Theoretically, with the energy sources available today, the lowest possible energy grid would emit 11g of CO\({}_{2}\)eq per kWh using only nuclear energy and the highest possible 820g of CO\({}_{2}\)eq per kWh using only coal energy [24]. The energy grids used by clients can be determined by the location of the federation clients (retrieved from the IP address). The carbon intensity of the energy grid utilized by clients is determined by calculating the average of all the carbon intensities. For the carbon intensity of the energy grid used by the server, the energy grid of the country the server operates in is taken. Equation 1 illustrates the calculation process of this metric. \[T_{Intensity}=S_{Intensity}+\frac{1}{n}\sum_{i=1}^{n}C_{i_{Intensity}}\] (1) Where \(T_{Intensity}\) represents the total grid carbon intensity, \(S_{Intensity}\) represents the server grid carbon intensity, and \(C_{i_{Intensity}}\) represents the grid carbon intensity of client \(i\).
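As an illustration, the following is a minimal Python sketch of Equation 1 together with a simple min-max normalization onto \([0,1]\), where a cleaner grid yields a higher score. The bounds follow the fuel-based extremes mentioned above (11g and 820g of CO\({}_{2}\)eq per kWh), and the function names and the normalization itself are illustrative assumptions rather than the exact implementation used in the prototype.

```python
MIN_INTENSITY = 11.0   # g CO2eq per kWh, all-nuclear grid (theoretical lower bound)
MAX_INTENSITY = 820.0  # g CO2eq per kWh, all-coal grid (theoretical upper bound)

def total_carbon_intensity(server_intensity: float, client_intensities: list[float]) -> float:
    """Equation 1: server grid intensity plus the average client grid intensity."""
    avg_clients = sum(client_intensities) / len(client_intensities)
    return server_intensity + avg_clients

def intensity_score(intensity_g_per_kwh: float) -> float:
    """Illustrative min-max normalization: cleaner grids score closer to 1."""
    clipped = min(max(intensity_g_per_kwh, MIN_INTENSITY), MAX_INTENSITY)
    return 1.0 - (clipped - MIN_INTENSITY) / (MAX_INTENSITY - MIN_INTENSITY)

# Example with grid intensities mentioned in the paper:
# Switzerland ~32 g/kWh, South Africa ~709 g/kWh.
print(total_carbon_intensity(32.0, [709.0, 32.0]))  # ~402.5
print(intensity_score(709.0))                       # ~0.14
```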
### _Hardware Efficiency_
The second notion that significantly impacts the energy consumption and, thus, the emissions of an FL system is the efficiency of the underlying hardware. Efficient
| _Authors_ | Privacy | Fairness | Robustness | Accountability | Explainability | Federation | Sustainability |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Shi et al. [12] | No | Yes | No | No | No | Partially | No |
| Liu et al. [13] | Yes | No | Yes | No | No | Partially | No |
| Tariq et al. [8] | Yes | Yes | Yes | No | Yes | No | No |
| Zhang et al. [14] (2023) | Yes | No | Yes | No | No | No | No |
| Sanchez et al. [9] (2023) | Yes | Yes | Yes | Yes | Yes | Yes | No |
| This work | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Table I: Existing Trustworthy FL Taxonomies and Their Coverage of Pillars and AI-HLEG Requirements
hardware consumes less power to perform computational tasks. Lower power consumption translates to reduced energy requirements, leading to lower CO2eq emissions. On the contrary, inefficient hardware generates more heat, necessitating additional cooling mechanisms, such as air conditioning or fans, that contribute to more CO2eq emissions [23]. In FL systems, both the process of training local models and the aggregation of these models globally require heavy computational resources. Thus, the efficiency of the underlying hardware plays a significant role in the emissions produced by the FL system.
The performance of CPUs and GPUs can be described by different metrics, such as clock speed, Floating-Point Operations Per Second (FLOPS), or Instructions Per Second (IPS) [26]. It is important to note that none of these metrics provide a complete picture of the performance of the processing units, and different metrics are more relevant in certain use cases. Further, manufacturers of CPUs and GPUs often do not fully disclose the metrics of their products, which makes comparing them difficult. To solve this issue, numerous benchmarking tools have been proposed to evaluate a processor's performance across a range of tasks. In terms of heat production of a processor, Thermal Design Power (TDP) is used as a specification in the industry [27]. It indicates the maximum amount of heat a computer component, such as a CPU or GPU, is expected to generate under normal operating conditions. TDP is typically expressed in watts and represents the maximum power consumption and heat dissipation expected under typical workloads. The smaller the TDP, the lower the power consumption of the processor. Therefore, the Hardware Efficiency notion proposes the following metrics.
* **Client/Server Hardware Efficiency**. To evaluate the efficiency of the underlying hardware in terms of computing power per unit of power consumed, it makes sense to divide the benchmark performance by the TDP, defining the power performance of the processor. A processor with a high power performance score is able to do a lot of computation with low energy consumption, and it is thus more efficient in terms of resource consumption [27]. It is measured in performance per Watt using Equations 2 and 3. \[H_{E}=\frac{H_{BP}}{H_{TDP}}\] (2) \[Total_{E}=S_{E}+\frac{1}{n}\sum_{i=1}^{n}C_{i_{E}}\] (3) Where \(H_{E}\) is the hardware efficiency score, \(H_{BP}\) is the hardware benchmark performance, \(H_{TDP}\) is the hardware TDP, \(S_{E}\) is the server hardware efficiency, and \(C_{i_{E}}\) is the hardware efficiency of client \(i\).
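The following is a minimal Python sketch of Equations 2 and 3, computing power performance as benchmark score divided by TDP and aggregating it across the federation; the benchmark and TDP figures in the example are placeholders, not values taken from PassMark.

```python
def hardware_efficiency(benchmark_score: float, tdp_watts: float) -> float:
    """Equation 2: power performance = benchmark performance / TDP (points per Watt)."""
    return benchmark_score / tdp_watts

def total_hardware_efficiency(server_eff: float, client_effs: list[float]) -> float:
    """Equation 3: server efficiency plus the average client efficiency."""
    return server_eff + sum(client_effs) / len(client_effs)

# Placeholder example: a low-power server CPU and three heterogeneous client CPUs.
server = hardware_efficiency(20000, 15)    # ~1333 points per Watt
clients = [hardware_efficiency(9000, 95),  # inefficient desktop CPU
           hardware_efficiency(15000, 15),
           hardware_efficiency(15000, 15)]
print(total_hardware_efficiency(server, clients))
```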
### _Federation Complexity_
The complexity and size of the federation impact the consumed energy and, thus, the emissions produced. Generally, the more complex the model and the higher the number of participants, the higher the energy consumption [20]. Therefore, the federation complexity notion considers the following metrics (a minimal normalization sketch follows the list).
* **Number of Training Rounds**. This metric measures the number of federation training rounds. Each training round consumes energy for i) training the model on the client's side, ii) aggregating the model parameters on the server side, and iii) exchanging models between the client side and server side. Therefore, more training rounds emit more CO2eq.
* **Dataset Size**. This metric measures the size of the dataset used by clients to train the FL models. Larger datasets need more computational resources regarding power, memory, and time to fit the model. Thus, larger datasets need more energy than smaller datasets and also produce more CO2eq [23].
* **Model Size.** This metric measures the size of the model that is trained in the FL system. Large models typically require more computational resources and time to process each iteration, which results in higher energy consumption [23] at the client's side. Also, aggregating large models on the server side typically uses more energy than aggregating small models due to the number of weights. Furthermore, large models thus also introduce a communication overhead, again leading to more energy usage and CO2eq emissions.
* **Number of Clients.** This metric measures the number of clients in the federation. The more clients participate in the federation, the more energy is used [20] for i) training, ii) aggregation, and iii) communication, and thus, the more CO2eq are emitted.
* **Client Selection Rate.** This metric measures the client selection rate in the federation. Often, only a percentage of clients is selected per round [20]. The larger this percentage, the larger the communication overhead from the uplink communication, and the larger the CO2eq emissions.
* **Number of Local Training Rounds**. This metric measures the number of local training rounds within one federation training round. The higher the number of local training rounds, the higher the computational overhead on the client's side and the higher the energy consumption [23], [20].
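As referenced above, the following is a minimal Python sketch of how these federation complexity metrics could be normalized and averaged into a notion score, with smaller and simpler federations scoring closer to 1. The upper bounds and the equal weighting per metric are illustrative assumptions, not the exact normalization used in the prototype.

```python
def normalize_inverse(value: float, upper_bound: float) -> float:
    """Map a raw complexity value onto [0, 1], where smaller values score higher."""
    return max(0.0, 1.0 - min(value, upper_bound) / upper_bound)

def federation_complexity_score(config: dict) -> float:
    # Illustrative upper bounds per metric; the real prototype may use different ones.
    bounds = {
        "training_rounds": 1000,
        "dataset_size": 1_000_000,
        "model_size": 1e13,
        "num_clients": 1000,
        "client_selection_rate": 1.0,
        "local_rounds": 100,
    }
    scores = [normalize_inverse(config[name], bound) for name, bound in bounds.items()]
    return sum(scores) / len(scores)  # equal weight per metric

# Example resembling a small federation (UC A-like configuration, see Section V).
print(federation_complexity_score({
    "training_rounds": 10, "dataset_size": 100, "model_size": 98_000,
    "num_clients": 5, "client_selection_rate": 0.5, "local_rounds": 1,
}))
```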
### _Additional Pillars of Trustworthy FL_
The six pillars defined by Sanchez et al. [9] together with the new one cover the seven requirements for trustworthy AI defined by the AI-HLEG [3] and constitute a comprehensive taxonomy. A visual representation of this taxonomy, including seven pillars, 23 notions, and 51 metrics, is presented in Figure 1. An overview of each pillar is given below. More detailed information can be found in [9].
#### Ii-D1 Privacy
FL inherently provides a certain level of data privacy. However, it requires assumptions about the integrity of the various actors and entities within the federation. When participants are honest, but the aggregating server is 'honest-but-curious,' mechanisms to
prevent information leakage are imperative. When all federation members exhibit 'honest-but-curious' behavior, the focus should shift to ensuring secure communication to prevent information leakage. Additionally, the potential for information leakage from external malicious attacks must be considered. To address these issues, this pillar considers four notions. The first emphasizes the adoption of privacy-preserving methods to enhance resilience against privacy attacks. The second notion involves metrics that quantify information gain or loss, considering the risk of information leakage inherent in the FL process. The final two notions relate to the probability of knowledge inference from client updates, necessitating a comprehensive and scientifically grounded approach to maintaining data privacy in FL models.
#### Iii-B2 Robustness
Robustness in AI systems is imperative to safeguard against vulnerabilities to malicious applications and potential harm to humans. Within this context, existing literature delineates three distinct notions of robustness. The first notion, as highlighted in prior work, underscores the necessity for FL models to exhibit resilience against adversarial attacks, manifested through the introduction of perturbations or erroneous inputs. The second notion emphasizes the crucial need for robustness in both hardware and software utilized by participants in the training and deployment of FL models, a measure critical for thwarting cyberattacks. Finally, the third notion calls for reliability and robustness in the performance and customization of FL algorithms.
#### Iii-B3 Fairness
Data-induced unfairness represents a significant challenge in AI, and this issue is particularly pronounced in FL due to the potential heterogeneity in the quantity and quality of data contributed by different clients. In this context, Client Selection Fairness emerges as the inaugural notion of this pillar, emphasizing the imperative for equitable participant inclusion. Beyond this, fairness in AI can be disaggregated into group-level and individual-level fairness. The former advocates for the absence of discrimination against any particular group, while the latter ensures equitable treatment of similar individuals, irrespective of their group affiliation. Transposing these fairness notions to FL, Group-level Fairness addresses disparities at the group level, whereas Performance Fairness and Class Distribution cater to individual-level fairness. Specifically, Performance Fairness ensures proportionality between a client's data contribution and
Figure 1: Trustworthy FL Taxonomy
their received rewards. Concurrently, Class Distribution scrutinizes label imbalances across the datasets of individual participants, ensuring a holistic approach to fairness in FL.
#### Iii-B4 Explainability
AI guidelines stipulate the necessity for transparency across AI processes. Transparency within this context is frequently articulated as interpretability, a concept that is often erroneously equated with explainability. Interpretability is delineated as a model's inherent attribute that facilitates human understanding. Conversely, explainability pertains to the capacity to articulate the technical intricacies of AI systems. For models that are intrinsically interpretable, direct analysis can be enough for explanation. However, for models lacking this inherent interpretability, post-hoc methods, constituting the second notion of this pillar, become indispensable for enhancing their interpretability. In the realm of FL, where ML/DL models play a pivotal role in the training process, the imperative for explainability extends to the algorithmic model itself. Nonetheless, the imperative for data privacy in FL introduces complexities, as it restricts access to and analysis of raw data, necessitating innovative solutions to uphold explainability without compromising data privacy.
#### Iii-B5 Accountability
Accountability stands as one of the seven imperative requirements for Trustworthy AI. The primary aspect of accountability is addressed through FactSheet Completeness [28]. IBM Research pioneered the concept of a FactSheet, a comprehensive document designed to meticulously record various facets of the entire ML/DL pipeline. Parallel to this, Monitoring emerges as another crucial notion of accountability. It underscores the responsibility of each participant to diligently verify that the FL models are constructed, developed, and deployed in strict alignment with the predetermined architectural and procedural guidelines. This ensures that despite the availability of comprehensive documentation, an active effort is made by all stakeholders to uphold the integrity and accountability of the FL models throughout their lifecycle.
#### Iii-B6 Federation
The management of FL encompasses complex challenges pertaining to communication, efficiency, resource constraints, and security. Coordinating the learning processes across thousands of clients, while ensuring the integrity and security of the model, presents a formidable challenge. The convergence of global models may be impeded by data heterogeneity across clients, while inconsistencies in clients, networks, and limited resources may lead to client dropouts and training failures, adversely affecting the quality of the model. The critical notions within this pillar are identified as Client and Model Management, which delves into the administration of client and model information within the system, and Optimization Algorithm, which plays a pivotal role in influencing the model's performance and robustness.
## IV Sustainable and Trustworthy FL Algorithm
This section provides the details of the algorithm in charge of assessing the sustainability and trustworthiness of FL models. The main contribution of this algorithm, compared to the literature, is the design and implementation of three notions and ten metrics dealing with the sustainability pillar and their integration with six other existing pillars (privacy, robustness, fairness, accountability, federation, and explainability). The following assumptions (A), functional requirement (FR), non-functional requirements (NF), and privacy constraint (PC) were considered during the algorithm design phase.
* A_1: The central server is honest. It is maintained by a trusted owner, and it does not interfere with the FL protocol maliciously.
* A_2: Clients of the federation are honest but curious. They trustfully report their metrics and statistics without maliciously interfering with the FL protocol.
* FR_1: The three notions and ten metrics of the Sustainability pillar must be represented in the algorithm. In addition, each of the remaining six trustworthy FL pillars must be considered, meaning that at least one metric from each pillar must be considered in the final score.
* FR_2: The final trustworthiness score must be a combination of the trustworthiness scores from all notions and pillars.
* NF_1: The algorithm should add minimal computation overhead and complexity to the server, participants, and FL model.
* NF_2: The algorithm should be modular and configurable.
* PC_1: The algorithm must not store sensitive data from the FL model.
* PC_2: The algorithm must not leak or share sensitive data from clients, the server, and the FL model with third parties.
* PC_3: The metrics calculations can occur at the client's local devices, the central server, or collaboratively between both.
### _Sustainability Pillar: Notions and Metrics_
Table II shows the notions and metrics explained in Section III and considered in the algorithm for the sustainability pillar. Descriptions, inputs, outputs, and normalization details are provided for each metric. For metric computation, the CodeCarbon package [21] is leveraged to obtain the emissions related to the hardware employed by the server/clients and the emissions related to the location of the nodes in the FL setup. This package has been selected by the most representative solutions in the literature, as described in Section II. Besides, for the calculation of _Hardware Efficiency metrics_, the most popular benchmarking software for processors is PassMark [27]. It computes a performance score by running standardized tests that simulate real-world workloads, such as executing complex mathematical calculations. PassMark has provided a database with Power Performance measurement for over 3000 CPUs and 2000 GPUs published on Kaggle, which can be used to evaluate the client and server processor efficiency in the algorithmic prototype design.
In addition to the previous ten metrics, the proposed algorithm also implements the 41 metrics belonging to the remaining six pillars proposed in [9].
### _Algorithm Design_
Figure 2 shows the overview of the algorithm design. The proposed algorithm considers the following inputs.
* _Emissions_. It contains the IP of clients and server, CPU and GPU models, and config files of the federation needed to compute the ten sustainability metrics (see Table II).
* _FL Model_. It contains information about the model configuration and model personalization.
* _FL Framework Configuration_. It contains information about the number of clients, the client selection mechanisms, the aggregation algorithm, and the model hyperparameters.
* _FactSheet_. It contains essential details for the accountability of the training process, federation, and the individuals involved [28].
* _Statistics_. It contains information about the client class balance, client test performance loss, client test accuracy, client clever score, client feature importance, client participation rate, client class imbalance, client average training time, model size, and average upload/download bytes.
These input sources serve as the foundation for deriving the sustainability metrics outlined in Table II and the metrics belonging to the remaining six pillars proposed in [9]. The resulting metric values are then normalized to ensure a consistent range. It is essential to note that each metric can encompass distinct input sources and may be computed at different stages of the federated learning (FL) model creation process, namely pre-training, during-training, or post-training, by various participants within the federation, be it clients or servers. Once the normalized metric outputs are determined, they are assigned weights and combined to produce a score for each notion. Each pillar incorporates one or more notions, assessed based on predefined yet adjustable weights for each metric. Consequently, the same procedure is reiterated to derive pillar scores through the weighting and aggregation of notion scores. Ultimately, the overall trust score of the FL model is determined as a custom amalgamation of the pillar scores.
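For illustration, the following is a minimal Python sketch of this metric-to-notion-to-pillar-to-trust-score roll-up. The helper name and all notion and pillar values other than the sustainability notion weights (which follow the 0.5/0.25/0.25 configuration used in Section V) are placeholders, and the structure is an illustrative assumption rather than the exact FederatedTrust implementation.

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized scores in [0, 1] using weights that sum to 1."""
    return sum(scores[name] * weights[name] for name in scores)

# Notion scores of the sustainability pillar (placeholder values).
sustainability_notions = {"carbon_intensity": 0.9, "hardware_efficiency": 0.6,
                          "federation_complexity": 0.7}
sustainability = weighted_score(
    sustainability_notions,
    {"carbon_intensity": 0.5, "hardware_efficiency": 0.25, "federation_complexity": 0.25})

# Pillar scores (placeholders) are combined the same way into the final trust score.
pillars = {"privacy": 0.7, "robustness": 0.4, "fairness": 0.6, "explainability": 0.5,
           "accountability": 0.8, "federation": 0.6, "sustainability": sustainability}
trust_score = weighted_score(pillars, {name: 1 / len(pillars) for name in pillars})
print(round(sustainability, 2), round(trust_score, 2))
```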
### _Algorithm Deployment_
Once designed, the algorithm was implemented and deployed in a well-known FL framework called FederatedScope [11]. After the deployment, the following steps show how the sustainability and trustworthy FL scores are calculated.
1. _Setup_: Start the federation by initiating FederatedScope. It takes the federation configuration as input and initiates clients and the server, as well as the proposed trustworthiness calculation algorithm. Additionally, it populates the FactSheet with pre-training metrics such as the number of clients in the federation and the number of training rounds.
2. _Model Broadcast_: The server broadcasts the global model to selected clients in the federation.
3. _Local Training_: The selected clients train their local models with their local private dataset. At this point, clients use the CodeCarbon package to obtain metrics relevant to the sustainability pillar computation (see the emissions-tracking sketch after this list).
| _Metric_ | _Description_ | _Input_ | _Output_ | _Normalized Output_ |
| --- | --- | --- | --- | --- |
| Avg. carbon intensity of clients | Average carbon intensity of the energy grid used by clients | ... | ... | ... |
| ... | ... | ... | ... | ... |

Table II: Notions and metrics of the sustainability pillar, grouped by the notions Carbon Intensity of Energy Source, Hardware Efficiency, and Federation Complexity
4. _Report Emissions Metrics_: Selected clients report metrics such as the hardware models and energy grid, which are stored in the Emissions file.
5. _Model Sharing_: Selected clients then share their updated model parameters with the server.
6. _Federated Aggregation_: the Aggregator is used by the server and performs secure aggregation over the model updates received from selected clients.
7. _Evaluation_: After each training round, the clients perform model evaluation and call the proposed algorithm to perform metric calculations.
8. _Next training round_: Steps two to eight are repeated until all the training rounds are finished.
9. _Propagate Evaluation Results_: Once the final training round is finished and the collaborative training stops, the evaluation results get propagated to the FactSheet through the algorithm.
10. _Trust Score Computation_: The algorithm computes the overall trust score from the FactSheet and report, including the trustworthiness scores stored in the output directory of FederatedTrust.
The execution of the FederatedScope training process, together with the evaluation of the FL sustainability and trustworthiness, is depicted in Algorithm 1.
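As referenced in step three above, the following is a minimal Python sketch of how a client could wrap its local training with CodeCarbon's `EmissionsTracker` to record the estimated emissions that feed the sustainability metrics; the `train_local_model` function and the reporting format are illustrative assumptions, not FederatedScope code.

```python
from codecarbon import EmissionsTracker

def train_with_emissions_tracking(train_local_model, model, dataset):
    """Run one local training step while tracking its estimated CO2eq emissions."""
    tracker = EmissionsTracker(project_name="federated_client", log_level="error")
    tracker.start()
    try:
        updated_model = train_local_model(model, dataset)  # client-side local training
    finally:
        emissions_kg = tracker.stop()  # estimated kg of CO2eq for this round
    # Reported alongside the model update so the server can populate the FactSheet.
    report = {"emissions_kg_co2eq": emissions_kg}
    return updated_model, report
```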
## V Evaluation and Results
This section evaluates the proposed algorithm through a pool of experiments. Firstly, it includes a quantitative analysis of its functionality. Then, it analyzes how the proposed system can effectively help users to better understand the sustainability of the FL systems and support decision-making processes.
### _Functionality Evaluation_
Four use cases (UC) are conducted to examine the functionality of the sustainability pillar. They consider several levels of federation complexity, diverse degrees of carbon intensity in the energy grid utilized by both clients and the server, and different hardware efficiencies of the CPUs employed by the clients and the server. The setups for these four cases are depicted in Table III. In the following experiments, each metric carries equal weight when calculating the notion score. In addition, when determining the sustainability pillar score, the carbon intensity of the energy source notion is assigned a weight of 0.5, while the hardware efficiency and federation complexity notions are each assigned a weight of 0.25.
#### V-A1 Low Carbon Intensity and High Hardware Efficiency
UC A represents the optimal situation with minimal CO\({}_{2}\)eq emissions. In this scenario, the server and all five clients utilize the Intel Core i7-1250U CPU, which boasts exceptional efficiency with a power performance of 1447, the greatest recorded by PassMark thus far. Moreover, the federation complexity remains low, characterized by a limited number of clients, global and local training rounds, as well as a small client selection rate, dataset size, and model size. Furthermore, both the clients and server are situated in Albania, which possesses one of the least carbon-intensive energy grids. Therefore, as depicted in Table IV, UC A obtains a carbon intensity of energy source notion score of 1, a hardware efficiency notion score of 1, and a federation complexity notion score of 0.98, resulting in the highest overall sustainability score of 1.00.
#### V-A2 High Carbon Intensity and Low Hardware Efficiency
UC B illustrates a worst-case scenario with inefficient hardware, a highly complex federation resulting in high energy consumption, and high carbon intensity of the electricity grid used, resulting in substantial CO\({}_{2}\)eq emis
| | _UC A_ | _UC B_ | _UC C_ | _UC D_ |
| --- | --- | --- | --- | --- |
| Clients Location | Albania | 50% Kosovo, 50% Gambia | Switzerland | South Africa |
| Server Location | Albania | South Africa | Switzerland | South Africa |
| Clients Hardware | Intel i7-1250U | AMD FX-9590 | 40% Xeon E5-4620, 35% Xeon E5-4627, 25% Xeon E5-2650 | Intel i5-1335U |
| Server Hardware | Intel i7-1250U | Intel Xeon W-2104 | Intel i7-6800K | Intel i7-1250U |
| No. of Clients | 5 | 1000 | 1000 | 8 |
| Local Rounds | 1 | 90 | 90 | 1 |
| Dataset Size | 100 | 1.10E+06 | 1.10E+06 | 100 |
| Model Size | 98,000 | 1.00E+13 | 1.00E+13 | 99,300 |

Table III: Setups for Functionality Evaluation Experiment
Figure 2: Algorithm Design
sions. In this scenario, a server utilizes an Intel Xeon W-2104 CPU with a low power performance measurement of 51.67. All 1000 clients have an AMD FX-9590 CPU, which exhibits a low power performance of 30.76. Consequently, the overall hardware used shows inefficiency, achieving a hardware efficiency notion score of 0.01. Moreover, the federation complexity is significant, involving 1000 global and 90 local training rounds, with a federation complexity notion score of 0.13. Additionally, the server is located in South Africa, which has one of the most energy-intensive grids, emitting 709g CO\({}_{2}\)eq per kWh. Half of the clients are situated in Kosovo, which operates a carbon-intensive energy grid generating 769g of CO\({}_{2}\)eq per kWh. The other half of the clients are based in Gambia, which relies on a carbon-intensive electricity grid releasing 700g of CO\({}_{2}\)eq per kWh. Consequently, the average carbon intensity of the electricity grid used by clients totals 734.5g of CO\({}_{2}\)eq per kWh, with a carbon intensity of energy source notion score of 0.09. Combining these three notions with the weighted average, the overall score for the sustainability pillar is 0.09 for UC B, which represents a worst-case scenario in terms of the sustainability pillar using inefficient hardware and carbon-intensive electricity grids in combination with a complex federation.
#### Iv-B3 Low Carbon Intensity and Low Hardware Efficiency
UC C represents a scenario where the hardware used is inefficient, and the federation is complex, leading to high energy consumption. However, the carbon intensity of the electricity grid is low, resulting in medium CO\({}_{2}\)eq emissions. In this case, the server utilizes an Intel Core i7-6800K CPU with a power performance of 76.29. Among the clients, 40% use an Intel Xeon E5-4620 CPU with a power performance of 100.24, 35% use an Intel Xeon E5-4627 with a power performance of 71.69, and 25% use an Intel Xeon E5-2650 with a power performance of 105.21. Overall, the hardware is considered inefficient, achieving a hardware efficiency notion score of 0.01. The federation complexity is high, with a large number of clients, global training rounds, local training rounds, and parameters in the DNN model, resulting in a federation complexity notion score of 0.17. However, both the server and clients are located in Switzerland, where the energy grid has a low carbon intensity of 32g CO\({}_{2}\)eq per kWh, achieving a
| _Metric_ | _UC A_ | _UC B_ | _UC C_ | _UC D_ |
| --- | --- | --- | --- | --- |
| **Sustainability Pillar** | 1.00 | 0.09 | 0.55 | 0.53 |
| **Carbon Intensity of Energy Source Notion (weight 0.5)** | 1.00 | 0.09 | 1.00 | 0.11 |
| - Avg. Carbon Intensity of Energy Grid Clients | 1.00 | 0.08 | 1.00 | 0.11 |
| - Carbon Intensity of Energy Grid Server | 1.00 | 0.11 | 1.00 | 0.11 |
| **Hardware Efficiency Notion (weight 0.25)** | 1.00 | 0.01 | 0.04 | 0.94 |
| - Avg. Hardware Efficiency Clients | 1.00 | 0.01 | 0.05 | 0.87 |
| - Hardware Efficiency Server | 1.00 | 0.02 | 0.04 | 1.00 |
| **Federation Complexity Notion (weight 0.25)** | 0.98 | 0.13 | 0.17 | 0.96 |
| - Number of Training Rounds | 1.00 | 0.17 | 0.17 | 1.00 |
| - Number of Clients | 1.00 | 0.17 | 0.17 | 1.00 |
| - Client Selection Rate | 0.89 | 0.00 | 0.22 | 0.77 |
| - Avg. Number of Local Training Rounds | 1.00 | 0.17 | 0.10 | 1.00 |
| - Average Dataset Size | 1.00 | 0.20 | 0.20 | 1.00 |
| - Model Size | 1.00 | 0.14 | 0.14 | 1.00 |

Table IV: Sustainability Score for Functionality Evaluation
carbon intensity of energy source notion score of 1. By combining these three notions, the overall score for the sustainability pillar is 0.55 for UC C.
#### Iv-A4 High Carbon Intensity and High Hardware Efficiency
UC D utilizes highly efficient computational hardware but has a high carbon intensity in its grid, leading to a moderate level of CO\({}_{2}\)eq emissions, in contrast to UC C. In UC D, the server utilizes the Intel Core i7-1250U CPU with a power performance of 1447, while all eight clients use the Intel Core i5-1335U with a power performance of 1268. Additionally, the federation complexity is low, with a small number of clients, global training rounds, local training rounds, and a small client selection rate, dataset size, and model size. Consequently, the hardware efficiency notion score and federation complexity notion score are 0.94 and 0.96, respectively. However, both the clients and server are situated in South Africa, where the carbon intensity of the energy source is 709g CO\({}_{2}\)eq per kWh, resulting in a carbon intensity of energy source notion score of 0.11. Therefore, the final sustainability score is 0.53, similar to UC C.
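Spelling out the weighting introduced at the beginning of this section, UC D's pillar score follows directly from its notion scores in Table IV: \[0.5\cdot 0.11+0.25\cdot 0.94+0.25\cdot 0.96=0.055+0.235+0.24=0.53\]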
In conclusion, this experiment provides empirical evidence supporting the effectiveness of the proposed sustainability pillar, which enables the quantitative measurement of CO\({}_{2}\)eq emissions in various contexts of FL systems. Moreover, this sustainability pillar yields accurate and interpretable sustainability scores based on such measurements.
### _Effectiveness Evaluation_
Nevertheless, validating whether the calculated sustainability pillar enhances the credibility of the trust score is a complex task. This difficulty primarily stems from the absence of a ground truth, rendering quantitative analysis notably challenging. Therefore, this experiment analyzes and validates the effectiveness and value-adding properties of the sustainability pillar through a hypothetical case study.
Assume a multinational IT consulting company based in Luxembourg with two research and development centers located in Zurich, Switzerland, and Johannesburg, South Africa. Both branches have simultaneously submitted an FL-based training proposal, with their respective training configurations outlined in Table V. However, due to limited resources, only one proposal can be implemented. As the director of the research and development centers, the decision-maker aims to follow the guidance of the AI-HLEG and intends to evaluate the trust score of the two proposals using the algorithm proposed in this work. This calculation will ultimately determine which proposal should be adopted.
Table V presents the configurations of the two proposals, which exhibit a high degree of similarity. The primary distinction lies in the fact that Proposal A, involving the Johannesburg team, necessitates a greater number of clients to participate in the training process and entails a substantially higher number of training rounds compared to Proposal B, which is proposed by the Zurich team. Additionally, both teams intend to conduct the training process at their local facilities.
The director utilized the proposed system to upload the proposals submitted by the two teams. This system then computed and evaluated the scores of various pillars, such as robustness, privacy, and fairness, ultimately aggregating
| _Metric_ | _Proposal A_ | _Proposal B_ |
| --- | --- | --- |
| Model | ConvNet2 | ConvNet2 |
| Local Rounds | 100 | 10 |
| Dataset | FEMNIST | FEMNIST |
| Data Split (Train, Val., Test) | 0.6/0.2/0.2 | 0.6/0.2/0.2 |
| Batch Size | 50 | 50 |
| Loss | CrossEntropyLoss | CrossEntropyLoss |
| Consistent Label | False | False |
| Number of Clients | 1000 | 10 |
| Client Selection Rate | 0.3 | 0.6 |
| Federation Rounds | 1000 | 10 |
| Clients Hardware | Intel i7-8650U | Intel i7-8650U |
| Server Hardware | Intel i7-8650U | Intel i7-8650U |
| Client Location | South Africa | Switzerland |
| Server Location | South Africa | Switzerland |
| Differential Privacy | Epsilon 10 | Epsilon 10 |
| Aggregation Method | FedAvg | FedAvg |

Table V: The FL Configuration of the Proposals from the Two Branches
Figure 3: Results of Evaluation of the Proposed Algorithm for Proposal A (top) and Proposal B (bottom)
them to generate a trust score. In this experiment, equal weight was assigned to all the pillars during calculations.
However, established methods, equations, and practical calculation techniques are lacking for some of the notions and metrics mentioned in Section III. As a result, the implementation is a simplified prototype that incorporates only the basic principles, concepts, and metrics that can currently be calculated. All the computed notions are presented in Table VI.
The results of the system, as depicted in Figure 3, indicate that both proposals have similar scores in several aspects, including explainability, accountability, and federation. This similarity can be attributed to the proximity of their respective configurations. As indicated in Table VI, both proposals demonstrated low levels of robustness, as neither was optimized for resisting attacks. Regarding privacy, proposal A outperformed proposal B due to its significantly larger number of clients, which introduced more uncertainty and improved overall privacy. In addition, proposal B exhibited a greater fairness score than proposal A due to its superior client selection fairness and its more even model performance across clients.
Before the inclusion of the sustainability pillar, the trust scores for the two proposals were relatively similar, with proposal A receiving a score of 0.58 and proposal B receiving a score of 0.63, a minimal difference of 0.05. This posed a challenge in determining which proposal aligned more closely with the concept of trustworthiness. However, with the introduction of the sustainability pillar, the data presented in Table VI reveal that proposal B exhibits notable advantages regarding the carbon intensity of the energy source and the federation complexity. As a result, the final trust scores became 0.53 and 0.65 for proposal A and proposal B, respectively, increasing the discrepancy to 0.12. Ultimately, proposal B emerged as the winner due to its superior performance in sustainability.
In summary, this experiment serves as a hypothetical case study to illustrate that the sustainability pillar effectively enhances users' comprehension of the environmental impacts of using FL systems and offers valuable assistance in the decision-making process.
## VI Discussion
This section discusses the most relevant limitations noticed during the design and implementation process of the proposed algorithm. The intention is to seek future improvements and iterations over the pillar notions, metrics, and their calculation process.
Regarding limitations of the sustainability pillar, the magnitudes with which the individual metrics influence the CO\({}_{2}\)eq emissions are uncertain, yet the metrics are weighted equally. For example, the number of training rounds and the number of clients in the federation have the same weight in this prototype design, although more training rounds might contribute more to the final CO\({}_{2}\)eq emissions than a larger number of clients. Similarly, at the notion level, it is unclear whether the hardware efficiency notion and the federation complexity notion influence the CO\({}_{2}\)eq emissions equally. Thus, the weighting of the metrics and notions might only partially reflect their influence on the federation's environmental impact, and further investigation is needed in this regard.
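To make the equal-weighting assumption concrete, the following minimal sketch illustrates how normalized metric scores could be aggregated into notion scores and a pillar score; the metric names, example values, and simple averaging are illustrative assumptions, not the exact prototype implementation.

```python
# Illustrative sketch of equal-weight aggregation; metric names and values are
# hypothetical and do not reproduce the prototype's exact implementation.

def aggregate(scores, weights=None):
    """Weighted mean of normalized scores in [0, 1]; equal weights by default."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical normalized metric scores (1 = most sustainable).
hardware_efficiency = aggregate([0.95, 0.93])          # e.g. client CPUs, server CPU
federation_complexity = aggregate([0.97, 0.95, 0.96])  # e.g. clients, rounds, model size
carbon_intensity = aggregate([0.11])                   # e.g. grid carbon-intensity score

# Equal weighting at the notion level as well, which is the limitation discussed above.
sustainability = aggregate([hardware_efficiency, federation_complexity, carbon_intensity])
print(f"Sustainability pillar score: {sustainability:.2f}")
```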
For the carbon intensity of the energy source notion, using the average carbon intensity of the country's energy grid is an approximation, because the carbon intensity of the electricity grid fluctuates within a country and across the day or season. For the purpose at hand, however, it is a fairly good approximation. Regarding the hardware efficiency notion, only the efficiency of CPUs and GPUs is considered; to be more accurate, the efficiency of other components, such as RAM, could be integrated. Additionally, the power performance metric depends on PassMark benchmarking scores and is not comparable to scores from other benchmarking software. Further, a complete account of emissions would also include the CO\({}_{2}\)eq emitted in producing the hardware, which is, however, difficult to estimate.
Finally, there are additional aspects, such as the privacy-preserving technologies used in the federation, that might be relevant for carbon emission estimations. For example, if a federation uses homomorphic encryption for privacy protection, its computational complexity would increase energy consumption and emissions. Further, FL systems often use methods to detect malicious clients or free-riders, such as clustering or the H-MINE algorithm. Such methodologies are computationally heavy and may increase the computational costs, energy consumption, and CO\({}_{2}\)eq emissions of the federation.
\begin{table}
\begin{tabular}{l l l} \hline \hline _Metric_ & _Proposal A_ & _Proposal B_ \\ \hline \hline
**Robustness Pillar** & 0.33 & 0.30 \\ \hline – Resilience to Attacks & 0.27 & 0.40 \\ \hline – Algorithmic Robustness & 0.51 & 0.00 \\ \hline – Client Reliability & 0.23 & 0.50 \\ \hline
**Privacy Pillar** & 0.55 & 0.49 \\ \hline – Differential Privacy & 1.00 & 1.00 \\ \hline – Indistinguishability & 0.00 & 0.00 \\ \hline – Uncertainty & 0.65 & 0.47 \\ \hline
**Fairness Pillar** & 0.16 & 0.59 \\ \hline – Selection Fairness & 0.47 & 0.76 \\ \hline – Performance Fairness & 0.00 & 1.00 \\ \hline – Class Distribution & 0.00 & 0.00 \\ \hline
**Explainability Pillar** & 0.90 & 0.90 \\ \hline – Interpretability & 0.80 & 0.80 \\ \hline – Post-hoc Methods & 1.00 & 1.00 \\ \hline
**Accountability Pillar** & 0.73 & 0.73 \\ \hline – Factsheet Completeness & 0.73 & 0.73 \\ \hline
**Federation Pillar** & 0.79 & 0.79 \\ \hline – Client Management & 1.00 & 1.00 \\ \hline – Optimization & 0.57 & 0.57 \\ \hline
**Sustainability Pillar** & 0.25 & 0.79 \\ \hline - Carbon Intensity of & 0.11 & 0.98 \\ \hline Energy Grid Server & 0.28 & 0.28 \\ \hline – Hardware Efficiency & 0.49 & 0.91 \\ \hline \hline \end{tabular}
\end{table}
Table VI: Pillar and Notion Scores for Two Proposals
## VII Conclusion and Future Work
This work introduces the sustainability pillar to the trustworthy FL taxonomy, aiming to assess the environmental impact of FL systems. This new pillar comprises ten metrics belonging to three main notions: hardware efficiency, federation complexity, and the carbon intensity of the energy grid. Together, these notions provide a comprehensive evaluation of an FL system's resource consumption and environmental impact, highlighting the importance of efficient hardware and low-carbon energy sources. Additionally, this work designs and implements an algorithm for evaluating FL trustworthiness by incorporating the sustainability pillar. Using the CodeCarbon Python package, the algorithm now considers the hardware models used and the carbon intensity of the energy grid based on the geographical locations of clients and servers. Extensive evaluations across various scenarios reveal that FL systems with low complexity, efficient hardware, and a clean energy grid receive high sustainability and trustworthiness scores.
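For illustration, the per-round emissions measurement with CodeCarbon could look roughly like the sketch below; `train_one_round` is a hypothetical placeholder, and only the package's generic `EmissionsTracker` start/stop interface is assumed here.

```python
# Minimal sketch of measuring CO2eq for one local training round with CodeCarbon.
# `train_one_round` is a hypothetical placeholder for the client-side training step.
from codecarbon import EmissionsTracker

def train_one_round():
    pass  # local model update on one client (placeholder)

tracker = EmissionsTracker(project_name="fl_client_round")  # infers hardware and grid location
tracker.start()
try:
    train_one_round()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2eq

print(f"Estimated emissions for this round: {emissions_kg} kg CO2eq")
```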
Future work will refine the sustainability scores by investigating and adjusting the weights of individual metrics related to carbon emissions and other pillars. This includes considering the computational costs of privacy-preserving methods, like Differential Privacy and Homomorphic Encryption, and malicious client detection techniques. Enhancing the security of the FederatedTrust prototype, expanding its compatibility with various frameworks, and adapting it to decentralized federations are also potential avenues for improvement. Additionally, incorporating unimplemented metrics from the other six pillars of the taxonomy could further enhance the prototype's comprehensiveness.
|
2309.10495 | Positron Acceleration in Plasma Wakefields | Plasma acceleration has emerged as a promising technology for future particle
accelerators, particularly linear colliders. Significant progress has been made
in recent decades toward high-efficiency and high-quality acceleration of
electrons in plasmas. However, this progress does not generalize to
acceleration of positrons, as plasmas are inherently charge asymmetric. Here,
we present a comprehensive review of historical and current efforts to
accelerate positrons using plasma wakefields. Proposed schemes that aim to
increase the energy efficiency and beam quality are summarised and
quantitatively compared. A dimensionless metric that scales with the
luminosity-per-beam power is introduced, indicating that positron-acceleration
schemes are currently below the ultimate requirement for colliders. The primary
issue is electron motion; the high mobility of plasma electrons compared to
plasma ions, which leads to non-uniform accelerating and focusing fields that
degrade the beam quality of the positron bunch, particularly for high
efficiency acceleration. Finally, we discuss possible mitigation strategies and
directions for future research. | G. J. Cao, C. A. Lindstrøm, E. Adli, S. Corde, S. Gessner | 2023-09-19T10:14:18Z | http://arxiv.org/abs/2309.10495v3 | # Positron acceleration in plasma wakefields
###### Abstract
Plasma acceleration has emerged as a promising technology for future particle accelerators, particularly linear colliders. Significant progress has been made in recent decades toward high-efficiency and high-quality acceleration of electrons in plasmas. However, this progress does not generalize to acceleration of positrons, as plasmas are inherently charge asymmetric. Here, we present a comprehensive review of historical and current efforts to accelerate positrons using plasma wakefields. Proposed schemes that aim to increase the energy efficiency and beam quality are summarised and quantitatively compared. A dimensionless metric that scales with the luminosity-per-beam power is introduced, indicating that positron-acceleration schemes are currently below the ultimate requirement for colliders. The primary issue is _electron motion_; the high mobility of plasma electrons compared to plasma ions, which leads to non-uniform accelerating and focusing fields that degrade the beam quality of the positron bunch, particularly for high efficiency acceleration. Finally, we discuss possible mitigation strategies and directions for future research.
## I Introduction
The high-energy-physics community is currently prioritizing the development of an electron-positron Higgs factory, as emphasized in recent reports from both the US Snowmass process [1] and the European Strategy for Particle Physics Update [2; 3]. Linear electron-positron colliders provide clean collisions of elementary particles, suppress synchrotron radiation, and enable future upgrades to higher energies. However, if built using conventional technology--radio-frequency (rf) acceleration--these machines are typically very long and consequently very expensive. For this reason, advanced-accelerator technologies are being considered as a way to reduce the resources required to build such a collider.
Advanced accelerators aim to reduce the footprint by significantly increasing the accelerating gradient. Currently, two rf-based, mature linear-collider designs have been proposed: the International Linear Collider (ILC) [4; 5; 6; 7; 8; 9] and the Compact LInear Collider (CLIC) [10; 11; 12; 13]. The ILC acceleration gradient is \(35\,\mathrm{MV/m}\), with a total length of \(20\,\mathrm{km}\) including two \(6\,\mathrm{km}\) accelerator arms. This design allows collisions at \(\sqrt{s}=250\,\mathrm{GeV}\), adequate for a Higgs factory. A 1-TeV collider using this technology would extend to at least \(40\,\mathrm{km}\). CLIC aims to operate at an acceleration gradient of \(100\,\mathrm{MV/m}\) with a center-of-mass energy of \(\sqrt{s}=380\,\mathrm{GeV}\). This collider would have a total length of \(11\,\mathrm{km}\), with a potential upgrade to \(\sqrt{s}=3\,\mathrm{TeV}\) and a total length of \(50\,\mathrm{km}\). These designs are both pushing the limit of available resources. Beyond ILC and CLIC, other designs have been proposed, including the Cool Copper Collider (C\({}^{3}\)) [14], which could reach up to \(120\,\mathrm{MeV/m}\). Ultimately, the maximum achievable gradient in all the above machines is limited by electrical breakdown in the metallic rf cavity [15]. Advanced accelerators, however, can surpass this limit by using structures that are more resistant to breakdown.
Advanced-accelerator concepts include structure-based wakefield accelerators [16; 17] as well as plasma-based accelerators. The latter makes use of the "broken-down" nature of plasmas to overcome the gradient limit in rf accelerators. As a result, plasmas can sustain electric fields of order
\[E_{0}[\mathrm{V/m}]\approx 96\sqrt{n_{e}[\mathrm{cm}^{-3}]}, \tag{1}\]
which for typical plasma densities \(n_{e}\approx 10^{14}\)-\(10^{18}\,\mathrm{cm}^{-3}\) range from \(1\) to \(100\,\mathrm{GV/m}\)[18; 19]. This field is up to a thousand times higher than in conventional accelerators.
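As a quick numerical check of Eq. 1, the following short sketch (an illustration using standard SI constants, not code from any reference) evaluates the cold wave-breaking field over this density range.

```python
# Cold wave-breaking field E0 = m_e c w_p / e, equivalent to Eq. 1 for n_e in cm^-3.
import math

def wave_breaking_field(n_cm3):
    """Return E0 in V/m for a plasma density given in cm^-3."""
    e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12
    w_p = math.sqrt(n_cm3 * 1e6 * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
    return m_e * c * w_p / e

for n in (1e14, 1e16, 1e18):
    print(f"n_e = {n:.0e} cm^-3  ->  E0 = {wave_breaking_field(n) / 1e9:5.1f} GV/m")
# Prints roughly 1, 10 and 96 GV/m, matching the 1-100 GV/m range quoted above.
```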
Early ideas of accelerating particles in a plasma were proposed in 1956 [20; 21]. However, the research field, in its modern form, started independently in 1979 with a seminal paper by Tajima and Dawson [22] demonstrating that electrons could be accelerated in the plasma-density wave excited (or _driven_) by an intense laser pulse. Five years later, Chen, Dawson [23] and Ruth _et al._[24] proposed to drive these waves using relativistic charged-particle beams. The electromagnetic fields in the plasma-density wave (or _wake_) behind the laser or beam driver are known as _plasma wakefields_.
Initial concepts considered small perturbations of the plasma density, now known as the _linear_ regime [27]. Later, Rosenzweig _et al._[28] realized that operating
with stronger perturbations, in the so-called _nonlinear_ or _blowout_ regime, provided more favourable conditions for accelerating electrons with high efficiency and high beam quality. In this regime, plasma electrons are expelled radially outwards by an intense driver, creating a bubble-shaped sheath of plasma electrons surrounding a cavity containing only plasma ions [see Fig. 1(a)]. These ions, which are uniformly distributed and effectively immobile on the timescale of electron motion, attract the plasma electrons back toward the axis. The inward motion of the sheath electrons creates a longitudinal electric field that can accelerate electrons. Additionally, the exposed ion charge produces a transverse electric field that varies linearly with the transverse offset, thereby focusing electron bunches while preserving their area in transverse phase space (known as _emittance_[29]). Acceleration extracts energy from the wakefield, which will therefore reduce in amplitude--a process known as _beam loading_[30]. This process can be used to shape the accelerating field [see Fig. 1(b)] such that all particles are accelerated uniformly [31], allowing energy-efficient acceleration with low energy spread.
Experimental research into acceleration in plasma wakefields has progressed significantly over the past four decades. The first acceleration of electrons in a plasma was demonstrated at the Argonne National Lab in 1988 [32]. Later experiments demonstrated electron injection and acceleration in nonlinear plasma wakefields [33; 34]. Major milestones in beam-driven plasma-wakefield acceleration (PWFA) include: energy doubling of 42 GeV electrons [35]; energy-efficient acceleration of an externally injected bunch [25]; and high-gradient, high-efficiency acceleration of electrons while preserving a low energy spread [36]. Similarly, in laser-driven plasma-wakefield acceleration (LWFA), milestones include: generation of high-quality beams [37; 38; 39]; 8 GeV energy gain [40]; and the demonstration of LWFA-based free-electron lasers [41; 42]. Several challenges still remain, such as reaching high overall energy efficiency [43], use of multiple stages [44], ion motion [45; 46], hosing and beam break-up (BBU) instabilities [47; 48; 49], spin polarization [50] and high repetition rate [51; 52]. Briefly stated, ongoing experimental and theoretical research is rapidly maturing the technology, indicating that plasma acceleration of electrons may soon be compatible with a high-energy-physics application.
Nevertheless, plasma-based acceleration of electrons is not sufficient for a fully plasma-based electron-positron collider; acceleration of positrons is also required. Unfortunately, unlike in rf accelerators, the above-mentioned progress of electron acceleration in plasmas does not readily extend to positrons. Presently, the beam quality
Figure 1: Particle-in-cell simulations of the plasma-density wave and on-axis longitudinal field \(E_{z}\) excited by an electron or positron driver. (a) An electron driver excites a nonlinear plasma wake, or blowout, with strongly accelerating and focusing fields. (b) A trailing electron bunch is accelerated, extracting some of the energy in the wakefield; a process known as beam loading. (c) A positron drive bunch can also excite a nonlinear wake. Here, only the front half of a Gaussian is used, such that no positrons experience acceleration. (d) Using a full Gaussian bunch, the front half drives the wakefield and the rear half loads the wakefield and is accelerated. Adapted from Refs. [25] and [26].
and energy efficiency achievable in plasma-based positron accelerator schemes, both in experiments and simulations, is insufficient to reach the requirements of a collider.
Plasma is a unique accelerating medium in that it responds asymmetrically to particles of positive and negative charge. This is because plasmas are composed of lower-mass (more mobile) electrons and higher-mass (less mobile) ions--an aspect that is exploited in the blowout regime for electron acceleration. For positrons, however, the situation is not as fortunate. In nonlinear plasma wakefields driven by electrons (i.e., a blowout), the only region that both accelerates and focuses positrons is where the plasma electrons cross the axis [at \(\xi=-200\,\mu\)m in Fig. 1(a)]; a spatially very small region in which the plasma-electron density is highly non-uniform. This means that the accelerating and focusing fields are non-uniform and nonlinear, respectively, which induces large energy spread and emittance growth. All the favourable features of the blowout regime are therefore lost--previously referred to as the positron problem.
Considering instead nonlinear plasma wakefields driven by positrons, the situation is no better. In this "suck-in" regime, plasma electrons are sucked into the positron bunch, after which electrons cross the axis and create a blowout-like structure [see Fig. 1(c)]. The resulting wakefield can be used to accelerate positrons and, if beam loaded, can also keep plasma electrons on axis such that a positron bunch can be focused [see Fig. 1(d)]. However, while this scheme can be energy efficient [26], the accelerating and focusing fields still vary transversely in a way that does not preserve low energy spread and low emittance. In short, neither the blowout nor the suck-in regime is ideal for positrons. The big question is: can we find a suitable regime that can accelerate positrons with high gradient, high efficiency and high beam quality?
In this review, we start by specifying the requirements of a collider in Sec. II. A history of experimental and theoretical progress on positron acceleration in plasma follows in Sec. III. Several new schemes have recently been proposed to overcome the remaining challenges. These schemes are summarized in Sec. IV. To compare the performance of the schemes, a new dimensionless parameter proportional to the luminosity-per-power--characterizing both the positron bunch and the acceleration process--is employed. The resulting comparison is presented in Sec. V, which revealed a problem related to electron motion that currently limits the performance of plasma-accelerated positrons--a topic discussed in depth in Sec. VI. Finally, concluding remarks and an outlook is presented in Sec. VII.
## II Critical requirements for linear colliders
The goal of plasma-based positron acceleration is to, in an affordable manner, deliver high-energy positrons for an electron-positron collider. Physics determines the required center-of-mass energy, typically in the range 0.25-15 TeV, as well as the required collision rate, or _luminosity_, which is typically around \(10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\). This luminosity can be calculated as
\[\mathcal{L}\approx\frac{fN^{2}}{4\pi\sigma_{x}\sigma_{y}}, \tag{2}\]
where \(f\) is the collision frequency, \(N\) is the electron or positron bunch population (here assumed identical), and \(\sigma_{x/y}\) is the root-mean-square (rms) beam size of the colliding bunches in the horizontal/vertical plane. Alternatively, it can be useful to express the luminosity as
\[\mathcal{L}\approx\frac{1}{8\pi m_{e}c^{2}}\frac{P_{\mathrm{wall}}}{\sqrt{ \beta_{x}\epsilon_{nx}}}\frac{\eta N}{\sqrt{\beta_{y}\epsilon_{ny}}}, \tag{3}\]
where \(P_{\mathrm{wall}}\) is the wall-plug power required, \(\eta\) is the wall-plug-to-beam energy-transfer efficiency, \(\epsilon_{nx/ny}\) is the _normalized_ (i.e., energy-independent) emittance and \(\beta_{x/y}\) is the beta function [53], while \(m_{e}\) and \(c\) are the electron mass and speed of light in vacuum, respectively. Considering that the construction cost scales with the length of the linear collider, minimizing the construction cost requires high accelerating gradient; similarly, since the running cost scales with the wall-plug power, minimizing the running cost while maintaining luminosity [Eq. 3] requires high charge and low emittance (i.e., high beam quality) as well as high energy efficiency.
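The two luminosity expressions are equivalent once the spot sizes, beta functions and emittances are related by \(\sigma_{x/y}=\sqrt{\beta_{x/y}\epsilon_{nx/ny}/\gamma}\). The sketch below (illustrative, round-number parameters chosen for demonstration rather than taken from any design report) evaluates Eqs. 2 and 3 and confirms they agree.

```python
# Illustrative evaluation of the luminosity expressions in Eqs. 2 and 3.
import math

me_c2_J = 9.109e-31 * (2.998e8) ** 2       # electron rest energy [J]

# Eq. 2: geometric luminosity from collision rate, bunch charge and spot sizes.
f = 1e4                                    # bunch collisions per second
N = 1e10                                   # particles per bunch
sigma_x, sigma_y = 500e-9, 8e-9            # rms spot sizes at the collision point [m]
L_eq2 = f * N**2 / (4 * math.pi * sigma_x * sigma_y)

# Eq. 3: the same luminosity via wall-plug power, efficiency and beam quality.
eta = 0.1                                  # wall-plug-to-beam efficiency
gamma = 125e9 * 1.602e-19 / me_c2_J        # Lorentz factor at 125 GeV per beam
eps_nx, eps_ny = 10e-6, 0.035e-6           # normalized emittances [m rad]
beta_x = sigma_x**2 * gamma / eps_nx       # beta functions consistent with the spot sizes
beta_y = sigma_y**2 * gamma / eps_ny
P_wall = 2 * f * N * gamma * me_c2_J / eta # wall-plug power for both beams [W]
L_eq3 = eta * P_wall * N / (8 * math.pi * me_c2_J
                            * math.sqrt(beta_x * eps_nx * beta_y * eps_ny))

print(f"L (Eq. 2) = {L_eq2 * 1e-4:.2e} cm^-2 s^-1")
print(f"L (Eq. 3) = {L_eq3 * 1e-4:.2e} cm^-2 s^-1")   # identical by construction
```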
### Accelerating gradient
The highest achievable gradient in an rf cavity is around 200 MV/m [54]. To justify switching accelerator technology, a minimum accelerating field of 1 GV/m is typically required for plasma accelerators. Equation 1 indicates, therefore, that a plasma density of at least \(10^{14}\,\mathrm{cm}^{-3}\) will be required. Likely, even higher in-plasma accelerating gradients (\(>10\,\mathrm{GV/m}\)) and therefore higher densities will be required (\(>10^{16}\,\mathrm{cm}^{-3}\)), because the _effective_ gradient averaged longitudinally across multiple stages can be significantly reduced due to lengthy staging optics [44]. This minimum plasma density places restrictions on the length of the accelerating bunch \(\sigma_{z}\): to be contained within the accelerating phase of the plasma wave, the bunch length must be less than approximately one plasma skin depth, \(\sigma_{z}\lesssim k_{p}^{-1}\), where \(k_{p}=\sqrt{n_{e}e^{2}/\epsilon_{0}m_{e}c^{2}}\) is the plasma wavenumber, \(\epsilon_{0}\) is the vacuum permittivity, and \(m_{e}\) and \(e\) are the electron mass and charge, respectively. This means that bunches must typically be shorter than 50 um rms (assuming a plasma density of \(10^{16}\,\mathrm{cm}^{-3}\)). The same argument applies to the drive bunch.
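A short numerical illustration of the bunch-length constraint \(\sigma_{z}\lesssim k_{p}^{-1}\) (again using standard constants, not values specific to any experiment):

```python
# Plasma skin depth k_p^-1 = c/omega_p, which bounds the usable bunch length.
import math

def skin_depth_um(n_cm3):
    e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12
    k_p = math.sqrt(n_cm3 * 1e6 * e**2 / (eps0 * m_e * c**2))  # plasma wavenumber [1/m]
    return 1e6 / k_p                                            # skin depth [um]

for n in (1e14, 1e16, 1e18):
    print(f"n_e = {n:.0e} cm^-3  ->  k_p^-1 = {skin_depth_um(n):6.1f} um")
# At 1e16 cm^-3 the skin depth is ~53 um, hence the ~50 um rms bunch-length limit quoted above.
```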
### Energy efficiency
The combination of high particle energy, high charge and high collision frequency translates to high beam power. The wall-plug power needed to generate this beam power is defined by the energy-transfer efficiency. It is instructive to split this overall efficiency into three sub-efficiencies:
\[\eta=\eta_{\mathrm{prod}}\times\eta_{\mathrm{depl}}\times\eta_{\mathrm{extr}}, \tag{4}\]
where \(\eta_{\mathrm{prod}}\) is the driver-production efficiency, or the fraction of the wall-plug power that ends up in the drive beam; \(\eta_{\mathrm{depl}}\) is the energy-depletion efficiency, or the fraction of the drive-beam energy transferred to the plasma wake; and \(\eta_{\mathrm{extr}}\) is the extraction efficiency, or the fraction of the wakefield energy extracted by the accelerating beam. The overall efficiency \(\eta\) of conventional colliders is around 5-10% [4, 10].
The maximum achievable production efficiency depends on the type of driver; it is typically larger for electron drivers (as high as 50% [10]) compared to that for positrons, protons and laser pulses [55]. For electron-driven plasma accelerators, experiments have demonstrated a depletion efficiency above 50% [43], and simulations indicate that this can be extended beyond 90% [56, 57]. Such high depletion efficiencies require stable propagation of the driver, avoiding effects such as the _hose instability_[47, 58] and _head erosion_ (i.e., the divergence of the head of the driver [59, 60]). The extraction efficiency can be calculated using the ratio of the energy gained by the trailing bunch to the energy lost by the driver,
\[\eta_{\mathrm{extr}}=-\frac{Q_{\mathrm{trailing}}\Delta\langle E_{\mathrm{ trailing}}\rangle}{Q_{\mathrm{driver}}\Delta\langle E_{\mathrm{driver}}\rangle}, \tag{5}\]
where \(Q\) is the charge and \(\Delta\langle E\rangle\) is the change in centroid energy of the respective bunches. To compete with conventional machines, and assuming 50% production and depletion efficiencies, a 20-40% extraction efficiency is required.
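To illustrate how Eqs. 4 and 5 combine, the sketch below uses invented, round-number bunch charges and energy changes (not measured values):

```python
# Overall efficiency chain (Eq. 4) and extraction efficiency (Eq. 5), illustrative numbers only.

def extraction_efficiency(q_trailing_nC, dE_trailing_GeV, q_driver_nC, dE_driver_GeV):
    """Eq. 5: energy gained by the trailing bunch divided by energy lost by the driver."""
    return -(q_trailing_nC * dE_trailing_GeV) / (q_driver_nC * dE_driver_GeV)

eta_prod = 0.50                            # wall-plug -> driver (electron driver assumed)
eta_depl = 0.50                            # driver -> wakefield
eta_extr = extraction_efficiency(          # wakefield -> trailing bunch
    q_trailing_nC=0.5, dE_trailing_GeV=+4.0,   # assumed trailing bunch: 0.5 nC gains 4 GeV
    q_driver_nC=1.5, dE_driver_GeV=-5.0)       # assumed driver: 1.5 nC loses 5 GeV on average
eta_total = eta_prod * eta_depl * eta_extr # Eq. 4

print(f"extraction efficiency: {eta_extr:.0%}")   # ~27%
print(f"overall efficiency:    {eta_total:.0%}")  # ~7%, comparable to rf colliders
```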
### Beam quality
Beam quality directly affects the luminosity through two parameters: bunch charge and normalized emittance. Ultimately, there is no fixed requirement for charge and emittance--it is possible to have higher emittance as long as there is more charge (according to Eq. 3) and vice versa. However, conventional colliders typically use charges of order 1 nC and normalized emittances of order 10 by 0.01 mm mrad in the horizontal and vertical planes, respectively. These emittances are asymmetric, resulting in "flat" beams at the collision point, to suppress disruptive beam-beam effects, or _beamstrahlung_[61, 62]. Such requirements place tight constraints on preservation of charge and emittance throughout the accelerator, which can be particularly challenging in plasma accelerators [63].
In addition, the luminosity is indirectly affected by another beam quality: the energy spread. A small energy spread is desired to maintain a well-defined collision energy (i.e., a narrow _luminosity spectrum_). A tighter restriction, however, comes from collider final-focusing systems, which can only provide sufficiently small beta functions (\(\beta_{y}\lesssim 1\) mm) if the energy spread is small [64]; typically less than 1% rms. This problem, known as _chromaticity_, also applies to transport between stages, where large energy spreads can lead to emittance growth [65]. Note that these requirements apply to the _uncorrelated_ energy spread (i.e., within a longitudinal bunch slice); a _correlated_ energy spread, or _chirp_, can potentially be removed by _dechirping_ prior to final focusing [66, 67, 68].
The last important beam quality is _spin polarization_[69], which is required to study spin-dependent electroweak processes [70]. Spin polarization can also be challenging to preserve in a plasma accelerator [50].
### Stability
Two types of stability are required in a linear accelerator: avoidance of exponentially growing instabilities that arise from positive feedback loops (resonances), such as the beam-breakup instability [71, 72, 48]; and operation within the error tolerance of all input parameters, which includes alignment and temporal synchronization. Instability manifests in the form of loss of beam quality, such as increased normalized emittance and eventually loss of charge. It can also affect luminosity beyond a direct effect on beam quality: for instance, significant transverse jitter at the interaction point may prevent collisions even if emittance and charge is preserved. Ultimately, the accelerator must maintain sufficient stability to ensure the effect on the luminosity is small.
In summary, the above comprise a challenging list of "top-down" requirements for any linear accelerator. The following section describes two decades of "bottom-up" plasma-wakefield research towards delivering these requirements for positrons.
## III A history of accelerating positrons in plasma wakefields
Plasma-based positron-acceleration research started in the early 2000s. The first numerical study, performed by Lee _et al._[73], compared electron-driven and positron-driven nonlinear plasma wakes. The results showed that in a homogeneous plasma, a positron bunch drives comparatively lower-amplitude wakefields than those driven by an identical electron bunch (see Fig. 2). However, they also found that in a hollow plasma channel [74], where no plasma exists on axis, the wakefield amplitudes
can be more comparable between electrons and positrons. This section presents theoretical and experimental work focused on these two plasma profiles: homogeneous plasmas in Sec. III.1 and hollow plasma channels in Sec. III.2.
Only limited experimental research has been directed toward positron acceleration in plasma wakefields. This is due to a general lack of experimental facilities that can provide positron bunches with high charge and high energy. So far, all experiments have been performed at the SLAC National Accelerator Laboratory, which produced intense positron bunches for the Stanford Linear Collider (SLC) in the 1990s [84], as illustrated in Fig. 3. Selected experimental milestones are highlighted in Table 1.
### Positron acceleration in homogeneous plasmas
In the 1990s, one of the greatest challenges for linear colliders was focusing beams to the sub-micron level in order to reach high luminosity. This prompted the launch of the Final Focus Test Beam (FFTB) facility [85, 86] at SLAC, which delivered short electron and positron bunches at energies up to \(47\,\mathrm{GeV}\). Several advanced focusing and acceleration techniques were also tested, initially including plasma lensing of electrons and positrons (the E-150 experiment [87, 88]) and plasma-wakefield acceleration of electrons (the E-157 experiment [89, 90]). Later experiments continued the E-157 experiment by also investigating plasma-wakefield acceleration of positrons (the E-162 experiment [91]).
Motivated by the promise of ultra-compact final focusing for linear colliders [92], the E-150 plasma-lens experiment demonstrated focusing of electrons and then of positrons in 2000. In the experiment, reported by Ng _et al._[75], a \(28.5\,\mathrm{GeV}\) positron beam traversed a 3-mm-thick nitrogen gas jet ionized by the positron beam itself but assisted by an Nd:YAG laser pulse. The plasma densities were not measured at the time, but using a simulated value of \(5\times 10^{17}\,\mathrm{cm}^{-3}\) yielded good agreement with experimental data. The beam density was around \(2\times 10^{16}\,\mathrm{cm}^{-3}\), implying that the experiment was operated in the linear (or overdense) plasma-lens regime, in which plasma electrons neutralize the electric field of the positron bunch. The self-focusing effect was provided by the azimuthal magnetic field of the bunch. The main experimental results are shown in Fig. 4. This was the first experiment demonstrating positrons interacting with plasma wakefields.
Following successful plasma-wakefield experiments with electrons (E-157), the E-162 experiment demonstrated both meter-scale transport and acceleration of
Figure 3: Schematic of the positron source at SLAC. Positron bunches are produced by sending electrons through a high-\(Z\) target, subsequently transported through a return line to a damping ring. After damping, the positrons are accelerated and compressed by two bunch compressors before delivery to the experimental area, where plasma acceleration occurs. From Ref. [26].
Figure 2: First PIC simulation comparing the energy change of electrons (a) and positrons (b) in a plasma wakefield, indicating an asymmetric response. From Ref. [73].
positron bunches. This experiment also made use of a \(28.5\,\mathrm{GeV}\) beam, containing \(1\)-\(2\times 10^{10}\) positrons compressed to a bunch length of \(700\,\mathrm{\SIUnitSymbolMicro m}\) rms and focused to a beam size of \(40\,\mathrm{\SIUnitSymbolMicro m}\) rms, but used a \(1.4\,\mathrm{m}\)-long lithium-vapor plasma source ionized by an ultraviolet laser to a density of up to \(1.8\times 10^{14}\,\mathrm{cm}^{-3}\). In 2003, Blue _et al._[77] demonstrated the first acceleration of positrons in a plasma, as shown in Fig. 5. Here, a streak camera and a Cherenkov radiator were used to measure energy loss and gain of different slices within the positron bunch. Using a similar setup, Hogan _et al._[78] showed that the focusing strength increased from the head to the tail of a positron bunch. The plasma density required to optimally focus the tail was found to be approximately 7 times lower than that needed for an identical electron bunch. This asymmetry occurs because the positron bunch attracts plasma electrons, resulting in an on-axis density spike with increasing electron density toward the tail of the bunch, as compared to the uniform ion density observed for electron bunches.
Although the on-axis electron-density spike focuses positrons, it results in nonlinear focusing and rapid emittance growth. As a result, a halo of diverged positrons will form around the core of the bunch. In a subsequent FFTB experiment, Muggli _et al._[79] investigated this effect by quantifying the fraction of positron charge contained in the halo, as illustrated in Figs. 6(a)-(d). This experiment showed that the halo contained as much as 40% of the total charge after \(1.4\,\mathrm{m}\) of propagation. Supporting PIC simulations indicate that the normalized
Figure 5: First observation of positron acceleration in plasma. The energy of various slices within the positron bunch was measured using a streak camera with a 1 ps temporal resolution (left plot). When the plasma is on (red), the bunch head loses energy while the tail gains energy; this is compared to when the plasma is off (blue). The peak accelerating field, averaged over \(1.4\,\mathrm{m}\), is \(56\,\mathrm{MeV}\)/\(\mathrm{m}\). Moreover, a scan of plasma density (right plot) shows the experimental measurements (blue) as well as simulated predictions (red) of the change in centroid energy, indicating higher-amplitude wakefields at higher plasma densities. From Ref. [77].
Figure 6: Halo formation and emittance growth for positrons in a plasma. Transverse profiles were measured without plasma (a, c) and with plasma (b, d). The fits to the profiles show that after propagation through a plasma, the fraction of charge contained in the surrounding halo (dashed blue lines) was significantly larger compared to the core (dashed purple lines). A matching PIC simulation shows a large emittance growth in the horizontal (e) and vertical plane (f), both for the full projected beam (dashed black line) and for various longitudinal slices (colored lines; numbered from head to tail). Adapted from Ref. [79].
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Year** & **Description** & **Reference** \\ \hline
2001 & First plasma focusing of positrons & Ng _et al._[75] \\
2003 & First guiding of positrons in a near-hollow plasma channel & Marsh _et al._[76] \\
2003 & First broad-band deceleration and acceleration of positrons & Blue _et al._[77] \\
2003 & First meter-scale transport of positron bunches & Hogan _et al._[78] \\
2008 & First observation of positron halo formation and emittance growth & Muggli _et al._[79] \\
2015 & First multi-GeV energy gain for positrons & Corde _et al._[26] \\
2016 & First demonstration of a hollow-plasma-channel accelerator & Gessner _et al._[80] \\
2017 & First acceleration of a distinct positron bunch & Doche _et al._[81] \\
2018 & First measurement of positron-driven transverse wakefields in a hollow channel & Lindstrøm _et al._[82] \\
2023 & First efficient energy transfer between positron bunches in a hollow plasma channel & Gessner _et al._[83] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental milestones in plasma-based positron acceleration research. The year refers to year of publication.
emittance increased by a factor 10-100, not only for the projected beam but also for individual longitudinal slices [see Figs. 6(e) and (f)].
After the shutdown of FFTB in 2006, a new facility was planned, reusing part of the accelerator but moving and upgrading the experimental area. The Facility for Advanced aCcelerator Experimental Tests (FACET) [93; 94] started operation in 2012.
In the intervening years, numerical studies investigated the transition between linear and nonlinear wakefields for positrons, as well as issues related to efficiency and beam quality. Lu _et al._[95] found that the linear wakefield theory breaks down at lower beam densities for positrons compared to electrons. Another study by Zhou _et al._[96] found that an electron bunch drives a stronger wakefield than an equivalent positron bunch, in agreement with Lee _et al._[73]. By investigating the dynamics of the plasma electrons, Zhou _et al._ observed that when electrons flow into the positron bunch, the excess negative charge acts just like a (weaker) electron bunch with a positively charged head. In 2007, Lotov [97] demonstrated that high-efficiency (up to 60%) positron acceleration is possible in an electron-driven plasma wakefield with optimized positron beam loading, though with energy spreads up to several percent. It was noted that the attainable efficiency is higher for weakly nonlinear wakes than for strongly nonlinear wakes. Complementary to this finding was a study by An _et al._[98], which demonstrated similarly high efficiencies and that the weakly nonlinear regime offers higher efficiency than the linear regime in exchange for a higher energy spread. Other studies showed that the positron emittance growth increases for higher plasma densities [99], and that the energy spread of a longitudinal slice can increase to the 10% level after only 50 cm of propagation in a plasma [100].
In 2015, Corde _et al._[26] reached a major milestone at the FACET facility: experimental demonstration of high-gain (5.75 GeV), high-gradient (3.8 GV/m) and high-efficiency (30%) acceleration of positrons in a plasma. Here, the incoming positron bunch had an energy of 20.35 GeV, a charge of \(\sim\)2.2 nC and a bunch length of 30-50 \(\mathrm{\SIUnitSymbolMicro m}\). Approximately 100-200 pC of charge was accelerated in a 1.3 m-long lithium plasma at a density of \(8\times 10^{16}\) cm\({}^{-3}\)--a significantly higher density than in previous experiments, enabled by the use of shorter bunches. The results are illustrated in Fig. 7. In this experiment, the head of the positron bunch drove a strongly nonlinear wakefield while the tail loaded the wakefield, extracting a significant fraction of the energy deposited in the wake [Fig. 1(d)]. Without the presence of the bunch tail, the accelerating region is defocusing for positrons [see Fig. 1(c)] as the plasma electrons flow outward behind the bunch head, forming a blowout-like structure. However, some of the plasma electrons remain on axis due to the focusing field of the positron bunch tail [see Fig. 1(d) around \(\xi=-100\,\mathrm{\SIUnitSymbolMicro m}\)], a process known as _transverse beam loading_, resulting in an accelerating region which is focusing for the positrons. This scheme is referred to as _self-loaded_ plasma-wakefield acceleration. A final energy spread of 1.8% rms was achieved, which for the 22% relative energy gain corresponds to a uniformity of the accelerating field of around 8% rms.
Up to this point, all positron experiments had been performed using single bunches--that is, bunches with a Gaussian-like current profile. However, to reduce the energy spread of the accelerated positrons, a driver-trailing bunch pair is required. With the FACET facility's ability to produce such bunch pairs, as used in PWFA experiments for electrons [25], a follow-up experiment was performed by Doche _et al._[81]. In this experiment, a chirped (i.e., time-energy correlated) positron bunch with a mean energy of 20.35 GeV traversed a "W-shaped" magnetic
Figure 7: First multi-GeV acceleration of positrons in a plasma wakefield at FACET. (a) An imaging spectrometer focused at 22.85 GeV shows accelerated positrons with a defined peak in the energy spectrum (black line) at 24.75 GeV and an energy spread of 1.8% rms based on a Gaussian fit (red dashed line). (b) The highest energy gain of the peak was approximately 5.75 GeV; here the imaging energy was 25.35 GeV. (c) To measure the energy-transfer efficiency, the continuous energy spectrum of the decelerated particles was also measured. From Ref. [26].
chicane, within which the beam was energetically dispersed (i.e., there was a transverse-energy correlation). In the chicane, a notch collimator blocked the central part of the energy spectrum before the bunch was again undispersed, resulting in a double-bunch structure with a leading drive bunch (\(20.55\,\mathrm{GeV}\)) and a trailing bunch (\(20.05\,\mathrm{GeV}\)). The charges of the driver and trailing bunches were approximately \(480\,\mathrm{pC}\) and \(260\,\mathrm{pC}\), respectively, with corresponding bunch lengths of \(30\,\mathrm{\SIUnitSymbolMicro m}\) and \(40\,\mathrm{\SIUnitSymbolMicro m}\). The \(1.3\,\mathrm{m}\)-long lithium plasma was ionized to a density of \(1\times 10^{16}\,\mathrm{cm}^{-3}\). Using this setup, the experiment demonstrated the first acceleration of a distinct positron bunch in a plasma wakefield driven by another positron bunch (see Fig. 8). Here, an energy-transfer efficiency of about \(40\%\) and a final energy spread of \(1.0\%\) rms were achieved; given the relative energy gain of \(7\%\), this energy spread corresponds to a field uniformity of approximately \(14\%\).
In the above experiment [81], the beam emittance was varied by inserting titanium foils of different thicknesses, as a way to investigate the transition between the nonlinear and quasi-linear regimes. This was motivated by the quality-preserving features of the quasi-linear regime, which could mitigate the emittance growth seen in strongly nonlinear wakefields. Measurements showed that higher emittances resulted in smaller energy gain; a sign of lower-amplitude plasma wakefields. Simulations using the experimental parameters indicate that the low-emittance beam (\(100\times 10\,\mathrm{mm}\,\mathrm{mrad}\)) drove a nonlinear plasma wakefield while the high-emittance beam (\(270\times 60\,\mathrm{mm}\,\mathrm{mrad}\)) drove a quasi-linear plasma wakefield. Additionally, a negative correlation was seen between the trailing-bunch charge and the amplitude of the wakefield, as well as between the charge and the energy spread--observations consistent with beam loading.
Finally, in a separate numerical study, Fujii _et al._[101] examined the often-neglected, but important issue of positron-beam extraction at the end of acceleration; they found that a plasma-density down ramp can be detrimental and lead to increased divergence, in contrast to electrons, for which ramps lead to reduced divergence [102; 103; 104; 105; 106; 107; 108]. An alternative method was therefore proposed: gradual reduction in wake amplitude via head erosion, as this keeps the positrons in phase with the focusing field.
Summarizing the progress of positron acceleration in homogeneous plasmas, experiments have demonstrated acceleration with high gradient, high energy-transfer efficiency and reasonably low energy spread. However, acceleration of positron bunches with both low emittance and low energy-spread remains a major challenge.
### Positron acceleration in hollow plasma channels
The alternative approach is to use a hollow plasma channel, which can be described as a tubular plasma surrounding an un-ionized (hollow) core. This concept was originally proposed by Tajima in 1983 [74], known at the time as the _plasma-fiber accelerator_. The motivation was to increase the acceleration length, in two ways:
Figure 8: First acceleration in a plasma of a distinct positron bunch. The drive and trailing bunches were first sent individually through a plasma, shown in (a) and (b), respectively. Energy loss can be observed in both bunches with negligible energy gain, as the plasma wavelength is much longer than the bunch length. (c) However, when both bunches propagated through the plasma, approximately \(85\,\mathrm{pC}\) of the trailing-bunch charge was accelerated. The spectral peak at \(21.5\,\mathrm{GeV}\) was accelerated by \(1.45\,\mathrm{GeV}\) in \(1.3\,\mathrm{m}\), corresponding to an accelerating gradient of \(1.12\,\mathrm{GV}\)/\(\mathrm{m}\). The drive-to-main efficiency was estimated to be \(40\%\) with a final energy spread of \(1.0\%\). From Ref. [81] (CC BY 4.0).
Figure 9: Experimental concept for creating a hollow plasma channel. A laser is used to ionize a gas (red line) by combining the use of an axicon, to create a longitudinally uniform plasma filament, and a kinoform, to create a higher-order Bessel profile in the transverse plane. From Ref. [109] (CC BY 3.0).
by avoiding the reduced velocity of light in a plasma, thereby suppressing phase slippage between the accelerating electrons and the laser pulse (known as _dephasing_); and to provide optical guiding of lasers, thereby suppressing the divergence of the wavefront of the laser (known as _diffraction_). While initially based on _overdense_ plasmas (\(\omega<\omega_{p}\), where \(\omega\) and \(\omega_{p}\) are the laser and plasma frequencies, respectively), the concept was later extended to using _underdense_ plasmas (\(\omega>\omega_{p}\)), which provide favourable laser-guiding characteristics [110, 111, 112].
Later work by Chiou and Katsouleas [113] highlighted that hollow plasma channels provide several advantages: transversely uniform accelerating fields, which enable low energy spread; zero focusing fields within the channel, which enable preservation of emittance; as well as high energy efficiency through beam loading. However, around the same time, Schroeder _et al._[114] calculated that bunches travelling off-axis in the channel would excite a transverse wakefield that acts to deflect the bunch, and could therefore lead to a beam-breakup instability.
The first proposal to use hollow plasma channels for positron acceleration came from Lee _et al._ in 2001 [73], motivated by the higher-amplitude wakefields achievable compared to those in a homogeneous plasma. This led to hollow channels being proposed for the positron arm of plasma-based electron-positron colliders, including in the _plasma afterburner_ concept proposed in 2002 [115, 116].
Motivated by these collider concepts, a first attempt at realizing the hollow channel was performed as part of the E-162 [91] positron experiment at FFTB. In 2003, Marsh _et al._[76] reported the propagation of positron bunches through a meter-scale near-hollow plasma channel, comparing it to propagation through a homogeneous plasma. In this experiment, the channel was produced by blocking the central portion of the UV laser that ionized the plasma 200 ns before the arrival of the positron bunch--an approach that produced an on-axis density depression rather than a truly hollow channel. To properly exploit the advantages of a hollow channel, complete absence of plasma in the channel would be required.
Several ideas were proposed regarding how to realize the hollow channel experimentally. Kirby _et al._[117] proposed inserting a circular obstruction into a gas jet. However, another approach by Fan _et al._[118] was found to be more promising: ionizing the gas using a tubular laser pulse created using a combination of an axicon and a spiral phase plate (known as a _kinoform_[119]). This axicon-kinoform setup, which produces a plasma that is longitudinally uniform and a high-order Bessel function transversely, is illustrated in Fig. 9.
In 2011, Kimura _et al._[109] performed the first self-consistent simulations of positron acceleration in hollow plasma channels produced by a kinoform optic, finding that 1 m-long plasma channels with a density of order
Figure 10: Experimental layout of the first demonstration of plasma wakefields in a hollow plasma channel. An ionizing laser passes through a kinoform, resulting in an annular transverse profile (a), and is coupled to the beam axis using a holed mirror. Shortly after, a positron bunch propagates through the channel. The transverse profile of the positron bunch is captured on a downstream yttrium-aluminium garnet (YAG) screen. Comparing images without (b) and with (c) a plasma channel, no difference in beam size was observed, indicating the absence of focusing fields in the channel. (d) The hollow-channel density profile was inferred by scanning the channel offset in both transverse planes with respect to the beam axis. When the beam interacts with the plasma of the channel wall, it experiences a focusing force, resulting in higher divergence and a larger beam size on the YAG screen. Aligning the beam to the channel centre, its energy spectrum was measured downstream using a dipole spectrometer and a LANEX screen. A histogram of the centroid energy loss for 315 shots is shown in (e). Adapted from Ref. [80] (CC BY 4.0).
\(10^{16}\,\mathrm{cm}^{-3}\) could be produced and would sustain accelerating fields as high as \(3\,\mathrm{GV/m}\). Another conclusion was that the positron beam could in principle ionize the gas within the channel, but this can be avoided with a sufficiently low beam density (e.g., larger than \(20\,\mathrm{\SIUnitSymbolMicro m}\) rms beam size and \(20\,\mathrm{\SIUnitSymbolMicro m}\) rms bunch length for a \(2.9\,\mathrm{nC}\) bunch, assuming hydrogen gas). These simulations laid the groundwork for hollow-channel experiments at FACET.
Using the kinoform method to generate a hollow channel, in a 2014 experiment performed at FACET, Gessner _et al._[80] demonstrated the first plasma wakefield driven by positrons in a truly hollow plasma channel. The experimental setup and results are shown in Fig. 10. An \(8\,\mathrm{cm}\)-long hollow channel with a radius of \(250\,\mathrm{\SIUnitSymbolMicro m}\) and a thickness of \(50\,\mathrm{\SIUnitSymbolMicro m}\) was formed in lithium vapor at a density of \(8\times 10^{16}\,\mathrm{cm}^{-3}\). This channel was ionized by a Ti:sapphire laser with a \(J_{7}\) Bessel profile (using a 7-step kinoform), which arrived \(3\,\mathrm{ps}\) prior to the arrival of the positron bunch. The bunch had an energy of \(20.35\,\mathrm{GeV}\), a charge \(0.53\,\mathrm{nC}\), and a length of \(35\,\mathrm{\SIUnitSymbolMicro m}\) rms. Measurements of the beam size were used to show that the channel center was fully devoid of ionized gas [see Fig. 10(d)]. When the beam propagated through the channel, changes in the energy spectrum of approximately \(20\,\mathrm{MeV}\) were observed [see Fig. 10(e)], implying longitudinal wakefields of approximately \(230\,\mathrm{MV/m}\)--in agreement with PIC simulations.
Following these results, the first acceleration of a positron bunch in a hollow plasma channel was demonstrated in 2016 [120, 121]. The experiment was performed using FACET's two-bunch configuration, where a separate positron trailing bunch was accelerated in the wake of a positron drive bunch. A drive bunch of energy \(20.5\,\mathrm{GeV}\) and a trailing bunch of energy \(20.1\,\mathrm{GeV}\), with a combined charge of \(560\,\mathrm{pC}\), were sent into the hollow plasma channel. As in the single-bunch experiments, a kinoform was used to create the channel with a similar radius but now \(25\,\mathrm{cm}\) long and with a reduced plasma density of \(3\times 10^{15}\,\mathrm{cm}^{-3}\). A linear wakefield was excited in the channel, accelerating the trailing bunch by approximately \(20\,\mathrm{MeV}\), corresponding to a peak gradient of around \(70\,\mathrm{MeV}\)/m. The energy-transfer efficiency reached a maximum of \(18\%\) when the drive-to-trailing-bunch charge ratio was approximately 5:1, with a median transformer ratio of \(0.55\)[83]. The experimental results are depicted in Fig. 11.
While at this point, hollow-channel positron acceleration appeared to be a working solution, one major problem remained: the transverse instability, as noted by Schroeder _et al._[114] in 1999. Misaligned beams propagating off-axis induce a strong transverse wakefield that quickly deflects the beam--a problem that is aggravated by the lack of on-axis focusing fields. The transverse wakefield \(W_{x}\) (per offset \(\Delta x\) of the driving particle) at a short distance \(z\) behind a particle is fundamentally linked to the longitudinal wakefield \(W_{z}\) through
\[\frac{W_{x}}{\Delta x}=-\frac{\kappa(a,b)}{a^{2}}\int_{0}^{z}W_{z}dz^{\prime}, \tag{6}\]
also known as the _short-range wake theorem_[48], where \(\kappa(a,b)\) is a numerical coefficient dependent on the inner and outer channel radii \(a\) and \(b\), respectively [82, 121]. Equation 6 implies that the transverse wakefield increases with larger offsets \(\Delta x\), which leads to a resonance and therefore an instability [122]. It also implies that if the hollow-channel radius is reduced to increase the longitudinal wakefield (scaling approximately as \(W_{z}\sim 1/a\)), the transverse wakefield increases even more rapidly (\(W_{x}\sim 1/a^{3}\) assuming the above scaling for \(W_{z}\)). The resulting instability inevitably leads to catastrophic beam loss.
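The unfavourable scaling can be made explicit with a small sketch; here the coefficient \(\kappa\) and the longitudinal wakefield are set to arbitrary normalized values purely to expose the \(1/a^{3}\) dependence implied by Eq. 6, so the absolute numbers carry no physical meaning.

```python
# Scaling of the transverse wakefield in a hollow channel (Eq. 6): shrinking the
# channel radius a raises W_z ~ 1/a but raises W_x per offset ~ 1/a^3.
# kappa and the W_z normalization are set to 1 purely for illustration.

def longitudinal_wake(a, z):
    return 1.0 / a                      # assumed scaling W_z ~ 1/a, flat over the short distance z

def transverse_wake_per_offset(a, z, kappa=1.0, steps=1000):
    dz = z / steps                      # Eq. 6: -(kappa / a^2) * integral of W_z dz'
    integral = sum(longitudinal_wake(a, i * dz) * dz for i in range(steps))
    return -kappa / a**2 * integral

z = 1.0
for a in (1.0, 0.5, 0.25):
    wx = transverse_wake_per_offset(a, z)
    print(f"a = {a:4.2f}  ->  W_x per offset = {wx:8.2f}   (grows as ~1/a^3)")
```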
The effect of a misaligned beam was measured experimentally in the 2016 two-bunch experiments at FACET. By observing the transverse deflection of the trailing bunch when propagating in a misaligned channel, Lindstrøm _et al._[82] measured the transverse wakefield in a hollow plasma channel. Using the same beam and plasma parameters as reported in Refs. [83, 120, 121], the deflection angle of the trailing bunch was measured with the downstream spectrometer screen and correlated with
Figure 11: First positron acceleration in a hollow plasma channel. Energy gain of the trailing bunch (a) and energy loss of the drive bunch (b) were measured for 500 shots (red bars), and compared to spectra measured without plasma (blue bars). At a bunch separation of \(330\,\mathrm{\SIUnitSymbolMicro m}\), the energy-transfer efficiency (c) and transformer ratio (d) were calculated using the centroid energy change of the bunches. Adapted from Refs. [80] (CC BY 3.0) and [83] (CC BY 4.0).
the channel offset [see Fig. 12(a)]. This offset was measured by imaging the transverse profile of the ionizing laser using downstream cameras. The main source of the channel offset was a 30-40 \(\mathrm{\SIUnitSymbolMicro m}\) rms transverse laser jitter (the positron beam jittered by less than 5 \(\mathrm{\SIUnitSymbolMicro m}\) rms). The wakefield measurement was repeated at various drive-to-trailing bunch separations (50-600 \(\mathrm{\SIUnitSymbolMicro m}\)), peaking at a separation of 200 \(\mathrm{\SIUnitSymbolMicro m}\), in good agreement with both analytical models and PIC simulations [see Fig. 12(b)]. A second, indirect estimation of the transverse wakefield was performed by utilizing Eq. 6 and a measurement of the longitudinal wakefield [see Fig. 12(c)]. Ultimately, these measurements confirmed the presence of the strong transverse wakefields and the tendency for hollow channels to be unstable.
In parallel, several aspects and variations of the hollow-channel concept were studied with numerical simulations. Yi _et al._[123, 124] proposed a proton-driven hollow channel for positron (and proton) acceleration, in order to accelerate by up to 1 TeV in a single accelerator stage, using the in-flowing electrons to keep the bunch focused. This was later extended to using trains of proton bunches by Li _et al._[125]. Amorim _et al._[126] proposed a scheme where an intense, tightly-focused positron drive bunch causes strong ion motion, creating a (near-)hollow channel with wakefields that can both accelerate and focus positrons. Moreover, Golovanov _et al._[127] extended the analytical description of hollow channels from linear [121, 114] to nonlinear wakefields. Finally, Wu _et al._ proposed using hollow channels to both dechirp [128] and linearize [129] electron/positron bunches in longitudinal phase space.
Since the end of the FACET program in 2016, no other plasma-based positron-acceleration experiments have been performed. The possibility for FACET-II [130]--the next-generation facility that started operation in 2021--to deliver positrons is presently unclear. In LWFA, positron-acceleration experiments have not yet been performed, partly due to the challenges of sourcing and injecting positron beams. That said, experiments have demonstrated generation of intense ultra-relativistic positron beams using high-energy electrons from LWFAs [131, 132, 133, 134], which may be used for future positron experiments.
In summary, plasma-based positron acceleration was successfully demonstrated in experiments, showing high gradients (\(>\)GV/m), high gains (\(>\)GeV), and high efficiency (\(\sim\)40%). However, while reasonable charge was accelerated (100-200 pC), the energy spread per energy gain (i.e., the field uniformity) was too high (\(\sim\)10%), as was the final emittance--where measured, the emittance growth was large or the propagation unstable. Nevertheless, while sufficient beam quality has not yet been demonstrated, the experiments and theoretical investigations have inspired a wave of new schemes (Sec. IV).
## IV Proposed schemes
The minimum objective for any plasma-based positron accelerator is to create a region that is both accelerating and focusing. Firstly, to generate a longitudinal field that accelerates positrons, there must be a net radial current of plasma electrons outwards within each longitudinal slice of the bunch. Secondly, to generate a transverse field that focuses positrons, there must be a net negative charge density (i.e., surplus plasma electrons) locally within the bunch.

Figure 12: First measurement of the transverse wakefields in a hollow plasma channel. (a) The slope (red line) of the correlation between angular deflection and the channel offset multiplied by the drive-bunch charge (blue points) can be used to calculate the amplitude of the transverse wakefield. (b) This measurement (orange error bars) is compared to an analytical model (black dotted line) and PIC simulations (gray squares). (c) The longitudinal wakefield was also measured and used for an indirect estimate of the transverse wakefield [blue area in (b)]. The bunch separation was measured using electro-optical sampling. From Ref. [82] (CC BY 4.0).
The proposed schemes, summarized below, all fulfill the minimum objective. These either optimize or modify existing schemes tested in experiments (i.e., homogeneous plasmas and hollow plasma channels; see Sec. III); modifying the driver, the plasma or both. Here we discuss the principles behind each scheme, their advantages and limitations, as well as example parameters from PIC simulations. An important difference between previous experiments and the proposed schemes is that the former were all positron-driven, whereas the latter are all electron-driven.
Note that only schemes with externally injected, relativistic positron bunches are reviewed, which excludes several proposed schemes for generating and injecting positron bunches [135, 136, 137, 138, 139, 140, 141, 142, 143, 101].
### Homogeneous plasmas with Gaussian beams
In homogeneous plasmas, two regimes are considered: (1) the quasi-linear regime, where fields are strong but several advantages of the linear regime are retained; and (2) the nonlinear regime, which can support even stronger fields and higher bunch charges. While these regimes have all been studied extensively both in theory and in experiments, recent numerical work has focused on finding optimal parameters.
#### iv.1.1 Quasi-linear regime
The linear regime characterizes plasma-density perturbations \(\delta n\) that are small compared to the ambient plasma density \(n_{0}\). In this case, the first-order or the linear perturbation term, \(\delta n/n_{0}\), dominates over higher-order terms, which means the plasma-electron motion can be described by the wave equation
\[\left(\frac{\partial^{2}}{\partial t^{2}}+\omega_{p}^{2}\right)\frac{\delta n }{n_{0}}=S_{\rm driver}, \tag{7}\]
where \(\omega_{p}\) is the plasma frequency and \(S_{\rm driver}\) is a driver source term, either from a particle beam (\(S_{\rm beam}=-\omega_{p}^{2}n_{b}/n_{0}\) where \(n_{b}\) is the peak beam density) or a laser beam (\(S_{\rm laser}=\frac{1}{2}c^{2}\nabla^{2}a^{2}\) where \(a^{2}\) is the normalized laser intensity) [145]. The solution to Eq. 7 is a density perturbation that is zero in front of the driver and sinusoidal behind it. The resulting electric fields, as described by Keinigs _et al._[27], are also sinusoidal in the longitudinal direction, but evanescent in the transverse (see Fig. 13). In the linear regime, the response of the plasma electrons to positrons and electrons is completely symmetric, with a phase difference of \(180^{\circ}\). This symmetry breaks when the density perturbation approaches the ambient density (\(\delta n/n_{0}\approx 1\))--often known as the quasi-linear regime.
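As an illustration of this linear response, the following minimal Python sketch integrates Eq. 7 in the co-moving coordinate \(\xi=z-ct\) for an assumed Gaussian beam driver; the driver amplitude and length are placeholders chosen only to show the zero-ahead, sinusoidal-behind structure of the solution.

```python
import numpy as np

# Minimal sketch of the linear plasma response (Eq. 7) to a beam driver. In the
# co-moving coordinate xi = z - ct the equation becomes
#   (d^2/dxi^2 + k_p^2) (dn/n0) = -k_p^2 * n_b(xi)/n0,
# whose causal solution vanishes ahead of the driver and is sinusoidal behind it.

k_p = 1.0                                # lengths in units of k_p^-1
xi = np.linspace(10.0, -40.0, 3000)      # from ahead of the bunch head to far behind it
dxi = abs(xi[1] - xi[0])

nb_over_n0 = 0.1 * np.exp(-0.5 * xi**2)  # Gaussian driver, peak n_b/n0 = 0.1 (placeholder)

# Green's-function solution: dn/n0(xi) = -k_p * int_{xi}^{inf} n_b/n0(xi') sin(k_p (xi' - xi)) dxi'
dn_over_n0 = np.empty_like(xi)
for i, x in enumerate(xi):
    ahead = slice(0, i + 1)              # samples with xi' >= x (ahead of this slice)
    dn_over_n0[i] = -k_p * np.sum(nb_over_n0[ahead] * np.sin(k_p * (xi[ahead] - x))) * dxi

# Behind the driver the perturbation oscillates at the plasma wavelength 2*pi/k_p;
# a positron driver gives the same wake shifted by 180 degrees.
print("peak |dn/n0| behind the driver:", round(float(np.abs(dn_over_n0[xi < -5.0]).max()), 3))
```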
High efficiency can be achieved with beam loading, which in the linear regime is simply a destructive interference between the fields of the driver and trailing bunch. Efficient beam loading therefore requires the charge of the trailing bunches to be approximately that of the driver. For wide drivers (with beam size \(\sigma_{r}>k_{p}^{-1}\)) the field extends to the size of the beam, which means that the trailing bunch must have matching beam size to extract the wakefield energy [146]. On the other hand, if the driver is narrow (\(\sigma_{r}<k_{p}^{-1}\)), the fields extend transversely over a characteristic range \(k_{p}^{-1}\) regardless of beam size--it is therefore possible to beam load with potentially very narrow trailing bunches (\(\sigma_{r}\ll k_{p}^{-1}\)). Low correlated energy spread is possible in the linear regime through beam loading with a tailored current profile [30], whereas low uncorrelated energy spread can be achieved by using narrow bunches, for which all particles experience only the near-uniform fields close to the axis.
Figure 13: (a) Accelerating field and (b) transverse focusing force in a quasi-linear plasma wakefield excited by a \(20.55\) GeV positron driver (labelled “D”) with emittance \(270\times 60\) mm mrad and beam density \(n_{b}/n_{0}=0.6\). Here, the plasma density is \(n_{0}=10^{16}\) cm\({}^{-3}\). A 20.05 GeV trailing positron bunch (labelled “T”) with the same emittance and a charge of \(\sim\)130 pC reached a peak beam density of \(n_{b}/n_{0}=0.15\). From Ref. [81] (CC BY 4.0).

Low emittance also implies narrow bunches. As an example, assume collider-level emittances -- 0.5 mm mrad averaged over both transverse planes, and beams with energies of order 10 GeV in a quasi-linear plasma wave at a density of \(1\times 10^{16}\) cm\({}^{-3}\): the transverse beam size will be approximately \(\sigma_{r}\approx 0.01k_{p}^{-1}\)[146]. In principle, emittance will be preserved in the quasi-linear regime, as the focusing fields are linear close to the axis. However, when combined with the charge required for high-efficiency beam loading, the beam density quickly exceeds the plasma density. This means the trailing bunches do not operate in the quasi-linear, but instead in the nonlinear regime. High beam density results in an on-axis electron spike which can make it challenging to preserve beam emittance. A quantitative description of this beam-density limit is given in Sec. VI.
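The quoted beam-size estimate can be reproduced with a short back-of-envelope calculation. The sketch below assumes a blowout-style matching condition, \(\beta_{m}\approx\sqrt{2\gamma}/k_{p}\), which is only an approximation in the quasi-linear regime, so the result should be read as an order-of-magnitude check.

```python
import numpy as np

# Back-of-envelope check (sketch) of sigma_r ~ 0.01 k_p^-1 for a 0.5 mm mrad,
# ~10 GeV beam in a plasma of 1e16 cm^-3. Assumption: beta_matched ~ sqrt(2*gamma)/k_p.

r_e = 2.818e-15                        # classical electron radius [m]
n0 = 1e16 * 1e6                        # plasma density [m^-3]
k_p = np.sqrt(4 * np.pi * r_e * n0)    # plasma wavenumber [1/m]

gamma = 10e9 / 0.511e6                 # Lorentz factor at 10 GeV
eps_n = 0.5e-6                         # normalized emittance [m rad]

beta_m = np.sqrt(2 * gamma) / k_p            # assumed matched beta function [m]
sigma_r = np.sqrt(eps_n * beta_m / gamma)    # matched rms beam size [m]

print(f"k_p^-1        = {1e6 / k_p:.1f} um")
print(f"sigma_r       = {sigma_r * 1e6:.2f} um")
print(f"k_p * sigma_r = {k_p * sigma_r:.3f}  (text quotes ~0.01)")
```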
Positron acceleration experiments in the quasi-linear regime by Doche _et al._[81] (described in Sec. III.1) demonstrated gradients of order \(1\,\mathrm{GV/m}\), charge of around \(100\,\mathrm{pC}\), energy efficiency of around \(40\%\), energy spread (per gain; i.e., field uniformity) of order \(10\%\), for emittances of around \(270\times 60\,\mathrm{mm}\,\mathrm{mrad}\). Here, the emittance was not shown to be preserved. In simulations by Hue _et al._[146], optimized for emittance and uncorrelated energy-spread in the quasi-linear regime, an emittance as low as \(0.64\,\mathrm{mm}\,\mathrm{mrad}\) and a charge of \(4\,\mathrm{pC}\) can be accelerated with a high gradient of \(1.3\,\mathrm{GV/m}\), a high efficiency of \(30\%\) and good quality (emittance preserved and less than \(1\%\) uncorrelated energy spread). In this study, the correlated energy spread was not fully preserved, but assumed removed by dechirping prior to collisions.
In summary, the quasi-linear regime can almost deliver collider relevant positron beams with low emittance and low energy spread at high gradient and efficiency, falling short only with respect to accelerated charge--this is a few orders of magnitude lower than required for colliders. The charge can in principle be increased, but to maintain the same beam density (to stay in the quasi-linear regime), the emittance must increase proportionally.
#### iii.2.2 Nonlinear regime
The nonlinear regime is defined by density perturbations of order the plasma density or higher. These waves are driven by either particle beams with densities exceeding the plasma density (\(n_{b}/n_{0}\geq 1\)) or lasers with normalized intensity exceeding unity (\(a^{2}\geq 1\)). The induced transverse plasma-electron motion is significant in this regime, which results in bubble formation and a more compact region both transversely and longitudinally, in which positrons can be both accelerated and focused--here, the accelerating gradient is higher and the focusing forces larger compared to the linear regime.
The total volume of the favourable region for positrons depends on the strength of the nonlinearity, which is often quantified by the _normalized driver charge_[147, 148]:
\[\tilde{Q}=\frac{k_{p}^{3}N}{n_{0}}=4\pi r_{e}k_{p}N, \tag{8}\]
where \(N\) is the bunch population and \(r_{e}\) is the classical electron radius. In the _strongly nonlinear_ regime [28] (\(\tilde{Q}\gg 1\)) the volume is negligible, whereas in the _weakly nonlinear_ regime [95] (\(\tilde{Q}\approx 1\); also known as _moderately nonlinear_[146, 149] or _quasi-nonlinear_ regime [150, 151, 152]) the volume is small but non-negligible. Lotov [97] and An _et al._[98] found that \(\tilde{Q}\approx 1\)-\(10\) (or equivalent for lasers [153]) provides the optimum conditions for positron acceleration.
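As a numerical consistency check, the sketch below evaluates Eq. 8 for the driver parameters of the nonlinear-regime example discussed in the next subsection (534 pC in a plasma of \(7.8\times 10^{15}\,\mathrm{cm}^{-3}\); see Fig. 14), which is quoted there as \(\tilde{Q}\approx 2\).

```python
import numpy as np

# Sketch: evaluate the normalized driver charge (Eq. 8), Q_tilde = 4*pi*r_e*k_p*N.

r_e = 2.818e-15        # classical electron radius [m]
q_e = 1.602e-19        # elementary charge [C]

def normalized_charge(charge_C, n0_cm3):
    n0 = n0_cm3 * 1e6                       # plasma density [m^-3]
    k_p = np.sqrt(4 * np.pi * r_e * n0)     # plasma wavenumber [1/m]
    N = charge_C / q_e                      # bunch population
    return 4 * np.pi * r_e * k_p * N

# 534 pC driver at 7.8e15 cm^-3 (the Fig. 14 example) -> Q_tilde ~ 2, i.e. weakly nonlinear
print(f"Q_tilde = {normalized_charge(534e-12, 7.8e15):.2f}")
```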
In the presence of a high-density positron bunch, beam loading occurs [see Fig. 14(a)]: the energy is extracted from the accelerating field (longitudinal beam loading), and moreover, the length of the focusing region is extended (transverse beam loading) because the positrons keep the plasma electrons close to the axis--an effect often known as self-loading [26] (also seen in Fig. 1). Ultimately, this effect allows high-efficiency acceleration of higher-charge bunches (similar to the drive-bunch charge). However, as shown in Fig. 14(b), this self-loading leads to nonlinear transverse fields and consequently higher emittances for Gaussian bunches. Therefore, while its focusing fields are different from those in the quasi-linear regime, the nonlinear regime shares the same trade-off: higher charge and efficiency imply higher emittance and energy spread because of nonlinear fields.
Figure 14: Positron acceleration in nonlinear wakefields, driven by an electron bunch of charge \(534\,\mathrm{pC}\) (\(\tilde{Q}=2\), \(n_{b}/n_{0}=27\)), length \(\sigma_{z}=40\,\mathrm{\SIUnitSymbolMicro m}\) and beam size \(\sigma_{r}=5\,\mathrm{\SIUnitSymbolMicro m}\). The plasma density is \(7.8\times 10^{15}\,\mathrm{cm}^{-3}\). (a) Density map of the plasma, electrons and positrons (beam current shown as dashed gray lines), and the on-axis longitudinal field (red line). (b) Transverse force seen by the positrons (dashed red box). (c) Emittance evolution of the positrons, both slice (dashed) and projected (solid red). (d) The accelerating field, shown at several propagation distances, is largely stable. (e) The energy (blue line) and relative energy spread (red line) of the positron bunch throughout the propagation. (f) Final longitudinal phase space of the positron bunch, indicating a low correlated energy spread. (g) Final energy spectra of the two bunches, indicating high driver energy depletion. From Ref. [154].

As an example, an optimized simulation, shown in Fig. 14, indicates that an emittance of approximately \(8\,\mathrm{mm}\,\mathrm{mrad}\) can be preserved when accelerating \(102\,\mathrm{pC}\) positron bunches at an energy efficiency of 26% and an accelerating gradient of 1.6 GV/m [154]. Here, emittance preservation in longitudinally non-uniform focusing fields was achieved using slice-by-slice matching. The energy spread was then dominated by the uncorrelated energy spread, at a level of 2.39% [see Fig. 14(f)]. Another example used a bunch of charge 23 pC and an emittance of approximately 1 mm mrad, resulting in an efficiency of 40% in an accelerating gradient of 4.8 GV/m [146]. In this example, the uncorrelated energy spread was 1% rms, but the projected energy spread was larger (at the level of 10% rms).
A limitation of this regime is the degree to which the field structure can depend on the exact driver-beam density--in particular in the transition between the weakly and strongly nonlinear regime. This implies strict tolerances on driver parameters as well as the need to avoid significant evolution of the driver during propagation, which may have implications on the energy-depletion efficiency of the driver [see Fig. 14(g)].
A variation on this scheme utilizes the nonlinear wakefield for simultaneous acceleration of electrons. This can increase the overall efficiency--simulations indicate at least 10% higher efficiency--by sharing the energy extraction between the trailing electron and positron bunches [154]. Alternatively, Wang _et al._[155] argue that highly efficient electron beam loading in the strongly nonlinear regime can also create an elongated on-axis plasma-electron filament, which can be used for positron acceleration, and that the width of this filament can be controlled through the plasma temperature.
This regime can potentially be tested experimentally also without a conventional positron source, by using an injection scheme proposed by Wang _et al._[135, 136, 137]. Here, the electron driver passes through a foil prior to the plasma, in which electron-positron pairs are produced. Further studies have shown that while the trapping conditions are excellent, the trapping efficiency decreases as the gap between the foil and plasma wakefield increases. Such a gap may be necessary for practical experiments [101].
In summary, the nonlinear regime can offer either higher acceleration gradient or higher charge compared to the quasi-linear regime, but also remains limited by the trade-off between charge (and thereby efficiency) and beam quality (i.e., emittance and energy spread).
### Modified drivers: Donut-shaped laser or electron beams
It is possible to create an accelerating and focusing region for positrons even in the strongly nonlinear regime, by modifying the transverse shape of the driver. Vieira and Mendonca [156] proposed to use donut-shaped (or Laguerre-Gaussian [157, 158]) laser pulses for this purpose, as illustrated in Fig. 15. Similar wakefields can also be excited by a donut-shaped electron bunch [159, 160] or overlapping but non-neutral electron and positron bunches [161]. Such drivers create an electron filament that can focus positrons by guiding plasma electrons through the hollow core of the driver and onto the axis. Nonlinear accelerating wakefields are created by expelling plasma electrons outwards, much like in a regular blowout.
Figure 15: (a) Simulated donut wakefields driven by a Laguerre-Gaussian laser propagating in the direction of the arrow. (b) The plasma-electron density is perturbed by the laser. Here, \(r_{m}\) is the radius of peak laser intensity, \(R_{b}\) the diameter of the blowout, and \(\Delta\) is the approximate width of the electron sheath and radius of the electron filament. (c) The transverse fields are focusing for positrons on axis. A lineout at \(z=105c/\omega_{p}\) (blue line) is compared to a theoretical model (red line). (d) The corresponding longitudinal field is accelerating for positrons in the front half of the blowout structure. The green box in (b–d) represents this accelerating and focusing region. Adapted from Ref. [156].

The main advantage of this scheme is that the on-axis electron filament exists also when the fields are strongly nonlinear, independently of whether the fields are beam loaded. This allows the accelerating field to be significantly higher, as well as more charge to be accelerated, compared to that achieved in the quasi-linear and weakly nonlinear regime with Gaussian drivers (see Sec. IV.1). Another advantage is the additional degrees of freedom available for tailoring the electron filament--as an example, Yu _et al._[162] showed that the strength and shape of the on-axis focusing can be controlled by varying the relative intensity of higher-order laser modes.
In principle, the donut wakefield allows acceleration of very low emittance and low energy-spread bunches, due to the linear focusing fields and uniform accelerating fields close to the axis [see Fig. 15(c)]. Jain _et al._[159] found that for a donut-shaped electron driver, emittances as low as \(\approx 0.04\,\mathrm{mm\,mrad}\) and energy spreads less than \(0.4\%\) rms can be preserved. Here, the accelerating gradient was \(8.9\,\mathrm{GV/m}\). However, the accelerated bunch had a charge of only \(14\,\mathrm{pC}\), whereas the driver had \(5.2\,\mathrm{nC}\), leading to very low energy efficiency (approx. \(0.17\%\)). If the wakefield is beam loaded more strongly with a higher-charge positron bunch, the beam density is consequently higher, which alters the shape of the transverse focusing fields--exactly the same problem encountered in the weakly nonlinear regime (see Sec. IV.1.2). The resulting trade-off between efficiency and beam quality is discussed in Sec. VI.
Another potential issue is that any evolution of the driver may impact the shape of the wakefields. While donut-shaped lasers can maintain their approximate shape, and thereby a region that focuses and accelerates positrons, until energy is nearly depleted (several hundred plasma wavelengths) [156], the exact shape of the wakefield will evolve--not optimal for beam-quality preservation. However, Pathak _et al._[163] note that a filamentation instability can occur, leading to transverse breakup of the laser pulse; this instability can be suppressed by using a parabolic plasma-density profile.
Unlike Laguerre-Gaussian laser beams, which have non-zero angular momentum, donut-shaped electron bunches typically do not. As a result, these bunches can collapse onto the axis, forming a regular blowout wake. Jain [164] found that this collapse happens for electron drivers with a donut radius smaller than approximately \(1.8k_{p}^{-1}\), but these drivers can propagate stably for larger donut radii. Moreover, use of non-zero-divergence electron drivers can lead to head erosion [59, 60]. Finally, similar to the filamentation instability observed for laser pulses, the azimuthal Weibel instability may also be a problem, causing filamentation of the electron driver [165].
A semi-optimized parameter set, balancing beam quality and efficiency, was obtained by Hue _et al._[146]: a positron bunch of charge \(189\,\mathrm{pC}\) and equilibrium emittance of \(1.5\,\mathrm{mm\,mrad}\) can be accelerated with \(3.5\%\) efficiency at a gradient of approximately \(40\,\mathrm{GV/m}\). A conclusion from these simulations was that the donut width should be optimized (here, \(\sigma_{r}\approx 0.4k_{p}^{-1}\)), as this leads to more transversely uniform accelerating fields. Note that these (single-step) simulations were performed with electron drivers with a donut radius of \(1k_{p}^{-1}\), which may therefore have suffered an on-axis collapse.
No positron experiments have yet operated with this scheme. However, donut-shaped laser pulses can be generated [166, 158] and have been interacted with plasmas: an experiment by Nakanii _et al._[167] showed no acceleration of electrons, but a strong decomposition of laser modes (i.e., the filamentation instability). Donut-shaped electron bunches can also be produced [168, 169], but have not yet been injected into plasma.
In summary, donut-shaped drivers can provide higher fields and higher charge than Gaussian drivers, but are ultimately limited by the same efficiency-quality trade-off. Additionally, propagation of these drivers can be very unstable.
### Modified plasmas: Inhomogeneous channels
As an alternative to modifying the profile of the driver, a region for positron acceleration and focusing can also be created in the nonlinear regime by modifying the profile of the plasma. Three such schemes have been proposed, all electron-driven and in the nonlinear regime: (1) a finite-radius plasma channel; (2) a two-column plasma channel with additional ionization from a co-propagating laser; and (3) a thin hollow channel filled with hot plasma electrons, created through ion motion.
#### iv.3.1 Finite-radius plasma channel
The singularity-like electron spike seen in the strongly nonlinear regime, which creates a volume too small to accelerate and focus positrons, is a consequence of the highly coherent motion of plasma electrons. One way to spread out, longitudinally, the positions where the electrons cross the axis is to use a finite-radius channel, as proposed by Diederichs _et al._[170]. Here, the plasma-column radius must be smaller than the maximum blowout radius (\(R_{p}\lesssim R_{b}\)). This results in electrons outside the channel experiencing a nonlinear focusing force from the plasma ions, which leads to decoherence of the electron motion, as illustrated in Fig. 16(a). Collectively, these electrons form an elongated filament on axis: an extended region of accelerating and focusing fields [as shown in Fig. 16(b)].
Although the plasma electrons are spread out longitudinally, they are still highly localized transversely. A thin on-axis filament forms, resulting in what appears as a step-like focusing field close to the axis: \((E_{r}-cB_{\phi})/E_{0}\approx-\alpha\,\mathrm{sgn}(r)\)[170], where \(\alpha\) is the normalized amplitude of the transverse field close to the axis, \(\mathrm{sgn}(r)\) is the sign function and \(r\) is the radial position--this field is shown in Fig. 17(a). Since this focusing field is nonlinear, the emittance will not be fully preserved for a Gaussian transverse profile. However, by quasi-matching (i.e., approximate matching, leading to emittance growth of a few %) to the non-Gaussian equilibrium profile [172], which has a beam size given by
\[\sigma_{r}^{3}\approx 1.72\frac{\epsilon_{r}^{2}}{k_{p}\alpha\gamma}, \tag{9}\]
where \(\gamma\) is the Lorentz factor of the beam, the emittance stays approximately preserved to within a few percent,
as shown in Fig. 17(b). This quasi-matching condition assumes a constant \(\alpha\) (i.e., a radius-independent focusing field), which sets an upper limit on the beam emittance. A unique feature of this scheme is that even when beam loaded, the transverse fields maintain their step-like shape (with a decreased amplitude \(\alpha\)), which implies that emittance can still be preserved.
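To show how Eq. 9 is applied, the sketch below evaluates the quasi-matched beam size for the finite-radius example parameters (\(n_{0}=5\times 10^{17}\,\mathrm{cm}^{-3}\), \(\epsilon_{n}=0.38\,\mathrm{mm\,mrad}\), a beam energy of order 5 GeV); since the normalized field amplitude \(\alpha\) is not quoted here, the value \(\alpha=0.1\) is a purely illustrative placeholder.

```python
import numpy as np

# Sketch of the quasi-matching condition (Eq. 9):
#   sigma_r^3 ~ 1.72 * eps_n^2 / (k_p * alpha * gamma)
# alpha = 0.1 below is a placeholder; the other numbers follow the finite-radius example.

r_e = 2.818e-15                          # classical electron radius [m]
n0 = 5e17 * 1e6                          # plasma density [m^-3]
k_p = np.sqrt(4 * np.pi * r_e * n0)      # plasma wavenumber [1/m]

eps_n = 0.38e-6                          # normalized emittance [m rad]
gamma = 5e9 / 0.511e6                    # Lorentz factor at ~5 GeV
alpha = 0.1                              # placeholder normalized focusing amplitude

sigma_r = (1.72 * eps_n**2 / (k_p * alpha * gamma)) ** (1.0 / 3.0)
print(f"quasi-matched sigma_r = {sigma_r * 1e6:.2f} um  (k_p*sigma_r = {k_p * sigma_r:.3f})")
```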
Energy spread can be minimized by using an optimized current profile [171], as shown in Fig. 17(c), producing near-zero correlated energy spreads. However, the uncorrelated energy spread will depend on the emittance [see Fig. 17(d)], as the beam samples the transversely non-uniform accelerating field. Keeping the uncorrelated energy spread below 1%, an optimized parameter set provides a positron bunch with charge 52 pC, an emittance of 0.38 mm mrad and projected energy spread of 0.7% rms (corresponding to 0.86% per gain), accelerated in a field of gradient 30 GV/m with an energy efficiency of 3%. Another parameter set [170], not optimized for energy spread, has an emittance of 0.75 mm mrad, a charge of 84 pC and energy efficiency of 4.8% (but with a few-percent-level energy spread).
It is unclear if high efficiency is attainable in this scheme. An electron bunch can extract energy from the longitudinal field created by radially inward-moving plasma electrons; conversely, a positron bunch can extract energy from the field created by radially outward-moving plasma electrons. However, in the accelerating and focusing region for positrons in a finite-radius plasma channel, there are overlapping populations (or annuli) of inward- and outward-moving plasma electrons at each longitudinal position. This incoherent motion reduces the field energy available to the positrons, and therefore the energy efficiency.
Both the driver and the trailing bunch propagate stably in the channel. Transverse instability of the driver can be suppressed by ion motion and energy spread [173], whereas longitudinally non-uniform and transversely nonlinear focusing fields suppress instabilities of the trailing bunch [174].
Finite-radius channels can be experimentally realized using for instance laser ionization with axicons [175] or by beam ionization [176]. Plans exist to demonstrate this scheme at FACET-II [130] (the E-333 experiment [177]), assuming positrons and electrons can be delivered simultaneously.
In summary, the finite-radius scheme can support very high quality and acceleration gradients, but likely not high efficiency.
Figure 16: Simulation of a finite-radius plasma channel, showing (a) plasma-electron trajectories (colored based on initial radius \(X_{0}\)) and density (blue color map) of a plasma channel of radius \(k_{p}R_{p}=2.5\), driven by a high-charge (\(\tilde{Q}\approx 44\)) electron bunch with beam size \(k_{p}\sigma_{r}=0.3\) and bunch length \(k_{p}\sigma_{\xi}=\sqrt{2}\); (b) the corresponding transverse (red–blue color map) and on-axis longitudinal (blue line) wakefields. From Ref. [170] (CC BY 4.0).
Figure 17: Beam quality in the finite-radius scheme, showing: (a) the step-like transverse focusing fields at \(k_{p}\xi=-11.6\) (zoomed-out in the inset); (b) emittance evolution for a beam with an initial normalized emittance of \(k_{p}\epsilon_{x}=0.1\); (c) beam-loading optimisation and the resulting bunch-current profiles; and (d) the uncorrelated energy-spread evolution for beams with different emittances. Adapted from Refs. [170] and [171] (CC BY 4.0)
#### iii.3.2 Laser-augmented blowout in a two-column plasma
The laser-augmented scheme, proposed by Reichwein _et al._[178], uses an alternative geometry to the on-axis plasma-electron filament normally used for positron focusing. Using a combination of beam ionization and a trailing laser pulse for additional ionization of a wider channel, the singularity-like electron-density spike behind a blowout is widened transversely (as opposed to longitudinally as in the finite-radius scheme). This enables focusing and acceleration of a _ring-shaped_ positron bunch, as illustrated in Fig. 18.
This scheme is unique in its use of the blowout sheath for focusing, but this also increases the transverse size, and thereby the emittance, of the positron bunch. Simulations show that for a bunch of charge \(15\,\mathrm{pC}\), the emittance saturates at about \(31\,\mathrm{mm}\,\mathrm{mrad}\) and the energy spread at \(1.7\%\) (\(3.4\%\) per gain), while accelerating in a field of gradient \(20\,\mathrm{GV}/\mathrm{m}\) with an efficiency of approximately \(5.5\%\). The charge and efficiency can potentially be increased by using a cone-shaped beam, matching the shape of the electron sheath at the head of the second bubble, although this may be challenging to realize in experiments.
#### iii.3.3 Thin, warm, hollow plasma channel
Hollow-channel acceleration has two main challenges: beam-breakup instability from transverse wakefields [114, 82], and unwanted ionization of gas on axis [109]. Ion motion [45] can be used to create a truly hollow plasma channel from a homogeneous plasma. As discussed in Sec. III.2, Amorim _et al._[126] proposed to use an intense positron bunch to create such a channel and use it for positron acceleration with nonlinear fields. Making this concept more experimentally viable, Silva _et al._[179] proposed using two intense electron bunches--one to generate the hollow channel and the other to create accelerating and focusing fields for positrons.
Figure 18: (a) In the laser-augmented blowout scheme, two plasma columns are made: a Gaussian electron bunch (yellow) beam ionizes a thin column, and a trailing laser pulse (purple) ionizes a wider column. The trailing positron bunch (red) is donut or ring-shaped such that the entire bunch is inside the blowout sheath at the beginning of the second bubble, also shown in (b). Both the (c) focusing force and (d) accelerating field are shown, indicating the location of the positron bunch (black dots). Adapted from Ref. [178] (CC BY 4.0).

Figure 19: Simulated thin, warm hollow channel development, showing: (a) early-stage plasma density perturbation, no hollow channel has been formed at this time; (b) the average transverse fields, focusing for the ions (inside the gray box); and (c) the start of on-axis ion accumulation and hollow channel formation. A second electron beam creates (d) the transverse wakefields and the on-axis longitudinal field (black line). The dashed box corresponds to the region around where the positron bunch can be accelerated. (e) If beam loaded with an optimized current profile, the field is partly flattened. (f) The emittance evolution of the positron bunch shows only marginal emittance growth. Adapted from Ref. [179].

As illustrated in Fig. 19(a-c), the formation of the channel starts with the first drive beam creating a nonlinear blowout, where the longitudinally averaged transverse fields focus the ions. After initially accumulating on axis, the ions diverge and create a thin, hollow structure around the axis. At this point, the wakefield has decayed, leaving plasma electrons with high temperature (around 2-9 keV in the example). The second drive beam then creates the wakefields used in positron acceleration. The combination of high plasma-electron temperature, which spreads out the inward-moving sheath electrons, and the thin hollow channel, which traps them on axis, results in an extended region with positron focusing and acceleration, as shown in Fig. 19(d). The transverse nonlinearity and longitudinal non-uniformity of this focusing field suppresses the beam-breakup instability.
Beam loading of the accelerating field with a Gaussian bunch produces a non-negligible energy spread (\(\sim\)10% rms in this example). Similarly, the emittance cannot be fully preserved for a Gaussian transverse profile without some loss of charge. However, by tailoring the positron beam-current profile and using a flat-top transverse profile, the energy spread saturates around 4% rms (6% per gain) and the emittance is approximately preserved at \(\sim\)7.4 mm mrad (10% growth before saturation) [see Fig. 19(e-f)]. For a charge of 100 pC accelerating in a 3.5 GV/m field, the energy-transfer efficiency from the second driver to the positron bunch is approximately 4.7%.
A clear advantage of this scheme is its experimentally realizable method of creating a hollow channel with stable positron acceleration. While the dynamics may be somewhat complex, the implementation is not: it only requires two electron drivers and a homogeneous plasma. Plans exist to demonstrate this scheme at FACET-II--the E-337 experiment [177]. On the other hand, a disadvantage is that the overall efficiency of the scheme is limited by the need for two drivers, since the energy in the first driver (generating the hollow channel) cannot be extracted by the positron bunch.
In summary, the thin, warm, hollow plasma channel scheme offers a simple experimental setup, but may suffer from low energy efficiency.
### Modified drivers and plasmas: Hollow channels with asymmetric drivers
All the schemes above modify either the driver shape or the plasma channel to create a region of excess plasma electrons for positron focusing. This region is not present in a hollow channel, which leads to instabilities for both the driver and the trailing positron bunch. However, by combining modified drivers and modified plasmas, it is possible to exploit the benefits of hollow plasma channels while stabilizing the driver and producing a stable focusing region for the positron bunch. One scheme, proposed by Zhou _et al._[180], achieves this by driving nonlinear wakefields in a hollow channel using a transversely asymmetric electron driver.
This scheme utilizes the self-induced quadrupole wakefields generated in a hollow channel during the propagation of an asymmetric driver, for which \(\sigma_{x}>\sigma_{y}\) (i.e., the beam size is larger in the horizontal than the vertical plane). As the driver propagates, the quadrupole field defocuses the driver in \(x\) and focuses it in \(y\) until it splits into two beamlets that reach the channel boundary in \(x\), as illustrated in Fig. 20(a). At this point, two half-blowout structures are created--one on each side of the channel--where plasma electrons are expelled much like in the nonlinear regime. A stable, equilibrium driver shape is reached when the focusing field from the exposed plasma ions balances the defocusing quadrupole field.
Focusing of positrons is achieved by driving a nonlinear plasma wakefield, such that some plasma electrons become relativistic and enter the hollow channel, resulting in a near-uniform electron density. A similar region was also identified by Yi _et al._[123, 124]. Here, the longitudinal wakefield is also accelerating for positrons. When this field is beam loaded, in order to reach high energy-transfer efficiency, an on-axis electron density spike appears, as shown in Fig. 20(b). Figure 20(c) shows the resulting focusing and accelerating fields.
A simulation with stable acceleration until driver depletion shows that 490 pC of charge can be accelerated with a gradient of 4.9 GV/m at an efficiency of 33%. After an initial growth of 20-30%, the emittance stabilizes at \(79\times 56\) mm mrad, shown in Fig. 20(d). The initial emittance growth may be suppressed if the driver is injected with its equilibrium profile rather than evolving to it. Figures 20(e-f) show a final energy spread of 1.6% rms after 4.4 GeV of gain from 10.2 GeV, giving an energy spread per gain (i.e., field uniformity) of 5.3% rms; this can potentially be improved with more optimal current profiles than a Gaussian. In principle, the charge and efficiency can be further increased by overlapping a positron bunch with a similarly shaped electron bunch [181], although this may not be compatible with the above focusing method, as the overlapping electrons would quickly be defocused.
The main advantages of this scheme are its stability during propagation and strong accelerating gradients that are near-uniform transversely--the latter being a common feature of hollow channels [113]. However, it is unclear whether this field will remain uniform if loaded with lower-emittance, higher-density positron bunches than the given parameters. The scheme is generally well suited for experimental demonstration, as the complex equilibrium driver shape self-generates from easily available Gaussian beams. That said, experiments may suffer from unwanted beam ionization of on-axis gas, since the scheme requires nonlinear and consequently strong wakefields--this will restrict the choice of gas species as well as the beam and plasma densities.
In summary, this scheme offers stable propagation of the driver and stable acceleration of the positron bunch with high charge and high efficiency. However, the trade-off between beam quality and efficiency remains similar to other schemes--a topic explored in more detail in
Secs. V and VI below.
## V Comparison of schemes
We have, up to this point, discussed the various proposed schemes and their key parameters separately (Sec. IV). Table 2 summarizes these values for the positron-acceleration schemes, as well as electron-acceleration schemes and relevant experiments. However, without a common metric, it is non-trivial to compare them to each other or to the requirements for a collider. As argued in Sec. II, the two key metrics are: accelerating gradient, which affects the collider footprint; and luminosity-per-power, which affects the running costs.
Combining Eqs. 3 and 4, and assuming that the colliding electron and positron bunches are identical, we express the luminosity-per-power as
\[\frac{\mathcal{L}}{P_{\mathrm{wall}}}\approx\frac{1}{8\pi m_{e}c^{2}}\frac{ \eta_{\mathrm{prod}}\eta_{\mathrm{depl}}}{\sqrt{\beta_{x}\beta_{y}}}\frac{\eta _{\mathrm{extr}}N}{\sqrt{\epsilon_{nx}\epsilon_{ny}}}. \tag{10}\]
Here, the production efficiency \(\eta_{prod}\) can be assumed to be identical across all proposed schemes, as all are electron-driven. The driver-depletion efficiency \(\eta_{depl}\), which may vary somewhat between schemes, is assumed also to be similar. Lastly, the interaction-point beta functions \(\beta_{x}\) and \(\beta_{y}\) are determined mainly by the beam-delivery system, which sets constraints on the energy spread (i.e., \(<1\%\) rms), but can otherwise be assumed to be independent of scheme. That leaves the wake-to-beam extraction efficiency \(\eta_{extr}\), the bunch population \(N\) and the normalized emittances \(\epsilon_{nx}\) and \(\epsilon_{ny}\) for comparison. To produce a metric for meaningful comparison, we define the _dimensionless luminosity-per-power_
\[\tilde{\mathcal{L}}_{P}\equiv 4\pi r_{e}\frac{\eta_{\mathrm{extr}}N}{\sqrt{ \epsilon_{nx}\epsilon_{ny}}}=\frac{\eta_{\mathrm{extr}}\tilde{Q}}{\tilde{ \epsilon}_{n}}, \tag{11}\]
where \(\tilde{\epsilon}_{n}=k_{p}\sqrt{\epsilon_{nx}\epsilon_{ny}}\) is the dimensionless normalized emittance and \(\tilde{Q}=4\pi r_{e}k_{p}N\) is the normalized charge (Eq. 8). This metric scales as the luminosity-per-power up to a factor \(H_{D}\) (typically between 1.5 and 2), a parameter that captures beam-beam effects at the interaction point [61].
An important feature of this dimensionless luminosity-per-power is its independence of the plasma density. Plasma-wakefield simulations can in general be scaled to different plasma densities, resulting in higher accelerating gradients for higher densities (\(E_{z}\sim k_{p}\), where \(k_{p}\sim\sqrt{n_{0}}\)), simultaneously giving lower charges (\(N\sim k_{p}^{-1}\)) and lower emittances (\(\epsilon_{n}\sim k_{p}^{-1}\)). However, the efficiency, the normalized charge (\(\tilde{Q}\)) and the dimensionless normalized emittance (\(\tilde{\epsilon}_{n}\)), which together define \(\tilde{\mathcal{L}}_{P}\), are all independent of plasma density. Ultimately, this means that simulations at different densities can be directly compared and that there is no gain in operating at either higher or lower plasma density, at least in terms of luminosity-per-power. Nevertheless, since the accelerating gradient does scale with plasma density, it is meaningful instead to compare this gradient normalized by the wave-breaking field \(E_{0}=m_{e}c\omega_{p}/e\) (Eq. 1)--equivalent to scaling all simulations to the same density.
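As an illustration of how the entries of Fig. 21 follow from Eq. 11, the sketch below evaluates the dimensionless luminosity-per-power from the charge, efficiency, and (geometric-mean) emittance quoted in Table 2 for the nonlinear-regime simulation and for CLIC-like parameters.

```python
import numpy as np

# Sketch: dimensionless luminosity-per-power (Eq. 11),
#   L_tilde_P = 4*pi*r_e * eta_extr * N / sqrt(eps_nx * eps_ny).

r_e = 2.818e-15        # classical electron radius [m]
q_e = 1.602e-19        # elementary charge [C]

def L_tilde_P(eta_extr, charge_C, eps_n_geo_m):
    N = charge_C / q_e                     # bunch population
    return 4 * np.pi * r_e * eta_extr * N / eps_n_geo_m

# Nonlinear-regime simulation: 26% efficiency, 102 pC, ~8 mm mrad (cf. ~0.4-0.9 in the text)
print(f"nonlinear regime: {L_tilde_P(0.26, 102e-12, 8e-6):.2f}")
# CLIC-like parameters: ~28.5% efficiency, 596 pC, ~0.11 mm mrad (cf. ~300 in the text)
print(f"CLIC-like:        {L_tilde_P(0.285, 596e-12, 0.11e-6):.0f}")
```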
Figure 20: Simulated hollow channel with an asymmetric driver, showing: (a) plasma-density perturbation (blue color map) by a 2 nC electron drive bunch (green color map) that has reached two-beamlet equilibrium profile, loaded by a 640 pC positron bunch (orange color map); (b) plasma-electron density without and with beam loading in the region around the positron bunch [dashed orange box in (a)]; (c) the loaded wakefields and lineouts (blue lines), indicating \(\pm 2\sigma\) of the positron bunch (red lines); (d) emittance evolution in both \(x\) and \(y\) planes; (e) energy spread evolution for positrons at \(\xi>305\,\mathrm{\SIUnitSymbolMicro m}\); (f) spectra of the driver (red line) and the positron bunch (blue line) as well as the longitudinal phase space of the positron bunch (colorbar). From Ref. [180].
Figure 21 compares all the proposed schemes, showing the dimensionless luminosity-per-power versus the normalized accelerating field, based on the values in Table 2. Note that while these represent the best current values, further optimization may still be possible, as discussed in Sec. VI.
We observe that several schemes perform similarly with respect to luminosity-per-power: the finite-radius channel, donut driver, nonlinear regime, asymmetric hollow channel and quasi-linear regimes all reach \(\tilde{\mathcal{L}}_{P}\approx 0.4\)-\(0.9\). On the other hand, the normalized accelerating field varies significantly between these schemes in the range \(E_{z}/E_{0}\in[0.06,1.9]\), where the donut driver, laser-augmented blowout, and finite-radius channel schemes provide the highest gradients for a given plasma density. In terms of energy spread (not optimized for all schemes), the schemes perform at varying levels, with the donut driver, finite-radius channel and nonlinear regime currently providing the most collider-relevant energy spreads.

Compared to conventional technology, here represented by CLIC (\(\tilde{\mathcal{L}}_{P}\approx 300\)), all the proposed positron-acceleration schemes perform worse in luminosity-per-power by roughly three orders of magnitude. Conversely, plasma-accelerated electrons are at the level of conventional technology (\(\tilde{\mathcal{L}}_{P}\approx 500\)), at least in simulations.
Why do we observe such a large difference between plasma-acceleration of positrons and electrons? Is it possible to surpass the currently highest achieved luminosity-per-power, and if so, how? This topic is discussed in detail in Sec. VI below.
## VI The positron problem: plasma-electron motion and transverse beam loading
For many of the schemes considered in this review, the discrepancy in performance between electron and positron acceleration can in large part be explained by the mass ratio between plasma ions and plasma electrons. Lighter plasma particles have lower inertia, leading to comparatively more motion within the accelerated positron bunch. The motion of plasma electrons within the positron bunch leads to variations in the plasma-electron density, which in turn degrade the quality of the accelerated bunch. This effect potentially limits the density of the loaded positron bunch, and therefore the achievable luminosity of electron-positron colliders. At the end of this section, we consider schemes and conditions that exceed this limit but nevertheless appear to preserve the quality of the accelerated positron bunch.
### The ideal case
\begin{table}
\begin{tabular}{l c c c c c c c c c c}
_Scheme_ & Density (cm\({}^{-3}\)) & Gradient (GV/m) & Charge (pC) & Energy efficiency & Emittance (mm mrad) & En. spread per gain & Uncorr. en. spread & Fin. energy (GeV) & \(\Delta\phi_{e}\) & Ref. \\ \hline
Quasi-linear regime (sim.) & \(5\times 10^{16}\) & 1.3 & 4.3 & 30\% & 0.64 & \(\sim\)10\% & 0.7\% & 1 & 0.77 & [146] \\
Quasi-linear regime (exp.) & \(1\times 10^{16}\) & 1 & 85 & 40\% & 127 & \(\sim\)14\% & n/a & 21 & 0.51 & [81] \\
Nonlinear regime & \(7.8\times 10^{15}\) & 1.6 & 102 & 26\% & 8 & 2.4\% & n/a & 5.2 & 7.6 & [154] \\
Donut driver (\#1) & \(5\times 10^{16}\) & 8.9 & 13.6 & 0.17\% & 0.036 & 0.3\% & n/a & 35.4 & 0.50 & [159] \\
Donut driver (\#2) & \(5\times 10^{16}\) & 40 & 189 & 3.5\% & 1.54 & 6\% & 1\% & 1 & 7.1 & [146] \\
Finite-radius channel & \(5\times 10^{17}\) & 30 & 52 & 3\% & 0.38 & 0.86\% & 0.73\% & 5.5 & 34 & [171] \\
Laser-augmented blowout & \(2\times 10^{17}\) & 20 & 15 & 5.5\% & 31 & 3.4\% & n/a & \(\sim\)10 & 0.67 & [178] \\
Thin, warm, hollow channel & \(1\times 10^{16}\) & 3.5 & 100 & 4.7\% & 7.4 & 6\% & n/a & 1.45 & 2.0 & [179] \\
Asymmetric hollow channel & \(3.1\times 10^{16}\) & 4.9 & 490 & 33\% & 67 & 5.3\% & n/a & 14.6 & 6.5 & [180] \\ \hline
\(e^{-}\) nonlinear regime (sim.) & \(2\times 10^{16}\) & \(-10\) & 800 & 37.5\% & 0.133 & 1.1\% & \(\lesssim\)1\% & 1500 & 292 & [182] \\
\(e^{-}\) nonlinear regime (exp.) & \(1.2\times 10^{16}\) & \(-1.4\) & 40 & 22\% & 2.8 & 1.6\% & n/a & 1.1 & 3.0 & – \\ \hline
Conv. technology (CLIC) & n/a & 0.1 & 596 & 28.5\% & 0.11 & 0.35\% & n/a & 1500 & n/a & – \\
\end{tabular}
\end{table}
Table 2: Key parameters of the plasma accelerator and accelerated beam in each of the proposed positron-acceleration schemes (see Sec. IV). Electron-acceleration schemes and conventional technology are listed for comparison. The parameter \(\Delta\phi_{e}\) represents the phase advance, or degree of plasma-electron motion, inside the positron bunch (see Sec. VI).

The ideal plasma-based positron accelerator is similar to the standard nonlinear blowout for electron acceleration: the focusing fields must vary linearly in the transverse directions to preserve the emittance, and the accelerating fields must be uniform in both the transverse and longitudinal directions to preserve the uncorrelated and correlated energy spread, respectively. For emittance preservation, we specifically require [184, 185]
\[\nabla_{\perp}(E_{r}-v_{z}B_{\phi})=\frac{1}{\epsilon_{0}}(\rho-J_{z}/c)=\text{ const}, \tag{12}\]
where \(\rho\) is the charge density (providing _passive_ plasma lensing [186]) and \(J_{z}\) is the axial current density (providing _active_ plasma lensing [187]). This means that either both \(\rho\) and \(J_{z}\) need to be transversely uniform, or, more generally, that any variation in \(\rho\) must be matched by a corresponding variation in \(J_{z}\). Longitudinally uniform focusing fields [\(\partial_{z}(E_{r}-v_{z}B_{\phi})=0\)] are not strictly necessary, as the beam emittance can still be preserved with slice-by-slice matching [188], assuming the fields are linear within each slice. However, the Panofsky-Wenzel theorem [189]
\[\partial_{z}(E_{r}-v_{z}B_{\phi})=\nabla_{\perp}E_{z}, \tag{13}\]
states that in order to preserve energy spread transversely (\(\nabla_{\perp}E_{z}=0\)), the focusing fields must be uniform longitudinally [\(\partial_{z}(E_{r}-v_{z}B_{\phi})=0\)]. This generalizes the restriction on (\(\rho-J_{z}/c\)) from being constant transversely to being constant everywhere within the accelerated bunch. Lastly, longitudinally uniform accelerating fields (\(\partial_{z}E_{z}=0\)) can be obtained through precise shaping of the current profile--optimal beam loading [31].
### Ion and electron motion
All the above criteria are normally satisfied in the blowout regime for electrons, where the ion-charge density is constant everywhere and there is no axial current density anywhere within the accelerating bunch. There is, however, an important exception: if the charge density of the electron bunch is sufficiently high to induce ion motion, the ion-charge density will no longer be constant within the bunch. If the beam density \(n_{b}\) is sufficiently high to move the ions toward the axis within the timescale of the full bunch length \(\Delta\zeta\approx\sqrt{2\pi}\sigma_{z}\), emittance may no longer be preserved. Rosenzweig _et al._[45] calculated the _phase advance_ of the ion motion (for round electron bunches) to be
\[\Delta\phi_{i}\simeq k_{i}\Delta\zeta=\sqrt{\frac{\mu_{0}e^{2}}{2}\frac{Z \sigma_{z}N}{m_{i}}\sqrt{\frac{r_{e}\gamma n_{0}}{\epsilon_{nx}\epsilon_{ny}}}}, \tag{14}\]
where \(k_{i}\) is the plasma-ion wavenumber in the focusing field of the electron bunch, \(Z\) is the ion charge state, \(m_{i}\) is the mass of the ion, \(\gamma\) is the relativistic factor of the beam particles, and \(\mu_{0}\) is the permeability of free space. An on-axis density spike will form if the ions are focused onto the axis; avoiding this requires \(\Delta\phi_{i}\lesssim\pi/2\), often referred to as the _ion-motion limit_.
Figure 21: Comparison of the dimensionless luminosity-per-power versus the normalized accelerating field for all proposed positron-acceleration schemes, as well as the nonlinear blowout electron-acceleration scheme and relevant experimental results (see Table 2). The energy spread per gain (red-yellow-green color map; the inner and outer circles represent the projected and uncorrelated energy spreads, respectively) and final energy (parenthesis) of each simulation/experiment are indicated. Conventional technology is represented by CLIC parameters (blue line). Estimated limits on the luminosity-per-power based on the motion of plasma electrons and ions, which depend on beam energy and ion mass, are indicated (gray dotted lines).

Exactly the same dynamics occur for plasma electrons in the presence of a high-density positron bunch, as illustrated in Fig. 22. In this case, we replace the ion mass \(m_{i}\) with the electron mass, specifically \(\gamma_{pe}m_{e}\), where \(\gamma_{pe}\) is the Lorentz factor of the plasma electrons, set \(Z=1\) as plasma electrons are always singly charged, and change the focusing background-ion density \(n_{0}\) to the local background density (i.e., the difference between the electron and ion densities, \(\Delta n=|n_{e}-n_{i}|\)):
\[\Delta\phi_{e}\simeq k_{e}\Delta\zeta=\sqrt{\frac{\mu_{0}e^{2}}{2}\frac{ \sigma_{z}N}{\gamma_{pe}m_{e}}\sqrt{\frac{r_{e}\gamma\Delta n}{\epsilon_{nx} \epsilon_{ny}}}}, \tag{15}\]
where \(k_{e}\) is the plasma-electron wavenumber in the focusing field of the positron bunch. The corresponding _electron-motion limit_, \(\Delta\phi_{e}\lesssim\pi/2\), is approximately equivalent to the limit \(k_{e}\sigma_{z}\approx 1\) (up to a factor \(\sqrt{\pi/8}\approx 0.63\)), as discussed in Ref. [146].
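The two phase-advance estimates can be evaluated directly from Eqs. 14 and 15. In the sketch below, the positron-bunch and plasma parameters are hypothetical placeholders (in particular the bunch length, which is not quoted above), chosen only to show how the \(\Delta\phi_{e}\lesssim\pi/2\) criterion is checked.

```python
import numpy as np

# Sketch implementing the phase-advance estimates (Eqs. 14 and 15). The example
# numbers below are hypothetical placeholders, used only to illustrate how the
# ion-motion and electron-motion limits (~pi/2) are checked.

mu0 = 4e-7 * np.pi     # vacuum permeability [T m/A]
q_e = 1.602e-19        # elementary charge [C]
m_e = 9.109e-31        # electron mass [kg]
r_e = 2.818e-15        # classical electron radius [m]

def phase_advance(N, sigma_z, gamma_beam, eps_nx, eps_ny, density, mass, Z=1.0):
    """Eq. 14 with (Z, m_i, n0), or Eq. 15 with (Z=1, mass=gamma_pe*m_e, density=dn)."""
    return np.sqrt(0.5 * mu0 * q_e**2 * Z * sigma_z * N / mass
                   * np.sqrt(r_e * gamma_beam * density / (eps_nx * eps_ny)))

# Hypothetical positron bunch: 100 pC, 40 um rms length, 5 GeV, 2 mm mrad (round),
# focused by a local excess electron density of 1e16 cm^-3 (non-relativistic plasma electrons).
dphi_e = phase_advance(N=100e-12 / q_e, sigma_z=40e-6, gamma_beam=5e9 / 0.511e6,
                       eps_nx=2e-6, eps_ny=2e-6, density=1e16 * 1e6, mass=m_e)
print(f"electron phase advance = {dphi_e:.1f}  (limit ~ pi/2 = {np.pi / 2:.2f})")
```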
The electron phase advance \(\Delta\phi_{e}\) is calculated for each positron-acceleration scheme and the nonlinear blowout scheme for electrons and displayed in Table 2. We note that some of the schemes discussed in this review preserve positron beam quality even though the plasma-electron phase advance exceeds \(\pi/2\). This suggests alternative strategies for exceeding the electron-motion limit. Before discussing these strategies, one important question remains: how does electron motion affect the luminosity-per-power of colliders based on positron acceleration concepts?
### An electron-motion limit to the dimensionless luminosity-per-power
We observe that the phase advance of plasma electrons (Eq. 15) depends on the same ratio of charge to emittance (i.e., \(N/\sqrt{\epsilon_{nx}\epsilon_{ny}}\)) as the luminosity-per-power (Eq. 10). Crucially, this means that the dimensionless luminosity-per-power can be expressed in terms of the electron-motion phase advance:
\[\tilde{\mathcal{L}}_{P}^{e^{+}}\simeq\sqrt{\frac{16\pi}{\gamma}}(\Delta\phi_ {e})^{2}\left(\frac{\eta_{\text{extr}}}{k_{p}\sigma_{z}}\right)\gamma_{pe} \sqrt{\frac{n_{0}}{\Delta n}}. \tag{16}\]
The ratio of extraction efficiency to normalized bunch length is, for an optimally beam-loaded bunch, typically of order unity, i.e., \(\eta_{\text{extr}}/k_{p}\sigma_{z}=\mathcal{O}(1)\). As examples, in Ref. [154] (nonlinear regime for positrons) the ratio is \(\sim\)0.6 and in Ref. [36] (nonlinear regime for electrons) the ratio is \(\sim\)0.9. Note in particular the unfavourable energy dependence (\(\sim 1/\sqrt{\gamma}\)) in Eq. 16, which arises from the \(\sqrt{\gamma}\) factor in Eq. 15: higher beam energies imply smaller matched beam sizes (\(\sim 1/\gamma^{1/4}\)) and hence higher beam densities. The charge-density ratio varies with scheme, but is typically no more than one order of magnitude away from unity: \(\Delta n/n_{0}=\mathcal{O}(0.1-10)\).
Combining these ratios (\(\eta_{\text{extr}}/k_{p}\sigma_{z}\approx 1\) and \(\Delta n/n_{0}\approx 1\)) with the conventional phase-advance limit (\(\Delta\phi_{e}\approx\pi/2\)) and assuming non-relativistic plasma electrons (\(\gamma_{pe}\approx 1\)), we get an estimated upper bound on the dimensionless luminosity-per-power for plasma-based positron accelerators of \(\tilde{\mathcal{L}}_{P}^{e^{+}}\approx 17.5/\sqrt{\gamma}\), or about 0.4 for 1 GeV and 0.013 for 1 TeV. This range is indeed consistent with the dimensionless luminosity-per-power found across all the proposed schemes, as shown in Fig. 21. Note that some schemes exceed this limit, indicating that they operate outside the above assumptions. These will be discussed further in Sec. VI.4.
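For reference, the quoted numbers follow directly from Eq. 16 under these assumptions, as the short sketch below shows.

```python
import numpy as np

# Sketch: electron-motion bound on the dimensionless luminosity-per-power,
#   L_tilde_P <~ sqrt(16*pi) * (pi/2)^2 / sqrt(gamma) ~ 17.5 / sqrt(gamma),
# evaluated at the two beam energies quoted in the text (rest energy 0.511 MeV).

prefactor = np.sqrt(16 * np.pi) * (np.pi / 2) ** 2   # ~17.5
for energy_GeV in (1.0, 1000.0):
    gamma = energy_GeV * 1e9 / 0.511e6
    print(f"{energy_GeV:6.0f} GeV:  L_tilde_P <~ {prefactor / np.sqrt(gamma):.3f}")
```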
For electrons, the dimensionless luminosity-per-power differs from that for positrons by the mass ratio of the plasma ions and electrons and the ion-charge state. By comparing Eqs. 14 and 15 we therefore find that the ion-motion limit on the dimensionless luminosity-per-power for electrons is
\[\tilde{\mathcal{L}}_{P}^{e^{-}}=\frac{m_{i}}{Z\gamma_{pe}m_{e}}\sqrt{\frac{ \Delta n}{n_{0}}}\tilde{\mathcal{L}}_{P}^{e^{+}}, \tag{17}\]
which is a factor of 73350 larger for singly ionized argon ions compared to plasma electrons where \(\Delta n=n_{0}\). Taking also into account that flat beams (\(\epsilon_{nx}\gg\epsilon_{ny}\)) have a larger phase advance by a factor \(\sqrt{2}\), as argued by Rosenzweig _et al._[45], the limit for flat beams is consequently a factor 2 lower than for round beams:
\[\tilde{\mathcal{L}}_{P}^{\text{flat}}\approx\frac{1}{2}\tilde{\mathcal{L}}_{P} ^{\text{round}}. \tag{18}\]
The resulting ion-motion limit for flat beams is indicated in Fig. 21, which corresponds well to that reached in simulations as well as that of conventional technology.
Figure 22: Simulations demonstrating plasma-electron motion in a nonlinear wakefield, (a) showing plasma-electron densities (blue color map) and trajectories (gray and colored lines) driven by an intense electron bunch. The corresponding plasma-electron density and trajectories are shown for the positron-loading region [dashed box in (a)], both unloaded (b) and loaded (c–d) by an intense positron bunch (orange color map). This extracts energy from the wake (i.e., longitudinal beam loading) but also significantly changes the trajectory and distribution of some plasma electrons inside the positron bunch (red dashed box), leading to electron oscillations with a phase advance of approximately \(2\pi\). The corresponding field changes (i.e., transverse beam loading) from being defocusing when unloaded (e) to being focusing when loaded (f). Transverse lineouts (g) show that in the presence of the positron bunch these fields are nonlinear away from the axis and nonuniform longitudinally. From Ref. [154].
In short, the root of the positron problem is the comparatively low mass of plasma electrons, leading to complex motion and therefore nonlinear focusing fields (transverse beam loading) inside high-density positron bunches. This causes degradation of the beam quality, ultimately making simultaneous high-efficiency and high-quality acceleration challenging.
### Outlook: raising the electron-motion limit
So how can we increase the luminosity-per-power by three orders of magnitude beyond the values of the schemes detailed in Fig. 21? Most of the positron-acceleration schemes are designed starting from the ideal case (as discussed in Sec. VI.1): creating quality-preserving wakefields and then increasing the beam density until electron motion occurs. This may explain why many schemes reach similar luminosity-per-power and why the best-performing schemes reach a similar limit. However, since electron motion is inevitable for the positron beam densities required for linear colliders, all future schemes must include transverse beam loading as an integral part of their design.
Equation 16 straightaway motivates four main strategies of improvement: (1) tolerating larger electron phase advance, increasing \(\Delta\phi_{e}\); (2) reaching high efficiencies with shorter bunch lengths, increasing \(\eta_{\mathrm{extr}}/k_{p}\sigma_{z}\); (3) using relativistic electrons for focusing the positron beam, increasing \(\gamma_{pe}\); and (4) using a low on-axis excess electron density, decreasing \(\Delta n/n_{0}\). Finally, the effect of plasma temperature is not reflected in Eq. 16, but higher plasma temperatures reduce on-axis plasma density spikes and provide more linear focusing fields [190].
At first glance, increasing the phase advance significantly beyond \(\pi/2\) implies tolerating multiple electron oscillations within the positron bunch. However, simulations of the finite-radius channel do not show this feature, despite the fact that \(\Delta\phi_{e}\approx 34\), or approximately 5 plasma electron oscillations. The plasma electrons do not oscillate within the volume of the accelerated positron bunch because they already have large transverse momentum as they return towards the beam axis. The dense positron bunch further accelerates plasma electrons toward the axis such that their momenta carry them well beyond the positron beam volume and their subsequent return to the axis occurs behind the positron bunch. The \(\Delta\phi_{e}\) limit still exists for a sufficiently long positron bunch in this scheme, but the first return of plasma electrons within the bunch may occur at \(\Delta\phi_{e}\gg\pi/2\).
In the homogeneous-plasma nonlinear regime, plasma electrons can also oscillate significantly within the positron bunch, as shown in Fig. 22(d). Note that the electron oscillations in and of themselves are not necessarily problematic; the resulting, often non-uniform, charge distribution is. Therefore, surviving multiple electron oscillations will likely require finding an equilibrium positron-bunch profile that results in uniform electron density inside the bunch.
Achieving high efficiency with significantly shorter bunches, all while maintaining low energy spread (through optimal beam loading), is another interesting strategy. As noted by Tzoufras _et al._[31], in a highly nonlinear wake for electrons, it is possible to achieve a high energy efficiency with either higher charge at lower accelerating field or lower charge at higher accelerating field. In the latter case, the bunch length can be significantly reduced. This concept is already exploited in the donut-driver scheme [146], but the limit may be pushed even further. The use of shorter bunches comes with practical challenges related to their production and coherent-synchrotron-radiation effects in chicanes. Shorter bunches provide an advantage for beam-beam interactions because they reduce the deleterious effects of beamstrahlung on the luminosity spectrum [191]. Energy-recovery techniques [192, 154] that use additional laser or electron beam pulses to extract energy from the wake can also be used to increase efficiency.
The use of highly relativistic plasma electrons is a way to effectively symmetrize the mass of plasma electron and ions. One way to achieve this would be using an external source of high-energy, counter-propagating electrons [193]--a scheme with similarities to _electron lenses_[194] used in proton colliders--although the power required to maintain such a stream of electrons may be problematic.
Lastly, by reducing the excess charge density of the electron filament that focuses the positron bunch, the matched beta function can be increased. This reduces the beam density for a given charge and emittance, mitigating the issues related to electron motion. An extreme case of this approach is the hollow channel (discussed in Sec. III.2), where \(\Delta n=0\), which can in principle provide beams that reach high luminosity-per-power, but suffers from a catastrophic transverse instability [114, 82].
As an alternative, we could in principle use an _anti-plasma_[121], which would effectively swap the electron-motion limit (Eq. 16) for the ion-motion limit (Eq. 17). Unfortunately, this particular solution is presently neither technologically nor economically feasible.
More broadly, whether the above strategies can be used, individually or in combination, to develop a scheme that provides competitive luminosity-per-power is currently an open question, and a potential topic of future research.
## VII Conclusion
The overarching goal of accelerating positrons in plasma wakefields is to reduce the footprint and cost of future electron-positron colliders. This imposes a number of strict requirements on the positron accelerator: high accelerating gradient (\(>\)1-10 GV/m); high energy efficiency (5-10% from wall plug to beam); high beam quality including high charge (nC-scale), low emittance
(\(<\)1 mm mrad) and low energy spread (\(<\)1%); as well as sufficient stability. An important combined metric is the luminosity-per-beam power. Plasma-based electron acceleration appears able to meet these requirements (including the luminosity-per-power), but this is currently not the case for positrons.
Major progress on plasma acceleration for positrons has been made over the previous two decades, since the first theoretical investigations around 2000. The first experiments were performed at SLAC's FFTB facility, which showed that positrons can indeed be both focused and accelerated in a plasma. Subsequent work branched into two main directions: acceleration in homogeneous plasmas, and acceleration in hollow plasma channels--the latter promised better beam quality. After numerous theoretical advancements and several years of commissioning the FACET test facility at SLAC, major experimental milestones were reached. Simultaneous high-efficiency and high energy gain (multi-GeV) positron acceleration was demonstrated in a homogeneous plasma. Moreover, acceleration of positrons in a laser-ionized hollow plasma channel was demonstrated, albeit with significantly less energy gain. However, strong nonlinear focusing field in the homogeneous plasma scheme caused large emittance growth, whereas in the hollow channel scheme, strong transverse wakefields caused an instability which rapidly deflected the accelerating positron bunch. As a result, while experiments partly met several collider requirements, the accelerated positron bunches were generally not suitable for a collider.
To remedy this shortfall, several new positron-acceleration schemes have been proposed. These schemes create favourable conditions for positron acceleration either by further optimizing the homogeneous plasma scheme, or by modifying the shape of the driver, the plasma, or both. A new common metric--dimensionless luminosity-per-power \(\tilde{\mathcal{L}_{P}}=\eta_{\mathrm{extr}}\tilde{Q}/\tilde{\epsilon}_{n}\)--is introduced here to compare the seven proposed schemes. The resulting comparison indicates that all the proposed schemes perform similarly within 2 orders of magnitude in luminosity-per-power. However, all are at least 3 orders of magnitude below that of collider proposals using conventional technology (e.g., CLIC) and the nonlinear blowout scheme for plasma-accelerated electrons.
This limitation stems from complex electron motion within the positron bunch, which arises from high beam densities--effectively acting as a strong lens for the plasma electrons. The resulting nonlinear focusing fields lead to degradation of the positron beam quality, exactly equivalent to the effects of ion motion on electrons. However, since the mass of plasma electrons is significantly lower than that of plasma ions, this disruptive motion occurs for correspondingly lower beam densities, which can explain the observed discrepancy between positron and electron acceleration in plasma.
While alternative plasma-based collider concepts have been proposed that circumvent the positron problem altogether, including asymmetric plasma-RF hybrid colliders [195] and \(\gamma\)-\(\gamma\) colliders [196; 197; 198], several strategies do exist for overcoming the electron-motion challenge. These may include: increasing the temperature of the plasma; imparting large-transverse momenta to converging plasma electrons; maintaining a uniform distribution of plasma electrons within a high-density positron bunch; using relativistic (and effectively heavier) plasma electrons; achieving uniform and high efficiency acceleration also with very short bunches; and sustaining a decreased excess electron charge density to increase the matched beta function--or perhaps something more exotic. Regardless of strategy, any future scheme for positron acceleration will inevitably face electron motion and should therefore be designed from the start to tolerate it or even exploit it.
###### Acknowledgements.
This work was supported by the Research Council of Norway (NFR Grant No. 313770). Computations were performed on resources provided by Sigma2, the National Infrastructure for High Performance Computing and Data Storage in Norway. |
2307.16548 | Specification of MiniDemographicABM.jl: A simplified agent-based
demographic model of the UK | This documentation specifies a simplified non-calibrated demographic
agent-based model of the UK, a largely simplified version of the Lone Parent
Model presented in [Gostoli and Silverman 2020]. In the presented model,
individuals of an initial population are subject to ageing, deaths, births,
divorces and marriages throughout a simplified map of towns of the UK. The
specification employs the formal terminology presented in [Elsheikh 2023a]. The
main purpose of the model is to explore and exploit capabilities of the
state-of-the-art Agents.jl Julia package [Datseris2022] in the context of
demographic modeling applications. Implementation is provided via the Julia
package MiniDemographicABM.jl [Elsheikh 2023b]. A specific simulation is
progressed with a user-defined simulation fixed step size on an hourly, daily,
weekly, monthly basis or even an arbitrary user-defined clock rate. The model
can serve for comparative studies if implemented in other agent-based modelling
frameworks and programming languages. Moreover, the model serves as a base
implementation to be adjusted to realistic large-scale socio-economics,
pandemics or immigration studies mainly within a demographic context. | Atiyah Elsheikh | 2023-07-31T10:28:23Z | http://arxiv.org/abs/2307.16548v2 | # Specification of MiniDemographicABM.jl:
###### Abstract
This document presents adequate formal terminology for the mathematical specification of a simplified non-calibrated agent-based demographic model of the UK. Implementation is provided via the Julia package MiniDemographicABM.jl [2]. Individuals of an initial population are subject to ageing, deaths, births, divorces and marriages. The main purpose of the model is to explore and exploit capabilities of the state-of-the-art Agents.jl Julia package [1]. Additionally, the model can serve as a base model to be adjusted to realistic large-scale socio-economics, pandemics or social interactions-based studies mainly within a demographic context. A specific simulation is progressed with a user-defined simulation fixed step size on an hourly, daily, weekly, monthly basis or even an arbitrary user-defined clock rate.
## 1 Terminology
This section introduces the basic formal terminologies employed throughout the article for the specification of agent-based models and their simulation process.
### Populations \(P,m\) and \(F\)
Given that \(F(t)=F^{t}\equiv F\) (\(M(t)=M^{t}\equiv M\)) is the set of all females (males) in a given population \(P(t)=P^{t}\equiv P\) at an arbitrary time point \(t\) where
\[P=M\cup F \tag{1}\]
\(t\) is omitted for the sake of simplification. \(t\) is sometimes placed as a superscript, i.e. \(F^{t}\), purely for algorithmic specification readability purposes. Similarly, individuals \(p\in P(t)\) can be attributed with time, i.e. \(p^{t}\), referring to that individual at a particular time point.
### Population features \(\mathcal{F}\)
Every individual \(p\in P\) is attributed by a set of features related to gender, location, age among others. The elementary population features \(f\) considered in the presented model are related to:
* age, e.g. membership of a particular age group such as neonates, children, teenagers, twenties, thirties, etc.
* alive status, i.e. whether alive or dead
* gender, i.e. male or female
* location, e.g. sub-population of a particular town
* kinship status or relationship, e.g. father-ship, parents, orphans, divorce, singles etc.
### Featured sub-populations \(P_{f}\), \(f\in\bigcup\mathcal{F}\)
Let \(P_{f}\) correspond to the set of all individuals who satisfy a given feature \(f\), where
\[f(p\in P)=b\in\{true,false\} \tag{2}\]
That is
\[P_{f}=\{p\in P\text{ s.t. }f(p)=true\} \tag{3}\]
For example
* \(M=P_{male}\)
* \(F_{married}\) corresponds to the set of all married women
* \(P_{age>65}\) corresponds to all individuals of age older than 65
For the given set of features specified in Section 1.2, a subset of features
\[\mathcal{F}^{\prime}=\{f^{\prime}_{1},f^{\prime}_{2},\ldots f^{\prime}_{m}\}\ \subset\mathcal{F} \tag{4}\]
is called a closed subset of elementary features if the overall population is the union of the underlying elementary featured sub-populations, i.e.
\[P=P_{\mathcal{F}^{\prime}}\equiv P_{f^{\prime}_{1}\cup f^{\prime}_{2}\cup \cdots\cup f^{\prime}_{m}}=P_{f^{\prime}_{1}}\cup P_{f^{\prime}_{2}}\cup\cdots \cup P_{f^{\prime}_{m}} \tag{5}\]
For example, male and female gender features constitute a closed set of elementary features.
### Non-elementary features \(\bigcup\mathcal{F}\)
For a set of elementary features \(\mathcal{F}\) (informally, those which demand only one descriptive predicate), the set of all non-elementary features \(\bigcup\mathcal{F}\) is defined as follows. A non-elementary feature
\[f^{*}\in\bigcup\mathcal{F}\ \ \text{where}\ \ \mathcal{F}\subset\bigcup \mathcal{F}\]
can be recursively established from a finite number of arbitrary elementary features
\[f_{i},f_{j},f_{k},...\in\mathcal{F}\]
by
* union (e.g. \(f_{i}\cup f_{j}\) ),
* intersection (e.g. \(f_{i}\cap f_{j}\))
* negation (e.g. \(\neg f_{i}\))
* exclusion or difference (e.g. \(f_{i}-f_{j}\) )
Formally, if
\[f^{*}=f_{i}\ o\ f_{j}\ \text{where}\ o\in\{\cup,\cap,-\}\ \text{and}\ f_{i},f_{j}\in\bigcup \mathcal{F}\]
then
\[P_{f^{*}}=P_{f_{i}\ o\ f_{j}}=\{p\in P\ s.t.\ p\in P_{f_{i}}\ o\ P_{f_{j}}\} \tag{6}\]
Analogously,
\[P_{\neg f}=\{p\in P\ s.t.\ p\notin P_{f}\}\ \ \text{with}\ f\in\bigcup \mathcal{F} \tag{7}\]
The negation operator (\(\neg\)) is beneficial for sub-population specification, e.g.
\[F_{married\ \cap\ \neg hasChildren} \tag{8}\]
corresponds to all married females without children. This sub-population can be equivalently described using the difference operator:
\[F_{married\ -\ hasChildren} \tag{9}\]
The choice between the two is a matter of style unless algorithmic execution details of the operators are assumed. For instance, it can be assumed that in intermediate computations the \(-\) operator operates directly on the set of married females rather than on the set of all females.
Generally, any of the features \(f_{i}\) and \(f_{j}\) in Equation 6 can be either elementary or non-elementary and the definition is recursive allowing the construction of an arbitrary set of non-elementary features. For example, the sub-population
\[M_{divorced\ \cap\ hasChildren\ \cap\ age>45\ -\ hasSiblings} \tag{10}\]
corresponds to the set of all divorced men older than 45 who have children but no siblings. In order to improve readability, Equation 10 can be re-written as:
\[M_{divorced}\ \cap\ M_{hasChildren}\ \cap\ M_{age>45}\ -\ M_{hasSiblings} \tag{11}\]
Both styles can be mixed together for readability purposes, cf. Section 7.3.
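As an illustration of how featured sub-populations and the above set-algebraic operators may be realized programmatically, the following Julia sketch filters a collection of agents by composable predicates. The `Person` type and all predicate names are illustrative assumptions and do not correspond to the actual MiniDemographicABM.jl implementation.

```julia
# Minimal sketch of featured sub-populations as predicate filters over a collection of
# agents. The Person type and the predicate names are illustrative, not the package API.
struct Person
    age::Float64
    male::Bool
    divorced::Bool
    nchildren::Int
    nsiblings::Int
end

subpop(f, population) = filter(f, population)     # P_f = {p in P : f(p) = true}

# Non-elementary features built from elementary ones (cf. Eqs. 6-7).
feat_and(f, g)  = p -> f(p) && g(p)               # intersection
feat_or(f, g)   = p -> f(p) || g(p)               # union
feat_not(f)     = p -> !f(p)                      # negation
feat_diff(f, g) = feat_and(f, feat_not(g))        # difference f - g

# Example corresponding to Eq. (10): divorced men older than 45 with children, no siblings.
ismale(p)      = p.male
isdivorced(p)  = p.divorced
haschildren(p) = p.nchildren > 0
hassiblings(p) = p.nsiblings > 0
over45(p)      = p.age > 45

population = [Person(50.0, true, true, 2, 0), Person(30.0, true, false, 0, 1)]
subpop(feat_diff(feat_and(ismale, feat_and(isdivorced, feat_and(haschildren, over45))),
                 hassiblings), population)
```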
### Composition operator \(f(g)\)
Another beneficial operator is the composition operator analogously defined as
\[P_{f(g)}=\{p\in P_{f}\ \mbox{s.t.}\ g(p)=true\} \tag{12}\]
Based on the definition, the composition operator is not symmetric, unlike the intersection operator. For example,
\[M_{isAlive(isSibling)}\ \neq\ M_{isSibling(isAlive)}\]
The left hand side refers to the set of siblings of the alive male population, while the right hand side refers to the alive siblings of the male population. Moreover, the composition operator can be regarded as more computationally efficient in comparison with the intersection operator1.
Footnote 1: In this work, the main purpose behind the composition operator remains in the context of algorithmic specification rather than enforcing any implementation details regarding computational efficiency
The sub-population specified in the example given by Equation 11 may not correspond to the desired specification. Namely, the aim is to specify the alive divorced male population older than 45 years with alive children and no alive siblings. In this case, the employment of the composition operator is relevant:
\[M_{isAlive(isDivorced)}\ \cap\ M_{|children(isAlive)|>0\ \cap\ age>45\ -\ |sibling(isAlive)|>0} \tag{13}\]
Nevertheless, to retain the desired simplicity, the previous equation can be rather rewritten as
\[M_{isAlive(isDivorced\ \cap\ hasAliveChildren\ \cap\ age>45\ -\ hasAliveSibling)} \tag{14}\]
## 2 Temporal operators
This section introduces further operators, inspired by the field of temporal logic. These operators provide powerful capabilities for algorithmic specification of time-dependent complex phrases in a compact manner. This section is concerned with defining those operators employed within the context of the demonstrated example model described starting from Section 4. The demonstrated operators in this section shall be included in the set of non-elementary features \(\bigcup\mathcal{F}\) defined in Equations 6 and 7.
### just operator
A special operator is
\[just(P_{f})\subseteq P_{f}\,\ f\in\bigcup\mathcal{F}\]
standing for a featured subpopulation established by an event that has just occurred (in the current simulation iteration). So for instance,
\[P^{t+\Delta t}_{just(married)}\]
stands for those individuals who just got married in the current simulation iteration with a fixed step size \(\Delta t\) but who were not married in the previous iteration, i.e.
\[P^{t+\Delta t}_{just(married)}\ =\ P^{t+\Delta t}_{married}\ -\ P^{t}_{married}\]
Formally,
\[P^{t+\Delta t}_{just(f)}=P^{t+\Delta t}_{f}\ -\ P^{t}_{f} \tag{15}\]
The just operator provides capabilities for powerful specification when combined with the negation operator. For example,
\[P^{t+\Delta t}_{just(\neg married)}\]
stands for those who "just" got divorced or widowed.
### pre operator
Another distinguishable operator is
\[pre(P_{f})\,\ f\in\bigcup{\cal F}\]
standing for "the previous iteration". So for instance,
\[P_{pre(married)}^{t+\Delta t}\]
stands for those individuals who were married (and not necessarily just got married) in the previous simulation iteration
\[P_{pre(married)}^{t+\Delta t}\ =\ P_{married}^{t}\]
Formally,
\[P_{pre(f)}^{t+\Delta t}=P_{f}^{t} \tag{16}\]
This operator may look unnecessarily excessive; however, cf. Section 7.3 for an example of the usefulness of the \(pre\) operator.
In this work, temporal operators are assumed to extend their applicability to individuals and their attributes. For instance,
\[pre(location(p\in P))\]
stands for the location of a person in the previous iteration (which can be the same in the current iteration), e.g. cf. Section 7.3.
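Operationally, the temporal operators only require retaining the featured sub-populations (or attributes) of the previous iteration. A minimal Julia sketch with illustrative identifiers:

```julia
# Sketch: the `pre` operator simply keeps the featured sub-population of the previous
# iteration, and `just(f)` is the set difference between the current and previous P_f.
married_prev = Set(["alice", "bob"])            # pre: P^t_married (illustrative identifiers)
married_now  = Set(["alice", "bob", "carol"])   # P^{t+dt}_married

just_married   = setdiff(married_now, married_prev)   # {"carol"}: just got married
just_unmarried = setdiff(married_prev, married_now)   # just divorced or widowed (empty here)
```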
## 3 General form
### Definitions
This article is concerned with formalizing an agent-based model simulation, defined via the tuple
\[<{\cal M},\alpha_{sim},{\cal F},{\cal M}^{t_{0}},{\cal E}> \tag{17}\]
based on a demographic time-dependent model:
\[{\cal M}\equiv{\cal M}(t)\equiv{\cal M}^{t}\equiv{\cal M}(P,S,\alpha,D,t) \tag{18}\]
where
* \(P=P(t)\): a given population of agents (i.e. individuals) at time \(t\) evaluated via the model \({\cal M}(t)\)
* \(S=S(t)=<H(t),W>\): the space on which individuals \(p\in P\) are operating, i.e. the set of houses \(H(t)\) distributed within the set of towns \(W\), cf. Section 5 for further detailed insights
* \(\underline{\alpha}\): time-independent model parameters, cf. Section A.1
* \(\underline{D(t)}\): input data integrated into the model as (possibly smoothed) input trajectories, cf. Section A.2
* \(\underline{\alpha_{sim}=(\Delta t,t_{0},t_{final},\alpha_{meta})^{T}}\): simulation parameters including a fixed step size and a final time after which the simulation process stops
* \(\underline{\alpha_{meta}}\): Implementation-dependent simulation parameters, e.g. simulation seed for random number generation
The rest of the mathematical symbols are defined in the following subsections.
### Featured sub-populations (via \(\mathcal{M}_{f^{*}}\))
\(\mathcal{F}=\{f_{1},f_{2},f_{3},...\}\): a finite set of elementary features each distinguishes a featured sub-population
\[P_{f}(t)\subseteq P(t)\,\ f\in\mathcal{F}\]
as defined in Equations 2 and 20, cf. Section 1.3. Each featured sub-population \(P_{f}(t)\) is evaluated by the submodel
\[f(\mathcal{M})\equiv f(\mathcal{M}^{t})\equiv\mathcal{M}^{t}_{f}=\mathcal{M} _{f}(P_{f},S,\alpha,D,t) \tag{19}\]
evaluating or predicting the sub-population
\[f(P(t))\equiv P_{f}(t)\ \ \text{s.t.}\ \ \forall p\in P_{f}(t)\implies f(p)=true \tag{20}\]
Note that this definition extends to non-elementary features as well:
\[f^{*}(\mathcal{M})\equiv\mathcal{M}_{f^{*}}(P_{f^{*}},S,\alpha,D,t)\ \text{for any}\ f^{*}\in\bigcup\mathcal{F} \tag{21}\]
Such non-elementary features, cf. Section 1.4, are used to distinguish sub-populations needed for describing the transient processes in agent-based modeling simulation process, cf. Section 7.
For a given closed set of elementary features as given in Equation 4, the overall population is the union of the elementary features, cf. Equation 5. In that case the comprehensive model \(\mathcal{M}\) constitutes of the sum of its elementary featured submodels:
\[\mathcal{M}\equiv\sum_{f^{\prime}\in\mathcal{F}^{\prime}}\mathcal{M}_{f^{\prime}} \tag{22}\]
### Initial population and space (via \(\mathcal{M}^{t_{0}}\))
\(\underline{\mathcal{M}^{t_{0}}}\): a model that evaluates an initial space and a population at a proposed simulation start time \(t_{0}\). The initial model also specifies featured sub-populations via:
\[\mathcal{M}^{t_{0}}_{f},\quad\forall f\in\mathcal{F} \tag{23}\]
Consequently, both the corresponding initial population and featured sub-populations:
\[P(t_{0})\text{ and }P_{f}(t_{0}),\quad\forall f\in\mathcal{F} \tag{24}\]
are specified as well as the initial space:
\[S(t_{0})\ =\ \ <H(t_{0})\,\ W> \tag{25}\]
i.e., the distribution of an initial set of houses \(H(t_{0})\) within a set of towns \(W\).
### Events \(\mathcal{E}\)
\(\underline{\mathcal{E}=\{e_{1},e_{2},e_{3},...,e_{n}\}}\): a finite set of events that transition a particular set of sub-populations evaluated by
\[\mathcal{M}_{f^{*}}(t),\quad\text{for some }f^{*}\in\bigcup\mathcal{F}\]
to modified sub-populations predicted by
\[\mathcal{M}_{f^{*}}(t+\Delta t)\]
Formally,
\[e(\mathcal{M}_{f^{*}}(t))=\mathcal{M}_{f^{*}}(t+\Delta t)\text{ for some }f^{*}\in\bigcup\mathcal{F}\,\ e\in \mathcal{E} \tag{26}\]
The application of all events transitions the model to the next state
\[\prod_{i=1}^{n}e_{i}(\mathcal{M}(t))=\mathcal{M}(t+\Delta t) \tag{27}\]
### Single-clocked fixed-step simulation process
An agent-based simulation process follows the following pattern:
\[\sum_{t=t_{0}}^{t_{final}}\prod_{i=1}^{n}e_{i}(\mathcal{M}(t)) \tag{28}\]
Illustratively, the evolution of the population and its featured sub-populations is defined as a sequential application of the event transitions:
\[\mathcal{M}(t_{0})\text{ evaluating }(P(t_{0}),P_{f}(t_{0}))\text{ }\forall f\in \mathcal{F}\xrightarrow{\mathcal{E}}\]
\[\mathcal{M}(t_{0}+\Delta t)\text{ evaluating }(P(t_{0}+\Delta t),P_{f}(t_{0}+ \Delta t))\text{ }\xrightarrow{\mathcal{E}}\]
\[\mathcal{M}(t_{0}+2\Delta t)\text{ evaluating }(P(t_{0}+2\Delta t),P_{f}(t_{0}+2 \Delta t))\text{ }\xrightarrow{\mathcal{E}}\]
\[\ldots\ldots\]
\[\mathcal{M}(t_{final})\text{ evaluating }(P(t_{final}),P_{f}(t_{final}))\]
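This single-clocked, fixed-step process translates directly into a plain stepping loop in which every event is applied once per iteration. The following Julia sketch is illustrative only; the event functions and the `model` container are assumed names and not the Agents.jl interface used by the package.

```julia
# Sketch of the single-clocked, fixed-step simulation loop of Eq. (28).
# `model` is assumed to be a mutable container holding population and space; each event
# is a function (model, dt) applied once per step. Ageing is assumed first, cf. Section 7.
function simulate!(model, events; t0 = 2020.0, tfinal = 2030.0, dt = 1 / 12)
    t = t0
    while t < tfinal
        for ev! in events
            ev!(model, dt)
        end
        t += dt
    end
    return model
end

# illustrative usage: simulate!(model, [ageing!, births!, deaths!, divorces!, marriages!])
```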
## 4 Model example
In this and the following sections, a model example is introduced to demonstrate the descriptive capabilities of the proposed formal terminology.
### Overview
The model is a demographic agent-based model, a simplified demographic-only version of the lone parent model introduced in [3]. The presented model evolves an initial population of the UK through a combination of events listed in alphabetical order as follows:
* ageing, cf. Section 7.1
* births, cf. Section 7.2
* deaths, cf. Section 7.3
* divorces, cf. Section 7.4
* marriages, cf. Section 7.5
The population evolution follows Equation 28.
Establishing a mathematical model that corresponds to reality down to the tiniest details is impossible. Therefore, a set of (potentially non-realistic) assumptions has to be made initially in order to simplify the model specification process. There are mainly two sets of assumptions:
* population-based assumptions (to be labeled with \(P\))
* space-based assumptions (to be labeled with \(S\))
### Population assumptions
The population assumptions are summarized as follows:
**P. 1**: There are no homeless individuals:
\[\mbox{if }p\in P^{t}_{isAlive}\ \ \Longrightarrow\ \ house(p)\in H(t) \tag{29}\]
**P. 2**: (In- and out-) immigration is not included:
\[\mbox{if }p\in P^{t^{\prime}}_{isAlive}\mbox{ where }t_{0}<t^{\prime}\leq t _{final}\ \ \Longrightarrow\] \[pre(town(p))\in W\mbox{ and }town(p^{t^{\prime}})\in W \tag{30}\]
**P. 3**: Major demographic events such as world wars and pandemics are not considered
**P. 4**: Any two individuals living in a single house are either first-degree relatives, step-parents, step-children, step-siblings or partners
**P. 5**: An exception to the previous assumption occurs when an orphan's oldest sibling is married
### Space assumptions
As introduced in Section 3.1, the space \(S\) is composed of a tuple
\[S\ \equiv\ S(t)\ =\ \ <H(t)\,\ W>\]
corresponding to the set of all houses \(H(t)\) and towns \(W\) implying that:
**S. 1**: the space is not necessarily static and particularly the set of houses can vary along the simulation time span
**S. 2**: the set of towns is constant during a simulation, i.e. no town vanishes nor new ones get constructed
**S. 3**: each town \(w\in W\) contains a dynamic set of houses \(H_{w}\equiv H_{w}(t)\)
Furthermore,
**S. 4**: each house \(h\in H(t)\) is located in one and only one town \(w\in W\), i.e.
\[town(h)=w\in W\]
**S. 5**: the location of each house \(h\in H_{w}\) is given in xy-coordinate of the town
\[location(h_{x,y}\in H_{w})=(x,y)_{w}\]
**S. 6**: the houses within a town are uniformly distributed along the x and y axes
**S. 7**: a house never gets demolished and always remains inhabitable
## 5 The space - detailed description
For the sake of a comprehensive description of the space, further assumptions are listed as follows:
**S. 8**: The static set of towns of the UK, cf. Assumption **S. 2**, is projected as a rectangular \(12\times 8\) grid with each point in the grid corresponding to a town
Formally, assuming that
\[location(w_{(x,y)})=(x,y)\]
then
**S. 9**: the town \(w_{(1,1)}\) corresponds to the northernmost, westernmost town of the UK whereas
**S. 10**: the town \(w_{(12,8)}\) corresponds to the southernmost, easternmost town of the UK
**S. 11**: the distances between towns are commonly defined, e.g.
\[\text{manhattan-distance}(w_{(x_{1},y_{1})},w_{(x_{2},y_{2})})=\mid x_{1}-x_{2 }\mid+\mid y_{1}-y_{2}\mid \tag{31}\]
The (initial) population and house distribution within the UK towns is approximated by an ad-hoc pre-given UK population density map. The map is projected as a rectangular matrix
\[M\in R^{12\times 8}\approx\begin{bmatrix}0.0&0.1&0.2&0.1&0.0&0.0&0.0&0.0\\ 0.1&0.1&0.2&0.2&0.3&0.0&0.0&0.0\\ 0.0&0.2&0.2&0.3&0.0&0.0&0.0&0.0\\ 0.0&0.2&1.0&0.5&0.0&0.0&0.0&0.0\\ 0.4&0.0&0.2&0.2&0.4&0.0&0.0&0.0\\ 0.6&0.0&0.0&0.3&0.8&0.2&0.0&0.0\\ 0.0&0.0&0.0&0.6&0.8&0.4&0.0&0.0\\ 0.0&0.0&0.2&1.0&0.8&0.6&0.1&0.0\\ 0.0&0.0&0.1&0.2&1.0&0.6&0.3&0.4\\ 0.0&0.0&0.5&0.7&0.5&1.0&1.0&0.0\\ 0.0&0.0&0.2&0.4&0.6&1.0&1.0&0.0\\ 0.0&0.2&0.3&0.0&0.0&0.0&0.0&0.0\\ \end{bmatrix} \tag{32}\]
It can be observed for instance that
* cells with density \(0\) (i.e. realistically, with very low-population density) don't correspond to inhabited towns
* the towns in the UK are merged into \(48\) towns
* e.g. the center of the capital London spans the cells \((10,6),(10,7),(11,6)\) and \((11,7)\)
**S. 12**: if an empty house \(h\) is demanded in a particular town \(w\in W\), an empty house is randomly selected from the set of existing houses \(H_{w}\) in that town. If no empty house exists, a new empty house is established in conformance with assumption **S. 6**
**S. 13**: if an empty house \(h\) is demanded in an arbitrary town, a town is selected via a random weighted selection:
\[town(h)=random(W,M^{T}) \tag{33}\]
and an empty house is then selected or established according to the previous assumption
Further details on the initial set of houses are given in Section 6.6.
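As an illustration of assumptions **S. 12** and **S. 13** and of Equation 33, the following Julia sketch selects a town by density-weighted sampling and then reuses or creates an empty house. The truncated density map, the occupancy encoding of houses and all names are illustrative assumptions, not the package implementation.

```julia
using StatsBase

# Sketch of assumptions S.12/S.13 and Eq. (33): choose a town by density-weighted random
# selection, then reuse an empty house in that town or create a new one. For brevity only
# the first two rows of the density map are used.
densmap = [0.0 0.1 0.2 0.1 0.0 0.0 0.0 0.0;
           0.1 0.1 0.2 0.2 0.3 0.0 0.0 0.0]

towns   = [(y, x) for y in axes(densmap, 1), x in axes(densmap, 2) if densmap[y, x] > 0]
weights = Weights([densmap[y, x] for (y, x) in towns])

random_town() = sample(towns, weights)            # town(h) = random(W, M) of Eq. (33)

# Houses of a town encoded as a vector of occupancy flags (true = occupied).
function empty_house!(houses::Vector{Bool})
    i = findfirst(occupied -> !occupied, houses)
    if i === nothing                              # no empty house: establish a new one (S.12)
        push!(houses, false)
        i = length(houses)
    end
    return i
end

random_town(), empty_house!([true, true])
```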
## 6 Model initialization \(\mathcal{M}^{t_{0}}\)
This section provides a detailed description of the initial model state as evaluated by \(\mathcal{M}^{t_{0}}\), while Appendix A demonstrates potential case studies specifying possible simulation parameter values for \(t_{0},\ t_{final}\) and \(\Delta t\). Further initialization assumptions are proposed on demand, distinguished by the labels **P0** for initial population assumptions or **S0** for initial space assumptions.
### Initial population size \(\left|P_{w}^{t_{0}}\right|\)
The initial population size is given by the parameter \(\alpha_{initialPop}\), cf. Section A.1. The matrix \(M\), defined in Equation 32, provides a stochastic ad-hoc estimate of the initial population \(P(t_{0})\) distribution within the UK as well as the initial set of given houses \(H(t_{0})\). That is, the initial population size of a town \(w\in W\) is approximated by
\[\left|P_{w}(t_{0})\right|\approx\alpha_{initialPop}\times M_{y,x}/48\quad \text{where}\quad\text{location}(w)=(x,y) \tag{34}\]
where 48 is the number of nonzero entries in \(M\).
### Gender \(P^{t_{0}}=M^{t_{0}}\cup F^{t_{0}}\)
The parameter \(\alpha_{initialPop}\) specifies the size of the initial population, cf. Appendix A for potential values. The gender distribution is specified via the following non-realistic assumption:
**P. 6**: An individual can be equally a male or a female2
Gender assignment is established according to a uniform distribution, i.e.
\[Pr(isMale(p\in P^{t_{0}}))\approx 0.5 \tag{35}\]
### Age distribution \(P^{t_{0}}=\bigcup_{r}P^{t_{0}}_{age=r}\)
The proposed non-negative age distribution of population individuals in years follows a normal distribution:
\[\frac{age(P^{t_{0}})}{N_{\Delta t}}\in\mathbb{Q}^{\alpha_{initialPop}}_{+}\ \propto\ \left|\left|\mathcal{N}(0,\frac{100}{4}\cdot N_{\Delta t})\right|\right| \tag{36}\]
where \(\mathbb{Q}_{+}\) stands for the set of positive rational numbers and \(\mathcal{N}\) stands for a normal distribution with mean value 0 and standard deviation depending on
\[N_{\Delta t}\ =\ \left\{\begin{array}{ll}\ldots&\\ 12&\text{if}\ \ \Delta t=month\\ 365&\text{if}\ \ \Delta t=day\\ 365\cdot 24&\text{if}\ \ \Delta t=hour\\ \ldots&\end{array}\right. \tag{37}\]
A possible outcome of the distribution of ages in an initial population of size 1,000,000 is shown in Figure 1.
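A minimal Julia sketch of drawing such an initial age distribution, assuming a monthly clock; the function name is an illustrative assumption.

```julia
using Distributions, Random

# Sketch of Eq. (36): initial ages (counted in simulation steps) are drawn from the
# absolute value of a zero-mean normal with standard deviation (100/4)*N_dt, i.e. the
# ages in years roughly follow |N(0, 25)|.
function initial_ages(npeople::Int, N_dt::Int; rng = Random.default_rng())
    raw = rand(rng, Normal(0.0, 100 / 4 * N_dt), npeople)
    return round.(Int, abs.(raw))          # non-negative ages in simulation steps
end

ages_in_years = initial_ages(10_000, 12) ./ 12    # monthly clock: N_dt = 12
```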
### Partnership \(P^{t_{0}}_{isMarried}=M^{t_{0}}_{isMarried}\cup M^{t_{0}}_{partners}\)
Initially the following population assumption is proposed:
Figure 1: The distribution of ages in an initial population of size 1,000,000
**P. 7**: There is no grandpa or grandma for any individual in the initial population, i.e.
\[p\in P_{isChild}^{t_{0}}\equiv P_{age\leq 18}^{t_{0}}\implies\nexists q\in P^{t_{0}}\mbox{ s.t. }grandchild(q)=p \tag{38}\]
The ratio of married adults (males or females) is stochastically approximated according to
\[Pr(isMarried(p\in P_{isAdult}^{t_{0}}))\approx\alpha_{startMarriedRate} \tag{39}\]
Before the partnership initialization, the following variables are initialized:
1. set \(F_{isMarriageEligible}=F_{isMarEli}=F_{isAdult}^{t_{0}}\)
2. set \(n_{candidates}=max\left(\alpha_{maxNumMarrCand}\,\ \frac{|F_{isMarEli}|}{10}\right)\)
For every male \(m\in M_{isMarried}^{t_{0}}\) selected for marriage, his wife is selected according to the following steps:
1. establish a random set of \(n_{candidates}\) female candidates: \[F_{candidates}=random(F_{isMarEli},n_{candidates})\]
2. for every candidate, \(f\in F_{isMarEli}\), evaluate a marriage weight function: \[weight(m,f)\ =\ ageFactor(m,f)\] (40) where \[ageFactor(m,f)=\\ \left\{\begin{array}{ll}1/(age(m)-age(f)-5+1)&\mbox{if }\ age(m)-age(f)\geq 5\\ -1/(age(m)-age(f)+2-1)&\mbox{if }\ age(m)-age(f)\leq-2\\ 1&\mbox{otherwise}\end{array}\right.\] (41)
3. select a random female associated with the evaluated weights \[f_{partner(m)}=weightedSample(F_{isMarEli},W_{m})\\ \mbox{where }\ W_{m}=\{w_{i}:w_{i}=weight(m,f_{i})\,\ f_{i}\in F_{isMarEli}\}\] (42)
4. \(F_{isMarEli}=F_{isMarEli}-\{f_{partner(m)}\}\)
### Children and parents
The following assumptions are assumed only in the context of the initial population:
**P0. 8**: There are no orphans
**P0. 9**: There are no age difference restrictions among siblings, i.e. the age difference can be less than 9 months
Children are assigned to married couples as parents in the following way. For any child \(c\in P_{age<18}^{t_{0}}\), the set of potential fathers is established as follows:
\[M_{candidates}= \{\ m\in M_{isMarried}\ \text{s.t.} \tag{43}\] \[min(age(m),age(wife(m)))\geq age(c)+18+\frac{9}{12}\ \text{and}\] \[age(wife(m))<45+age(c)\ \}\]
out of which a random father is selected for the child:
\[father(c)= random(M_{candidates})\ \text{and}\] \[mother(c)=wife(father(c))\]
### Spatial distribution
The assignment of newly established houses to the initial population considers assumptions **S. 12** and **S. 13**. That is, the location of new houses in \(H(t_{0})\) is specified according to Equation 33. Furthermore, the following assumption is proposed:
**P0. 10**: any house in \(H(t_{0})\) is either occupied by a single person or by a family
\[|occupants(h^{t_{0}})|>1\ \text{with}\ p,q\in occupants(h^{t_{0}})\ \text{and}\ p\neq q\ \implies\] \[\ p\in firstDegRelatives(q)\]
The assignment of new houses to the initial population is conducted as follows:
\[p^{t_{0}}\in P_{isSingle}^{t_{0}}\implies occupants(house(p))=\{p\} \ \text{otherwise}\] \[m^{t_{0}}\in M_{isMarried}^{t_{0}}\implies\] \[occupants(house(m))=\{m,wife(m)\}\cup children(m) \tag{44}\]
Overall, all houses in \(H(t_{0})\) are occupied, i.e. \(|occupants(h^{t_{0}})|\geq 1\).
## 7 Events
This section provides a compact algorithmic specification as a demonstration of the proposed terminology. The employed set of parameters and input data trajectories is given in Appendix A. The algorithmic specification makes use of
rates and instantaneous probabilities conceptually reviewed in Appendix B.
The considered events are listed alphabetically in this section without enforcing a certain application order, except for the ageing event, which should precede all other events. That is, Equations 27 and 28 are constrained by setting
\[e_{1}=ageing\]
The execution order of the events, as well as the order in which agents are subjected to them, whether sequential or random, remains an implementation detail. Nevertheless, since many of the events specified in the following subsections follow a random stochastic process, the higher the resolution of the simulation (e.g. a weekly step size instead of a monthly one, or daily instead of weekly), the less influential the execution order of the events likely becomes.
### Ageing
Following the terminology introduced so far, the ageing process of a population can be described as follows
\[ageing\left(P_{isAlive(age=a)}^{t}\right)=P_{isAlive(age=a+\Delta t)}^{t+\Delta t }\ \,\ \ \forall a\in\{0,\Delta t,2\Delta t,...\} \tag{45}\]
The age of any individual, as long as they remain alive, is incremented by \(\Delta t\) at each simulation step. Furthermore, the following assumption is considered
**P. 9**: In case a teenage orphan becomes an adult and he/she is not the oldest sibling, the orphan gets re-allocated to an empty house within the same town, formally:
\[ageing\left(P_{isAlive(age=18)}\ \cap\ {isOrphan}\ \cap\ { hasOlderSibling}\right)=\] \[P_{isAlive(age=18+\Delta t)}\ \cap\ {isOrphan}\ \cap\ { hasOlderSibling}\ \cap\ { livesAlone} \tag{46}\]
Moreover,
\[\text{If }p\in P_{isAlive(age=18)}^{t+\Delta t}\text{ and }pre( house(p))\neq house(p)\implies\] \[town(p)=pre(town(p)) \tag{47}\]
The re-allocation to an empty house is in conformance with assumption **S. 12**.
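A minimal Julia sketch of the ageing event on an illustrative agent type (not the package's actual type); the handling of the empty house is reduced to a placeholder.

```julia
# Minimal, illustrative agent type for the ageing sketch (not the package's actual type).
mutable struct SimpleAgent
    age::Float64
    alive::Bool
    orphan::Bool
    has_older_sibling::Bool
    house::Int        # 0 is used below as a placeholder for "newly assigned empty house"
end

# Sketch of the ageing event (Eqs. 45-47): every alive agent ages by dt; an orphan who
# just turned 18 and has an older sibling is re-allocated to an empty house (assumption S.12).
function ageing!(population::Vector{SimpleAgent}, dt)
    for p in population
        p.alive || continue
        became_adult = p.age < 18 && p.age + dt >= 18
        p.age += dt
        if became_adult && p.orphan && p.has_older_sibling
            p.house = 0   # placeholder: assign an empty house within the same town
        end
    end
    return population
end

ageing!([SimpleAgent(17.99, true, true, true, 7)], 1 / 12)
```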
### Births
For simplification purposes, from now on it is implicitly assumed (unless specified otherwise) that only the alive population is involved in event-based transitions of the population. Let the set of reproducible females be defined as
\[F_{reproducible}\ =\ F_{isMarried\ \cap\ age<45}\ \bigcap\]
\[F_{youngestChild(age>1)\ \cup\ \neg hasChildren} \tag{48}\]
That is, the set of all married females of reproducible age who either do not have children or whose youngest child is older than one. The specification of a birth event demands enhancing the population-related assumptions as follows:
**P. 7**: a neonate's house is its mother's house:
\[f\in\ F_{youngestChild(age=0)}\implies\] \[house(youngestChild(f))=house(f) \tag{49}\]
**P. 8**: only a married female3 gives birth
Footnote 3: This was already assumed in the lone parent model and obviously the marriage concept needs to be re-defined in the context of realistic studies
\[age(youngestChild(f\in F))=0\implies isMarried(f)=true \tag{50}\]
**P. 9**: a married person is not a teenager
\[isMarried(p\in P)=true\implies age(p)\geq 18 \tag{51}\]
Assumptions **P. 8** and **P. 9** imply that only an adult person can become a parent. The birth event produces new children from reproducible females:
\[birth\left(F^{t}_{reproducible}\right)\ =\] \[\left(F^{t+\Delta t}_{reproducible}\ -\ F^{t+\Delta t}_{just(reproducible )}\right)\ \bigcup\] \[F^{t+\Delta t}_{just(\neg reproducible)}\ \bigcup\] \[P^{t+\Delta t}_{age=0} \tag{52}\]
As an illustration based on the _just_ operator illustrated in Section 2.1
\[F_{just(reproducible)}\ =\ F_{just(isMarried)}\ \cup\ F_{isMarried(youngestChild(age=1))} \tag{53}\]
and (given Assumption **P. 8**)
\[F_{just(\neg reproducible)}\ =\] \[F_{youngestChild(age=0)}\ \cup\ F_{just(divorced)}\ \cup\ F_{isMarried(age=45)} \tag{54}\]
The yearly rate of births produced by the sub-population \(F^{t}_{reproducible,(age=a)}\), i.e. reproducible females of age \(a\) years at simulation time \(t\), depends on the yearly fertility rate data:
\[R_{birth,yearly}(F^{t}_{reproducible,(age=a)})\ \propto\ D_{ fertility}(a,currentYear(t)) \tag{55}\]
cf. the fertility rate data in Appendix A.2. This implies that the instantaneous probability that a reproducible female \(f\in F_{reproducible}(t)\) gives birth to a new individual \(p\in P^{t+\Delta t}\) depends on \(D_{fertility}(a,currentYear(t))\) and is given by Equation 73, cf. Appendix B.
### Deaths
The death event transforms a given population of alive individuals as follows:
\[death\left(P^{t}_{isAlive}\right)\ =\ P^{t+\Delta t}_{isAlive-age=0}\ \bigcup\ P^{t+\Delta t}_{just(\neg isAlive)} \tag{56}\]
The first term on the right hand side stands for the alive population except neonates, and the second stands for those who just died. The following simplifying assumptions are considered:
**P. 10**: No adoption or parent re-assignment to orphans is established after their parents die
**P. 11**: Those who just died leave their houses, i.e.
\[\mbox{if}\ p\in P^{t+\Delta t}_{just(\neg isAlive)}\ \ \mbox{and}\ \ pre(house(p))=h\] \[\implies\ p\notin P_{h}\ \ \mbox{and}\ \ house(p)=grave \tag{57}\]
The number of deaths in the population depends on the yearly probability given by:
\[Pr_{death,yearly}(p\in P)=\alpha_{basicDeathRate}+\] \[\left\{\begin{array}{c}\left(e^{\frac{age(p)}{\alpha_{maleAgeScaling}}}\right)\times\alpha_{maleAgeDieRate}\ \mbox{if}\ isMale(p)\\ \left(e^{\frac{age(p)}{\alpha_{femaleAgeScaling}}}\right)\times\alpha_{femaleAgeDieRate}\ \mbox{if}\ isFemale(p)\end{array}\right. \tag{58}\]
from which the instantaneous probability of the death of an individual is derived as illustrated in Appendix B.
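As an illustration, Equation 58 can be evaluated with the ad-hoc parameter values of Appendix A.1; the following Julia sketch uses illustrative names and is not the package implementation.

```julia
# Sketch of the age- and gender-dependent yearly death probability of Eq. (58),
# using the parameter names and ad-hoc values listed in Appendix A.1.
function yearly_death_probability(age, ismale;
                                  basicDeathRate = 0.0001,
                                  maleAgeDieRate = 0.00021, maleAgeScaling = 14.0,
                                  femaleAgeDieRate = 0.00019, femaleAgeScaling = 15.5)
    agepart = ismale ? exp(age / maleAgeScaling) * maleAgeDieRate :
                       exp(age / femaleAgeScaling) * femaleAgeDieRate
    return basicDeathRate + agepart
end

yearly_death_probability(80.0, true)    # roughly 0.06 for an 80-year-old male
```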
### Divorces
The divorce event causes a subset of the married population to become divorced:
\[divorce(M^{t}_{isMarried})\ =\] \[M^{t+\Delta t}_{isMarried-just(isMarried)}\ \bigcup\ M^{t+\Delta t}_{just(\neg isMarried)} \tag{59}\]
The first term on the right hand side refers to the set of married individuals who remained married, excluding those who just got married. The second term refers to the population subset who just got divorced in the current iteration. Note that it is sufficient to apply the divorce event to only the male or only the female sub-population. After a divorce takes place, the housing is specified according to the following assumption:
**P. 12**: Any male who just got divorced moves to an empty house within the same town (in conformance with assumption **S. 12**):
\[location(m\in M_{just(isDivorced)})=h\ \ \mbox{and}\ \ pre(location(m))=h^{\prime}\implies\] \[|occupants(h)|=1\ \ \mbox{and}\ \ town(h)=town(h^{\prime}) \tag{60}\]
The re-allocation to an empty house is in conformance with assumption **S. 12**. The number of yearly divorces in the married male population depends on the yearly probability given by
\[Pr_{divorce,yearly}(m\in M^{t}_{isMarried})\ =\\ \alpha_{basicDivorceRate}\ \cdot\ D_{divorceModifierByDecade}(\lceil age(m)/10\rceil) \tag{61}\]
That is, the instantaneous probability of a divorce event to a married man \(m\in M_{isMarried}\) depends on \(D_{divorceModifierByDecade}(\lceil age(m)/10\rceil)\), cf. Equation 73.
### Marriages
Similar to the divorce event, it is sufficient to apply the marriage event to a sub-population of single males. Assuming that
\[M_{isMarEli}\ =\ M_{isMarriageEligible}\ =\ M_{isSingle\ \cap\ age\geq 18} \tag{62}\]
the marriage event updates the state of a few individuals within this sub-population to married males, formally:
\[marriage(M^{t}_{isMarEli})\ =\\ M^{t+\Delta t}_{isMarEli-just(isDivorced)-age=18}\ \bigcup\ M^{t+\Delta t}_{ just(isMarried)} \tag{63}\]
The number of yearly marriages is estimated by
\[Pr_{marriage,yearly}(m\in M^{t}_{isSingle})\ =\\ \alpha_{basicMaleMarriageRate}\ \cdot\ D_{maleMarriageModifierByDecade}(\lceil age(m)/10\rceil) \tag{64}\]
from which the simulation-relevant instantaneous probability is calculated as given in Equation 73. For an arbitrary just-married male \(m\in M^{t+\Delta t}_{just(married)}\), his partner is selected according to the following steps (a code sketch follows the list):
* set \(n_{candidates}=max\left(\alpha_{maxNumMarrCand}\,\ \frac{|F_{isMarEli}|}{10}\right)\)
* establish a random set of \(n_{candidates}\) female candidates: \[F_{candidates}=random(F_{isMarEli},n_{candidates})\]
* for \(m\in M_{isMarEli},\ f\in F_{isMarEli}\), set a marriage weight function: \[weight(m,f)\ =\\ geoFactor(m,f)\ \cdot\ childrenFactor(m,f)\ \cdot\ ageFactor(m,f)\] (65)
where \[geoFactor(m,f)=\] \[1/e^{(4\cdot\text{manhattan-distance}(town(m),town(f)))}\] (66) \[childrenFactor(m,f)=\] \[1/e^{\left|children(m)\right|}\cdot 1/e^{\left|children(f)\right| }\cdot e^{\left|children(m)\right|\cdot\left|children(f)\right|}\] (67) \[ageFactor(m,f)=\] \[\left\{\begin{array}{lcl}1/(age(m)-age(f)-5+1)&\text{if }\ age(m)-age(f) \geq 5\\ -1/(age(m)-age(f)+2-1)&\text{if }\ age(m)-age(f)\leq-2\\ 1&\text{otherwise}\end{array}\right.\] (68)
* select a random female according to the weight function \[f_{partner(m)}^{t+\Delta t}=weightedSample(F_{isMarEli}^{t},W_{m})\] \[\text{where }\ W_{m}=\{w_{i}:w_{i}=weight(m,f_{i})\,\ f_{i}\in F _{isMarEli}\}\] (69)
* \(F_{candidates}=F_{candidates}-\{f_{partner(m)}\}\)
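The following Julia sketch illustrates the weight function of Equations 65-68 and the weighted partner selection of Equation 69 on illustrative named-tuple agents; it is a sketch under assumed names, not the package implementation.

```julia
using StatsBase

# Sketch of the marriage weight function (Eqs. 65-68) and weighted partner selection (Eq. 69).
manhattan(a, b) = abs(a[1] - b[1]) + abs(a[2] - b[2])

geo_factor(m, f)      = exp(-4 * manhattan(m.town, f.town))
children_factor(m, f) = exp(-m.nchildren) * exp(-f.nchildren) * exp(m.nchildren * f.nchildren)

function age_factor(m, f)
    d = m.age - f.age
    d >= 5  && return 1 / (d - 5 + 1)
    d <= -2 && return -1 / (d + 2 - 1)
    return 1.0
end

weight(m, f) = geo_factor(m, f) * children_factor(m, f) * age_factor(m, f)

m = (age = 35.0, nchildren = 0, town = (3, 4))
candidates = [(age = 33.0, nchildren = 0, town = (3, 5)),
              (age = 22.0, nchildren = 1, town = (9, 2))]
partner = sample(candidates, Weights([weight(m, f) for f in candidates]))
```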
Note that the just-married male and his selected partner no longer belong to the set of the marriage-eligible population \(P_{isMarEli}^{t+\Delta t}\). The following assumption specifies the house of the new couple:
**P. 13**: When two individuals get married, the wife and the occupants of her current house (i.e. children and non-adult orphan siblings) move to the husband's house unless there are fewer occupants in his house. In the latter case, the husband and the occupants of his house move to the wife's house.
Formally, suppose that \(m\in P_{just(isMarried)}^{t+\Delta t}\) and \(f^{t+\Delta t}=partner(m^{t+\Delta t})\); then, for any occupant \(p\), if
\[\left|P_{house(m^{t})}\right|\geq |P_{house(f^{t})}|\text{ and }house(p^{t})=house(f^{t})\implies\] \[house(p^{t+\Delta t})=house(m)\]
Otherwise
\[\left|P_{house(m^{t})}\right|< |P_{house(f^{t})}|\text{ and }house(p^{t})=house(m^{t})\implies\] \[house(p^{t+\Delta t})=house(f) \tag{70}\]
## Appendix A Parameters and input data
### Parameters
#### Model parameters
The following is a table of parameters employed for the event specifications. The values are set in an ad-hoc manner and are not calibrated to actual data. Suitable calibrated values would moreover depend on the simulation parameters, e.g. the start and final simulation times.
| \(\alpha_{x}\) | Value | Usage |
| --- | --- | --- |
| \(basicDivorceRate\) | 0.06 | Equation 61 |
| \(basicDeathRate\) | 0.0001 | Equation 58 |
| \(basicMaleMarriageRate\) | 0.7 | Equation 64 |
| \(femaleAgeDieRate\) | 0.00019 | Equation 58 |
| \(femaleAgeScaling\) | 15.5 | Equation 58 |
| \(initialPop\) | 10000 | Section 6.2 |
| \(maleAgeDieRate\) | 0.00021 | Equation 58 |
| \(maleAgeScaling\) | 14.0 | Equation 58 |
| \(maxNumMarrCand\) | 100 | Sections 6.4 & 7.5 |
| \(startMarriedRate\) | 0.8 | Equation 39 |
The value of the initial population size is just an experimental value and can be selected from the set \(\left\{10^{4},10^{5},10^{6},10^{7}\right\}\) to examine the performance of the implementation or to enable a realistic demographic simulation with actual population size.
#### Simulation parameters
In version 1.1 of the package MiniDemographicABM.jl [3], the following ad-hoc values of the simulation parameters are selected:
| \(\alpha_{x}\) | Value |
| --- | --- |
| \(t_{0}\) | 2020 |
| \(\Delta_{t}\) | Daily |
| \(t_{final}\) | 2030 |
It would be beneficial in the future to propose several case studies with specific simulation parameter values for each case. This shall hopefully be included in the documentation of the model provided as a PDF file within the package.
### Data
The data values are set as follows:
\(D_{divorceModifierByDecade}\in R^{16}\ =\)
\((0,1.0,0.9,0.5,0.4,0.2,0.1,0.03,0.01,0.001,0.001,0.001,0,0,0,0)^{T}\)
\(D_{maleMarriageModifierByDecade}\in R^{16}\ =\)
\((0,0.16,0.5,1.0,0.8,0.7,0.66,0.5,0.4,0.2,0.1,0.05,0.01,0,0,0)^{T}\)
In the archived Julia package MiniDemographicABM.jl Version 1.1, the fertility data (originally taken from the lone parent model implemented in Python) is given as
\[D_{fertility}\in R^{35\times 360}\ =\ [d_{ij}:\ \text{fertility rate of women of age }16+i\text{ in year }1950+j]\]
This matrix reveals and forecasts the fertility rate for women of ages 17 to 51 between the years 1951 and 2050, cf. Figure 2.
## Appendix B Event rates and instantaneous probability
Pre-given data, e.g. mortality and fertility rates, are usually given in the form of finite rates (i.e. cumulative rates) normalized by sub-population size. In other words, the rate
\[R_{event,period(t,t+\Delta t)}(X^{t})\,\ X^{t}\subseteq P^{t}\]
corresponds to the number of occurrences of a certain _event_ (e.g. marriage) within a sub-population \(X\) taking place in the time range \((t,t+\Delta t)\), e.g. a daily, weekly, monthly or yearly rate, normalized by the sub-population size. Say that a pre-given, typically yearly, rate is given as input data:
\[R_{event,yearly}(X^{t})\ =\ D_{event,yearly}\in R^{N\times M} \tag{71}\]
Figure 2: fertility rates of women of different age classes between the years 1966 and 2050
where \(M\) corresponds to a given number of years and \(N\) corresponds to the number of particular features of interest, for example:
* \(M=100\) for mortality or fertility yearly rate data between the years 1921 and 2020
* \(N=28\) for fertility rate data for women of ages between 18 and 45 years old, i.e. \(28=45-18+1\)
The yearly probability that an event takes place for a particular individual \(x^{t}\in X^{t}\) is:
\[Pr_{event,yearly}(x^{t}\in X^{t})\ =\ D_{event,yearly}(yearsold(x^{t}),currentYear(t)) \tag{72}\]
Pre-given data in such a typically yearly format requires adjustment in order to be employed within a single-clocked agent-based model simulation with a fixed step size typically smaller than a year. Namely, the occurrences of such events need to be estimated at equally-distant time points with the pre-given constant small simulation step size \(\Delta t\). For example, if we have a population of 1000 individuals with a (stochastic) monthly mortality rate of 0.05, then after
* one month, about 50 individuals die with 950 left (on average)
* two months, about 902.5 individuals are left
*...
* one year, about 540 individuals are left, resulting in a yearly finite rate of 0.46
Typically, mortality rates of various age classes are given on a yearly basis, but a daily or monthly estimate of the rates has to be applied within an agent-based simulation.
The desired simulation-adjusted probability is approximated by evaluating the underlying rate per very short period, with the simulation step size assumed to be reasonably small (e.g. hourly, daily, weekly or monthly at maximum). Writing the yearly probability as \(1-e^{-r}\), with \(r\) the corresponding yearly rate, and spreading this rate uniformly over the \(N_{\Delta t}\) steps of a year, the so-called instantaneous probability is evaluated as follows:
\[Pr_{event,instantaneous}(x^{t}\in X^{t},\Delta t)\ =\ \ -\ \frac{ln(1-Pr_{event,yearly}(x^{t}))}{N_{ \Delta t}} \tag{73}\]
where \(N_{\Delta t}\) is given as in Equation 37.
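A minimal Julia sketch of this conversion, together with a consistency check against the mortality example above; the function name is an illustrative assumption.

```julia
# Sketch of Eq. (73): converting a yearly event probability into the instantaneous
# probability applied at every simulation step (N_dt steps per year, cf. Eq. 37).
instantaneous_probability(p_yearly, N_dt) = -log(1 - p_yearly) / N_dt

# Consistency check with the mortality example above: a monthly probability of 0.05
# accumulates to a yearly probability of 1 - 0.95^12 ≈ 0.46; converting back yields
# a per-month instantaneous probability of about 0.051.
instantaneous_probability(1 - 0.95^12, 12)    # ≈ 0.0513
```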
## Acknowledgments
The following are acknowledged:
* (Research Associate) Dr. Martin Hinsch for scientific exchange
* (Research Fellow) Dr. Eric Silverman as a principal investigator
Both are affiliated with the MRC/CSO Social & Public Health Sciences Unit, School of Health and Wellbeing, University of Glasgow.
## Funding
Dr. Atiyah Elsheikh, at the time of publishing Version 1.1 of this software, is a Research Software Engineer at the MRC/CSO Social & Public Health Sciences Unit, School of Health and Wellbeing, University of Glasgow. He is in the Complexity in Health programme. He is supported by the Medical Research Council (MC_UU_00022/1) and the Scottish Government Chief Scientist Office (SPHSU16).
For the purpose of open access, the author(s) has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
|
2309.07857 | Overhead-constrained circuit knitting for variational quantum dynamics | Simulating the dynamics of large quantum systems is a formidable yet vital
pursuit for obtaining a deeper understanding of quantum mechanical phenomena.
While quantum computers hold great promise for speeding up such simulations,
their practical application remains hindered by limited scale and pervasive
noise. In this work, we propose an approach that addresses these challenges by
employing circuit knitting to partition a large quantum system into smaller
subsystems that can each be simulated on a separate device. The evolution of
the system is governed by the projected variational quantum dynamics (PVQD)
algorithm, supplemented with constraints on the parameters of the variational
quantum circuit, ensuring that the sampling overhead imposed by the circuit
knitting scheme remains controllable. We test our method on quantum spin
systems with multiple weakly entangled blocks each consisting of strongly
correlated spins, where we are able to accurately simulate the dynamics while
keeping the sampling overhead manageable. Further, we show that the same method
can be used to reduce the circuit depth by cutting long-ranged gates. | Gian Gentinetta, Friederike Metz, Giuseppe Carleo | 2023-09-14T17:01:06Z | http://arxiv.org/abs/2309.07857v3 | # Overhead-constrained circuit knitting for variational quantum dynamics
###### Abstract
Simulating the dynamics of large quantum systems is a formidable yet vital pursuit for obtaining a deeper understanding of quantum mechanical phenomena. While quantum computers hold great promise for speeding up such simulations, their practical application remains hindered by limited scale and pervasive noise. In this work, we propose an approach that addresses these challenges by employing circuit knitting to partition a large quantum system into smaller subsystems that can each be simulated on a separate device. The evolution of the system is governed by the projected variational quantum dynamics (PVQD) algorithm, supplemented with constraints on the parameters of the variational quantum circuit, ensuring that the sampling overhead imposed by the circuit knitting scheme remains controllable. We test our method on quantum spin systems with multiple weakly entangled blocks each consisting of strongly correlated spins, where we are able to accurately simulate the dynamics while keeping the sampling overhead manageable. Further, we show that the same method can be used to reduce the circuit depth by cutting long-ranged gates.
Gian Gentinetta: [email protected]
## 1 Introduction
Quantum computers are promising tools for simulating quantum systems [1, 2, 3, 4, 5, 6]. Particularly, the efficient simulation of quantum dynamics can provide insightful information about the nature of physical phenomena at the microscopic scale [7, 8, 9, 10, 11]. However, the practical utility of quantum devices is currently constrained by limitations in scale and the effects of noise [12, 13, 14, 15]. While the size of available quantum computers is steadily growing [16], most publicly available devices are still very limited in size. In order to extend the capabilities of Noisy Intermediate-Scale Quantum (NISQ) devices [17], several schemes have been proposed to partition large systems into small clusters that can be solved individually on smaller quantum hardware [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. To combine the solutions and recover the entanglement between the subsystems, classical resources are usually employed. Hence, ultimately, these hybrid quantum-classical computing approaches allow for quantum simulations on a larger scale.
Developing strategies for efficiently partitioning quantum computations is particularly timely, as one of the focuses of the next generation of quantum processors lies in connecting
multiple medium-size quantum chips with fast classical communication, allowing for parallelization of quantum simulations with classical cross-talk [16]. Moreover, the idea of splitting a quantum system into subsystems can also be motivated by the underlying physical or chemical processes. Several interesting physical systems naturally allow for partitioning into weakly-entangled subsystems such as ground and low-energy eigenstates of local lattice Hamiltonians [31, 32, 33] and molecules [34], as well as quantum impurities immersed in a bath [35, 36].
Two prominent hybrid quantum-classical schemes that combine multiple quantum circuits using classical post-processing are entanglement forging [27, 28, 29] and circuit knitting [18, 19, 20, 21, 22]. Entanglement forging relies on the fact that a bipartite quantum state can always be written in the Schmidt decomposition. This enables a classical computer to combine the states of two systems implemented on separate quantum devices. If the two systems are weakly entangled, a small number of Schmidt coefficients suffices for a good approximation of the full solution. Crucially, entanglement forging is limited to two subsystems, as the Schmidt decomposition cannot be applied to general multipartite states. Circuit knitting, on the other hand, employs quasi-probability distributions to cut gates that span across different systems into locally realizable quantum channels. This allows to arbitrarily cut a quantum circuit into multiple subsystems. However, this technique imposes a sampling overhead that scales exponentially in the number of gates cut.
In this work, we propose a method that splits a quantum circuit ansatz into multiple subsystems using circuit knitting while keeping the sampling overhead controlled. This is achieved by imposing a constraint on the circuit parameters during the optimization of the variational quantum circuit.
We employ this method to simulate the dynamics of quantum systems using the projected variational quantum dynamics (PVQD) algorithm [37]. An application of variational quantum-classical hybrid schemes to dynamics is largely missing in the literature. The task is non-trivial, as evolving a parameterized quantum state in time either requires measuring (complex) matrix elements of the geometric tensor [38, 39, 40, 41] or fidelities between quantum states [37, 42]. This poses a challenge to entanglement forging, where the ansatz is given by a superposition of quantum circuits. There, measuring overlaps is expensive and usually requires non-local circuits such as Hadamard-tests [43]. Instead, in the framework of circuit knitting, fidelities can be straightforwardly computed using, for example, the compute-uncompute method [44] without introducing any ancilla qubits or long-ranged gates.
We test our method on spin systems in a transverse field Ising model, where we weakly couple multiple blocks of strongly correlated spins. We show that with a realistic sampling overhead, we can significantly improve the accuracy of the simulation compared to a pure block product approximation, which does not consider any entanglement between different blocks. Furthermore, the trade-off between the sampling overhead and the accuracy of the variational state can be tuned in a controlled way via a single hyperparameter of the optimization. Finally, we demonstrate that our scheme can also reduce the required circuit depth when simulating models containing long-range interactions.
The structure of this paper is as follows: In Section 2, we explain how we use PVQD and circuit knitting techniques to evolve a quantum circuit ansatz in time while keeping the sampling overhead controlled. In Section 3, we test our method on quantum spin systems in a transverse field Ising model for different setups. Finally, in Section 4, we discuss the results and provide an outlook on possible future applications of the method.
## 2 Methods
We consider the dynamics of a quantum system represented by a Hilbert space partitioned into \(N\) individual subsystems (called blocks) \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\cdots\otimes\mathcal{H} _{N}\), where the blocks are simulated either in parallel on separate quantum devices or sequentially on the same machine. While the entanglement between qubits within one block can be arbitrarily high, we impose that the entanglement between blocks is weak, such that it can be recovered efficiently using classical resources.
### Projected variational quantum dynamics
We perform the dynamics of the system governed by a Hamiltonian \(H\) using the projected variational quantum dynamics (PVQD) algorithm [37]. While traditional trotterized time evolution requires circuits that grow in depth with increasing evolution time \(t\), the advantage of variational algorithms such as PVQD is that the circuit depth remains constant over the whole evolution. PVQD evolves the parameters \(\theta\) of a quantum circuit ansatz \(\ket{\psi(\theta)}\) in time, by minimizing the infidelity
\[\theta_{t}=\min_{\theta}\left[1-\big{|}\bra{\psi(\theta)}e^{-i\Delta tH}\ket{ \psi(\theta_{t-1})}\big{|}^{2}\right] \tag{1}\]
at every time step \(t\). This ensures that \(\ket{\psi(\theta_{t})}\) is the state within the manifold defined by the ansatz that is closest to the true time-evolved state \(e^{-i\Delta tH}\ket{\psi(\theta_{t-1})}\). Here, the time evolution unitary \(e^{-i\Delta tH}\) can be expanded into gates using the Trotter-Suzuki decomposition of the first order, as the time step \(\Delta t\) is chosen to be small. Crucially, PVQD only requires measuring fidelities between two quantum states. This can be achieved by sampling from hardware efficient circuits, in contrast to other variational methods such as the time-dependent variational principle (TDVP) [38, 39, 40, 41], where complex-valued state overlaps need to be measured using for example Hadamard-tests.
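To make the loss in Eq. (1) concrete, the following minimal NumPy sketch (an illustration only, not the authors' Julia/Yao.jl implementation) evaluates the infidelity between an exactly time-evolved state and a candidate state for a toy Hamiltonian; in the actual algorithm both states are prepared by parameterized circuits and the overlap is estimated from measurements.

```python
import numpy as np

def evolve(H, psi, dt):
    """Apply exp(-i*dt*H) to psi via the eigendecomposition of the Hermitian matrix H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * dt * evals) * (evecs.conj().T @ psi))

def infidelity(psi_candidate, H, psi_prev, dt):
    """PVQD loss of Eq. (1) for explicit statevectors."""
    target = evolve(H, psi_prev, dt)
    return 1.0 - np.abs(np.vdot(psi_candidate, target)) ** 2

# Toy example (illustrative numbers): a single qubit with H = X and a small time step.
H = np.array([[0.0, 1.0], [1.0, 0.0]])
psi_prev = np.array([1.0, 0.0], dtype=complex)
print(infidelity(psi_prev, H, psi_prev, dt=0.05))   # sin^2(0.05), small for a small step
```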
The fidelity between two quantum circuits is usually measured using the compute-uncompute method [44], in which one measures the probability of retrieving the all-zero bit string after evolving the circuit in Eq. (1). The optimization of this global loss function is known to be prone to cost function-dependent barren plateaus [45], i.e. the gradients vanish exponentially fast in the number of qubits \(n\). It has been shown that for small enough time steps \(\Delta t\), PVQD is not affected by this problem as the initial guess \(\ket{\psi(\theta_{t-1})}\) has a non-zero overlap with the target state \(e^{-i\Delta tH}\ket{\psi(\theta_{t-1})}\)[37, 46]. In addition, in the following experiments, we further increase the variance of the gradient by measuring a local observable with the same maximum as the global fidelity. The observable is defined as averaging over the local \(\ket{0}\bra{0}\) projectors
\[\mathcal{O}_{\text{loc}}=\frac{1}{n}\sum_{k=1}^{n}\mathbbm{1}^{\otimes k-1} \otimes\ket{0}\bra{0}\otimes\mathbbm{1}^{\otimes n-k}. \tag{2}\]
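For illustration, a short NumPy sketch (not part of the paper's code) builds the local projector observable of Eq. (2) as a dense matrix and evaluates it on explicit statevectors:

```python
import numpy as np

def local_projector_observable(n):
    """O_loc = (1/n) * sum_k  I^(k-1) (x) |0><0| (x) I^(n-k)  as a dense 2^n x 2^n matrix."""
    proj0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
    eye = np.eye(2)
    O = np.zeros((2**n, 2**n))
    for k in range(n):
        term = np.array([[1.0]])
        for q in range(n):
            term = np.kron(term, proj0 if q == k else eye)
        O += term
    return O / n

# The observable shares its maximum with the global fidelity: it is 1 on |0...0> and 0 on |1...1>.
n = 3
O = local_projector_observable(n)
psi_all0 = np.zeros(2**n); psi_all0[0] = 1.0
psi_all1 = np.zeros(2**n); psi_all1[-1] = 1.0
print(psi_all0 @ O @ psi_all0, psi_all1 @ O @ psi_all1)   # ~1.0 and 0.0
```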
### Circuit knitting
Performing any measurements on the variational state defined on the composite Hilbert space \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\cdots\otimes \mathcal{H}_{N}\) requires running circuits spanning across all blocks. To realize measurements on circuits of smaller sizes, we utilize circuit knitting techniques [18, 19, 20, 21, 22] to cut cross-block gates and recover the entanglement using additional circuit evaluations and classical post-processing. Circuit knitting allows decomposing a global quantum channel \(\mathcal{U}\) acting on a quantum state \(\rho\) into locally realizable quantum channels \(\mathcal{E}_{k}^{i}\) according
to a quasi-probability decomposition (QPD)
\[\mathcal{U}[\rho]=\sum_{k=1}^{K}\alpha_{k}\mathcal{E}_{k}^{1}\otimes\mathcal{E}_{ k}^{2}\otimes\cdots\otimes\mathcal{E}_{k}^{N}[\rho], \tag{3}\]
for \(K\in\mathbb{N}\) and \(\alpha_{k}\in\mathbb{R}\). In our specific case, \(\mathcal{U}\) will be the channel defined by a unitary gate acting on qubits of separate blocks \(\mathcal{H}_{i}\otimes\mathcal{H}_{j}\), \(\rho=\left|\psi\right\rangle\left\langle\psi\right|\) is the pure state defined by the circuit prior to applying this gate, and \(\{\mathcal{E}_{k}^{i},\mathcal{E}_{k}^{j}\}\) are the corresponding channels that act locally only within each subsystem \(\mathcal{H}_{i}\) or \(\mathcal{H}_{j}\).
In practice, for every circuit evaluation, the global channel \(\mathcal{U}\) is replaced by some locally realizable channel \(\mathcal{E}_{k}=\mathcal{E}_{k}^{1}\otimes\cdots\otimes\mathcal{E}_{k}^{N}\) sampled according to the probability distribution defined by \(p_{k}\propto\left|\alpha_{k}\right|\). While the QPD provides an unbiased estimator of the true expectation value of the measurement, the sampling cost required to achieve the same precision increases. Crucially, some of the \(\alpha_{k}\) can be negative, which leads to a sampling overhead of
\[\omega(\mathcal{U},\{\mathcal{E}_{k}^{i}\}_{k,i})=\left(\sum_{k}\left|\alpha _{k}\right|\right)^{2}. \tag{4}\]
This overhead is multiplicative (see Footnote 1) and, hence, scales exponentially in the number of gates that are cut.
Footnote 1: In general, the overhead is sub-multiplicative as, for the combination of multiple gates, a more efficient QPD could be found. However, a general theory for this is yet missing, and the decomposition would have to be found on a case-to-case basis. For this reason, we restrict the following analysis to the worst-case scenario of a multiplicative overhead.
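The classical bookkeeping behind Eqs. (3)-(4) is simple; the sketch below (illustrative only, with hypothetical coefficients) derives the sampling distribution and the overhead from a given set of quasi-probability coefficients. Constructing the local channels themselves is gate-specific and not shown.

```python
import numpy as np

def qpd_sampling(alpha):
    """Sampling distribution p_k proportional to |alpha_k| and overhead (sum_k |alpha_k|)^2 of a QPD."""
    weights = np.abs(np.asarray(alpha, dtype=float))
    p = weights / weights.sum()       # probability of realizing the local channel E_k
    overhead = weights.sum() ** 2     # Eq. (4)
    return p, overhead

# Hypothetical coefficients, chosen only to illustrate the effect of a negative alpha_k:
p, omega = qpd_sampling([0.9, 0.3, -0.2])
print(p, omega)    # omega = (0.9 + 0.3 + 0.2)^2 = 1.96
```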
### Overhead constrained PVQD
The circuit that needs to be run to evaluate the fidelity in Eq. (1) is composed of gates arising from the Trotter step unitary \(e^{-i\Delta tH}\) and gates in the variational ansatz state \(\left|\psi(\theta)\right\rangle=U(\theta)\left|0\right\rangle\) that potentially span across multiple blocks and thus have to be cut (see Fig. 1). For the Trotter gates, we restrict the analysis to 2-local Hamiltonians, such that the multiqubit gates appearing in the Trotter expansion are given by two-qubit rotations defined as \(e^{-i\Delta tJ_{ij}\sigma_{i}\otimes\sigma_{j}}\), for Pauli operators \(\sigma_{i},\sigma_{j}\in\{X,Y,Z\}\) and coupling coefficients \(J_{ij}\in\mathbb{R}\). The sampling overhead imposed by cutting those gates with the optimal decomposition is given as \(\omega_{J_{ij}}=\left(1+2|\sin(2\Delta tJ_{ij})|\right)^{2}\)[21, 22]. For the time evolution to be accurate, we require \(\Delta t\) to be small. Moreover, we consider only cases in which the coupling \(J_{ij}\) between qubits of different blocks is weak. Hence, we can assume \(\Delta tJ_{ij}\ll 1\), and thus, \(\omega_{J_{ij}}\) is close to 1. If the Trotter step requires a total of \(L\) such gates to be cut, the overhead scales as \(\omega_{\Delta t}=\omega_{J}^{L}\), where for simplicity, we take \(J_{ij}=J\ \forall ij\). While this scales exponentially in the number of gates, the base is small, and for a finite number of blocks, the overhead remains manageable.
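A small sketch of this scaling (illustrative values, not the paper's production code):

```python
import numpy as np

def trotter_cut_overhead(dt, J, L):
    """Overhead for cutting L two-qubit Trotter rotations, omega_J = (1 + 2|sin(2*dt*J)|)^2 each."""
    omega_J = (1.0 + 2.0 * np.abs(np.sin(2.0 * dt * J))) ** 2
    return omega_J ** L

# Example: weak inter-block coupling J = 0.25 and a small step dt = 0.05 with L = 4 cut gates.
print(trotter_cut_overhead(dt=0.05, J=0.25, L=4))   # ~1.5, modest since dt*J << 1
```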
For the cross-block gates introduced by the variational state \(U(\theta)\), the analysis is less straightforward, as generally, the ansatz can be constructed from an arbitrary gate set. Many commonly used ansatzes consist of parameterized single-qubit rotations followed by CNOT gates that impact the entanglement. Cutting CNOT gates, however, comes at a
fixed cost of \(\omega_{\text{CNOT}}=9\) (see Footnote 3). An alternative class of two-qubit gates that allow more control over the sampling overhead when being cut are parameterized two-qubit rotations such as those appearing in the Trotter decomposition. If one cuts \(M\) two-qubit rotations with angles \(\varphi_{1},\ldots,\varphi_{M}\), the multiplicative sampling overhead needed to evaluate the PVQD loss function with the circuit knitting scheme is given as
Footnote 3: The cost of cutting multiple CNOT gates can be reduced using classical communication [21, 22]. For simplicity, we here assume a setting with only local operations and no classical communication. This also allows for the circuits to be run sequentially on the same device instead of parallelly on multiple devices.
\[\omega(\varphi)=\omega_{\Delta t}\cdot\left(\prod_{i=1}^{M}\left(1+2|\sin( \varphi_{i})|\right)^{2}\right)^{2}, \tag{5}\]
where \(\omega_{\Delta t}\) is the overhead due to cutting the Trotter step, and the additional square appears due to doubling the circuit (see Fig. 1). The total overhead can become extremely large if the angles \(\varphi_{i}\) are unbound. A way to circumvent this issue is to employ a block product ansatz that does not introduce any entangling gates between different blocks. This ansatz is shown in Fig. 1 on the left and labeled as BPA* to distinguish it from a pure block product approximation (BPA) where also the entangling Trotter gates are omitted. While the BPA* comes at a minimal sampling overhead, it is not able to capture any entanglement between different blocks. Even for weakly entangled systems, the ansatz is thus expected to fail after evolving the system for a long enough time. Hence, it becomes necessary to add parameterized entangling gates between different blocks of the ansatz state (see right panel of Fig. 1). We refer to this type of ansatz as circuit knitting approximation (CKA).
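As a quick illustration of Eq. (5) (a sketch with arbitrary example angles), the total overhead can be evaluated directly from the angles of the cut gates:

```python
import numpy as np

def knitting_overhead(phi, omega_dt=1.0):
    """Eq. (5): omega_dt times the squared product of per-gate overheads (1 + 2|sin(phi_i)|)^2."""
    per_gate = (1.0 + 2.0 * np.abs(np.sin(np.asarray(phi, dtype=float)))) ** 2
    return omega_dt * np.prod(per_gate) ** 2

print(knitting_overhead([0.1, 0.05, 0.2]))   # ~12: small angles keep the overhead moderate
print(knitting_overhead([1.5, 1.5, 1.5]))    # ~5e5: unconstrained angles quickly become prohibitive
```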
In order to keep the overhead \(\omega\) controllable throughout the optimization of the CKA, we add a constraint to the optimization of Eq. (1) such that \(\omega\) is always bound by a
Figure 1: Circuits used to measure the (local) fidelity between the time evolved state \(e^{-i\Delta tH}U(\theta_{t-1},\varphi_{t-1})\ket{0}=e^{-i\Delta tH}\ket{\psi( \theta_{t-1},\varphi_{t-1})}\) and the ansatz \(\ket{\psi(\theta,\varphi)}\) required for a PVQD optimization step. We cut the circuits into distinct blocks (indicated by the red dashed lines), which can each be simulated on a separate quantum device. In the experiments considered in this work, the single-qubit gates are realized using \(R_{X}\) rotations, while the two-qubit gates correspond to \(R_{ZZ}\) rotations. Gray-shaded gates are fixed by the parameters of the last time step \(t\), while the parameters of the blue-shaded gates are varied to optimize Eq. (6). The trotterized time evolution unitaries are colored yellow. **Left panel:** Block product ansatz (BPA*) where the only entangling gates between different blocks appear in the Trotter step. (In a pure block product approximation (BPA) all inter-block gates are omitted, including the ones arising in the time evolution step.) **Right panel:** Circuit knitting ansatz (CKA). Here, additional entangling gates between the different blocks are introduced into the ansatz. For clarity, the parameters of these dashed, light-colored gates are labeled \(\varphi\), whereas \(\theta\) denotes the angles of all other gates that do need to be cut.
threshold \(\tau>1\)
\[\theta_{t},\varphi_{t}=\min_{\theta,\varphi}\left[1-\big{|}\left\langle\psi( \theta,\varphi)\right|e^{-i\Delta tH}\left|\psi(\theta_{t-1},\varphi_{t-1}) \right\rangle\big{|}^{2}\right] \tag{6}\] \[\text{s.t.}\,\omega(\varphi)\leq\tau,\]
where we denote by \(\theta\) the parameters of gates acting within a single block and by \(\varphi\) the parameters of gates that are being cut, i.e. two-qubit gates stretching across two blocks. We satisfy the constraint throughout the optimization by projecting the parameters \(\varphi\) back into the allowed subspace defined by \(\omega(\varphi)\leq\tau\) (see Fig. 2). This projection is performed after every PVQD update step, which would result in circuits exceeding the predefined overhead threshold. The steps of the algorithm are outlined in Algorithm 1, and an in-depth description is provided in Appendix A.
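A minimal stand-in for the projection step follows (the paper's actual procedure is detailed in its Appendix A and Algorithm 1; the rescaling-by-bisection used here is only an assumption for illustration, valid for cut-gate angles in \([0,\pi/2]\) where the overhead is monotone in a common scale factor):

```python
import numpy as np

def overhead(phi, omega_dt=1.0):
    """Eq. (5) for the cut-gate angles phi."""
    per_gate = (1.0 + 2.0 * np.abs(np.sin(np.asarray(phi, dtype=float)))) ** 2
    return omega_dt * np.prod(per_gate) ** 2

def project_to_threshold(phi, tau, omega_dt=1.0, iters=60):
    """If omega(phi) > tau, shrink all cut-gate angles by a common factor found by bisection."""
    phi = np.asarray(phi, dtype=float)
    if overhead(phi, omega_dt) <= tau:
        return phi
    lo, hi = 0.0, 1.0   # lo always satisfies the constraint (assuming omega_dt <= tau)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if overhead(mid * phi, omega_dt) <= tau:
            lo = mid
        else:
            hi = mid
    return lo * phi

phi = np.array([0.8, 0.6, 0.7])
phi_proj = project_to_threshold(phi, tau=10.0)
print(overhead(phi), overhead(phi_proj))   # the second value respects the threshold tau = 10
```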
## 3 Results
As an example application of our method, we consider the transverse field Ising model (TFIM) spin system
\[H=\sum_{\langle ij\rangle}J_{ij}Z_{i}Z_{j}+\sum_{i}X_{i}, \tag{7}\]
where we assume that the coupling between neighboring spins \(J_{ij}\) is large for \(i,j\) in the same block and small for \(i,j\) in different blocks. We compare our circuit knitting ansatz (CKA) with different thresholds \(\tau\) to a pure block product approximation (BPA) and to the BPA*, where the full Trotter step, including all cross-block interactions, is implemented.
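For reference, a dense construction of the Hamiltonian in Eq. (7) for a small chain (a NumPy sketch mirroring the block structure of Fig. 3 (a); it is only meant for exact checks, not for the circuit simulations of the paper):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on(n, ops):
    """Tensor product placing the single-qubit operators {site: op} on an n-qubit register."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

def tfim_hamiltonian(n, bonds):
    """H = sum_{<ij>} J_ij Z_i Z_j + sum_i X_i for a list of (i, j, J) bonds."""
    H = sum(J * op_on(n, {i: Z, j: Z}) for i, j, J in bonds)
    H = H + sum(op_on(n, {i: X}) for i in range(n))
    return H

# Three blocks of two spins, intra-block J1 = 1 and inter-block J2 = 1/4 as in the spin-chain experiment.
bonds = [(0, 1, 1.0), (2, 3, 1.0), (4, 5, 1.0), (1, 2, 0.25), (3, 4, 0.25)]
print(tfim_hamiltonian(6, bonds).shape)   # (64, 64)
```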
### Spin chain
In the first experiment, we consider a spin chain of \(N=3\) blocks of 2 spins each, as shown in Fig. 3 (a). The coupling within one block is chosen to be at the critical point \(J_{ij}=J_{1}=1\), whereas the coupling between two blocks is set to \(J_{ij}=J_{2}=1/4\). The ansatz follows the structure of the Trotter decomposition of \(e^{-i\Delta tH}\) with 4 repetitions of alternating layers of \(R_{X}\) and \(R_{ZZ}\) rotations (see Fig. 1).
Figure 2: Solving the constrained optimization problem defined in Eq. (6) to evolve the ansatz state by one-time increment. The optimization starts at the parameters of the last time step \(t-1\). In every iteration, the parameters are first updated to maximize the fidelity with respect to the true time evolved state using an ADAM [47] update step (blue arrow). After the update, the multiplicative sampling overhead \(\omega(\varphi)\) is computed according to Eq. (5) and compared against the threshold \(\tau\). In case \(\omega(\varphi)>\tau\), the parameters are projected onto the manifold of \(\omega(\varphi)\leq\tau\) (orange dashed arrow). This procedure is repeated until the parameters converge; the final point is labeled as \(\varphi_{t,\tau}\). In contrast, the path on the top represents the usual, unconstrained optimization with no predefined threshold that converges to different parameters \(\varphi_{t,\infty}\) which, however, incur an uncontrolled sampling overhead.
In Fig. 4 (a) we plot the fidelity of time-evolved states obtained through PVQD state vector simulations with respect to the exact solution. We observe that the pure block product ansatz optimized with block product Trotter gates (BPA) has the poorest performance as the fidelity quickly drops and reaches a value of only \(0.87\) at time \(t=2\). This behavior is, however, expected since neither the ansatz nor the optimization takes into account any interactions or entanglement between different blocks of the systems. Adding (and cutting) the Trotter gates involving cross-block interactions while keeping the same block product ansatz (BPA*) slightly increases the fidelity. Finally, we expand the ansatz itself by adding parameterized gates between the blocks which are cut (CKA), and employ the overhead-constrained PVQD algorithm for the evolution. We are able to control the fidelity by tuning the threshold hyper-parameter \(\tau\) that constrains the allowed sampling overhead for the ansatz. Ultimately, our optimization scheme gives us the means to naturally interpolate between the results obtained with a block product ansatz which incurs only a minimal sampling overhead, and the unconstrained PVQD evolved state, which gives rise to an unbounded overhead.
In Fig. 4 (b), we show the evolution of a correlated observable acting on all three blocks. The behavior of long-ranged observables is typically more difficult to capture in hardware
Figure 4: Simulating the dynamics of a TFIM spin chain consisting of 3 blocks with 2 spins each. **(a)** Fidelity of our time evolved ansatz with respect to the exact solution. **(b)** Expectation value of the observable acting as \(Z\) on spins 1 and 5 and as \(X\) on spin 3. We compare an ansatz involving parameterized two-qubit gates between blocks that are cut (CKA) while keeping the sampling overhead controlled under a threshold \(\tau\) and a block-product state ansatz without entangling gates between the blocks (BPA). In the case of the latter, we further differentiate between optimizing with a block-product Trotter gate (i.e, no inter-block interactions) or the full Trotter gate, including the exact inter-block interactions (BPA*). We find that CKA can reach higher fidelities at longer times while the exact accuracy can be controlled by changing the overhead threshold.
Figure 3: Spins in a transverse field Ising model. In (a) and (b), the system is split into blocks, where the coupling between two blocks \(J_{2}\) is much smaller than the coupling within a block \(J_{1}\). In (c), the strong coupling corresponds to nearest-neighbor interactions, whereas the next-nearest-neighbor interactions are weak.
efficient variational simulations, as their support grows faster compared to purely local observables. The BPA(*) is expected to fail in representing correlations spanning across different blocks, as becomes evident at times \(t>1\). In contrast, the CKA with the particular thresholds chosen here can capture the inter-block correlations accurately also for long times.
To explicitly see how the fidelity obtained with the overhead-constrained optimization increases with a higher threshold, we plot the mean infidelity
\[I=\frac{1}{T}\sum_{t=1}^{T}\left[1-|\left\langle\psi(\theta_{t},\varphi_{t})| \psi_{t}\right\rangle|^{2}\right] \tag{8}\]
of the simulations with respect to the exact solution \(|\psi_{t}\rangle\) in Fig. 5 (a). A larger overhead threshold improves the expressibility of the ansatz as the inter-block gate parameters are less constrained. As a result, the mean infidelity decreases. In order to fully quantify the computational cost required to achieve a certain fidelity, we further include a shot-based simulation, taking into consideration finite sampling noise. Fig. 5 (b) shows how the mean infidelity decreases as the total number of shots is increased. For every point, 10 simulations were performed with a fixed number of shots \(R\) per circuit evaluation. The total number of shots for every run is calculated as
\[R_{\mathrm{tot}}=R\cdot n_{\mathrm{iter}}\cdot 2\,n_{\mathrm{params}}\cdot \sum_{t=1}^{T}\,\omega(\varphi_{t}), \tag{9}\]
where \(n_{\mathrm{iter}}=200\) is the number of iterations per time step, \(2\cdot n_{\mathrm{params}}\) the cost of calculating the gradient using the parameter shift rule for \(n_{\mathrm{params}}\) parameters, and \(T=40\) the number of time steps in the simulation. The overhead is set to 1 for the BPA. While Fig. 5 (a) suggests that increasing the threshold improves the expressibility of the ansatz, leading to
Figure 5: Mean infidelity over the time evolution. In **(a)** as a function of the threshold \(\tau\) constraining the sampling overhead in the statevector simulations presented in Fig. 4. The black points correspond to additional simulations performed (for thresholds 5, 10, 25, 50, 250, 500, 5000) to interpolate between the points shown in the other plots. In **(b)** as a function of the total shots required for the simulation, with dashed lines indicating the result achieved by the statevector simulation. Note that here we do not plot the CKA without a threshold, as the first point of this method would start at \(10^{17}\) total shots and is, thus, unfeasible in reality.
decreasing infidelities, Fig. 5 (b) demonstrates that shot noise prevents the simulation from reaching the ideal infidelity. Given a fixed budget of total shots \(R_{\text{tot}}\), choosing the optimal threshold \(\tau\) and the number of shots \(R\) per circuit evaluation is a nontrivial constrained optimization problem. In Fig. 5 (b), this balance is illustrated as there is a regime around \(10^{11}\) total shots where having a lower threshold (but larger \(R\)) results in lower infidelity than a high threshold (but smaller \(R\)). On the other hand, for a higher budget of around \(10^{12}\) total shots, choosing the larger threshold is advantageous.
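The shot-budget bookkeeping of Eq. (9) is summarized by the following one-liner (a sketch; \(R\), \(n_{\mathrm{params}}\), and the per-step overheads below are placeholders, while \(n_{\mathrm{iter}}=200\) and \(T=40\) follow the values quoted above):

```python
def total_shots(R, n_iter, n_params, overheads):
    """Eq. (9): R * n_iter * 2*n_params * sum_t omega(phi_t)."""
    return R * n_iter * 2 * n_params * sum(overheads)

# Placeholder example: 40 time steps with a constant overhead of 1.5 per step.
print(total_shots(R=1000, n_iter=200, n_params=40, overheads=[1.5] * 40))
```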
We further investigate how entanglement between different blocks can be captured with the CKA and how the entanglement correlates to the sampling overhead. We generally expect that imposing a threshold on the sampling overhead required for the circuit knitting scheme limits the entanglement that can arise between the subsystems. In order to quantify the entanglement in our ansatz state, we split the system shown in Fig. 3 (a) into a bipartite system. We call \(A\) the subsystem containing the center block and \(B\) the subsystem containing the outer two blocks. We write the pure state \(\ket{\psi}=U(\theta,\varphi)\ket{0}\) defined by the quantum circuit in its Schmidt decomposition
\[\ket{\psi}=\sum_{k=1}^{\dim(A)}\lambda_{k}\ket{a_{k}}\ket{b_{k}}, \tag{10}\]
where \(\lambda_{k}\geq 0\) are the Schmidt coefficients, \(\ket{a_{k}},\ket{b_{k}}\) the Schmidt basis states in systems \(A\) and \(B\), respectively. From this decomposition, the von Neumann entanglement entropy can be easily computed as [48]
\[E(\ket{\psi})=-\sum_{k=1}^{\dim(A)}\lambda_{k}^{2}\log\Bigl{(}\lambda_{k}^{2} \Bigr{)}. \tag{11}\]
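Numerically, the Schmidt coefficients and the entropy of Eqs. (10)-(11) follow from a singular value decomposition of the reshaped statevector; the sketch below assumes the qubits of subsystem \(A\) have been permuted to the front (for the bipartition used here, the center block must first be made contiguous in the ordering):

```python
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    """Von Neumann entropy of subsystem A for a pure state ordered as |A> (x) |B>."""
    M = np.asarray(psi).reshape(dim_A, dim_B)     # amplitude matrix psi_{ab}
    lam = np.linalg.svd(M, compute_uv=False)      # Schmidt coefficients lambda_k >= 0
    p = lam**2
    p = p[p > 1e-12]                              # discard numerical zeros before taking the log
    return float(-np.sum(p * np.log(p)))

# Sanity check: a Bell pair shared between A and B has entropy log(2) ~ 0.693.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(entanglement_entropy(bell, 2, 2))
```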
In Fig. 6 (a), we show how the entanglement entropy grows in time for different ansatzes. As expected, the BPA(*) captures no entanglement between the distinct blocks, while the CKA without a threshold recovers the full entanglement of the exact solution. For the CKA with \(\tau=100,1000\), we observe that the entanglement entropy eventually starts deviating and stays below its exact value as expected. To understand whether the errors in the entanglement entropy arise due to the constrained optimization problem, we also show
Figure 6: **(a)** The entanglement entropy between the center block and the outer two blocks is calculated as a function of time for different ansatzes. The overhead threshold in the CKA determines the time window for which the entanglement growth can be captured accurately. In contrast, the BPA(*) is not able to account for any inter-block entanglement by construction. **(b)** Required sampling overhead versus simulation time for different thresholds. We indicate the exact time at which the overhead reaches the threshold as vertical dashed lines in (a).
how the sampling overhead increases over time and, if applicable, caps at the threshold (see Fig. 6 (b)). Interestingly, the entanglement entropy is growing even after the sampling overhead saturates (indicated by the vertical lines) and does not plateau to a specific value. An explanation for this behavior is provided in Appendix B.
### Two-leg ladder
Next, we demonstrate that our scheme can also be applied to lattice geometries beyond the simple 1d spin chain. To that end, we consider the Ising model on a two-dimensional extension of the chain as shown in Fig. 3 (b). Each of the two blocks is comprised of 4 spins and coupled to the other block via weak nearest-neighbor interactions. To simulate the dynamics of this system, we choose an ansatz layout reflecting the corresponding trotterized time evolution operator. Specifically, we use alternating layers of single-qubit \(R_{X}\) rotations and \(R_{ZZ}\) rotations, repeated 5 times. In Fig. 7, we show how the fidelity of the different ansatzes with respect to the exact solution evolves in time. Additionally, we plot the expectation value of the observable that acts as \(X\) on the four outer qubits (the two qubits on the left of the first block and the two qubits on the right of the second block). While the block product ansatz (BPA*) initially tracks the qualitative behavior of the dynamics, it fails in the second half of the simulation period. Here, adding the cross-block entangling unitaries to the ansatz is necessary to accurately approximate the time-evolved state. The CKA with a threshold of \(\tau=100\) captures the qualitative dynamics of the observable plotted in Fig. 7 until \(t\approx 1.5\). In order to accurately simulate the dynamics until \(t=2\), the threshold has to be increased to \(\tau=1000\).
### Reducing circuit depth
Many state-of-the-art quantum computing platforms such as those based on superconducting qubits feature only a limited qubit connectivity. Gates acting on qubits that are not adjacent in the device layout have to be implemented via additional two-qubit SWAP operations. However, these extra gates increase the amount of noise in a computation. In the era of NISQ devices, it is therefore crucial to find ways of reducing the circuit depth while keeping the simulations as accurate as possible. To that end, circuit knitting can be employed to cut long-range acting gates.
Here, we demonstrate the use of circuit knitting to effectively reduce the circuit depth
Figure 7: Simulating the dynamics of the two-leg ladder TFIM. **(a)** Fidelity of the simulation with respect to the exact solution for the BPA, BPA* and CKA with different thresholds. **(b)** Expected value of the observable acting as \(X\) on the four outer qubits (cf. in Fig. 3 (b)). The accuracy of the simulation improves as the threshold is increased.
in the variational simulation of the dynamics of the J1-J2 transverse field Ising model depicted in Fig. 3 (c) and defined by
\[H=J_{1}\sum_{\langle ij\rangle}Z_{i}Z_{j}+J_{2}\sum_{\langle\langle ij\rangle \rangle}Z_{i}Z_{j}+\sum_{i}X_{i}, \tag{12}\]
where we again choose \(J_{1}=1\), \(J_{2}=1/4\). \(\langle ij\rangle\) indicates nearest-neighbors, whereas \(\langle\langle ij\rangle\rangle\) corresponds to next-nearest-neighbors. Instead of cutting the system into blocks, we here cut the long-range gates induced by the next-nearest-neighbor interactions. We compare the PVQD dynamics for similar ansatzes as in the previous experiments. Specifically, we consider an ansatz that is composed only of hardware-efficient gates, i.e, gates acting only on nearest-neighbor spins/qubits. For consistency, we refer to this ansatz as BPA(*), even though we are not cutting the system into blocks in this case. In contrast, the CKA ansatz reflects the full interaction graph of the model and contains additional long-range entangling gates that are cut using circuit knitting.
The results of our simulations are provided in Fig. 8, where we show both the time dependence of the fidelity to the exact state and of an observable acting on two non-adjacent spins. In the BPA, all gates acting on next-nearest-neighbors are omitted from the circuit, including the Trotter step. As a result, the fidelity quickly deteriorates as we effectively evolve with a slightly different model where \(J_{2}=0\). In contrast, for BPA*, the finite next-nearest-neighbor interactions are included in the Trotter step while the ansatz is kept hardware-efficient. In this case, the fidelities stay high throughout the time evolution interval. Hence, the hardware-efficient ansatz comprised of 4 repeated layers is already able to accurately represent the long-range correlations and entanglement generated by the next-nearest-neighbor interactions of the model. However, we can improve on these fidelities even further by using the CKA with a comparatively small overhead threshold of \(\tau=10\).
In order to quantify the depth reduction enabled by cutting long-ranged gates in this example, we count the number of SWAP gates required to run PVQD on this system without cutting any gates. Given a quantum device where the connectivity coincides with this geometry (i.e. nearest-neighbor spins/qubits are connected), every Trotter layer would
Figure 8: Simulating the dynamics of J1-J2 Ising model sketched in Fig. 3 (c). We show **(a)** the fidelity of the simulation with respect to the exact solution as well as **(b)** the expectation value of the observable acting as \(X\) on the upper left qubit and as \(Z\) on the lower right qubit. BPA here indicates a hardware efficient circuit, whereas CKA includes next-nearest-neighbor 2-qubit gates that are cut using circuit knitting while the overhead is constrained by a threshold \(\tau\). The main improvement over the BPA is given by BPA*, where next-nearest-neighbor interactions are considered in the Trotter decomposition of \(e^{-i\Delta tH}\).
require 2 SWAP gates. For the ansatz with 4 repeated layers, this results in 2+2\(\cdot\)4\(\cdot\)2 = 18 SWAP gates that can be saved using circuit knitting (2 for the Trotter step, 4\(\cdot\)2 for the ansatz, and the extra factor of 2 comes from doubling the circuit in the compute-uncompute method).
Overall, in this example, circuit knitting enables us to trade a larger circuit depth for an increased sampling overhead.
## 4 Conclusion
Circuit knitting allows for the simulation of larger quantum systems using small quantum devices. While in general, cutting circuits can be expensive due to the sampling overhead, we show that this overhead can be controlled by constraining the parameters in the variational circuit optimization. We are thus able to achieve the optimal fidelity given a fixed budget of samples. A change in the threshold hyper-parameter \(\tau\) leads to a trade-off between the accuracy of the simulation and the sampling overhead. The optimal threshold therefore depends on the quantum computing resources available, the desired accuracy, and the total evolution time.
In the examples considered in this work, we show that with a realistic sampling overhead, the accuracy of the dynamics simulations can be drastically improved compared to a simple block product ansatz. Classical resources can thus effectively be used to recover correlations between the different subsystems. Our framework opens the door to simulating the dynamics of quantum systems with a large number of qubits that are otherwise not reachable with current hardware. Possible systems of interest are, for example, quantum impurities immersed in a bath [35, 36] or low-energy eigenstates of local lattice Hamiltonians and molecules [31, 32, 33, 34]. Furthermore, we show that our technique can also be used to reduce the circuit depth if, instead of cutting the system into blocks, we cut long-ranged but weak interactions. Here, we observed that with a controlled sampling overhead, the dynamics can be accurately simulated with hardware-efficient circuits.
A natural extension of this work would be a hardware experiment with the overhead-constrained PVQD. In this regard, it will be interesting to see whether current error mitigation techniques are powerful enough to suppress the hardware noise to a level where the (local) fidelities can be measured to sufficient precision for the optimization to be successful.
Moreover, the constrained optimization presented in this work is not limited to PVQD but can be extended to arbitrary loss functions. As such, it could, for example, be applied to simulate ground states using circuit knitting and VQE [49, 50] while keeping the sampling overhead controlled.
Finally, we remark that the calculations of the sampling overhead throughout this work are based on the worst-case scenario, where the total overhead is the product of the overheads required to cut individual gates. If a general way to find the optimal decomposition of a global circuit into locally realizable quantum channels was discovered, this could further reduce the sampling overhead and allow our method to capture even more entanglement given a fixed threshold.
**Code availability.** Simulations presented in this work were performed in Julia [51] using the Yao.jl framework [52] and are available on Github [53].

**Acknowledgments.** We thank Stefano Barison, Julien Gacon, and David Sutter for fruitful discussions on hybrid algorithms, optimization techniques, and circuit knitting. This research was supported by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602).
|
2309.14195 | Confinement Slingshot and Gravitational Waves | In this paper, we introduce and numerically simulate a quantum field
theoretic phenomenon called the gauge ``slingshot" effect and study its
production of gravitational waves. The effect occurs when a source, such as a
magnetic monopole or a quark, crosses the boundary between the Coulomb and
confining phases. The corresponding gauge field of the source, either electric
or magnetic, gets confined into a flux tube stretching in the form of a string
(cosmic or a QCD type) that attaches the source to the domain wall separating
the two phases. The string tension accelerates the source towards the wall as
sort of a slingshot. The slingshot phenomenon is also exhibited by various
sources of other co-dimensionality, such as cosmic strings confined by domain
walls or vortices confined by $Z_2$ strings. Apart from the field-theoretic
value, the slingshot effect has important cosmological implications, as it
provides a distinct source for gravitational waves. The effect is expected to
be generic in various extensions of the standard model such as grand
unification. | Maximilian Bachmaier, Gia Dvali, Juan Sebastián Valbuena-Bermúdez, Michael Zantedeschi | 2023-09-25T14:54:27Z | http://arxiv.org/abs/2309.14195v1 | # Confinement Slingshot and Gravitational Waves
###### Abstract
In this paper, we introduce and numerically simulate a quantum field theoretic phenomenon called the gauge "slingshot" effect and study its production of gravitational waves. The effect occurs when a source, such as a magnetic monopole or a quark, crosses the boundary between the Coulomb and confining phases. The corresponding gauge field of the source, either electric or magnetic, gets confined into a flux tube stretching in the form of a string (cosmic or a QCD type) that attaches the source to the domain wall separating the two phases. The string tension accelerates the source towards the wall as sort of a slingshot. The slingshot phenomenon is also exhibited by various sources of other co-dimensionality, such as cosmic strings confined by domain walls or vortices confined by \(Z_{2}\) strings. Apart from the field-theoretic value, the slingshot effect has important cosmological implications, as it provides a distinct source for gravitational waves. The effect is expected to be generic in various extensions of the standard model such as grand unification.
## I Introduction
Understanding the transition between the confining and deconfining regimes of gauge theories remains one of the most fundamental challenges in physics. An enlightening input in this direction can be provided by the study of systems in which the different phases of a gauge theory can coexist in a controllable manner. The early example is provided by the construction given in [1] in which a domain wall (or a vacuum layer) supports a deconfined (Coulomb) phase of a gauge theory that at the same time exhibits the confining behavior in the bulk of space. Such coexistence of phases has some important implications.
In particular, the layer of the deconfined phase localizes a massless \(U(1)\) gauge field in the Coulomb regime. One effect of this localization is that the charges (e.g., quarks), placed in a bulk of the confining vacuum, become attached to the deconfining boundary (wall) by the QCD flux tubes. The string stretches from the quark towards the boundary and opens up there. The flux carried by the string spreads within the deconfined layer in the form of the \(U(1)\) Coulomb field. The system thereby realizes a field-theoretic analog to a \(D\)-brane.
The dual setups, in which the analogous effect is exhibited by the magnetic charges have been constructed in [2; 3; 4]. In these cases, it is the magnetic flux that is confined in flux tubes (cosmic strings) in one of the vacuum domains. The same flux, in the neighboring domain, gets deconfined and spreads in the form of a Coulomb-magnetic field of a magnetic monopole.
In a system with coexisting phases, the interesting question is, what happens when charges (either electric or magnetic) cross the boundary separating the two phases?
In the present paper, we shall study this behavior. We first consider the case of magnetic charges. For this purpose, we construct a prototype \(SU(2)\) gauge theory which admits two types of vacua: one in which \(SU(2)\) is Higgsed down to a \(U(1)\) subgroup, and one in which the \(U(1)\) is further Higgsed. The first vacuum supports free 't Hooft-Polyakov monopoles that are magnetically charged under \(U(1)\). These monopoles are in the Coulomb magnetic phase.
In the second vacuum, the monopoles are confined, i.e., the monopoles and antimonopoles are connected by magnetic flux tubes. These magnetic flux tubes represent Nielsen-Olesen strings [5] of the \(U(1)\) gauge theory (analogous to Abrikosov flux tubes [6] in superconductors).
A domain wall separates the two vacua. In this system, we study a scattering process in which a monopole crosses from the magnetic-Coulomb to the magnetic-confining phase. Due to the conservation of the magnetic charge, the magnetic flux follows the monopole into the confining phase. However, the confinement makes the flux trapped in a string.
The monopole thus becomes attached to the boundary wall by the string. The string opens on the wall, releasing the entire flux into the Coulomb vacuum. For an observer placed in the \(U(1)\) Coulomb vacuum, the end-point of the flux tube carries the entire magnetic charge of the 't Hooft-Polyakov monopole. In this way, the monopole that crosses from the Coulomb into a confining domain leaves its "image" on the boundary. The image is represented by the throat of the same flux tube via which the monopole is attached to the boundary from the opposite side.
One important dynamical question is what happens when the energy in the collision process is much larger
compared to the rest mass of the monopole? A possible outcome one may consider is that the string breaks up by creating monopole-antimonopole pairs. Then, instead of penetrating deeply into the confining domain and stretching a long string, the energy of the collision is released in the form of many monopole-antimonopole pairs connected by short strings, which will soon annihilate into waves. In this case, one could say that effectively the magnetic charge never enters the confining domain.
This (naive) intuition is supported by the study of the annihilation of monopoles connected by a string [7]. As shown there, after coming on top of each other, the pair does not oscillate even once. Instead, it decays into the waves of Higgs and gauge fields. This effect was explained by entropic arguments: the entropy of a monopole-antimonopole pair is much lower than the entropy of waves. Once the monopole and antimonopole come on top of each other, the system loses the memory of its pre-history of the magnetic dipole. After this point, it simply evolves into the highest entropy state, which is given by waves, as opposed to monopoles connected by a long string. In the language of amplitudes, this can be understood as insufficient entropy for compensating a strongly suppressed process of production of highly coherent states [8].
This outcome is characteristic of the phenomenon of defect "erasure", originally discussed in [2] for the interaction of monopoles and domain walls. In [7] it was argued that the same effect must hold for heavy quark-antiquark pairs connected by QCD strings.
As we shall show, in the present case, the situation is very different. The reason is the existence of a net un-erased magnetic charge, in contrast with the cases of monopole-antimonopole [7] and monopole-wall [2]. The net magnetic charge is always point-like. Due to this, the system is aware of its magnetic pre-history at all times. Unlike the erasing antimonopole or an erasing domain wall, in the present case, the magnetic charge neither cancels nor spreads. Correspondingly, the string never breaks apart.
Thus, the outcome is the formation of a monopole attached to the boundary by a long string. The string stretches and absorbs the initial kinetic energy of the monopole, gradually slowing it down. If the wall is static, after reaching a certain maximal size, the string will start shrinking, accelerating the monopole towards the boundary. After reaching the boundary, the monopole will be shot back into the Coulomb vacuum. We shall refer to this phenomenon as the "slingshot" effect.
One of the important implications of the slingshot effect is the novel source of production of gravitational waves. The monopole slingshot effect is expected to be rather generic in the early cosmology of grand unified theories. It is thereby important to understand the imprints of this effect in the gravitational wave spectrum.
Due to this, we study the corresponding gravitational wave signal in detail across our parameter space. In particular, it is found that the energy spectrum and the beaming angle of emission are analogous to the case of a monopole-antimonopole pair connected by a string in the confined phase, consistent with the fact that most of the signal is due to the acceleration of the monopole by the slingshot. Specifically, the energy spectrum is found to scale as the inverse frequency \(\omega^{-1}\), which agrees with studies of a confined monopole-antimonopole pair in the point-like approximation [9] and in the fully-fledged field theoretical case [7]. Moreover, we also observe that the slingshot gravitational radiation is emitted within a beaming angle \(\theta\), measured from the acceleration axis of the domain wall, scaling approximately as \(\omega^{-1/2}\).
The same type of gravitational wave signal is expected in the dual slingshot case of the "electric" confinement. In this case, the role of a monopole is played by a heavy quark that crosses over from the Coulomb to a confining domain. Similarly to the monopole stretching a cosmic string, the quark entering the confining domain stretches a QCD flux tube. For explicit analysis we construct a model using the earlier setup discussed in [3; 10] for the study of the gauge field localization mechanism of [1]. In this setup, the two vacua represent confining and deconfining phases of \(SU(2)\) QCD. We assume that the quarks are heavier than the corresponding QCD scale.
The QCD flux tubes connect these quarks in the confining domain. The flux tubes can be exponentially long without the danger of breaking apart. In the deconfining domain, \(SU(2)\) is Higgsed down to \(U(1)\), and the same quarks can propagate freely and interact via the Coulomb \(U(1)\) field. The two phases are separated by a domain wall. The massless photon is "locked" in the \(U(1)\) domain by the gauge field localization mechanism of [1].
We then consider a scattering process in which a heavy quark goes across the wall from the \(U(1)\) Coulomb to the \(SU(2)\)-confining phase. Transporting the intuition from the monopole case of the dual theory, we shall argue that the outcome is similar: the system exhibits a slingshot effect. Namely, the quark stretches along the QCD string which connects it to the wall. Despite the sufficient energy in the collision, the string does not break up into a multiplicity of mesons and glueballs. The physical reason, as we shall argue, is similar to the monopole case and has to do with the existence of the net \(U(1)\) charge measured by the Coulomb observer.
Just like the magnetic slingshot effect, its electric dual can be relevant for cosmology in various extensions of the standard model, including grand unification, since the coexistence of phases is rather generic. The gravitational wave signal from the electric slingshot is rather similar to its magnetic dual.
Finally, the slingshot effect generalizes to defects of other co-dimensions. In particular, it can be exhibited by cosmic strings. When the string crosses over into a phase in which it becomes a boundary of the domain wall, it stretches the wall. The essence of the effect is captured by an effective \(2+1\)-dimensional model. The model in which \(Z_{2}\) vortices can be confined was constructed earlier
in [11]. We extend this model by allowing the coexistence of two phases: the free phase with exact \(Z_{2}\) symmetry, as well as, the confining phase in which \(Z_{2}\) is spontaneously broken. When the vortex crosses over from the free into the confining phase, it results in a slingshot effect.
The main findings of this paper are summarized in a companion letter [12].
## II The model
We shall now construct a simple prototype model that possesses a domain wall separating the two vacua in which the magnetic field is in Coulomb and confining phases respectively. Such examples were constructed earlier in [3] as setups for realizing a dual (magnetic) version of the gauge field localization mechanism of [1]. Correspondingly, in the construction of [3], the Higgs and Coulomb phases of a \(U(1)\) gauge theory coexist and are separated by a domain wall. Within the \(U(1)\) Higgs domain, the magnetic flux is trapped in flux tubes (cosmic strings). A string can terminate perpendicularly to the wall and open up on the other side in the form of the sources of a magnetic Coulomb field. In order to include the magnetic monopoles on both sides of the wall, we embed the \(U(1)\) as a subgroup of an \(SU(2)\) gauge symmetry.
The model that we will analyze is an \(SU(2)\) gauge theory with two scalar fields. The first field \(\phi\) transforms under the adjoint representation, while the second field \(\psi\) is in the fundamental representation. The Lagrangian of the theory is
\[\mathcal{L}= \,\mathrm{Tr}\left(\left(D_{\mu}\phi\right)^{\dagger}\left(D^{ \mu}\phi\right)\right)+\left(D_{\mu}\psi\right)^{\dagger}\!\left(D^{\mu}\psi\right)\] \[-\frac{1}{2}\,\mathrm{Tr}\left(G^{\mu\nu}G_{\mu\nu}\right)-U( \phi,\psi)\,, \tag{1}\]
with the potential
\[U(\phi,\psi)= \lambda_{\phi}\left(\mathrm{Tr}(\phi^{\dagger}\phi)-\frac{v_{ \phi}^{2}}{2}\right)^{2}\] \[+\lambda_{\psi}\left(\psi^{\dagger}\psi-v_{\psi}^{2}\right)^{2} \;\psi^{\dagger}\psi+\beta\psi^{\dagger}\phi\psi\,. \tag{2}\]
The field strength tensor and the covariant derivatives are given in the conventional form
\[G_{\mu\nu} =\partial_{\mu}W_{\nu}-\partial_{\nu}W_{\mu}-ig[W_{\mu},W_{\nu} ]\,, \tag{3}\] \[D_{\mu}\phi =\partial_{\mu}\phi-ig[W_{\mu},\phi]\,,\] (4) \[D_{\mu}\psi =\partial_{\mu}\psi-igW_{\mu}\psi\,. \tag{5}\]
We can write the gauge field and the adjoint scalar field as \(W_{\mu}=W_{\mu}^{a}T^{a}\) and \(\phi=\phi^{a}T^{a}\) respectively, where the \(SU(2)\) generators are normalized by \(\mathrm{Tr}(T^{a}T^{b})=\frac{1}{2}\delta^{ab}\).
In this study, we consider a symmetry-breaking hierarchy characterized by several distinct stages. Initially, the \(SU(2)\) symmetry undergoes a Higgs mechanism through the scalar field \(\phi\), resulting in the reduction of symmetry to \(U(1)\). Subsequently, the \(U(1)\) symmetry is Higgsed further down by \(\psi\). Schematically, the breaking pattern is
\[SU(2)\to U(1)\to 1\,. \tag{6}\]
During the first breaking process, two of the gauge bosons acquire a mass \(m_{v_{\phi}}=gv_{\phi}\), while one gauge boson, which we will refer to as the photon, remains massless. The corresponding Higgs boson has a mass \(m_{h_{\phi}}=\sqrt{2\lambda_{\phi}}v_{\phi}\). Following the second symmetry breaking, all gauge bosons, including the photon, acquire an additional contribution to their mass denoted by \(m_{v_{\psi}}=gv_{\psi}/\sqrt{2}\). Additionally, the Higgs boson acquires a mass \(m_{h_{\psi}}=2\sqrt{\lambda_{\psi}}v_{\psi}^{2}\) in this subsequent stage. We note here that although the potential (2) is non-renormalizable, it does not concern our analysis since such a potential can be obtained from a renormalizable theory by the introduction of an additional gauge singlet field, as previously discussed in [3]. Further examples can be found in the same paper. The classical field equations of this theory are
\[(D_{\mu}G^{\mu\nu})^{a}=j_{\phi}^{a,\nu}+j_{\psi}^{a,\nu}\,, \tag{7}\] \[(D_{\mu}D^{\mu}\phi)^{a}+\frac{\partial V(\phi,\psi)}{\partial \phi^{a}}=0\,,\] (8) \[D_{\mu}D^{\mu}\psi+\frac{\partial V(\phi,\psi)}{\partial\psi^{ \dagger}}=0\,, \tag{9}\]
where the currents are \(j_{\phi}^{a,\nu}=g\varepsilon^{abc}(D^{\nu}\phi)^{b}\phi^{c}\) and \(j_{\psi}^{a,\nu}=ig\psi^{\dagger}T^{a}(D^{\nu}\psi)+h.c..\).
For \(\psi=0\), the \(SU(2)\) symmetry is Higgsed down to \(U(1)\). Consequently, the theory encompasses a magnetic monopole solution characterized by the 't Hooft-Polyakov magnetic monopole ansatz [13; 14]
\[W_{i}^{a} =\varepsilon_{aij}\frac{r^{j}}{r^{2}}\frac{1}{g}\left(1-K(r) \right),\] \[W_{t}^{a} =0\,,\] \[\phi^{a} =\frac{r^{a}}{r^{2}}\frac{1}{g}H(r)\,, \tag{10}\]
where \(K(r)\) and \(H(r)\) are profile functions that depend on the parameters of the theory.
The \(SU(2)\) magnetic field can be defined by
\[B_{k}^{a}=-\frac{1}{2}\varepsilon_{kij}G_{ij}^{a}\,. \tag{11}\]
In order to obtain the \(U(1)\) magnetic field, we project onto the component that is parallel to \(\phi\). This yields
\[B_{k}^{U(1)}=\frac{\phi^{a}}{\sqrt{\phi^{b}\phi^{b}}}B_{k}^{a}\,. \tag{12}\]
With this definition, the magnetic field of the 't Hooft-Polyakov magnetic monopole in the limit of large \(r\) is
given by,
\[B_{k}^{U(1)}\to\frac{1}{g}\frac{r^{k}}{r^{3}}\,. \tag{13}\]
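As a quick consistency check (an illustrative NumPy sketch, not part of the simulation code), the projection of Eq. (12) applied to the asymptotic hedgehog configuration indeed reproduces the Coulomb field of Eq. (13):

```python
import numpy as np

g = 1.0
r_vec = np.array([1.0, 2.0, 2.0])                        # sample point with |r| = 3
r = np.linalg.norm(r_vec)
phi_hat = r_vec / r                                      # unit isovector phi^a / |phi|
B = np.einsum('a,k->ak', r_vec, r_vec) / (g * r**4)      # asymptotic B_k^a = r^a r^k / (g r^4)
B_u1 = np.einsum('a,ak->k', phi_hat, B)                  # projection of Eq. (12)
print(B_u1, r_vec / (g * r**3))                          # both give the Coulomb field of Eq. (13)
```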
By substituting the ansatz (10) into the field equations (7) and (8), these equations can be simplified to
\[K^{\prime\prime}= \frac{1}{r^{2}}(K^{3}-K+H^{2}K+J^{2}K)\,, \tag{14}\] \[H^{\prime\prime}= \frac{2}{r^{2}}HK^{2}+\frac{1}{2}m_{h_{\phi}}^{2}H\left(\frac{H^{ 2}}{m_{v_{\phi}}^{2}r^{2}}-1\right)\,. \tag{15}\]
Note that we are still considering the case with \(\psi=0\). The profile functions can be determined analytically in the BPS limit \(m_{h_{\phi}}\to 0\)[15; 16]. For other parameter choices, we employ numerical relaxation techniques. In order to initiate the iteration procedure, we utilize the profile functions obtained in the BPS limit as a starting point. The resulting profile functions are visualized in Fig. 1.
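For orientation, the analytic BPS profiles used as the starting point of the relaxation are \(H(r)=m_{v_{\phi}}r\coth(m_{v_{\phi}}r)-1\) and \(K(r)=m_{v_{\phi}}r/\sinh(m_{v_{\phi}}r)\). The following sketch (illustrative only, in units where \(m_{v_{\phi}}=1\) and with \(\psi=0\), i.e. \(J=0\)) checks them against Eqs. (14)-(15) by finite differences; it is not the relaxation code itself.

```python
import numpy as np

m_v = 1.0
r = np.linspace(0.05, 10.0, 2000)
H = m_v * r / np.tanh(m_v * r) - 1.0     # BPS profile of the adjoint Higgs
K = m_v * r / np.sinh(m_v * r)           # BPS profile of the gauge field

def d2(f, r):
    """Second derivative by repeated central differences."""
    return np.gradient(np.gradient(f, r), r)

# Residuals of Eqs. (14)-(15) in the BPS limit (m_h -> 0, J = 0); small away from the grid edges.
res_K = d2(K, r) - (K**3 - K + H**2 * K) / r**2
res_H = d2(H, r) - 2.0 * H * K**2 / r**2
print(np.max(np.abs(res_K[50:-50])), np.max(np.abs(res_H[50:-50])))
```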
Let us now shift our focus to the discussion regarding the \(\psi\) field. For the moment, let us fix the \(SU(2)\) direction to be \(\psi=(\xi,0)^{T}\). For \(\beta=0\), the potential part corresponding to the \(\xi\) field exhibits two distinct vacua, the \(U(1)\) Coulomb phase at \(\xi^{\dagger}\xi=0\) and the \(U(1)\)-Higgsed phase at \(\xi^{\dagger}\xi=v_{\psi}^{2}\). The reason behind this terminology will be further explained later. Since these two vacua are disconnected, the model allows a domain wall solution interpolating between them. By using the Bogomolny equation [15] the solutions can be found to be (see Footnote 1)
Footnote 1: See also [17; 18].
\[\xi_{(\pm v_{\psi},0)}(z) =\frac{\pm v_{\psi}}{\sqrt{1+e^{m_{h_{\psi}}z}}}\,, \tag{16}\] \[\xi_{(0,\pm v_{\psi})}(z) =\frac{\pm v_{\psi}}{\sqrt{1+e^{-m_{h_{\psi}}z}}}\,. \tag{17}\]
Therefore, the two phases, the \(U(1)\) invariant and the \(U(1)\)-Higgsed phase can coexist and are separated by these domain walls.
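A quick numerical look at the two wall profiles (a sketch in units \(v_{\psi}=m_{h_{\psi}}=1\)) confirms that they interpolate between the \(U(1)\) Coulomb vacuum, \(\xi=0\), and the \(U(1)\)-Higgsed vacuum, \(|\xi|=v_{\psi}\):

```python
import numpy as np

v_psi, m_h_psi = 1.0, 1.0
z = np.linspace(-20.0, 20.0, 9)
xi_left  = v_psi / np.sqrt(1.0 + np.exp(m_h_psi * z))    # Eq. (16): xi -> v_psi for z -> -inf
xi_right = v_psi / np.sqrt(1.0 + np.exp(-m_h_psi * z))   # Eq. (17): xi -> v_psi for z -> +inf
print(xi_left[0], xi_left[-1])     # ~1 and ~0
print(xi_right[0], xi_right[-1])   # ~0 and ~1
```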
When \(\beta\neq 0\), the degeneracy of the two vacua is broken. The potential difference between the \(U(1)\) Coulomb vacuum and the \(U(1)\)-Higgsed vacuum eliminates the possibility of a static domain wall. This potential difference generates a pressure difference between the two sides of the domain wall, causing it to accelerate toward the phase with higher potential energy. To achieve higher relative collision velocities for our numerical analysis of the interaction between a magnetic monopole and this type of domain wall, we exploit this acceleration. Of course, the splitting of energies between different vacua, and thereby the amount of the pressure difference acting on the wall, can be controlled by the parameters of the Lagrangian. In particular, the vacua can easily be kept to be exactly degenerate in energy, resulting in the possibility of static domain walls.
As a final comment on the spectrum of the theory, we note that once the \(U(1)\) is Higgsed down by \(\psi\), the free monopole solutions no longer exist. Instead, the monopoles get connected to antimonopoles by the cosmic strings. These strings represent Nielsen-Olesen magnetic flux tubes [5] that carry the \(U(1)\) magnetic field lines sourced by the monopoles. Since \(U(1)\) is embedded in \(SU(2)\), the strings are not topologically-stable and can break by quantum nucleation of monopole-antimonopole pairs [19]. The process is exponentially suppressed by the ratio of the two symmetry-breaking scales. As a result even for a mild hierarchy of scales, an unperturbed segment of a long string is practically stable against such a decay. In particular, this will be the case in our analysis.
## III Initial configuration
One generic phenomenon experienced by the monopoles in the confinement regime is the annihilation of a monopole-antimonopole pair connected by a string. During this process, the monopole and antimonopole are pulled together by the string and subsequently annihilate. In the approximation of point-like monopoles connected by a thin string, the system is allowed to perform several oscillations. Since in this approximation the structures are not resolved, the monopoles are permitted to pass through each other and stretch a long string multiple times [9]. However, the fully resolved analysis shows that this is not the case [7]. In the regime of finite and comparable thicknesses of strings and monopoles, the system decays after the first collision. In [7] this is explained by the loss of coherence [2] during the collision and the entropy suppression characteristic for the creation of low entropy solitons in high energy collision processes [8].
Figure 1: The profile function for a ’t Hooft-Polyakov magnetic monopole for \(m_{h_{\phi}}/m_{v_{\phi}}=1\).
In the present case, we wish to investigate another type of scattering process involving the confined monopole. However, instead of being in the confinement regime from the very beginning, initially, the monopole starts in the magnetic Coulomb phase and only later enters the confinement domain with a relativistic velocity.
Thus, we aim to determine the initial configuration for a specific scenario: a magnetic monopole positioned within the \(U(1)\) Coulomb phase, while elsewhere, a domain wall separates the Coulomb phase and the Higgsed phase. We want to analyze in a numerical simulation what happens when the monopole collides with the domain wall.
In the phase where the \(U(1)\) symmetry is Higgsed, the photon, which is massless in the \(U(1)\) Coulomb phase, acquires a mass. Notice that the magnetic charge is still fully conserved. However, in the \(U(1)\) Higgs domain the flux can only exist in the form of flux tubes. This is energetically costly. Therefore, the lowest energy configuration with a single monopole placed in the \(U(1)\) Coulomb domain is the one in which the entire flux is spread within the same domain. Upon reaching the wall, the magnetic flux lines are repelled and spread parallel to the wall.
To include this effect in the initial configuration, we made use of the monopole-antimonopole ansatz with a maximal twist [20]. If we take only the monopole side of this ansatz, the magnetic field lines resemble the right behavior. The general ansatz for \(\hat{\phi}^{a}\), where \(\hat{\phi}^{a}=\phi^{a}/\sqrt{\phi^{b}\phi^{b}}\), for a monopole-antimonopole configuration is [20]
\[\hat{\phi}_{1}= \left(\sin\bar{\theta}\cos\theta-\sin\theta\cos\bar{\theta}\cos\alpha\right)\cos\left(\varphi-\alpha/2\right)+\sin\theta\sin\alpha\sin\left(\varphi-\alpha/2\right),\] \[\hat{\phi}_{2}= \left(\sin\bar{\theta}\cos\theta-\sin\theta\cos\bar{\theta}\cos\alpha\right)\sin\left(\varphi-\alpha/2\right)-\sin\theta\sin\alpha\cos\left(\varphi-\alpha/2\right),\] \[\hat{\phi}_{3}= -\cos\theta\cos\bar{\theta}-\sin\theta\sin\bar{\theta}\cos\alpha\,. \tag{18}\]
The angle \(\alpha\) represents the relative twist between the monopole and the antimonopole. In our simulations, we took \(\alpha=\pi\) to obtain a configuration for which the magnetic field presents the right behavior.
We will take the monopoles to be located on the \(z\)-axis at \(z_{\rm M}\) and \(z_{\rm\bar{M}}\). Thus, \(\varphi\) is the azimuthal angle around the \(z\)-axis. \(\theta\) and \(\bar{\theta}\) correspond to the angles between the \(z\)-axis and the position vectors originating from the monopole and antimonopole, respectively.
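A useful sanity check of the twisted ansatz (18) is that \(\hat{\phi}^{a}\) is a unit isovector for any twist. The sketch below (illustrative sample positions, not the simulation code) evaluates it at a point for \(\alpha=\pi\):

```python
import numpy as np

def phi_hat(x, y, z, z_M, z_Mbar, alpha):
    """Unit isovector of Eq. (18) for a monopole at z_M and an antimonopole at z_Mbar."""
    varphi = np.arctan2(y, x)
    rho = np.sqrt(x**2 + y**2)
    theta = np.arctan2(rho, z - z_M)            # polar angle measured from the monopole
    theta_bar = np.arctan2(rho, z - z_Mbar)     # polar angle measured from the antimonopole
    A = np.sin(theta_bar)*np.cos(theta) - np.sin(theta)*np.cos(theta_bar)*np.cos(alpha)
    B = np.sin(theta)*np.sin(alpha)
    c, s = np.cos(varphi - alpha/2), np.sin(varphi - alpha/2)
    return np.array([A*c + B*s,
                     A*s - B*c,
                     -np.cos(theta)*np.cos(theta_bar) - np.sin(theta)*np.sin(theta_bar)*np.cos(alpha)])

point = phi_hat(0.7, -0.3, 0.5, z_M=-2.0, z_Mbar=2.0, alpha=np.pi)
print(np.linalg.norm(point))   # 1.0 up to rounding errors
```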
The ansatz that Saurabh and Vachaspati considered [20] is then given by
\[\phi^{a}= \frac{1}{g}\frac{H(r_{\rm M})}{r_{\rm M}}\frac{H(r_{\rm\bar{M}}) }{r_{\rm\bar{M}}}\hat{\phi}^{a}, \tag{19}\] \[W_{\mu}^{a}= -\frac{1}{g}(1-K(r_{\rm M}))(1-K(r_{\rm\bar{M}}))\varepsilon_{ abc}\hat{\phi}^{b}\partial_{\mu}\hat{\phi}^{c}\,. \tag{20}\]
In order to Lorentz boost this configuration we can replace \(z-z_{\rm M}\) and \(z-z_{\rm\bar{M}}\) by \(\gamma_{\rm M}(z-u_{\rm M}t-z_{\rm M})\) and \(\gamma_{\rm M}(z-u_{\rm\bar{M}}t-z_{\rm\bar{M}})\) respectively. For \(t=0\), we obtain the initial field values. The values for the time derivatives can be determined numerically by using the field configuration at \(t=0\) and \(t={\rm d}t\). We conducted the numerical simulations in the Lorenz gauge.
In our configuration, we will incorporate a domain wall positioned at the center between the monopole and the antimonopole. The domain wall is located at \(z=0\) and the monopole is on the \(z<0\) side. To remove the antimonopole from our setup, we modified our ansatz by
\[\phi^{a}(x,y,z>0) \rightarrow\phi^{a}(x,y,z=0)\,, \tag{21}\] \[W_{\mu}^{a}(x,y,z>0) \rightarrow W_{\mu}^{a}(x,y,z=0)\frac{1}{1+e^{\gamma_{\rm D}m_{v_{\phi}}z}}\,, \tag{22}\]
where \(\gamma_{\rm D}\) is the Lorentz factor of the domain wall. Note that the suppression factor in equation (22) is an approximation that is in accordance with the wall profile.
Figure 2: A sketch of the magnetic field (top) and the scalar field vector \((\phi^{3},-\phi^{2})^{T}\) (bottom) for the initial configuration. The color in the background represents \(|\psi|\) ranging from \(|\psi|=0\) (blue) to \(|\psi|=v_{\psi}\) (red).

The ansatz for the \(\psi\) field that includes the domain wall solution needs to minimize the potential. In order to achieve this, we seek to extremize the interaction term,
\[\mathcal{L}_{int}=-\beta\psi^{\dagger}\phi\psi\,, \tag{23}\]
by aligning \(\psi^{\dagger}T^{a}\psi\) parallel to \(\phi^{a}\). Therefore, the ansatz for \(\psi\) can be written as
\[\psi^{1}= -\frac{\xi_{(0,+v_{\psi})}}{\sqrt{2}}\frac{(\hat{\phi}^{1}-i\hat{ \phi}^{2})}{\sqrt{1+\hat{\phi}^{3}}}\,,\] \[\psi^{2}= \frac{\xi_{(0,+v_{\psi})}}{\sqrt{2}}\sqrt{1+\hat{\phi}^{3}}\,. \tag{24}\]
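Numerically, Eq. (24) can be evaluated as in the following minimal sketch; the function name and the small regulator are our own, and `xi` stands for the domain-wall profile of \(\psi\) interpolating between \(0\) and \(v_{\psi}\).

```python
import numpy as np

# Sketch of the psi ansatz of Eq. (24); phi_hat1..3 are the components of the
# unit adjoint vector and xi is the domain-wall profile.  A small epsilon guards
# the direction where 1 + phi_hat3 -> 0.
def psi_ansatz(phi_hat1, phi_hat2, phi_hat3, xi, eps=1e-12):
    root = np.sqrt(1.0 + phi_hat3 + eps)
    psi1 = -xi / np.sqrt(2.0) * (phi_hat1 - 1j * phi_hat2) / root
    psi2 = xi / np.sqrt(2.0) * root
    return psi1, psi2
```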
The Lorentz boosted configuration of the domain wall can be determined in a similar manner to that of the magnetic monopole. Specifically, we replace the variable \(z\) with \(\gamma_{\rm D}(z-u_{\rm D}t)\), where \(\gamma_{\rm D}\) represents the Lorentz factor associated with the domain wall.
In Fig. 2, the scalar fields and the magnetic field are illustrated. Since the ansatz we are employing is an approximation, we incorporated it into a numerical relaxation procedure, as outlined in [20], letting the fields relax toward a solution of the static field equations. Notably, we observed that the deviations between the configuration before and after the relaxation remained small, thus affirming its suitability for our intended purpose.
## IV Numerical implementation
The numerical simulations were performed using the Python programming language, leveraging the Numba package [21]. Numba facilitates the translation of Python code into efficient machine code, enabling faster computations. Additionally, it offers a straightforward approach to parallelizing the code, effectively utilizing the capabilities of multi-core processors.
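As an illustration of this approach, a typical building block looks like the following sketch (not the actual simulation code): a Numba-compiled, parallelized finite-difference Laplacian of the kind that enters the field equations.

```python
import numpy as np
from numba import njit, prange

# Illustrative sketch only: a parallelized second-order finite-difference
# Laplacian on a 3D grid with spacing dx.
@njit(parallel=True, cache=True)
def laplacian(f, dx):
    out = np.zeros_like(f)
    nx, ny, nz = f.shape
    for i in prange(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                out[i, j, k] = (
                    f[i + 1, j, k] + f[i - 1, j, k]
                    + f[i, j + 1, k] + f[i, j - 1, k]
                    + f[i, j, k + 1] + f[i, j, k - 1]
                    - 6.0 * f[i, j, k]
                ) / dx**2
    return out
```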
In order to improve the computation time, we took advantage of the axial symmetry of the configuration, as we have previously done in the context of magnetic monopole erasure [22]. The approach involves utilizing only three lattice points in the \(y\)-direction, sufficient for numerically calculating the second-order derivatives appearing in the field equations. At each time iteration step, we solved the field equations in the \(y=0\) plane and used the axial symmetry to determine the field values in the two neighboring planes. This method, first employed for configurations of this nature, was introduced in [23].
From (18) we can find the axial symmetry of the \(\phi^{a}\) field of a monopole-antimonopole system for an arbitrary twist. This is given by
\[\phi^{1}= f_{1}x+f_{2}y\,,\] \[\phi^{2}= f_{1}y-f_{2}x\,,\] \[\phi^{3}= f_{3}\,, \tag{25}\]
where the functions \(f_{i}\) depend only on the time \(t\), the radius around the \(z\)-axis, and the \(z\)-coordinate. To find an ansatz for the gauge fields we inserted (25) into \(D_{\mu}\phi=0\). This gives us
\[W_{x}^{1}=xyf_{4}+y^{2}f_{5}+f_{6} W_{y}^{1}=y^{2}f_{4}-f_{5}xy-f_{7}\] \[W_{x}^{2}=-x^{2}f_{4}-f_{5}xy+f_{7} W_{y}^{2}=-xyf_{4}+x^{2}f_{5}+f_{6}\] \[W_{x}^{3}=xf_{8}+yf_{9} W_{y}^{3}=yf_{8}-xf_{9}\] \[W_{z}^{1}=f_{10}x+f_{11}y W_{t}^{1}=f_{12}x+f_{13}y\] \[W_{z}^{2}=-f_{11}x+f_{10}y W_{t}^{2}=-f_{13}x+f_{12}y\] \[W_{z}^{3}=0 W_{t}^{3}=0\,. \tag{26}\]
From equations (24) and (25) we can find the axial symmetric ansatz for the \(\psi\) field
\[\psi^{1}=f_{14}(x-iy)+f_{15}(y+ix)\,,\] \[\psi^{2}=f_{16}\,. \tag{27}\]
Notice that the axial symmetric ansatz presented here can also be used in the analysis of head-on collisions between a monopole and an antimonopole like in the situations described in [7; 24].
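The reconstruction of the two neighboring planes from the \(y=0\) data is sketched below for the \(\phi^{a}\) field; the gauge fields and \(\psi\) would be treated analogously using (26) and (27). The helper is our own illustration, it assumes the \(x=0\) axis is handled separately, and the essential step is evaluating the axial functions \(f_{i}\) at the shifted radius \(\sqrt{x^{2}+{\rm d}y^{2}}\).

```python
import numpy as np

# Sketch of filling the y = +-dy planes of phi^a from the y = 0 plane via Eq. (25).
# phi1, phi2, phi3 are arrays indexed by (x, z); xs are the x-coordinates
# (assumed here not to include x = 0, which needs separate care).
def neighbor_planes(phi1, phi2, phi3, xs, dy):
    # Axial functions f_i(rho, z) extracted on the y = 0 plane, where rho = |x|.
    f1 = phi1 / xs[:, None]            # phi1(x, 0, z) =  f1 * x
    f2 = -phi2 / xs[:, None]           # phi2(x, 0, z) = -f2 * x
    f3 = phi3
    # On the neighboring planes the radius is rho = sqrt(x^2 + dy^2);
    # interpolate f_i along |x| for every z-column.
    rho = np.sqrt(xs**2 + dy**2)
    order = np.argsort(np.abs(xs))
    def at_rho(f):
        return np.stack([np.interp(rho, np.abs(xs)[order], f[order, iz])
                         for iz in range(f.shape[1])], axis=1)
    g1, g2, g3 = at_rho(f1), at_rho(f2), at_rho(f3)
    planes = {}
    for s in (+1, -1):                 # y = +dy and y = -dy
        y = s * dy
        planes[s] = (g1 * xs[:, None] + g2 * y,    # phi1 = f1 x + f2 y
                     g1 * y - g2 * xs[:, None],    # phi2 = f1 y - f2 x
                     g3)                           # phi3 = f3
    return planes
```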
We employed the iterated Crank-Nicolson method with two iterations, as described in [25], to simulate the time evolution. We applied the axial symmetry method described above every time we solved the field equations in the \(y=0\) plane. We used absorbing boundary conditions for \(\phi^{a}\) and \(W_{\mu}\). For \(\psi\) we chose Dirichlet boundaries in \(z\)-direction and periodic boundaries in \(x\)-direction. Notice that for the twist \(\alpha=\pi\), the imaginary component of \(\psi^{1}\) is anti-symmetric in \(x\)-direction.
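A generic sketch of such a time step (our own formulation, for a state \(u\) collecting the fields and their time derivatives, with \({\rm d}u/{\rm d}t=F(u)\) the discretized field equations) reads:

```python
# Iterated Crank-Nicolson step with two iterations (sketch).
def icn_step(u, F, dt, iterations=2):
    u_new = u + dt * F(u)              # forward-Euler predictor
    for _ in range(iterations):
        u_half = 0.5 * (u + u_new)     # midpoint estimate
        u_new = u + dt * F(u_half)     # corrector
    return u_new
```

With two iterations the scheme is second-order accurate in time.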
The theory (1) contains six independent parameters which can be given in terms of \(g\), the masses \(m_{v_{\phi}}\), \(m_{h_{\phi}}\), \(m_{v_{\psi}}\), \(m_{h_{\psi}}\), and \(\beta\). The first three parameters were set to \(g=1\) and \(m_{v_{\phi}}/m_{h_{\phi}}=1\). We varied the latter three parameters in the intervals \(m_{v_{\psi}}\in[0.1\,m_{v_{\phi}},0.7\,m_{v_{\phi}}]\), \(m_{h_{\psi}}\in[0.1\,m_{v_{\phi}},1.0\,m_{v_{\phi}}]\), and \(\beta\in[0.001\,m_{v_{\phi}},0.1\,m_{v_{\phi}}]\). In the results section we will focus especially on the case with \(m_{v_{\psi}}=0.15\,m_{v_{\phi}}\), \(m_{h_{\psi}}=0.6\,m_{v_{\phi}}\), and \(\beta=0.01\,m_{v_{\phi}}\).
Besides the aforementioned parameters, we have the flexibility to select the initial velocities of the magnetic monopole (\(u_{\rm M}\)) and the domain wall (\(u_{\rm D}\)), as well as the distance between them. In the potential (2), the interaction term between \(\phi\) and \(\psi\) causes the domain wall to experience acceleration. Consequently, achieving a collision between the monopole and the domain wall does not necessitate a Lorentz boost. Nevertheless, we varied the initial velocities in the interval \(u_{\rm M},u_{\rm D}\in[0,0.98]\) (in units of \(c=1\)). Below we will specifically focus on the scenario where the initial velocities are set to \(u_{\rm M}=0.8\) and \(u_{\rm D}=0.8\) in opposite directions. The domain wall was located at \(z=0\) and the magnetic monopole at \(z=z_{\rm M}=-40\,m_{v_{\phi}}^{-1}\).
For the numerical simulations, we used a lattice of the size \([-60\,m_{v_{\phi}}^{-1},60\,m_{v_{\phi}}^{-1}]\) and \([-180\,m_{v_{\phi}}^{-1},60\,m_{v_{\phi}}^{-1}]\) in \(x\)- and \(z\)-direction respectively. The lattice spacing was set to \(0.25\,m_{v_{\phi}}^{-1}\) and the time step to \(0.1\,m_{v_{\phi}}^{-1}\). The time interval under investigation was \([0,180\,m_{v_{\phi}}^{-1}]\).
## V Results
During the time evolution of the initial setup outlined in the previous section, we can observe that as the magnetic monopole approaches the domain wall, a significant amount of magnetic energy density accumulates along the wall. This phenomenon arises due to the presence of a mass for the photon on the right-hand side of the domain wall. As a consequence, the penetration of the photon, which carries the magnetic energy, into the Higgs vacuum is exponentially suppressed.
Notice, however, that the magnetic field is repelled from the \(U(1)\) Higgs domain but not screened [3]. This is analogous to the Meissner effect in superconductors. Some of us already discussed the dual case, in which the electric field is repelled while the magnetic field is screened by a confining layer [22]. The penetration is possible only in the form of a flux tube, which is costly in energy. This repulsion leads to the concentration of energy density along the wall, resulting in the observed phenomenon.
Upon collision with the wall, the monopole transitions to the right-hand side and stretches a string, as this is the only way in which the monopole can enter the \(U(1)\) Higgs region. The end of the string opens up on the \(U(1)\) Coulomb side of the wall, where the flux can spread out. Since the magnetic charge is conserved, the integrated flux exactly matches the magnetic charge of the monopole. Correspondingly, an observer located in the Coulomb vacuum will effectively measure the same magnetic charge carried by the string "throat" as the one taken by the original monopole.
This phenomenon can be seen in the magnetic energy density and the behavior of the magnetic field as illustrated in Fig. 3. In addition to this figure, the full-time evolution can be found in the video in the ancillary files or at the following link:
[https://youtu.be/IPJAPjo3nSc7si](https://youtu.be/IPJAPjo3nSc7si)
In the time evolution, we can also see that the monopole decelerates during the string stretching. At a certain point, the string approaches its maximum length, and the entire configuration, with the monopole connected to the domain wall via the string, moves collectively at the same velocity. Here it is important to note that it is not exactly the same velocity but approaches the same speed asymptotically. The reason for this is that both the domain wall and the monopole have constant proper accelerations,
\[a_{\rm DW} \sim\frac{\delta}{\sigma_{\rm DW}}\sim\beta\frac{m_{v_{\phi}}}{ gm_{h_{\psi}}}\,, \tag{28}\] \[a_{\rm M} \sim\frac{\mu_{\rm string}}{M_{\rm M}}\sim\frac{m_{v_{\psi}}^{2}}{ m_{v_{\phi}}}\,, \tag{29}\]
where \(\delta\) is the potential energy difference between the \(U(1)\)-symmetric and the \(U(1)\)-Higgsed vacua, \(\sigma_{\rm DW}\sim m_{h_{\psi}}v_{\psi}^{2}\) is the domain wall tension, and \(\mu_{\rm string}\sim m_{v_{\psi}}^{2}/g^{2}\) is the string tension. Of course, the accelerations in (28) and (29) may differ. In some cases, it is even possible that the monopole is expelled from the Higgsed phase and re-enters it as soon as the domain wall catches up with it again. It is worth noting that the interaction term present in the potential equation (2) plays a crucial role in this behavior. As previously mentioned, this term introduces the vacuum energy difference between the Coulomb and Higgs phases. Consequently, there is a constant acceleration of the domain wall. This acceleration is essential for preventing the monopole from re-entering the Coulomb phase since the tension of the string pulls it outward. Without the domain wall's acceleration, the monopole would be drawn back into the Coulomb phase by the string slingshot effect.
Another notable observation regarding the magnetic energy density is the emission of radiation during the interactions. When the monopole collides with the wall, a significant amount of energy is invested in creating the string, resulting in an extreme deceleration. This process generates electromagnetic radiation in the form of a shock wave. We can also see that this radiation is capable of penetrating into the Higgs phase, demonstrating its ability to traverse regions with broken \(U(1)\) symmetry.
In the parameter regime under consideration, the formation of a string was observed to be nearly ubiquitous. However, when \(m_{v_{\psi}}\) and \(m_{h_{\psi}}\) are sufficiently large, the energy gap at the domain wall becomes too large for the string to form. As a result, the monopole remains localized on the wall and moves together with it. Additionally, the thickness of the string is dependent on the specific parameters of the theory. These parameters and the initial velocities of the monopole and domain wall also determine the maximum length of the string. When these objects possess higher velocities, there is increased availability of energy, allowing the string to extend to greater lengths. As a general estimate, assuming the point-like limit for the monopole solution, the thin string, and the thin wall limit, the maximal penetration is
\[\ell_{\rm max}\sim\gamma_{c}\frac{M_{\rm M}}{\mu_{\rm string}}\,, \tag{30}\]
where \(\gamma_{c}\) is the relative gamma factor between the wall and the monopole at the moment of the collision.
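This estimate can be read as a simple energy balance: the kinetic energy of the monopole in the wall rest frame, of order \(\gamma_{c}M_{\rm M}\), is converted into the energy stored in the stretched string,
\[\gamma_{c}M_{\rm M}\simeq\mu_{\rm string}\,\ell_{\rm max}\quad\Rightarrow\quad\ell_{\rm max}\simeq\gamma_{c}\,\frac{M_{\rm M}}{\mu_{\rm string}}\,.\]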
The natural question that arises is what is the fate of the extended string. Energetically, it is theoretically possible to form monopole-antimonopole pairs connected by strings after the collision. However, despite considering various parameters in our classical simulation, we have not observed this phenomenon. In our simulations, the magnetic monopole consistently stretches the string and maintains its connection to the domain wall, as long as there are no external influences present. Even when the string is perturbed, the only way we have found so far to disconnect it from the domain wall is by introducing an additional antimonopole.
In other simulations, we examined a specific configuration where a monopole and an antimonopole, separated by a sufficiently large distance, enter the \(U(1)\)-Higgsed phase successively along the \(z\)-axis. To ensure the correct repulsion behavior of the magnetic field lines along the domain wall, we combined two untwisted monopole-antimonopole pairs by introducing a twist between them. This configuration was achieved using the following ansatz for the scalar field
\[\hat{\phi}=\begin{pmatrix}-\sin(\theta_{1}-\bar{\theta}_{1}+\theta_{2}-\bar{ \theta}_{2})\sin\varphi\\ \sin(\theta_{1}-\bar{\theta}_{1}+\theta_{2}-\bar{\theta}_{2})\cos\varphi\\ -\cos(\theta_{1}-\bar{\theta}_{1}+\theta_{2}-\bar{\theta}_{2})\end{pmatrix}, \tag{31}\]
where \(\theta_{1}\) and \(\theta_{2}\) (\(\bar{\theta}_{1}\) and \(\bar{\theta}_{2}\)) correspond to the angles between the \(z\)-axis and the position vectors stemming from the monopoles (antimonopoles).
The subsequent implementation followed a similar approach to the previously described model. Our observations revealed that the initial monopole extended a string, and later when the antimonopole entered the string, it caused the detachment of the string from the domain wall as can be seen in Fig. 4. Consequently, the monopole and antimonopole were drawn together with constant acceleration until their annihilation occurred. The dynamics are analogous to the one described in [7]. The energy stored in the strings connecting them is transferred into kinetic energy of the monopole-antimonopole, which turns them ultra-relativistic 2.
Footnote 2: In [26], this dynamics was applied to the production of primordial black holes. In fact, for a long enough initial string, the system will find itself within its own Schwarzschild radius well before the monopole-antimonopole annihilation, therefore leading to the production of black holes.
The full-time evolution for this configuration can be found in the ancillary video, which is also available in the following link:
[https://youtu.be/IPJAPjo3nSc?si](https://youtu.be/IPJAPjo3nSc?si)
Monopoles connected by a string and their dynamics in this type of model have been studied in great detail in [7].
As already discussed, the string can break by the spontaneous creation of a monopole-antimonopole pair on its world volume. As analysed by Vilenkin [19], for a classically-stable string this is a tunneling process with extremely low probability. This analysis is equally applicable also to a long string in our case after its formation.

Figure 3: The illustration depicts the magnetic energy density and magnetic field at time \(t=115\,m_{v_{\phi}}^{-1}\) in the \(y=0\) plane for the specific case described in the numerical implementation section. The length values are provided in units of \(m_{v_{\phi}}^{-1}\), while the energy density values are given in units of \(m_{v_{\phi}}^{4}/g^{2}\). The black line represents the contour corresponding to \(|\psi|=0.1\,m_{v_{\phi}}\), serving to illustrate the presence of the domain wall. We observe that the magnetic monopole has formed a string, connecting it to the domain wall. Both the magnetic field lines and the magnetic energy density indicate the presence of a localized magnetic flux within the string. In the Coulomb phase, the magnetic field vectors point radially away from the point where the string attaches to the domain wall, representing the magnetic field of a virtual monopole located at that particular position.
The non-trivial question on which our analysis sheds light is, how probable is the stretching of such a string in a monopole wall collision? We could imagine that once the monopole collides with the wall, the entire energy gets converted into radiation without ever stretching a string, as this was observed to be the case in the collision of a confined monopole-antimonopole pair [7]. There, in a head-on collision, the monopoles would never pass each other and re-create a string. Instead, the system decayed into waves after the first collision. In this respect, the two setups give very different outcomes.
The reason for this difference is the following. First, as explained in [7], in the case of monopole-antimonopole collision, after they come on top of each other, the system completely "forgets" about the existence of the magnetic charges. Basically, the monopole and antimonopole completely erase one another. The collision also takes away some coherence, as this is typical for the processes of defect erasure [2; 18]. Correspondingly, in its further evolution, the system has no "profit" in re-creating a highly coherent and low entropy state of monopoles connected by a string. Such an outcome is exponentially suppressed. It is much more probable to decay into a highly entropic state of waves. This exponential suppression is generic for the transition amplitudes into macroscopic final states of low entropy [8; 27].
The situation in the present case is very different. The reason is that the magnetic charge is conserved. Correspondingly, the system must maintain the monopole, no matter what. The only question is the arrangement of its magnetic flux. Since the monopole has sufficient kinetic energy for entering the confining phase, the system has two choices: 1) accompany the monopole with a long string; or 2) create at least one additional monopole-antimonopole pair for breaking it apart. Since the pair-creation via quantum tunneling is exponentially suppressed, the latter process would require a hard perturbation which would force the adjoint field to vanish. This is not happening since the monopole "sees" the wall through the change of the expectation value of the fundamental field, which is a rather soft perturbation. Thus, the system chooses the process of stretching the long string adiabatically. Due to this, the outcome is a sling-shot effect.
The string breaking could also occur via thermal fluctuations, which will be the subject of future investigation.
## VI Gravitational waves
The slingshot mechanism provides a novel source of gravitational waves that can be produced in the early universe. This adds to the list of previously discussed sources of gravitational waves from various types of defects, such as colliding bubbles [28; 29], monopole-antimonopole pairs confined by strings [7; 9], cosmic string loops [30], etc.

Figure 4: The magnetic energy density and magnetic field for a magnetic monopole and an antimonopole entering the confined phase. The units are the same as in Fig. 3. The first monopole enters and stretches a string. Afterward, the antimonopole enters and detaches the string from the domain wall, leading to the formation of a monopole-antimonopole pair connected by a string.
The interesting novelty of the slingshot source of gravity waves is that it is expected to be rather generic in a grand unified phase transition, as such transitions often proceed with the formation of domain walls separating the phases of confined and free monopoles. For example, already the minimal grand unified theory with Higgs fields in the adjoint \(24_{\rm H}\) and fundamental \(5_{\rm H}\) representations allows for coexisting temporary phases such as \(SU(4)\times U(1)\) and \(SU(4)\) separated by domain walls. The vacuum expectation values in these two phases have the following forms: \(\langle 24_{\rm H}\rangle\propto{\rm diag}(-4,1,1,1,1),\ \langle 5_{\rm H}\rangle=0\) and \(\langle 24_{\rm H}\rangle\propto{\rm diag}(-4,1,1,1,1),\ \langle 5_{\rm H}\rangle \propto(1,0,0,0,0)^{\rm t}\) respectively. In these vacuum domains, the magnetic monopoles are in the Coulomb and confining phases respectively. Correspondingly, the interaction between monopoles and domain walls leads to the slingshot effect.
Of course, the purpose of the present paper is not to study the full richness of the grand unified phase portrait, which is also highly model-dependent. It suffices to notice that the slingshot can even be a dominant source of gravitational waves. In order to understand this, we can think about a single spherically symmetric expanding bubble separating the two phases. Sweeping away the monopoles by a slingshot mechanism produces gravity waves even in the absence of bubble collisions with other bubbles. For this reason, we focus on the generic aspects of the gravitational waves produced by the slingshot, using a simple prototype example presented in previous sections.
Notice that in our simulation, we ignore the gravitational backreaction on the dynamics of the source. Namely, we are assuming that the wall/string/monopole dynamics is dominated by the string tension. This is a legitimate assumption, provided that the tensions are below the Planck mass.
As was shown long ago [31; 32], the planar infinite wall has repulsive gravity. It acts on a pointlike source of positive mass \(m\) with a repulsive linear potential, given by \(V(r)\sim G\,\sigma_{\rm DW}\,m\,r\), where \(\sigma_{\rm DW}\) is the wall tension. In the present case, this repulsion can compensate or even overtake the attractive potential due to a string. It can also prevent the monopole from crossing the wall. Notice that under the condition \(\mu\gg G\,\sigma_{\rm DW}\,M_{\rm M}\) the slingshot dynamics is negligibly affected by the gravitational field of the domain wall which we assume throughout this work.
The radiated energy at frequency \(\omega\) per unit frequency and per solid angle, in direction \(\hat{\bf k}\) (\(|{\bf k}|=\omega\)) can be calculated by Weinberg's formula [33] (following the conventions of [34])
\[\frac{{\rm d}E}{{\rm d}\Omega\,{\rm d}\omega}=\frac{G\,\omega^{2}}{2\pi^{2}} \Lambda_{ij,lm}(\hat{\bf k})T^{ij*}({\bf k},\omega)T^{lm}({\bf k},\omega)\,, \tag{32}\]
where \({\rm d}\Omega\) is the differential solid angle and the Fourier transform of the energy-momentum tensor is given by
\[T_{\mu\nu}({\bf k},\omega)=\int_{I_{t}}{\rm d}t\int_{V}{\rm d}^{3}x\ e^{i \omega t-i{\bf k}\cdot{\bf x}}\ T_{\mu\nu}({\bf x},t)\,, \tag{33}\]
with \(I_{t}\) and \(V\) being the analyzed time interval and volume, respectively. These were chosen around the time and length scales of the dynamics of interest. The former corresponds to the duration of the source \(T\simeq 80\,m_{v_{\phi}}^{-1}\), while the latter is given by the volume spanned by the system during its evolution. Given the relativistic motion involved, \(V\simeq T^{3}\) proved to be an optimal choice.
The operator \(\Lambda_{ij,lm}\) projects a tensor into its transverse traceless part and is defined as
\[\Lambda_{ij,lm}(\hat{\bf k})\equiv P_{il}(\hat{\bf k})P_{jm}(\hat{\bf k})- \frac{1}{2}P_{ij}(\hat{\bf k})P_{lm}(\hat{\bf k})\,, \tag{34}\]
where \(P_{ij}(\hat{\bf k})=\delta_{ij}-\hat{k}_{i}\hat{k}_{j}\) are projectors into the orthogonal direction of \(\hat{\bf k}\).
In the derivation of the equation (32), the divergence-less condition in momentum space \(k_{\mu}T^{\mu\nu}=0\) was assumed. The Fourier-transformed data from the numerical simulations matches this condition well; thus, we can apply formula (32).
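For one frequency and one emission direction, the evaluation of Eqs. (32)-(34) amounts to the following sketch (our own helper, using direct summation rather than FFTs and assuming uniform grids; `T[i, j, it, ix, iy, iz]` is the sampled spatial energy-momentum tensor):

```python
import numpy as np

# Sketch of dE/(dOmega domega), Eqs. (32)-(34), for one omega and direction khat.
def radiated_energy(T, ts, xs, ys, zs, omega, khat, G=1.0):
    khat = np.asarray(khat, dtype=float)
    khat /= np.linalg.norm(khat)
    k = omega * khat                              # |k| = omega

    # Phase e^{i omega t - i k.x} on the space-time grid, Eq. (33).
    tt, xx, yy, zz = np.meshgrid(ts, xs, ys, zs, indexing="ij")
    phase = np.exp(1j * (omega * tt - k[0] * xx - k[1] * yy - k[2] * zz))

    dt = ts[1] - ts[0]
    dV = (xs[1] - xs[0]) * (ys[1] - ys[0]) * (zs[1] - zs[0])
    Tk = np.einsum("ijtxyz,txyz->ij", T, phase) * dt * dV     # T_ij(k, omega)

    # Transverse-traceless projector Lambda_{ij,lm}, Eq. (34).
    P = np.eye(3) - np.outer(khat, khat)
    Lam = np.einsum("il,jm->ijlm", P, P) - 0.5 * np.einsum("ij,lm->ijlm", P, P)

    # Eq. (32).
    return G * omega**2 / (2 * np.pi**2) * np.real(
        np.einsum("ijlm,ij,lm->", Lam, np.conj(Tk), Tk))
```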
Since we are working on a lattice with a finite resolution and the initial configuration is an approximation (for example, the initial boost of the monopole and the domain walls leads to fictitious sources), the presence of noise in the gravitational energy spectrum, stemming from numerical fluctuations, is anticipated. Moreover, both the domain wall and the monopole are accelerated to relativistic velocities, which introduces an extra source of background in the simulation once their Lorentz-contracted profiles become comparable to the lattice spacing. This imposes limitations on the available parameter space.

Figure 5: The energy spectrum for the slingshot effect (red points). The blue-dashed curve shows the scaling \(\omega^{-1}\) for comparison.
To ensure that such effects do not invalidate our analysis and that we capture only the gravitational wave signal due to the slingshot, we execute a Lorentz boost on the monopole in the opposing direction. With this strategy, we avoid a collision with the domain wall in the considered time interval and we can extract the magnitude of the background noise.
We observe that for \(m_{v_{\psi}}=0.6\,m_{v_{\phi}}\)3 (all the other parameters are kept unchanged), the background noise in the energy spectrum is negligibly small - below \(5\%\) of the energy extracted in the presence of a slingshot. In this case, the length of the string is comparable to the size of the magnetic monopole. Exploration of alternative \(m_{v_{\psi}}\) values reveals that numerical spurious effects stop being negligible for \(m_{v_{\psi}}\lesssim 0.4\,m_{v_{\phi}}\). Moreover, for \(m_{v_{\psi}}\gtrsim 0.7\,m_{v_{\phi}}\) the Lorenz gauge condition starts being numerically violated by more than \(10\%\).
Footnote 3: In the following \(m_{v_{\phi}}=1\) is used, and all dimensionful quantities are expressed in units of it.
The resulting energy spectrum, obtained upon integration over \(\mathrm{d}\Omega\) is shown in Fig. 5, where we fixed the Newton constant \(G=1\) for simplicity4.
Footnote 4: Note that the instantaneously radiated power, according to the notation of [33], can be obtained by multiplying \(\frac{\mathrm{d}E}{\mathrm{d}\omega}\) by \(\frac{\mathrm{d}\pi}{\mathrm{d}\tau}\).
The energy spectrum is well characterized by the following scaling
\[\frac{\mathrm{d}E}{\mathrm{d}\omega}\propto\omega^{-1}\,. \tag{35}\]
This is exemplified by the dashed blue line in Fig. 5. Unfortunately, the finiteness of our numerical simulations does not permit a clear characterization at higher frequencies. However, for sufficiently high \(\omega\) we expect the amplitude to be exponentially suppressed.
The direction of the emission is towards the bubble wall, as seen in Fig. 6. Therein equation (32) is shown as a function of the axial angle \(\theta\), measured from the acceleration axis, and the frequency \(\omega\). As it can be seen, most of the radiation takes place in the direction of acceleration. In particular, radiation is emitted in a beaming angle with frequency dependence roughly approximated by
\[\theta\propto\omega^{-1/2}\,, \tag{36}\]
depicted by the dashed black line in the plot.
In order to verify that the scalings (35) and (36) are due to the monopole being accelerated by the flux tube attached to the domain wall, we performed a separate analysis in which we isolated the slingshot dynamics from the initial collision between the monopole and the domain wall. We found that indeed the main contribution to the signal in Fig. 5 is due to the former.
The gravitational radiation due to the slingshot mechanism bears a close resemblance to the one emitted by a confined pair of monopole-antimonopole. In fact, also in that case the source of gravitational waves is due to monopoles being accelerated by the flux tube. As shown by Ref. [9] in the limit of point-like monopoles and the zero thickness string, (35) holds true also for that system. The result of Martin and Vilenkin was confirmed by some of us for the case of fully-resolved confined \(SU(2)\) 't Hooft-Polyakov monopoles [7]. The angular emission in the point-like limit was instead found to scale according to (36) by [35].
The emitted instantaneous power for a confined monopole-antimonopole pair is given by \(P\sim G\,\Lambda^{4}\)[9]. While in our numerical simulation, we have little leverage on the string tension \(\mu=\Lambda^{2}\), we observe that the gravitational signal amplitude is roughly compatible with this scaling as we changed the value of \(m_{v_{\psi}}\). Moreover, we observe that for low enough \(m_{v_{\psi}}\) the radiation from the impact could become comparable to the one from the slingshot for a sufficiently short stretching of string.
In our analysis, we focus on monopoles as the representative objects on which the confined flux can terminate. However, as we discuss below, the current analysis is general and applies also to the case of confined heavy quarks connected by gauge strings. For this latter case, the signal is produced in the regime \(M\gtrsim\Lambda\), with \(\Lambda\sim\sqrt{\mu_{\mathrm{string}}}\) being the confinement scale, and \(M\) being the mass of the monopole or quark. In particular, we showed that the energy spectrum decays as \(\omega^{-1}\) and that the beaming angle of emission displays a \(\omega^{-1/2}\) behavior in the considered parameter space. Therefore, a phase transition between an unconfined and confined phase can provide a specific signal in the form of gravitational waves coming from a slingshot effect.

Figure 6: Angular dependence of the radiated gravitational energy as a function of \(\theta\) and \(\omega\). The parameters chosen are the same as outlined in the text. The angle \(\theta\) gives the direction of radiation with respect to the acceleration axis. \(\theta=0\) corresponds to the direction of acceleration (to the left).
## VII Implications for QCD
Our analysis has direct implications for QCD-like gauge theories with coexisting domains with confined and deconfined phases. Such a system was originally considered in [1]. This setup possesses a domain wall on which the \(SU(2)\) gauge theory is deconfined. Later in [10] the domain wall was replaced by the vacuum layer the width of which can be arbitrarily adjusted. This is the setup we shall consider now. The Lagrangian has the following form,
\[\mathcal{L}= -\frac{1}{2}\operatorname{Tr}\left(G^{\mu\nu}G_{\mu\nu}\right)+ \operatorname{Tr}\left(\left(D_{\mu}\phi\right)^{\dagger}\left(D^{\mu}\phi \right)\right)-U(\phi)\] \[+i\bar{Q}\gamma^{\mu}D_{\mu}Q-M_{Q}\bar{Q}Q\,, \tag{37}\]
where \(\phi\) is a Higgs field in the adjoint representation of the \(SU(2)\) gauge symmetry with the potential
\[U(\phi)=\lambda\operatorname{Tr}\left(\phi^{2}\right)\left(\operatorname{Tr} \left(\phi^{2}\right)-\frac{v_{\phi}^{2}}{2}\right)^{2}. \tag{38}\]
The \(SU(2)\) gauge sector of the theory, as well as the corresponding notations, are the same as in previous examples. The fermion content of the theory consists of (for simplicity) a single flavor of a heavy quark \(Q\), in the fundamental representation of \(SU(2)\). Under "heavy" we mean that the mass of the quark \(M_{Q}\) is above the confinement scale of the \(SU(2)\) theory, which we denote by \(\Lambda\).
The potential \(U(\phi)\) possesses the following two vacua. In the first vacuum \(\phi=0\), the perturbative spectrum of the theory consists of an adjoint scalar of mass \(m_{\phi}\sim\sqrt{\lambda}v_{\phi}^{2}\), the fundamental quark of mass \(M_{Q}\), and a massless \(SU(2)\) gauge field. The effective low energy theory is therefore a massless Yang-Mills. As it is well-known, this theory becomes confining and generates a mass gap at the corresponding QCD scale \(\Lambda\). Correspondingly, an electric flux of gluons confines into flux tubes that represent QCD strings [36; 37] with tension \(\mu_{\text{string}}\sim\Lambda^{2}\). The lowest mass excitations about this vacuum are colorless glueballs, which can be thought of as closed QCD strings. The spectrum also includes mesons, which represent quark-antiquark pairs connected by flux tubes, i.e. open strings.
The effect of the adjoint scalar \(\phi\) on the confinement can be consistently ignored for \(\Lambda\ll m_{\phi}\), which we assume for definiteness.
In the second vacuum, classically, we have \(\phi=v_{\phi}\), and thus the \(SU(2)\) gauge group is Higgsed down to the \(U(1)\) subgroup. The bosonic spectrum of the theory consists of a real \(U(1)\) neutral scalar of mass \(m_{\phi}=\sqrt{\lambda}v_{\phi}^{2}\), a charged (complex) massive gauge boson of mass \(m_{v_{\phi}}=gv_{\phi}\), and a massless Abelian \(U(1)\) gauge field. In addition, of course, there exists a massive fermion \(Q\). For \(m_{v_{\phi}},m_{\phi}\gg\Lambda\), the quantum effects from the massive modes can be safely ignored and the effective low-energy theory consists of a \(U(1)\) gauge theory in the Coulomb phase.
The theory possesses a domain wall solution separating the two phases. Classically the solution can be found exactly. In the above approximation, the quantum corrections to the shape of the solution are small, but they can lift the degeneracy of the two vacua. The bias can create a pressure difference which accelerates the wall. This does not change much in our discussion, and in fact, the controlled acceleration of the wall can be welcome for the study of the scattering as we have seen in the numerical simulations. The bias can be controlled by proper adjustment of the parameters.
The long-distance physical effects of the heavy quark in the two domains are different. In the \(U(1)\) domain, the quark produces a \(U(1)\) Coulomb electric field. In the \(SU(2)\) domain, the quark is a source of the flux tube. This flux tube can either terminate on an antiquark or on the wall. In the latter case, the QCD-electric flux flowing through the tube opens up in the form of the \(U(1)\) Coulomb flux on the other side of the wall.
Notice that a long QCD string can break by nucleation of quark-antiquark pairs. However, this process is exponentially suppressed for \(M_{Q}>\Lambda\), and the string can be stable for all practical purposes. This suppression is similar to the exponential suppression of the decay of a magnetic Nielsen-Olesen string via nucleation of monopole-antimonopole pairs. In what follows we shall assume the regime of such stability.
This structure makes it clear that in certain aspects the model (37) represents an electric "dual" of the previously discussed model (1). The role of monopoles is played by the heavy quarks, whereas the role of the magnetic field is taken up by the electric flux of QCD. In both cases, the wall separates the confining and Coulomb phases for the given flux.
The problem of monopole scattering at the wall is mapped on the scattering between the wall and a heavy quark. When a quark moves across the wall from the \(U(1)\) phase to the confining one the flux carried by it stretches in the form of the string that opens up as a Coulomb flux at the entry point. By conservation of the \(U(1)\) flux, the charge measured by an observer in the Coulomb vacuum must exactly match the \(U(1)\) charge of the initial quark.
However, this conservation can be fulfilled in two ways. Upon entry into the confining vacuum, the string may stretch without breaking up leaving the initial quark as the source of the flux. Alternatively, the string may break up by nucleating additional quark-antiquark pairs or closed strings. That is the system can transfer most of its initial energy into mesons and glueballs.
The process is very similar to the scattering of a magnetic monopole at the wall. In that case, we saw that the string never breaks up. If the analogy can be trusted, we would conclude that the same must be true for the case of a quark entering the confinement domain from the Coulomb one.
For \(M_{Q}\gg\Lambda\) such a behavior is relatively easy to justify. Two factors are decisive: 1) the continuous memory of the initial state; and 2) the softness of the process.
First, notice that by the gauge invariance, the \(U(1)\) charge is fully conserved. The generation of the mass gap in the confining domain does not affect this conservation. Due to this, a charge placed in the \(U(1)\) Coulomb domain never creates any image charges on the confining side, and the flux is repelled without any screening [1; 10]. This is the key to the localization of a massless photon by the mechanism of [1].
Therefore, when the quark crosses the wall and enters the confining region, the memory of the initial state is maintained in the form of the Coulomb electric flux. The total flux is conserved and is exactly equal to the \(U(1)\) charge carried by the quark. This flux can be monitored by measuring it on the Coulomb side of the wall. Now, when the free heavy quark enters the confining region, no hard collision takes place. The Coulomb-electric flux carried by the quark gathers into a tube of thickness \(\Lambda^{-1}\), which is much larger than the de Broglie and Compton wavelengths of the quark. Correspondingly, the dynamics are soft and no processes with momentum transfer exceeding the quark mass take place. Consequently, the probability of quark-antiquark pair creation is exponentially suppressed. This results in a slingshot effect during which a long thick string is stretched in the wake of the quark. The resulting deceleration process is soft. The decay of a formed long string via pair creation is exponentially suppressed due to the usual reasons.
This reasoning must remain applicable also for the lower masses of quarks that are closer to the QCD scale, \(M_{Q}\gtrsim\Lambda\), as long as the exponential suppression of the string decay is maintained. That is, provided the parameters are such that the static long string is stable against the breakup via pair-nucleation, the relativistic quark entering the confining domain is expected to exhibit a slingshot effect.
Just like in the monopole case, this outcome is different from what is expected from the collision of a quark-antiquark pair connected by a string. In this case, due to the absence of a net \(U(1)\) charge, upon annihilation of quarks, the memory about the pre-existing charge dipole is gone. The system then chooses to hadronize in a multiplicity of glueballs and mesons rather than to stretch a long string.
Apart from its quantum field theoretic importance, the slingshot effect with quarks can have equally interesting cosmological implications, since the coexistence of confined and deconfined phases are generic in the cosmological evolution of various extensions of the standard model, such as grand unification. The quark slingshot effect can supplement the mechanism of the primordial black hole formation proposed in [26]. Now, instead of quarks connected by a string, the black hole can form by smashing a highly energetic quark accelerated by a slingshot into a wall. In addition, the quark slingshot effect can be the source of gravitational waves in a way very similar to the monopole slingshot case discussed in the previous chapter.
## VIII Slingshot of Confined Vortices and Strings
The slingshot effect is not limited to confined point-like sources, such as monopoles or quarks with attached strings. Both are objects of co-dimension 3 confined by a connector of co-dimension 2 (string). Objects of different co-dimensionality can exhibit the slingshot effect. In general, sources of co-dimension \(d\) are confined by co-dimension \(d-1\) agents. For example, in \(3+1\) dimensions, strings that have co-dimension 2 can be confined by domain walls which are co-dimension 1 objects. A well-known example of such confinement is provided by strings bounding domain walls that stretch between them [38; 39]. Similarly, in \(2+1\) dimensions, vortices can be confined by strings [11].
In the current section, we shall study the slingshot effect for this case. For this, we shall extend the model of confined vortices in \(2+1\) dimensions (strings in \(3+1\) dimensions) introduced in [11], by allowing an additional vacuum in which vortices (strings) are not confined. The \(2+1\)-dimensional model of this sort has double usefulness as on the one hand it captures the dynamics of the string slingshot in \(3+1\) dimensions and on the other hand it represents a toy version of the monopole slingshot discussed in the previous sections.
The key concept of the model involves replacing the adjoint and fundamental scalar fields with complex scalar fields of different charges under an Abelian symmetry. Instead of an \(SU(2)\) gauge model, we consider a \(U(1)\) gauge theory. The symmetry-breaking mechanism involves two scalar fields. The first scalar field, of charge \(q_{\phi}=g\) and denoted as \(\phi\), breaks the \(U(1)\) symmetry down to a \(Z_{2}\) subgroup. This discrete symmetry is broken further by a second scalar field of charge \(q_{\chi}=\frac{g}{2}\), referred to as \(\chi\). This field changes sign under the \(Z_{2}\) transformation.
The Lagrangian governing this model is expressed as follows [11]
\[\mathcal{L}= (D_{\mu}\phi)^{*}(D^{\mu}\phi)+(D_{\mu}\chi)^{*}(D^{\mu}\chi)\] \[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-U(\phi,\chi)\,, \tag{39}\]
with the potential
\[U(\phi,\chi)= \lambda_{\phi}(\left|\phi\right|^{2}-v_{\phi}^{2})^{2}+\lambda_{ \chi}(\left|\chi\right|^{2}-v_{\chi}^{2})^{2}\left|\chi\right|^{2}\] \[+\beta\phi^{*}\chi^{2}+c.c.\,. \tag{40}\]
The covariant derivatives and the field strength tensor are given by
\[F_{\mu\nu} =\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,, \tag{41}\] \[D_{\mu}\phi =\partial_{\mu}\phi+igA_{\mu}\phi\,,\] (42) \[D_{\mu}\chi =\partial_{\mu}\chi+i\frac{g}{2}A_{\mu}\chi\,. \tag{43}\]
Again, the novelty as compared to [11] is that the potential for the \(\chi\) field is designed in such a way that in addition to the \(Z_{2}\)-Higgsed phase, in which both fields have non-zero vacuum expectation values, there coexists a \(Z_{2}\)-invariant phase in which the \(\chi\) field vanishes. This is possible as long as \(|\beta|\) is sufficiently small. Namely, if \(|2\beta v_{\phi}|<\lambda_{\chi}v_{\chi}^{4}\). Notice, that for simplicity we have omitted the phase-independent interaction terms such as \(|\phi|^{2}|\chi|^{2}\). Such terms do not play any role in the confinement of vortices. The crucial term in this respect is the phase-dependent interaction term with the coefficient \(\beta\). This term defines the relative charges of the two fields.
Let us now discuss the properties of vortices in these two vacua. In the \(Z_{2}\)-invariant vacuum, only the \(\phi\) field has a non-zero vacuum expectation value. Its absolute value is fixed at \(\langle|\phi|\rangle=v_{\phi}\), whereas the phase degree of freedom \(\theta_{\phi}\) becomes the longitudinal component of a massive vector field through the usual Higgs effect.
Correspondingly, the spectrum of the theory contains a Nielsen-Olesen vortex solution, given by the ansatz [5]
\[A_{i}(r,\theta) =\frac{n}{g}\varepsilon_{ij}\frac{r^{j}}{r^{2}}K(r)\,, \tag{44}\] \[\phi(r,\theta) =v_{\phi}e^{in\theta}H(r)\,, \tag{45}\]
where \(n\) is the winding number, and \(K(r)\) and \(H(r)\) are the profile functions that we found again by a numerical relaxation method by solving the following differential equations
\[K^{\prime\prime} =\frac{K^{\prime}}{r}-m_{v_{\phi}}^{2}H^{2}(1-K)\,, \tag{46}\] \[H^{\prime\prime} =-\frac{H^{\prime}}{r}+\frac{(1-K)^{2}}{r^{2}}n^{2}H+\frac{m_{h _{\phi}}^{2}}{2}H(H^{2}-1)\,. \tag{47}\]
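A minimal relaxation sketch for these profile equations is shown below (our own illustration, not the production code); the boundary conditions \(H(0)=K(0)=0\) and \(H,K\to 1\) at large \(r\) encode the standard Nielsen-Olesen behavior and are our assumption here.

```python
import numpy as np

# Fictitious-time relaxation sketch for the profile equations (46)-(47).
def vortex_profiles(n=1, m_v=1.0, m_h=1.0, R=20.0, dr=0.05, iters=100_000):
    r = np.arange(dr, R + dr, dr)
    H = np.tanh(r)                  # initial guesses with the right asymptotics
    K = 1.0 - np.exp(-r)
    eps = 0.2 * dr**2               # fictitious-time step (diffusion-limited)
    for _ in range(iters):
        Hpp = (np.roll(H, -1) - 2 * H + np.roll(H, 1)) / dr**2
        Hp = (np.roll(H, -1) - np.roll(H, 1)) / (2 * dr)
        Kpp = (np.roll(K, -1) - 2 * K + np.roll(K, 1)) / dr**2
        Kp = (np.roll(K, -1) - np.roll(K, 1)) / (2 * dr)
        resH = Hpp + Hp / r - n**2 * (1 - K)**2 * H / r**2 \
               - 0.5 * m_h**2 * H * (H**2 - 1)          # residual of Eq. (47)
        resK = Kpp - Kp / r + m_v**2 * H**2 * (1 - K)   # residual of Eq. (46)
        H[1:-1] += eps * resH[1:-1]
        K[1:-1] += eps * resK[1:-1]
        H[0], K[0] = 0.0, 0.0       # r -> 0
        H[-1], K[-1] = 1.0, 1.0     # r -> infinity
    return r, H, K
```

The fictitious-time step is diffusion-limited, so the iteration count may need to be adjusted together with the grid spacing.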
In the \(Z_{2}\)-Higgsed phase also the \(\chi\) field gets a non-zero vacuum expectation value. For a small enough \(\beta\)-term, its absolute value is approximately equal to \(\langle|\chi|\rangle\simeq v_{\chi}\). Due to this, the gauge field receives a further mass contribution \(m_{v_{\chi}}=v_{\chi}g/\sqrt{2}\). The Higgs masses are approximately given by \(m_{h_{\phi}}=2\sqrt{\lambda_{\phi}}v_{\phi}\) and \(m_{h_{\chi}}=2\sqrt{\lambda_{\chi}}v_{\chi}^{2}\).
The further breaking of the \(Z_{2}\) symmetry by the vacuum expectation value of \(\chi\) puts the \(\phi\) vortices in the confining phase [11]. The dominant effect is due to the interaction \(\beta\)-term, which is phase-dependent. Notice that without this term, no confinement would occur.
The reason is the following. For \(\beta=0\), the theory would be invariant under two independent global symmetries \(U(1)_{\chi}\times U(1)_{\phi}\) with only one subgroup being gauged. The gauged subgroup leaves the following combination of the phases invariant,
\[\Theta\equiv\theta_{\phi}-2\theta_{\chi}\,. \tag{48}\]
This gauge-invariant phase shifts under the additional global \(U(1)\) symmetry which emerges for \(\beta=0\). The breaking of this symmetry by the combination of the two vacuum expectation values results in the emergence of a massless Goldstone boson. In the regime \(v_{\phi}\gg v_{\chi}\), this would-be-Goldstone boson resides mostly in the phase \(\theta_{\chi}\).
Correspondingly, for \(\beta=0\), the vacuum expectation value of \(\chi\) would lead to the formation of a second type vortex. Around each vortex, the two phases can in general have independent winding numbers.
Notice that some vortices would be "semi-global" [40]. In particular, a vortex around which both fields have unit winding numbers would have a logarithmically divergent gradient energy since the gauge field would be unable to compensate the winding of both phases simultaneously due to the difference in their gauge charges.
We are interested in the regime of confined vortices which takes place for \(\beta\neq 0\). First notice that, since the \(\beta\)-term explicitly breaks the global \(U(1)\) symmetry, the would-be-Goldstone degree of freedom gets the mass
\[m_{g}^{2}\simeq|4\beta v_{\phi}|\,. \tag{49}\]
Minimization of the \(\beta\)-term forces the alignment of the phases of the \(\phi\) and \(\chi\) fields. For \(\beta<0\), the term is minimized for
\[\theta_{\phi}=2\theta_{\chi}\,. \tag{50}\]
However, such a relationship cannot be maintained everywhere around the \(\phi\) vortex with winding number one, around which the phase shift is \(\Delta\theta_{\phi}=2\pi\). In the light of (50), this would imply that the corresponding change of the phase of \(\chi\) around a closed path is \(\Delta\theta_{\chi}=\pi\), which violates the single-valuedness of the vacuum expectation values.
To avoid the conflict, the field compromises: The presence of the \(\beta\)-term makes sure that around the closed contour enclosing the \(\phi\) vortex the phase of \(\chi\) experiences a jump (rapid change) from \(\pi\) to \(2\pi\) within a region of thickness \(\sim m_{g}^{-1}\). This region represents a string that is attached to the \(\phi\) vortex.
Far away from the vortex core, the corresponding configuration for the gauge invariant combination of the two phases (48) can be found by solving the sine Gordon equation,
\[\Theta^{\prime\prime}-m_{g}^{2}\sin(\Theta)=0\,, \tag{51}\]
where the derivative is taken with respect to a perpendicular coordinate \(y\). This equation has a well-known solution,
\[\Theta(y)=4\tan^{-1}(e^{m_{g}y})\,, \tag{52}\]
which interpolates from \(\Theta=0\) to \(\Theta=2\pi\).
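As a consistency check, writing \(u\equiv e^{m_{g}y}\) one finds
\[\Theta^{\prime}=\frac{4m_{g}u}{1+u^{2}}\,,\qquad\Theta^{\prime\prime}=\frac{4m_{g}^{2}\,u\,(1-u^{2})}{(1+u^{2})^{2}}=m_{g}^{2}\sin\!\left(4\tan^{-1}u\right)=m_{g}^{2}\sin\Theta\,,\]
so (52) indeed solves (51).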
In the vacuum with broken \(Z_{2}\) symmetry, the string can terminate on another vortex or an antivortex, and the two get confined. In the present case, we have a separate domain with unbroken \(Z_{2}\) symmetry. This gives a possibility for the \(Z_{2}\) string to terminate on a domain wall separating the two phases.
The domains with free and confined \(\phi\) vortices are separated by a domain wall in which the \(\chi\) field interpolates from \(0\) to \(v_{\chi}\). This domain wall solution can be found by fixing the \(U(1)\) direction, e.g. \(\xi=\operatorname{Re}\chi\), and solving the corresponding Bogomolny equation [15], which gives
\[\xi(x)=\frac{v_{\chi}}{\sqrt{1+e^{m_{h_{\chi}}x}}}\,. \tag{53}\]
This domain wall separates the \(Z_{2}\)-invariant phase from the \(Z_{2}\)-Higgsed phase.
We now turn to the analysis of a slingshot effect experienced by a \(\phi\) vortex that passes from the \(Z_{2}\)-invariant domain into the Higgsed \(Z_{2}\) domain. Just like in the case of a monopole, the vortex stretches a string that connects it to the boundary of the two phases.
Since the gauge field is massive in both regions, it has no long-range effects. Its influence vanishes exponentially at distances larger than the Compton wavelength of the photon. Consequently, a detailed study of its behavior in close proximity to the domain wall is unnecessary, provided we assume an initial configuration in which the vortex is far enough away from the wall. This implies that the ansatz for the \(\phi\) and \(A_{\mu}\) fields in the numerical simulation does not require adaptation to account for the presence of the wall. The ansatz for the \(\chi\) field can be written as
\[\chi(x,y)=\xi(x)\,e^{in\theta/2}\,, \tag{54}\]
where \(\theta=\arctan\left(y/(x-x_{0})\right)\), \(x_{0}\) being the position of the vortex. The above choice for \(\chi\) minimizes the \(\beta\) coupling in the broken \(Z_{2}\) phase. Moreover, ansatz (54) ensures the single-valuedness of \(\chi\) since \(\xi(x)\) is vanishing in the unbroken region.
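A sketch of how this initial field could be laid down on the two-dimensional grid is given below (our own helper; `xi_x` is the wall profile of Eq. (53) evaluated on the \(x\)-coordinates and `x0` the vortex position):

```python
import numpy as np

# Sketch of the chi initial data of Eq. (54) on a 2D (x, y) grid.
def initial_chi(xs, ys, xi_x, x0, n=1):
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    theta = np.arctan2(Y, X - x0)          # azimuthal angle around the vortex
    return xi_x[:, None] * np.exp(1j * n * theta / 2.0)
```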
In order to conduct the simulation, we employed the same numerical methods as described earlier. However, since our current analysis is limited to two dimensions, the axial symmetry method is not necessary. The lattice size, lattice spacing, time step, and the investigated time interval remained the same as in the magnetic monopole setup. Furthermore, the boundary conditions stay similar. Absorbing boundaries were utilized for \(\phi\) and \(A_{\mu}\), while the Dirichlet boundary condition was applied to \(\chi\) in the \(x\)-direction, accompanied by a periodic boundary condition in the \(y\)-direction. Note that the imaginary part of \(\chi\) is anti-symmetric in \(y\)-direction.
We set \(m_{v_{\phi}}\) and \(m_{h_{\phi}}\) to one and \(g=1/\sqrt{2}\). Additionally, we took the following parameter values: \(m_{v_{\chi}}=0.3m_{v_{\phi}}\), \(m_{h_{\chi}}=0.8m_{v_{\phi}}\), and \(\beta=-0.01m_{v_{\phi}}^{3/2}\).
The initial distance between the vortex and the wall was chosen to be \(d=40m_{v_{\phi}}^{-1}\) and the velocities were \(0.8\) and \(-0.8\) for the vortex and domain wall respectively.
From the simulation, we can observe that the vortex stretches a \(Z_{2}\) string when it enters the \(Z_{2}\)-Higgsed phase as can be seen in Fig. 7. The formation happens very similarly to the magnetic monopole case. The qualitative difference is that there is no magnetic flux inside the \(Z_{2}\) string due to the short-range behavior of the vortex gauge field.
The minimization of the interaction term results in \(\chi^{2}\) being proportional to \(\phi\). Given the vortex's winding number of one, this proportionality implies a winding of \(1/2\) in the \(\chi\) field at the end of the string. Consequently, the field vectors exhibit a rotation by \(\pi\) around the string's end. Within the string, the phase is changing according to equation (52). This rotational behavior explains why the \(Z_{2}\) string does not detach, as a rotation by \(2\pi\) is necessary for the detaching to occur. Therefore, the formation of the \(Z_{2}\) string is purely explained by topology.
Unlike in the case of monopoles or quarks in \(3+1\) dimensions, the slingshot effect of vortices (strings) happens without the confinement of the gauge flux. Instead, what confines within the string connecting two vortices is the gradient flux of the Goldstone field, which in the \(\beta=0\) limit becomes uniformly distributed around the vortex, resulting in a \(2+1\)-dimensional Coulomb interaction between them. For vortices separated by a distance \(r\) the interaction potential is \(\propto\ln(r)\). If \(\beta\neq 0\), for distances \(r\gg m_{g}^{-1}\), the potential is converted into a linear confining potential \(\propto r\).
Again, we can add an antivortex that enters the string later, leading to the breaking of the string and the subsequent annihilation of the vortex-antivortex pair. However, the \(2+1\)-dimensional model possesses a distinctive feature not present in the magnetic monopole model. In this case, it is possible for a second vortex to enter the system instead of an antivortex. As a result, the string breaks, causing the two vortices to be drawn together until they form a bound state. This bound state exhibits a winding number of two in the \(\phi\) field and a winding number of one in the \(\chi\) field.
During the collision, we observe that the two vortices scatter at an angle of \(\pi/2\). This scattering behavior has been previously explained and analyzed using the moduli space approximation in [41; 42]. Due to the binding effect of the \(\chi\) field on the two vortices, this right-angle scattering occurs repeatedly. In Fig. 8 two moments of this bound state are illustrated.
The results of the simulations can be also found in the video attached as an ancillary file or at the following link: [https://youtu.be/IPJAPjo3nSc7si](https://youtu.be/IPJAPjo3nSc7si)
The behavior observed in the \(2+1\)-dimensional model can be seamlessly extended to the three-dimensional case. The \(\phi\)-vortices are lifted into strings that extend in an additional dimension. Furthermore, the \(Z_{2}\) string that in \(2+1\) confines vortices, in \(3+1\) is lifted into a domain wall that confines strings.

Figure 7: The scalar field vector \((\mathrm{Re}\,\phi,\mathrm{Im}\,\phi)^{T}\) (top) and \((\mathrm{Re}\,\chi,\mathrm{Im}\,\chi)^{T}\) (bottom) at time \(t=100m_{v_{\phi}}^{-1}\). The length values are given in units of \(m_{v_{\phi}}^{-1}\). The red line represents the contour \(|\chi|=v_{\chi}/2\) and the black circle the contour \(|\phi|=v_{\phi}/2\). We can observe that the vortex that is entering the \(Z_{2}\)-Higgsed phase is connected to the domain wall by a \(Z_{2}\) string.

Figure 8: The scalar field vector \((\mathrm{Re}\,\phi,\mathrm{Im}\,\phi)^{T}\) at times \(t=120m_{v_{\phi}}^{-1}\) and \(t=180m_{v_{\phi}}^{-1}\). The frames show moments after two vortices of the same winding entered the \(Z_{2}\)-Higgsed phase one after the other. We observe that the two vortices are connected by a string and form an oscillating bound state.
In a manner analogous to the connected vortex-vortex pair and vortex-antivortex pair, we can now have a string-string pair and string-antistring pair connected by a domain wall. Within the string-string scenario, the entities align to create a bound state, adopting a cable-like configuration characterized by identical right-angle oscillations as witnessed in the case of vortices. In the string-antistring case, however, they will annihilate.
Just like in the monopole/quarks case, the string/vortex slingshot effect can have cosmological implications as it is expected to take place in various extensions of the standard model.
## IX Conclusion and outlook
In this paper, we introduced and numerically studied the slingshot effect and its implications such as gravitational waves.
In the first example, we studied the scattering process in which a magnetic monopole crosses a domain wall separating the vacua of magnetic-Coulomb and magnetic-confining phases. The setup is achieved by variants of an \(SU(2)\)-symmetric model with coexisting phases of the type discussed earlier [3]. It possesses two vacuum states. In one of them, the \(SU(2)\) is Higgsed down to \(U(1)\) and the spectrum contains 't Hooft-Polyakov monopoles. In the neighboring vacuum, the \(U(1)\) symmetry is further Higgsed and the photon has a non-zero mass. In this vacuum, the monopoles can only exist in a confined form. The magnetic flux of the monopole is trapped in a tube, the Nielsen-Olesen string [5]. The two vacua are separated by a domain wall.
We study a process in which the monopole with a high kinetic energy crosses over from the \(U(1)\) Coulomb phase into the Higgs phase. We observe that upon entering the \(U(1)\) Higgs domain, the monopole becomes connected to the wall by a long string. The string carries the magnetic flux of the monopole which opens up on the other side of the wall in the form of the Coulomb-magnetic field.
Despite the fact that the conservation laws permit the disposal of the kinetic energy of the monopole in the form of waves, without stretching a long string, this does not happen. Instead, the system creates a string that follows the monopole. The string tension tends to pull the monopole back towards the wall, exhibiting a sort of slingshot effect.
This outcome is different from the previously studied case [7] of scattering of a monopole-antimonopole pair connected by a string. In the point-like approximation, which does not resolve the structure of monopoles and strings, one cannot exclude that monopoles pass through each other, oscillating and re-stretching the string multiple times [9]. However, the simulation of the fully resolved system [7] showed that in a head-on collision, the monopoles never pass through each other. Instead, they decay into waves.
In [7] this behavior was explained by the following factors. First, when the monopole and antimonopole overlap, they effectively erase the magnetic charges of each other and the system forgets about the magnetic dipole. Also, as in the generic cases of the erasure of defects [2], the coherence is lost. From this point on, the system evolves into the highest entropy configuration which is given by the waves, as opposed to monopoles connected by a long string. The latter configuration carries much lower entropy. The outcome can be interpreted as a particular case of a generic phenomenon the essence of which is an exponential suppression of the creation of the low-entropy macroscopic objects in collision processes [8; 27].
We explained that in the present case, the situation is very different due to the conservation of the net magnetic charge and the softness of the monopole-wall collision. At no point does the monopole encounter a phase in which the expectation value of the adjoint Higgs vanishes. Therefore, neither the coherence nor the memory of the preexisting state is lost. The monopole, due to its high kinetic energy, enters the confining phase softly and its magnetic flux stretches in the form of a string.
We argued that similarly to the earlier discussed analogy between confined quarks and monopoles [7], the current behavior must also be shared by a dual QCD-like theory.
In order to make the mapping more precise, as an electric dual version of the present model, we have used the construction analogous to [3; 10]. This gauge theory represents the \(SU(2)\) QCD which possesses two vacua. In one vacuum, the theory confines at a scale \(\Lambda\), and quarks are connected by the QCD flux tubes. In the other vacuum, \(SU(2)\) is Higgsed down to \(U(1)\) and the theory is in the \(U(1)\) Coulomb phase mediated by a massless photon. The theory possesses a domain wall separating the two phases. Due to the mass gap \(\Lambda\) in the confining vacuum, the massless photon is repelled from there and is localized within the Coulomb vacuum via the dual-Meissner mechanism of [1].
In analogy with the monopole case, we consider a scattering process in which an energetic heavy quark crosses over from the \(U(1)\) Coulomb domain into the \(SU(2)\)-confining one. We argued that the same behavior is expected as in the case of a monopole in the dual theory. That is, upon entering the confining phase, the quark will softly stretch a long QCD string. The string transports the electric flux of the quark to the wall and spreads it out in the other domain in the form of the \(U(1)\) Coulomb field. This should be the likely outcome as opposed to hadronizing into a high multiplicity of mesons and glueballs.
Our reasoning is the same as in the monopole case. The conservation of the \(U(1)\) charge forces the system to maintain the quark. The creation of additional quark-antiquark pairs that would break the string requires collisions with high momentum transfer. These are absent
since the quark-wall collision is soft. Correspondingly, the system chooses the QCD slingshot effect as the likely outcome.
Our results have a number of implications. First, as discussed, they allow us to capture certain important parallels between the behaviors of confined monopoles and quarks. In particular, in processes involving the traversal of domain walls between confined and deconfined phases, both are expected to exhibit the slingshot effect.
This effect can also have a number of important cosmological consequences, since the above phases are expected to coexist at several stages of the universe's evolution. One observable imprint can occur in the form of gravitational waves. Within this paper, we scrutinized the energy spectrum and emission direction of radiation from the slingshot scenario. Our observations reveal that the spectrum exhibits an \(\omega^{-1}\) trend within our region of parameter space, akin to the behavior arising from the evolution of a cosmic string connecting a magnetic monopole and antimonopole [7; 9]. Moreover, the emission is beamed along the direction of acceleration, with a beaming angle that scales with frequency as \(\theta\propto\omega^{-1/2}\) within our range of parameters.
In the last part of this work, we investigated the slingshot effect for the case of a vortex in \(2+1\) dimensions, which can be extended to a theory with cosmic strings in \(3+1\) dimensions that are confined by domain walls.
Considering that cosmic strings are objects that can occur during a phase transition in the early universe, this scenario may also leave relevant marks in the gravitational wave background. Further explorations into this direction are left for future studies.
## Acknowledgements
This work was supported in part by the Humboldt Foundation under Humboldt Professorship Award, by the European Research Council Gravities Horizon Grant AO number: 850 173-6, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868, and Germany's Excellence Strategy under Excellence Cluster Origins.
**Disclaimer:** Funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
|
2310.00083 | Magnetic order in the two-dimensional metal-organic framework manganese
pyrazinecarboxylate with Mn-Mn dimers | The magnetic properties of [Mn(pyrazinecarboxylate)2]n (Mn-pyrazine),
empirical formula C10H6MnN4O4, are investigated through susceptibility, heat
capacity and neutron scattering measurements. The structure of Mn-pyrazine
consists of Mn-Mn dimers linked on a distorted 2D hexagonal structure. The weak
out of plane interactions create a quasi-2D magnetic material within the larger
three dimensional metal organic framework (MOF) structure. We show that this
material undergoes a two stage magnetic transition, related to the low
dimensionality of the Mn lattice. First at 5 K, which is assigned to the
initial development of short range order in the 2D layers. This is followed by
long range order at 3.3 K. Applied field measurements reveal the potential to
induce magnetic transitions in moderately small fields of 2 T. Neutron powder
diffraction enabled the determination of a unique magnetic space group P21'/c
(#14.77) at 1.5 K. This magnetic structure consists of antiferromagnetically
coupled Mn-Mn dimers with spins principally along the out of plane a-axis. | S. Calder, R. Baral, N. Narayanan, L. D. Sanjeewa | 2023-09-29T18:40:45Z | http://arxiv.org/abs/2310.00083v1 | Magnetic order in the two-dimensional metal-organic framework manganese pyrazinecarboxylate with Mn-Mn dimers
###### Abstract
The magnetic properties of [Mn(pyrazinecarboxylate)\({}_{2}\)]\({}_{n}\) (Mn-pyrazine), empirical formula C\({}_{10}\)H\({}_{6}\)MnN\({}_{4}\)O\({}_{4}\), are investigated through susceptibility, heat capacity and neutron scattering measurements. The structure of Mn-pyrazine consists of Mn-Mn dimers linked on a distorted 2D hexagonal structure. The weak out of plane interactions create a quasi-2D magnetic material within the larger three dimensional metal organic framework (MOF) structure. We show that this material undergoes a two stage magnetic transition, related to the low dimensionality of the Mn lattice. First at 5 K, which is assigned to the initial development of short range order in the 2D layers. This is followed by long range order at 3.3 K. Applied field measurements reveal the potential to induce magnetic transitions in moderately small fields of \(\sim\)2 T. Neutron powder diffraction enabled the determination of a unique magnetic space group \(P2_{1}^{\prime}/c\) (\(\#\)14.77) at 1.5 K. This magnetic structure consists of antiferromagnetically coupled Mn-Mn dimers with spins principally along the out of plane \(a\)-axis.
## I Introduction
Investigations of two-dimensional (2D) layered magnetic materials have revealed properties of interest for both fundamental and applied research. This is exemplified by graphene and beyond-graphene materials where the bulk compound has 2D layers weakly coupled through van der Waals bonding that can often be exfoliated or otherwise isolated down to a few or single layers [1; 2; 3; 4; 5; 6]. In these inorganic compounds numerous non-trivial topological and quantum behaviors have been observed and predicted due to the low dimensionality enhancing this behavior. Investigations on inorganic quasi-2D materials have uncovered topologically protected Skyrmions [7; 8], quantum spin liquids with emergent Majorana fermions [9] and a spontaneous topological Hall effect [10]. Conversely, the tunability of coordination polymers, or equivalently magnetic metal-organic frameworks (MOFs), offers a powerful but less explored material space to achieve analogous physics when magnetic metal ions are added to well isolated 2D layered coordination structures [11; 12; 13; 14; 15; 16]. An extremely large variety of structures are available through often highly predictable organic chemistry routes. The ability to control the in-plane 2D motif, the spacing of layers, and the potential to introduce hybrid functionality on the organic linkers afford multiple intriguing research avenues for magnetic coordination polymers in the realm of quantum materials.
The material [Mn(pzc)\({}_{2}\)]\({}_{n}\) (pzc = pyrazinecarboxylate), henceforth referred to as Mn-pyrazine, contains well isolated 2D layers of Mn\({}^{2+}\) ions. Ref. [17] is the only previous literature report on this material, with x-ray diffraction and magnetic susceptibility measurements. The structure contains only one Mn site; however, the Mn ions within the layer have two distinct bonding environments with spacings of \(\sim\)3.5 Å and \(\sim\)5.6 Å. This results in Mn-Mn dimers that are linked to form a distorted 2D hexagonal network. These 2D layers are well isolated by an interlayer Mn-Mn distance of \(\sim\)9.4 Å, with the bonding containing weak hydrogen bonds between ligands. Mn-pyrazine is therefore expected to be a good realization of a quasi-2D material in a bulk compound. The previous powder magnetic susceptibility measurements were fit to a Curie-Weiss law down to 5 K and showed antiferromagnetic interactions with the Mn\({}^{2+}\) ion in the S=5/2 spin state. Inspecting the magnetic susceptibility in Ref. [17] reveals an apparent low temperature anomaly; however, there was no discussion of any potential magnetic ordering transition.
Here, we undertake magnetic susceptibility, heat capacity and neutron powder diffraction measurements that reveal long range magnetic order in Mn-pyrazine. The directional dependence of the field behavior is investigated through single crystal magnetic susceptibility measurements that show a two stage magnetic transition. In addition there is a potential in-field magnetic transition, which indicates routes to tune the magnetic properties in moderate fields. Neutron powder diffraction is utilized to investigate the crystalline and magnetic structure through temperature dependent measurements. Despite the presence of increased incoherent scattering from hydrogen in Mn-pyrazine, empirical formula C\({}_{10}\)H\({}_{6}\)MnN\({}_{4}\)O\({}_{4}\), good quality neutron diffraction data are obtained. This highlights the strength of monochromatic high flux reactor based neutron instruments coupled with the choice of large moment metal ions when investigating magnetic MOFs. At the lowest temperature of 2 K long range magnetic order is observed with several magnetic reflections identified. Symmetry analysis of this magnetic structure shows that the Mn-Mn dimers form antiferro
magnetic pairs, with spins preferentially aligned in the out of plane \(a\)-direction in the \(P2_{1}^{\prime}/c\) (#14.77) magnetic space group.
## Methods
### Synthesis
Single crystals of Mn-pyrazine were grown using the low-temperature hydrothermal method. First, a total of 0.37 grams of pyrazinecarboxylic acid (C\({}_{5}\)H\({}_{4}\)N\({}_{2}\)O\({}_{2}\)) and anhydrous manganese(II) acetate (Mn(CH\({}_{3}\)COO)\({}_{2}\)) were mixed in a stoichiometric ratio of 2:1 with 5 mL of water and 5 mL of ethanol in a small beaker. The mixture was then stirred using a magnetic stirrer until everything was fully dissolved. Then the final mixture was loaded into a Teflon-lined stainless-steel autoclave, sealed well and heated at 140\({}^{\circ}\)C for 12 hrs. After cooling to room temperature, a dark yellow solution was recovered and left to evaporate at room temperature. The Mn-pyrazine crystals formed during the solvent evaporation.
### Magnetic property characterization
Temperature-dependent and field dependent magnetic measurements were performed using a Quantum Design Magnetic Property Measurement System (MPMS). Magnetic properties were determined using one single crystal with the weight of 3.8 mg. The single crystal specimen was affixed to a quartz rod using GE varnish and temperature dependent magnetization measurements were carried out along three crystal directions. The temperature dependent data were collected in the range of 2 - 350 K in an applied magnetic field up to 50 kOe. The anisotropic isothermal magnetization measurements were performed between 2 - 100 K in magnetic fields up to 60 kOe. The heat capacity (C\({}_{\rm p}\)) of the sample was measured using a Physical Property Measurement System (PPMS) between 2 - 200 K under magnetic field in the range 0-60 kOe.
### Neutron powder diffraction
Neutron powder diffraction measurements on 5 grams of undeuterated Mn-pyrazine were carried out on the HB-2A powder diffractometer at the High Flux Isotope Reactor (HFIR), Oak Ridge National Laboratory (ORNL) [18; 19]. Hydrogen based materials present an extra challenge for neutron scattering by both adding to the neutron absorption and creating an increased background from incoherent scattering. These effects can be easier to account for with constant wavelength instruments due to the simpler data correction. Constant wavelength measurements were performed at 2.41 A from the Ge(113) monochromator reflection and 1.54 A from the Ge(115) reflection. The pre-mono, pre-sample and pre-detector collimation was open-open-12'. A pyrolytic graphite (PG) filter was placed before the sample to remove higher order reflections for the 2.41 A wavelength. The sample was contained in a 6 mm diameter vanadium can and cooled in a liquid \({}^{4}\)He cryostat in the temperature range 1.5 K - 300 K. The diffraction pattern was collected by scanning a 120\({}^{\circ}\) bank of 44 \({}^{3}\)He detectors in 0.05\({}^{\circ}\) steps to give 2\(\theta\) coverage from 5\({}^{\circ}\) to 130\({}^{\circ}\). Counting times were 8 hours for the 2.41 A measurements and 2 hours for the 1.54 A wavelength. Rietveld refinements were performed with Fullprof [20]. Symmetry allowed magnetic structures were considered using both representational analysis with SARAh [21] and magnetic space groups with the Bilbao Crystallographic Server [22]. Plots of the crystal and magnetic structure were prepared using VESTA [23].
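As a quick cross-check of the instrument configuration quoted above, the accessible momentum-transfer range follows from \(Q=4\pi\sin\theta/\lambda\); the short script below is an illustrative calculation only (not part of the original analysis) and simply plugs in the stated wavelengths and detector coverage.

```python
import numpy as np

def q_range(wavelength_A, two_theta_min_deg, two_theta_max_deg):
    """Momentum transfer Q = 4*pi*sin(theta)/lambda for a given 2-theta window."""
    theta = np.radians([two_theta_min_deg, two_theta_max_deg]) / 2.0
    return 4.0 * np.pi * np.sin(theta) / wavelength_A

for lam in (2.41, 1.54):  # the two HB-2A wavelengths quoted above (Angstrom)
    q_min, q_max = q_range(lam, 5.0, 130.0)
    print(f"lambda = {lam:.2f} A: Q from {q_min:.2f} to {q_max:.2f} 1/A")
```

This illustrates why the shorter 1.54 Å wavelength gives the wider Q coverage used for the structural refinement.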
\begin{table}
\begin{tabular}{c c c c c}
\hline
Atom & \(x\) & \(y\) & \(z\) & site \\
\hline
Mn & 0.498(3) & 0.128(2) & 1.101(3) & 4\(e\) \\
C & 0.835(2) & 0.119(1) & 1.146(1) & 4\(e\) \\
C & 0.968(2) & 0.147(1) & 1.176(2) & 4\(e\) \\
H & 1.038(3) & 0.104(2) & 1.130(2) & 4\(e\) \\
C & 0.937(1) & 0.289(1) & 1.326(1) & 4\(e\) \\
H & 0.974(3) & 0.368(2) & 1.394(2) & 4\(e\) \\
C & 0.798(1) & 0.263(1) & 1.292(1) & 4\(e\) \\
H & 0.727(2) & 0.305(2) & 1.351(2) & 4\(e\) \\
C & 0.373(2) & 0.374(1) & 0.966(2) & 4\(e\) \\
C & 0.271(1) & 0.468(1) & 0.918(1) & 4\(e\) \\
H & 0.287(3) & 0.523(2) & 0.840(2) & 4\(e\) \\
C & 0.159(1) & 0.417(1) & 1.079(2) & 4\(e\) \\
H & 0.103(2) & 0.436(3) & 1.121(2) & 4\(e\) \\
C & 0.258(2) & 0.322(1) & 1.125(1) & 4\(e\) \\
H & 0.250(2) & 0.272(2) & 1.194(2) & 4\(e\) \\
C & 0.756(1) & 0.018(1) & 1.035(1) & 4\(e\) \\
C & 0.479(1) & 0.337(1) & 0.891(1) & 4\(e\) \\
N & 0.740(1) & 0.176(1) & 1.205(1) & 4\(e\) \\
N & 1.026(1) & 0.232(1) & 1.273(1) & 4\(e\) \\
N & 0.360(1) & 0.297(1) & 1.066(1) & 4\(e\) \\
N & 0.182(1) & 0.497(1) & 0.976(1) & 4\(e\) \\
O & 0.829(2) & -0.034(2) & 0.973(2) & 4\(e\) \\
O & 0.627(2) & -0.004(2) & 1.039(1) & 4\(e\) \\
O & 0.551(2) & 0.247(1) & 0.939(1) & 4\(e\) \\
O & 0.476(2) & 0.416(1) & 0.784(2) & 4\(e\) \\
\hline
\end{tabular}
\end{table}
Table 1: Refined crystal structure parameters for Mn-pyrazine, empirical formula C\({}_{10}\)H\({}_{6}\)MnN\({}_{4}\)O\({}_{4}\), at 20 K for space group \(P2_{1}/c\) with lattice constants a=10.2078(4) Å, b=10.8444(4) Å, c=10.1095(4) Å, \(\alpha\)=90\({}^{\circ}\), \(\beta\)=108.429(3)\({}^{\circ}\), \(\gamma\)=90\({}^{\circ}\).
## Results and Discussion
### Crystal structure of Mn-pyrazine
We begin by considering the crystal structure of Mn-pyrazine, which was previously reported as being in the \(P2_{1}/c\) space group [17]. To confirm this we carried out neutron powder diffraction measurements in the paramagnetic regime of 20 K. The shorter wavelength of 1.54 A was used to give the widest Q coverage to increase the number of measured reflections. The data has the expected elevated background and Q dependence from the incoherent hydrogen scattering. There are, however, well resolved and strong nuclear Bragg peaks. This data was refined with the reported \(P2_{1}/c\) space group, with the background readily accounted for by a simple 6 coefficient polynomial function used for most measurements on this neutron powder diffractometer. The data and refinement show good agreement, see Fig. 1(a). The refined lattice constants and atomic positions are given in Table 1. Due to the large number of parameters the thermal parameters were fixed during the analysis.
The structural unit cell is shown in Fig. 1(b). Considering the local Mn environment in Fig. 1(c) indicates two distinct bonding pathways. Nearest neighbor bonds are Mn-O-Mn, which would suggest standard superexchange magnetic pathways. Whereas the next nearest neighbor
Figure 1: (a) Rietveld refinement of neutron powder diffraction data for Mn-pyrazine collected on the HB-2A instrument with a wavelength of 1.54 Å at 20 K. (b) Crystal structure of Mn-pyrazine. The box represents the unit cell in the \(P2_{1}/c\) space group. (c) Nearest and next nearest neighbor Mn ions are bonded through distinct pathways of Mn-O-Mn and Mn-O-C-O-Mn.
Figure 2: (a) View of the 2D layers in Mn-pyrazine. (b) The nearest neighbor dimers (thick red line) and next nearest neighbor (thin blue line) interactions within the 2D layers are shown between the Mn ions (purple sphere). The Mn ions form a distorted 2D honeycomb structure. The crystallographic unit cell is indicated by the grey box.
bonds are Mn-O-C-O-Mn, requiring extended superexchange magnetic interactions. The nearest neighbor distance is 3.46(5) A and next nearest neighbor is 5.71(3) A. The Mn-Mn distance between the layers is 10.2078(5) A and is mediated by a complex exchange pathway which includes weak hydrogen bonds. This creates the 2D layered structure of interest. Considering multiple unit cells, as shown in Fig. 2(a), highlights the well isolated 2D network of Mn-Mn ions. The 2D network in the \(bc\)-plane is shown in Fig. 2(b). The nearest neighbor Mn bonds can be viewed as Mn-Mn dimers which interact with the next nearest neighbor Mn ions to form a distorted 2D hexagonal layer.
### Magnetic susceptibility and heat capacity results
The initial report of Mn-pyrazine in Ref. [17] measured the magnetic susceptibility of a powder sample. Despite the apparent low temperature anomaly recorded, no detailed discussion of potential magnetic ordering was reported. In the synthesis reported here, small Mn-pyrazine crystals grew as rectangular rods. Therefore, we performed our magnetic measurements on single crystals in three orientations as displayed in Fig. 3. The magnetic susceptibility shows a broad peak below 10 K with a maximum around 5 K. Below this, it decreases continuously down to the lowest measured temperature of 2 K, indicating an antiferromagnetic transition. A broad transition is often a signature of short-range ordering associated with the layered nature of the structure. The inverse susceptibility was fitted using the Curie-Weiss law in the temperature range 100 \(<\) T \(<\) 300 K, see Fig. 3(a). This gives \(\theta_{\rm CW}\) = -7 K, indicating the antiferromagnetic nature of this compound. The effective magnetic moment from the fit was \(\mu_{\rm eff}\) = 5.7 \(\mu_{B}\)/Mn, which is comparable to the expected moment of 5.9 \(\mu_{B}\) for a Mn\({}^{2+}\) ion in the high-spin \(d^{5}\) state.
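For readers who wish to reproduce this type of analysis, a minimal sketch of a Curie-Weiss fit and effective-moment estimate is given below; the data file, column layout and initial guesses are placeholder assumptions, not details taken from the measurement.

```python
import numpy as np
from scipy.optimize import curve_fit

# Curie-Weiss law for the susceptibility: chi(T) = C / (T - theta_CW)
def curie_weiss(T, C, theta):
    return C / (T - theta)

# Placeholder data file with two columns: temperature (K), molar susceptibility (emu/mol/Oe)
T, chi = np.loadtxt("chi_vs_T.txt", unpack=True)
window = (T > 100) & (T < 300)                       # paramagnetic fitting window used above
(C, theta), _ = curve_fit(curie_weiss, T[window], chi[window], p0=(4.0, -10.0))

# For chi in emu/mol/Oe, mu_eff = sqrt(8 C) in Bohr magnetons
mu_eff = np.sqrt(8.0 * C)
print(f"theta_CW = {theta:.1f} K, mu_eff = {mu_eff:.2f} mu_B")

# Spin-only expectation for high-spin Mn2+ (S = 5/2, g = 2): g*sqrt(S(S+1)) ~ 5.92 mu_B
S = 2.5
print("expected:", 2 * np.sqrt(S * (S + 1)))
```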
The magnetic susceptibility measured in different crystal orientations is displayed in Fig. 3(b)-(c). Both in-plane and out-of-plane field directions show a broad peak around the same temperature of 5 K at low fields \(\leq\) 10 kOe. Increasing the field shifts the peak to lower temperatures. In general, the behavior under applied field is non-trivial, with potentially a spin-flop transition, which is further supported by our isothermal magnetization measurements in Fig. 3(c). These show an anomaly at 2 K and 15 kOe. The inset of Fig. 3(c), showing direction-dependent measurements, indicates this anomaly occurs only for fields applied out of the plane. Considering the derivative of the susceptibility (d\(M\)/dT) indicates a two-stage transition at T\({}_{2}\)= 5 K and T\({}_{1}\)=3 K.
To gain further insight into the low temperature behavior of Mn-pyrazine we performed heat capacity measurements. Figure 4 shows the results. The peak in the heat capacity for 0 T is observed at 3.3 K. This is lower than the broad peak in the magnetic susceptibility, which occurs around 5 K, but consistent with the lower of the two transitions seen in the d\(M\)/dT analysis. Applying a field lowers the transition temperature in the heat capacity measurements and leads to the observation of a further transition in the form of a shoulder in the heat capacity anomaly.
Figure 3: (a) Magnetic susceptibility measurements from 2 K to 300 K. The solid black line is a fit using the Curie-Weiss law. (b) Directional dependent measurements showing a broad anomaly centered around 5 K. Measurements were in a 10 kOe field for all three directions. (Inset) out of plane magnetic field dependence. (c) Isothermal magnetization measurements for a field applied out of the plane. (Inset) Directional dependent measurements at 2 K. (d) Derivative of the susceptibility reveals two anomalies at 5 K and 3.2 K in a field of 10 kOe out of the plane
This can be seen in the inset of Fig. 4 for a field of 3 T where a shoulder is measured at 3.2 K, followed by a further anomaly at 2.8 K.
Collectively the susceptibility and heat capacity results reveal multiple transitions that can be rationalized by considering the low dimensional nature of Mn-Pyrazine. We postulate that short range order occurs around 5 K in the 2D layers. This gives the broad peak in the susceptibility, rather than a sharp transition. Then at 3.3 K the transition to three dimensional long range order occurs. This is observed in the heat capacity and also the magnetic susceptibility through d\(M\)/dT as a two stage transition.
### Magnetic structure of Mn-pyrazine
We now turn to neutron powder diffraction measurements to investigate the microscopic magnetic spin structure. Diffraction patterns were collected at 1.5 K and 20 K using the 2.41 A wavelength, see Fig. 5(a). The high Q scattering is unchanged, indicating no structural symmetry change. The low Q behavior below 2 A\({}^{-1}\), however, reveals additional intensity in the form of Bragg peaks. These can be assigned to magnetic ordering, with the width of the new magnetic peaks the same as the nuclear peaks, indicating long range magnetic order at 1.5 K. The change in scattering is emphasized in the temperature difference plot in Fig. 5(b).
\begin{table}
\begin{tabular}{c c|c c}
IR & BV & Point Group & Magnetic Space Group \\
\hline
\(\Gamma_{1}\) & \(\psi_{1}\) & \(2/m\) & \(P2_{1}/c\) (\#14.75) \\
 & \(\psi_{2}\) & \(2/m\) & \(P2_{1}/c\) (\#14.75) \\
 & \(\psi_{3}\) & \(2/m\) & \(P2_{1}/c\) (\#14.75) \\
\(\Gamma_{2}\) & \(\psi_{4}\) & \(2/m\) & \(P2_{1}/c^{\prime}\) (\#14.78) \\
 & \(\psi_{5}\) & \(2/m\) & \(P2_{1}/c^{\prime}\) (\#14.78) \\
 & \(\psi_{6}\) & \(2/m\) & \(P2_{1}/c^{\prime}\) (\#14.78) \\
\(\Gamma_{3}\) & \(\psi_{7}\) & \(2/m\) & \(P2_{1}^{\prime}/c^{\prime}\) (\#14.79) \\
 & \(\psi_{8}\) & \(2/m\) & \(P2_{1}^{\prime}/c^{\prime}\) (\#14.79) \\
 & \(\psi_{9}\) & \(2/m\) & \(P2_{1}^{\prime}/c^{\prime}\) (\#14.79) \\
\(\Gamma_{4}\) & \(\psi_{10}\) & \(2/m\) & \(P2_{1}^{\prime}/c\) (\#14.77) \\
 & \(\psi_{11}\) & \(2/m\) & \(P2_{1}^{\prime}/c\) (\#14.77) \\
 & \(\psi_{12}\) & \(2/m\) & \(P2_{1}^{\prime}/c\) (\#14.77) \\
\end{tabular}
\end{table}
Table 2: The point symmetry and magnetic space group for the space group \(P2_{1}/c\) with \(\mathbf{k}=(0,0,0)\). The decomposition of the magnetic representation for the Mn site \((0.510,0.134,0.598)\) is \(\Gamma_{Mag}=3\Gamma_{1}^{1}+3\Gamma_{2}^{1}+3\Gamma_{3}^{1}+3\Gamma_{4}^{1}\).
Figure 4: (a) Zero field heat capacity measurements on Mn-pyrazine from 2 to 200 K. (b) Low temperature and field dependent heat capacity measurements.
Figure 5: (a) Neutron powder diffraction data collected at 1.5 K and 20 K. (Inset) Intensity at Q=0.65 Å\({}^{-1}\) as a function of temperature through the magnetic transition. (b) Difference of intensity at 1.5 K and 20 K in the powder diffraction data.
The intensity of the peak at Q=0.65 Å\({}^{-1}\), which has no observable nuclear contribution at 20 K, was measured as a function of temperature to follow the onset of magnetic ordering. Increased scattering is observed at 5 K, which corresponds to the broad peak in the magnetic susceptibility. This intensity increases until it saturates below 3 K. This measurement was exclusively of the intensity at Q=0.65 Å\({}^{-1}\); it did not allow for a measurement of the peak width, therefore it is not possible to distinguish between short- and long-range order scattering. Since any diffuse scattering intensity will be small, it would be of interest for future studies to measure deuterated powder or larger single crystals to further investigate the potential short range order in Mn-pyrazine through this transition.
At 1.5 K Mn-pyrazine is in the long-range magnetically ordered state. The measured magnetic Bragg peaks can all be indexed with a **k**=(0,0,0) propagation vector. Starting from the paramagnetic space group \(P2_{1}/c\) and using the determined propagation vector gives, equivalently, four irreducible representations (IRs) in a representational analysis approach or four maximal magnetic space groups. The magnetic space groups are \(P2_{1}/c\) (#14.75), \(P2^{\prime}_{1}/c\) (#14.77), \(P2_{1}/c^{\prime}\) (#14.78) and \(P2^{\prime}_{1}/c^{\prime}\) (#14.79). The corresponding IRs \(\Gamma_{1}\), \(\Gamma_{2}\), \(\Gamma_{3}\) and \(\Gamma_{4}\), in Kovalev's notation, are shown in Table 2.
For all candidate models there are symmetry-allowed spin components along all crystallographic directions (**m\({}_{\textbf{a}}\),m\({}_{\textbf{b}}\),m\({}_{\textbf{c}}\)**). The data were refined against all 4 candidate magnetic models. Only the magnetic space group \(P2^{\prime}_{1}/c\) (#14.77) (\(\Gamma_{4}\)) was able to reproduce the intensity of all observed magnetic reflections. Allowing the moments to refine freely gave (**m\({}_{\textbf{a}}\),m\({}_{\textbf{b}}\),m\({}_{\textbf{c}}\)**)= (4.363(101), 1.261(562), 1.532(237)) and a total moment of 4.3(2)\(\mu_{B}\)/Mn\({}^{2+}\), which
Figure 6: (a) Refinement of neutron powder diffraction data collected at 1.5 K using the \(P2^{\prime}_{1}/c\) with moments along all (**m\({}_{\textbf{a}}\),m\({}_{\textbf{b}}\),m\({}_{\textbf{c}}\)**) directions. (b) Refinement of neutron powder diffraction data collected at 1.5 K using the \(P2^{\prime}_{1}/c\) with moments only along **m\({}_{\textbf{a}}\)**. Magnetic structures for (c) (**m\({}_{\textbf{a}}\),m\({}_{\textbf{b}}\),m\({}_{\textbf{c}}\)**) and (d) (**m\({}_{\textbf{a}}\),0, 0) models. The moments are shown as red arrows on the Mn ions, the nearest neighbor dimer bonds are blue and the next nearest neighbor bond are the dashed lines. The grey box represents the magnetic unit cell. (e) View of the 2D layers along the \(a\)-axis. The red/black circles correspond to up/down spin directions on the Mn ion.
is reduced but nevertheless close to the full moment of 5\(\mu_{B}\) for a S=5/2 ion. This magnetic structure is shown in Fig.6(c). Confining the spins to only have a component along the \(a\)-axis gives an equivalently good refinement, as can be seen in Fig.6(b). The corresponding moment is slightly increased and closer to the full S=5/2 value, with (\(m_{a}\),\(m_{b}\),\(m_{c}\))= (4.47(9),0,0) and total moment 4.47(9) \(\mu_{B}\)/Mn\({}^{2+}\). This magnetic structure is shown in Fig.6(d). Constraining the spins to lie only along either the \(b\) or \(c\) axis could not fully account for the data. These results confirm the S=5/2 nature of the Mn ion; however, the reduction in moment may be a consequence of the extended nature of the Mn-Mn exchange pathways, leading to a degree of moment delocalization onto the surrounding ligands.
The spin behavior in the 2D layer can be visualized in Fig.6(e). The black/red correspond to up/down Mn moments. The magnetic structure of Mn-pyrazine has antiferromagnetic dimers of Mn-Mn ions. Each next nearest neighbor Mn-Mn interaction in the layer is also antiferromagnetic. Despite the large Mn-Mn distance of \(>10\) A three dimensional long range order occurs, with the nearest neighbor Mn-Mn interlayer correlation being ferromagnetic. The large Mn moment of S=5/2 is likely a driving factor in realizing long range order.
To induce further interesting behavior in Mn-pyrazine and related materials it will be of interest to reduce the moment size down to S=1/2 to enhance quantum phenomena. This will be of particular interest with the dimer and 2D hexagonal layers that are hosts to exotic physics. As one example the Mn-pyrazine structure can be considered a distorted Shastry-Sutherland lattice [24; 25]. Removing or controlling this distortion may provide routes to investigate this physics in MOFs and allow for a wider phase space of materials than the currently limited candidates.
## Conclusions
Mn-pyrazine (C\({}_{10}\)H\({}_{6}\)MnN\({}_{4}\)O\({}_{4}\)) has been investigated with magnetic susceptibility, heat capacity and neutron powder diffraction. The magnetic susceptibility and heat capacity collectively indicate the development of short range order at 5 K that precedes the long range magnetic phase transition at 3.2 K in zero field. The applied field measurements show anisotropic behavior with a field driven anomaly above 2 T. Moreover, heat capacity measurements in fields above 2 T reveal two observable anomalies within a small temperature window of \(\sim\)0.5 K. Neutron powder diffraction was able to determine the magnetic structure in this undeuterated material. Following a symmetry analysis only a single magnetic space group, \(P2^{\prime}_{1}/c\) (#14.77), was found to be consistent with the data. The analysis revealed that the moments are primarily aligned along the \(a\)-axis, which is out of the 2D layers. These moments form antiferromagnetic dimers that are linked within a wider distorted hexagonal network. In general the results show that coordination polymers, or equivalently magnetic metal-organic frameworks (MOFs), with both organic and inorganic building blocks offer unique material avenues to explore tailored structural motifs due to the versatility and predictability of organic chemistry.
###### Acknowledgements.
This research used resources at the High Flux Isotope Reactor, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. This research used resources at the Missouri University Research Reactor (MURR). This work was supported in part by a University of Missouri Research Council Grant (Grant Number: URC-22-021). This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paidup, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan([http://energy.gov/downloads/doepublic-access-plan](http://energy.gov/downloads/doepublic-access-plan)).
|
2309.13822 | PARTICLE: Part Discovery and Contrastive Learning for Fine-grained
Recognition | We develop techniques for refining representations for fine-grained
classification and segmentation tasks in a self-supervised manner. We find that
fine-tuning methods based on instance-discriminative contrastive learning are
not as effective, and posit that recognizing part-specific variations is
crucial for fine-grained categorization. We present an iterative learning
approach that incorporates part-centric equivariance and invariance objectives.
First, pixel representations are clustered to discover parts. We analyze the
representations from convolutional and vision transformer networks that are
best suited for this task. Then, a part-centric learning step aggregates and
contrasts representations of parts within an image. We show that this improves
the performance on image classification and part segmentation tasks across
datasets. For example, under a linear-evaluation scheme, the classification
accuracy of a ResNet50 trained on ImageNet using DetCon, a self-supervised
learning approach, improves from 35.4% to 42.0% on the Caltech-UCSD Birds, from
35.5% to 44.1% on the FGVC Aircraft, and from 29.7% to 37.4% on the Stanford
Cars. We also observe significant gains in few-shot part segmentation tasks
using the proposed technique, while instance-discriminative learning was not as
effective. Smaller, yet consistent, improvements are also observed for stronger
networks based on transformers. | Oindrila Saha, Subhransu Maji | 2023-09-25T02:08:48Z | http://arxiv.org/abs/2309.13822v1 | # PartICLE: Part Discovery and Contrastive Learning
###### Abstract
We develop techniques for refining representations for fine-grained classification and segmentation tasks in a self-supervised manner. We find that fine-tuning methods based on instance-discriminative contrastive learning are not as effective, and posit that recognizing part-specific variations is crucial for fine-grained categorization. We present an iterative learning approach that incorporates part-centric equivariance and invariance objectives. First, pixel representations are clustered to discover parts. We analyze the representations from convolutional and vision transformer networks that are best suited for this task. Then, a part-centric learning step aggregates and contrasts representations of parts within an image. We show that this improves the performance on image classification and part segmentation tasks across datasets. For example, under a linear-evaluation scheme, the classification accuracy of a ResNet50 trained on ImageNet using DetCon [17], a self-supervised learning approach, improves from 35.4% to 42.0% on the Caltech-UCSD Birds, from 35.5% to 44.1% on the FGVC Aircraft, and from 29.7% to 37.4% on the Stanford Cars. We also observe significant gains in few-shot part segmentation tasks using the proposed technique, while instance-discriminative learning was not as effective. Smaller, yet consistent, improvements are also observed for stronger networks based on transformers.
## 1 Introduction
Contrastive learning based on instance discrimination has become a leading self-supervised learning (SSL) technique for a variety of image understanding tasks (_e.g_., [6, 13, 15, 19, 41]). Yet, its performance on fine-grained categorization has been lacking, especially in the few-shot setting [10, 33]. Instances within a category often appear in a variety of poses which are highly discriminative of instances. Hence instance discrimination tends to learn representations predictive of object parts and pose, which, however, are a nuisance factor for categorization. The appearance of parts, on the other hand, enables fine-grained distinctions, and thus part-centric appearance has often been used to improve fine-grained recognition [3, 22, 34, 40].
Based on these observations we develop an approach for fine-tuning representations that is especially suited for fine-grained classification and segmentation tasks (e.g., recognizing species of birds and segmenting their parts). Our approach, shown in Fig. 1, consists of two steps. First, we discover parts within an image by clustering pixel representations using an initial network. This is done by clustering hypercolumn representations of CNNs [7, 14], or patch embeddings of vision transformers (Step I). We then train the same network using an objective where we aggregate and contrast pixel representations across parts within the same image (Step II). Similar to prior work (_e.g_., [5, 17, 35, 1]) we learn invariances and equivariances through data augmentations. The resulting network is then used to re-estimate part segmentations and the entire process is repeated (see Algorithm 1). Our approach, for **part** discovery and contrastive learning (PARTICLE), can be used to adapt representations to new domains in an entirely self-supervised manner.
We test our approach for fine-tuning ImageNet [27] self
Figure 1: **Self-supervised fine-tuning using part discovery and contrastive learning (PARTICLE). Given a collection of unlabeled images, at each iteration we cluster pixel features from an initial network to obtain part segmentations (§ 3.1), and fine-tune the network using a contrastive objective between parts (§ 3.2).**
supervised residual networks (ResNet50) [16] and vision transformers (ViTs) [11] to fine-grained domains without labels. We consider two tasks: 1) classification under a linear evaluation, and 2) part segmentation with a few labeled examples. For ResNet50 networks trained with DetCon [17], PARTICLE improves the classification accuracy from 35.4% to 42.0% on Caltech-UCSD birds [39] and 35.5% to 44.1% on FGVC aircrafts [24], closing the gap to the supervised ImageNet variant. On part segmentation our approach leads to significant improvements over both the baseline and supervised ImageNet networks. Similar gains are also observed for networks trained using momentum-contrastive learning (MoCov2 [15]). ViTs, in particular those trained with DINO [4], are highly effective, surpassing the supervised ResNet50 ImageNet baseline, but our approach improves the classification accuracy from 83.3% to 84.2% on birds, 72.4% to 73.6% on aircrafts, and 72.7% to 73.9% on cars with larger gains on the part segmentation. Notably, the same objectives (i.e., MoCo, DetCon, or DINO) yield smaller and sometimes no improvements across the tasks and datasets (Tab. 1) in comparison to PARTICLE.
We also systematically evaluate various representations for part discovery. Parts generated by color and texture features are less effective. Hypercolumns are critical to obtain good parts for ResNets, which explains our improvements over related work such as ODIN [18] and PICIE [8] which are based on clustering final-layer features. On Birds, we find that parts obtained via ground-truth keypoints and figure-ground masks also lead to a significantly better categorization performance, and PARTICLE is similar to this oracle. For ViTs we find that last layer "key" features of patches are effective and hypercolumns are not as critical, perhaps as resolution is maintained throughout the feature hierarchy. These differences are highlighted in Tab. 1, Tab. 2, and Fig. 2. Our approach is also relatively efficient as it takes only \(\approx\)2\(\times\) the training time of MoCo and is \(\approx\)5\(\times\) faster than ODIN for ResNet50.
## 2 Related Work
**Fine-grained Recognition using SSL.** Cole _et al._[10] show that self-supervised CNNs trained on ImageNet do not perform well on fine-grained domains compared to their supervised counterparts in the "low-data" regime. Prior work [32, 10, 33] has also investigated the role of domain shifts on the generalization concluding that high domain similarity is critical for good transfer. Our work aims to mitigate these issues by showing that the performance of ImageNet self-supervised representations can be improved by fine-tuning the representations using iterative part-discovery and contrastive learning on moderately sized datasets (\(\leq\) 10k images). Recent work in self-supervised learning using vision transformers (ViTs) such as DINO [4] show remarkable results for fine-grained classification. DINO performs as well as supervised ImageNet ViT models and much better than supervised ImageNet ResNet50 models [20]. Our experiments show that PARTICLE still offers improvements, especially on aircrafts where the domain shift is larger.
**Part Discovery Methods.** Our approach for part discovery is motivated by work that shows that hypercolumns extracted from generative [37, 43] or contrastively [7, 28] trained networks, as well as ViTs [9, 1] lead to excellent transfer on landmark discovery or part segmentation tasks. Among techniques for part discovery on fine-grained domains the most related ones include Sanchez _et al_. [30] who use a supervised keypoint detector to adapt to the target domain. Aygun _et al_. [2] boost landmark correspondence using an objective that captures finer distances in feature space. The focus of this line of work has been on part discovery, but our goal is to also evaluate how part discovery impacts fine-grained classification. Better techniques for part discovery (e.g., [42, 31, 23], _etc_.) are complementary to our approach.
**Pixel Contrastive Learning.** Several pixel-level SSL approaches have been proposed for image segmentation or object detection tasks. Our approach for part-centric learning is based on DetCon [17] which learns by clustering pixels based on color and texture [12]. They show improved detection and semantic segmentation performance compared to image-level SSL on standard benchmarks. We adopt the underlying objective due to its computational efficiency, but instead use pixel representations based on deep networks. ODIN [18] uses k-means clustering on the last-layer features of a discovery network to find object clusters to guide a contrastive objective of a separate representation network. The training is based on the student-teacher learning framework of BYOL [13]. Similarly, PiCIE [8] considers global clustering of pixel level features within a dataset and trains a network using photometric invariance and geometric equivariance on the segmentation task. Much of the focus of the above work has been on tasks on coarse domains (e.g., ImageNet or COCO), while our work considers fine-grained image classification and part segmentation tasks. Notably, we find that unlike hypercolumns, the last layer features of a ResNet often used to discover objects do not contain finer demarcations that constitute parts of objects in fine-grained domains (see Fig. 3 for some examples).
## 3 Method
**Problem and Evaluation.** We consider the problem of learning representations on fine-grained domains (e.g., Birds or Aircrafts) for image categorization and part segmentation tasks. We consider a setting where the dataset is moderately sized (e.g., \(\leq\) 10,000 unlabeled images) and the goal is to adapt a SSL pre-trained representation trained
on ImageNet. This represents a practical setting where one might have access to a large collection of unlabeled images from a generic domain and a smaller collection of domain-specific images. For evaluation we consider classification performance under a linear evaluation scheme (i.e., using multi-class logistic regression on frozen features), or part segmentation given a few (\(\approx\) 100) labeled examples.
**Approach.** Given an initial network, our training procedure iterates between a part discovery step and a part-centric learning step outlined in Algorithm 1 and Fig. 1. In § 3.1 we outline various methods to obtain parts and compare them to baselines based on low-level features as well as keypoints and figure-ground masks when available. The latter serves as an oracle "upper bound" on the performance of the approach. In § 3.2 we present the part-level contrastive learning framework which discriminates features across parts within the same image under photometric and geometric transformations.
### Part Discovery Methods
**CNNs.** Hypercolumn representations of CNNs have been widely used to extract parts of an object. A deep network of \(n\) layers (or blocks) can be written as \(\Phi(\mathbf{x})=\Phi^{(n)}\circ\Phi^{(n-1)}\circ\cdots\circ\Phi^{(1)}(\mathbf{x})\). A representation \(\Phi(\mathbf{x})\) of size \(H^{\prime}\times W^{\prime}\times K\) can be spatially interpolated to input size \(H\times W\times K\) to produce a pixel representation \(\Phi_{I}(\mathbf{x})\in\mathbb{R}^{H\times W\times K}\). We use bilinear interpolation and normalize these features using an \(\ell_{2}\) norm. The hypercolumn representation of layers \(l_{1},l_{2},\ldots,l_{n}\) is obtained by concatenating interpolated features from the corresponding layers, i.e.
\[\Phi_{I}(\mathbf{x})=\|\Phi_{I}^{(l_{1})}(\mathbf{x})\|_{2}\oplus\|\Phi_{I}^ {(l_{2})}(\mathbf{x})\|_{2}\oplus\cdots\oplus\|\Phi_{I}^{(l_{n})}(\mathbf{x}) \|_{2}\]
We then use k-means clustering of features within the _same image_ to generate a part segmentation. We choose the layers based on a visual inspection and keep them fixed across datasets. Further details are in § 5.1.
**ViTs.** Unlike CNNs, ViTs maintain a constant spatial resolution throughout the feature hierarchy, allowing one to obtain relatively high resolution pixel representations from the last layer. DINO [4] shows that the self-attention of the "[cls] token" has a strong figure-ground distinction. Last layer 'key' features of DINO have also been used to obtain part segmentations [1]. Motivated by this, and by our initial experiments that did not indicate better results using features across multiple layers, we consider the last layer 'key' features to extract pixel representations.
**Baseline: Color and Texture.** We extract parts using a classical image segmentation algorithm based on pixel color and texture - Felzenszwalb-Huttenlocher [12]. The parameters used to generate segmentations are described in § 4.
**Baseline: Keypoints and Masks.** As an oracle baseline we generate part clusters based on keypoints or figure-ground masks. On the Birds dataset we assign each foreground pixel to the nearest keypoint (using a Voronoi tessellation), while all background pixels are assigned a background category. For Aircrafts, we consider the figure-ground mask as a binary segmentation (see Datasets, § 4 for details).
**Analysis.** Fig. 2 visualizes the part clusters obtained using various techniques and pre-trained models. Hypercolumns extracted from a pre-trained ResNet50 using DetCon produce slightly better visual results than those from MoCo. Previous works, ODIN and PiCIE, cluster last-layer features, which are rather coarse and not well aligned with object parts as shown in Fig. 3. This might explain the relatively weaker performance of ODIN on our benchmarks compared to our approach that uses hypercolumns (31.19 vs 34.31 on CUB classification fine-tuned over MoCo ImageNet - more in suppl.). Parts using color and texture are often not as effective, conflating foreground and background. The bottom row shows the clusters obtained using "side information", i.e., keypoints for birds and figure-ground for airplanes.
### Part Contrastive Learning
Given an image \(\mathbf{x}\) and an encoder \(f\) we obtain a representation \(\mathbf{y}=f(\mathbf{x})\), where \(\mathbf{y}\in\mathbb{R}^{H\times W\times K}\) for CNNs and \(\mathbf{y}\in\mathbb{R}^{(P+1)\times K}\) for ViTs, with \(P\) patches plus the [cls] token. We consider the representation before the last average pooling layer in a ResNet50 network and, for ViTs, the last layer output tokens of the patches only. Given the segmentation of the image \(\mathbf{x}\) obtained in the previous step, we downsample it using nearest neighbour interpolation to get \(\mathbf{s}\) so that we have a mask value
\(m\) associated with each spatial location \((i,j)\) in \(\mathbf{y}\). A mask pooled feature vector for every mask value \(m\) can be obtained as:
\[\mathbf{y}_{m}=\frac{\sum_{i,j}\mathds{1}(\mathbf{s}[i,j]=m)*\mathbf{y}[i,j]}{ \sum_{i,j}\mathds{1}(\mathbf{s}[i,j]=m)} \tag{1}\]
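A minimal PyTorch sketch of this mask-pooled average (Eq. 1) is shown below; the channels-first tensor layout and the function name are our choices for illustration, not code from the paper.

```python
import torch

def mask_pool(y, s):
    """Average features over each part mask (Eq. 1).

    y : float tensor of shape (K, H, W), the encoder feature map
    s : long tensor of shape (H, W), the part index per spatial location
    returns an (M, K) tensor of per-part features, one row per part id in s
    """
    pooled = []
    for m in s.unique():
        mask = (s == m).float()                           # indicator 1(s[i, j] = m)
        pooled.append((y * mask).sum(dim=(1, 2)) / mask.sum())
    return torch.stack(pooled, dim=0)
```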
Given an image we generate two views \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) using various augmentations (see supplementary). Next using Equation 1 we can obtain mask pooled features from both views as \(\mathbf{y}_{m},\mathbf{y}^{\prime}_{m^{\prime}}\) where \(m,m^{\prime}\) are mask indices. Now using a projector MLP \(g\) and a predictor MLP \(q\) we get:
\[\mathbf{p}_{m}=q_{\theta}\circ g_{\theta}(\mathbf{y}_{m})\qquad\mathbf{p}^{ \prime}_{m^{\prime}}=g_{\xi}(\mathbf{y}^{\prime}_{m^{\prime}}) \tag{2}\]
Note that the second view \(\mathbf{x}^{\prime}\) is passed to a momentum encoder \(f_{\xi}\), and the mask pooled features are then fed to \(g_{\xi}\). These networks are trained using a momentum update, whereas \(q_{\theta},g_{\theta},f_{\theta}\) are trained using backpropagation. All the latents are rescaled so that they have norm \(1/\sqrt{\tau}\), where \(\tau=0.1\).
Next, to contrast across masks, we use the following loss function:
\[\mathcal{L}=\sum_{m}-\log\frac{\exp(\mathbf{p}_{m}.\mathbf{p}^{\prime}_{m})}{ \exp(\mathbf{p}_{m}.\mathbf{p}^{\prime}_{m})+\sum_{n}\exp(\mathbf{p}_{m}. \mathbf{p}^{\prime}_{n})} \tag{3}\]
where \(\mathbf{p}^{\prime}_{n}\) are the negatives, _i.e_., mask-pooled features from other masks of the same image as well as from other examples.
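A simplified sketch of this objective is given below; it assumes the per-part features have already been pooled (e.g. with the `mask_pool` sketch above) and projected, and it only contrasts parts within a single pair of views, whereas the full method also draws negatives from other images in the batch.

```python
import torch
import torch.nn.functional as F

def part_contrastive_loss(p, p_prime, tau=0.1):
    """Sketch of the part-level InfoNCE objective (Eq. 3).

    p       : (M, D) predicted/projected part features from view 1
    p_prime : (M, D) projected part features from view 2 (momentum branch);
              row m of p_prime is the positive for row m of p, all other rows
              act as negatives.
    """
    # rescale every latent to norm 1/sqrt(tau), as described in the text
    p = F.normalize(p, dim=1) / tau ** 0.5
    p_prime = F.normalize(p_prime, dim=1) / tau ** 0.5
    logits = p @ p_prime.t()                             # (M, M) similarity matrix
    targets = torch.arange(p.shape[0], device=p.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)              # -log softmax of the positive
```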
## 4 Datasets and Evaluation Metrics
Here we describe the datasets we use for the part aware contrastive training step and for the downstream tasks of fine-grained classification and few-shot part segmentation.
### Birds
**Self-Supervised Training.** We use the Caltech-UCSD birds (CUB) [39] dataset that has 11788 images centered
Figure 3: **Clusters of features from various layers of a ResNet50. The shallower layer (left) features are similar to those based on colour and texture. As we go deeper (from left to right), the parts are more distinctive (e.g., layers B2 and B3). Layer B4, the layer before the final average pooling, fails to produce meaningful clusters. Hypercolumn (last column) clusters often result in distinct parts. This ResNet50 was trained using DetCon on ImageNet.**
Figure 2: **Visualization of the parts obtained by clustering representations. Clusters based on color and texture representations often conflate the object with the background. Clustering using hypercolumn features from ResNet50 trained using MoCo or DetCon are more aligned with semantic parts. For example, parts such as the head, tail, wing and breast in birds are distinct, and align with clusters generated using ground truth keypoints and figure-ground masks. DINO ViT representations are qualitatively similar. For Aircrafts, the only side information available is the figure-ground mask. _Note that for the purpose of this visualization we manually mask out the clusters in the background except for DINO. Refer to Fig. 3 last column to see the background clusters.**_
on birds, with 5994 for training and 5794 for testing. We use the training set images for our contrastive learning step. The CUB dataset provides keypoints, figure-ground masks and classes as annotations. It has labels for 15 keypoints per image. We remove the left/right distinctions and get a total of 12 keypoints: 'back', 'beak', 'belly', 'breast', 'crown', 'forehead', 'eye', 'leg', 'wing', 'nape', 'tail', 'throat'. Each foreground pixel is assigned a cluster based on the index of the nearest part, while background pixels are assigned their own label. For clustering using color and texture, we use FH with a scale parameter of 400 and a minimum component size of 1000 for this dataset, to get an average of 25 clusters per image. For hypercolumns we use k=25 for k-means clustering.
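The nearest-keypoint (Voronoi) assignment described above can be sketched as follows; the array layouts and the convention of giving the background the last label are our assumptions, not the released annotation code.

```python
import numpy as np

def keypoint_part_labels(keypoints, fg_mask):
    """keypoints: (P, 2) array of (row, col) keypoint locations
       fg_mask  : (H, W) boolean figure-ground mask
       returns  : (H, W) integer labels, 0..P-1 for foreground parts, P for background
    """
    H, W = fg_mask.shape
    rr, cc = np.mgrid[0:H, 0:W]
    # squared distance of every pixel to every keypoint: shape (P, H, W)
    d2 = (rr[None] - keypoints[:, 0, None, None]) ** 2 + \
         (cc[None] - keypoints[:, 1, None, None]) ** 2
    labels = d2.argmin(axis=0)            # Voronoi cell index of the nearest keypoint
    labels[~fg_mask] = len(keypoints)     # background pixels get their own label
    return labels
```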
**Classification.** We again use the CUB dataset for classification. It has birds from 200 classes. We use the official train-test splits for our experiments and report the per-image accuracy on the test and validation sets.
**Few-shot Part Segmentation.** We use the PASCUB dataset for part segmentation with 10 part segments introduced by Saha [29]. We use the training set consisting of 421 images to train and use the validation (74) and testing (75) sets of the CUB partition to present results. We report the mean intersection-over-union (IoU).
### Aircrafts
**Self-Supervised Training.** We use the OID Aircraft [38] dataset for pre-training. We use the official training split containing 3701 images. Since we do not have keypoint annotations for this dataset, we only use the figure-ground masks as the side information segmentations. For the color and texture we use FH with a scale parameter of 1000 and minimum component size of 1000 and get an average of 30 clusters per image. For clustering using hypercolumns we use k=25 for k-means clustering.
**Classification.** For classification we use the FGVC Aircraft [24] dataset. It contains 10,000 images belonging to 100 classes. We use the official 'trainval' set to train and the 'test' set for reporting testing results. They contain 6667 and 3333 images respectively. We report the mean per-image accuracy on this dataset.
**Few-shot Part Segmentation.** We use the Aircraft segmentation subset extracted from OID Aircraft in Saha [29]. It contains 4 partially overlapping parts per image. We use the official 150 images for training and 75 each for validation and testing. Again, we report the mIoU.
### Cars
**Self-Supervised Training.** We use the Stanford Cars [21] dataset which contains 8,144 training images and 8,041 testing images belonging to 196 car classes. We use the same settings as Aircrafts for obtaining FH segmentations.
**Classification.** We use Stanford Cars for classification using the official train-test splits and report mean accuracy.
**Few-shot Part Segmentation.** Here we utilize the Car Parts dataset [26], which contains 18 segments of cars with 400 images in the train set and 100 in the test set, and report the mIoU.
## 5 Implementation Details and Baselines
### ImageNet pre-trained SSL CNNs
We consider initialization using two choices of ImageNet self-supervised models, both based on a ResNet50 architecture, for a uniform comparison. One is based on MoCo and the other is based on DetCon. To obtain part clusters, every image in the dataset is resized to 224\(\times\)224 and hypercolumn features are extracted from the first Max-Pool, BottleNeck Block 1, BottleNeck Block 2 and BottleNeck Block 3 layers. We resample all features to a spatial resolution of 64\(\times\)64 and concatenate across the channel dimension. This results in a 64\(\times\)64\(\times\)1856 feature vector. We use sklearn k-means clustering with k=25 and 500 max iterations. We provide an ablation to justify the number of clusters in the supplementary. We cluster each image in the dataset independently. We use the same specifications for hypercolumn extraction and clustering while training iterations of discovery and contrast.
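A hedged sketch of this hypercolumn extraction and per-image clustering, using torchvision's ResNet-50 and scikit-learn, is shown below; the layer choices, 64\(\times\)64 resolution, 1856 channels and k=25 follow the text, while the helper name and the per-pixel \(\ell_{2}\) normalization details are our reading of it rather than the released code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50
from sklearn.cluster import KMeans

model = resnet50()   # in practice, load MoCo / DetCon pre-trained weights here
model.eval()

def hypercolumn_parts(img, k=25, out_hw=64):
    """img: (1, 3, 224, 224) normalized image -> (out_hw, out_hw) part labels."""
    with torch.no_grad():
        x = model.conv1(img); x = model.bn1(x); x = model.relu(x)
        x = model.maxpool(x)                     # stem / max-pool features (64 ch)
        feats = [x]
        for block in (model.layer1, model.layer2, model.layer3):
            x = block(x)                         # bottleneck blocks 1-3 (256/512/1024 ch)
            feats.append(x)
    # resample to out_hw x out_hw, l2-normalize per pixel, concat channels (-> 1856)
    feats = [F.normalize(F.interpolate(f, size=(out_hw, out_hw),
                                       mode="bilinear", align_corners=False), dim=1)
             for f in feats]
    hyper = torch.cat(feats, dim=1)[0]                       # (1856, 64, 64)
    pixels = hyper.permute(1, 2, 0).reshape(-1, hyper.shape[0]).numpy()
    labels = KMeans(n_clusters=k, max_iter=500, n_init=10).fit_predict(pixels)
    return labels.reshape(out_hw, out_hw)
```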
### ImageNet pre-trained DINO ViT
We also extend our method to vision transformers. We extract parts from an ImageNet pre-trained DINO ViT by clustering the last layer (Layer 11) 'key' features using the method by Amir [1]. We fix the number of parts to 7 for birds and 5 for aircrafts. We use the 8\(\times\)8 patch version of ViT S/8 as it has the largest feature resolution for parts. For fine-tuning DINO ViT using PARTICLE, we apply the part contrastive loss over the output patch tokens of the ViT and add it to the DINO student-teacher loss with equal weights. We use a 224\(\times\)224 input image, resulting in a 28\(\times\)28 feature grid at every layer.
### Baselines for Self-Supervised Adaptation
To separate the effect of our training strategy from the boost coming from simply fine-tuning on a category-specific dataset, we benchmark against some standard baselines. For each of these baselines we fine-tune on the category-specific dataset (CUB for birds/OID for aircrafts) while learning using their respective objectives. Below we list the baselines:
**MoCo (V2).** The Momentum Contrast (MoCo [15]) approach minimizes an InfoNCE loss [25] over a set of unlabeled images. MoCo performs instance-level contrast by maintaining a queue of other examples considered negatives and treating transformations of a single image as positives.
**DetCon.** DetCon uses color and texture features to generate object segmentations using the Felzenszwalb-Huttenlocher [12] algorithm. It uses a ResNet-50 based model to train using pixel contrast based on these object segmentations. Their loss function is the same as in § 3.2.
**ODIN.** This method has the same training objective as DetCon but creates segmentations by clustering the last layer features of a 'discovery' network using k-means in every iteration. This 'discovery' network is initialized randomly and is trained using a momentum update from the main encoder. In Fig. 3 we show that the clusters of the last layer features, even of a pre-trained network, are not a good representation of object parts. We show a comparison of using ODIN vs other objectives in the Supplementary Material.
DINO ViT. We use the ViT S/8 network, i.e., the small ViT with 8\(\times\)8 patches, trained with DINO [4]. DINO trains using a student-teacher framework where the student is updated by minimizing the cross-entropy between softmax-normalized outputs of the student and teacher. The teacher is updated using momentum. DINO is also an instance-level contrastive method.
PiCIE. PiCIE [8] learns unsupervised object segmentation by clustering the features of the complete dataset using mini-batch k-means and training for invariance to photometric transformations and equivariance to geometric transformations. For part segmentation, PiCIE does not work well (see supplementary) because it uses only the last, downsampled feature space of the encoder, which does not carry part information (see Fig. 3), and because fitting object parts from all images to a single set of centroids for the whole dataset results in a loss of information.
### Hyper-parameters
Self-Supervised Adaptation. For all baselines and our CNN-based method we fine-tune the initialized model for 600 epochs with a learning rate of 0.005 and a batch size of 320. We use an SGD optimizer with a weight decay of 1.5E-6 and momentum of 0.9. We use a cosine learning rate decay with 10 epochs of warm-up. For momentum updates we use a decay of 0.996. For all methods, we train using an image resolution of 224\(\times\)224. We utilize the augmentations defined in BYOL [13]; the details are provided in the Supplementary. For adaptation of DINO ViT, we use a learning rate of 1E-7 with cosine decay and a weight decay of 0.4. We train for 100 epochs with a batch size of 64.
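The following is a minimal sketch of this recipe for the CNN case (SGD with the stated hyper-parameters, linear warm-up plus cosine decay, and an EMA momentum encoder); the two `nn.Linear` modules are placeholders for the actual encoder and momentum encoder, and the loss computation is elided.

```python
import math
import torch

encoder = torch.nn.Linear(8, 8)              # placeholder for the online encoder
momentum_encoder = torch.nn.Linear(8, 8)     # placeholder for the EMA (teacher) encoder
momentum_encoder.load_state_dict(encoder.state_dict())

base_lr, epochs, warmup_epochs = 0.005, 600, 10
optimizer = torch.optim.SGD(encoder.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=1.5e-6)

def lr_at(epoch):
    # Linear warm-up for 10 epochs, then cosine decay to zero.
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / (epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

@torch.no_grad()
def momentum_update(m=0.996):
    # EMA update of the momentum encoder from the online encoder.
    for p, mp in zip(encoder.parameters(), momentum_encoder.parameters()):
        mp.mul_(m).add_(p, alpha=1.0 - m)

for epoch in range(epochs):
    for g in optimizer.param_groups:
        g['lr'] = lr_at(epoch)
    # ... forward pass, part-contrastive loss, loss.backward(), optimizer.step() ...
    momentum_update()
```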
Iterative Training. For extracting hypercolumns, we use the same specification as in § 5.1. We train for 20 epochs with a learning rate of 0.05. The rest of the hyperparameters stay the same as in the previous paragraph. For DINO ViT based models, we use a learning rate of 1E-8 and train for 60 epochs.
Linear Probing. We initialize a ResNet50 encoder with the contrastively trained networks described above and in § 3. We evaluate using input images of resolution 224\(\times\)224. We store the features before the last average pooling layer for both train and test sets, without any data augmentation. We then use the Logistic Regression method of sklearn, which we train with L-BFGS for at most 1000 iterations. We choose the best model by evaluating on the validation set. For DINO ViT based models we average over the class token and patch tokens and use the same details as above.
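A minimal sketch of the probing step is shown below; the random arrays stand in for the stored encoder features and labels, and the small regularization grid is an assumption (the validation set is only stated to be used for model selection).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 2048)), rng.integers(0, 10, 500)
X_val,   y_val   = rng.normal(size=(100, 2048)), rng.integers(0, 10, 100)

best_clf, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:             # hypothetical regularization grid
    clf = LogisticRegression(solver='lbfgs', max_iter=1000, C=C)
    clf.fit(X_train, y_train)
    acc = clf.score(X_val, y_val)            # pick the best model on the val set
    if acc > best_acc:
        best_acc, best_clf = acc, clf
```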
Fine-Tuning. We also report results using fine-tuning in the supplementary, where the entire network is trained for 200 epochs with a batch size of 200. We use SGD with a learning rate of 0.01 and momentum of 0.9. We train with a varying number of images in the train set: 1, 3, 8, 15, and 30 per class. Only flipping augmentation is used while training, except for the low-shot versions (1, 3, and 8) where we also add random resized cropping and color jitter. For reporting scores on the test set, we choose the best checkpoint based on the validation set.
Part Segmentation. We add a decoder network consisting of four upsampling layers followed by convolutions to generate part segmentations from the ResNet50 features. We use the best pre-training checkpoint for each experiment, chosen by linear probing on the validation set. We follow all the training/evaluation parameters of Saha [29]. We fine-tune the entire network for part segmentation. Here we train and test using input images of resolution 256\(\times\)256. We train the network using a cross-entropy loss for the PASCUB experiments. For Aircrafts, we treat it as a pixel-wise multi-label classification task and use a binary cross-entropy (BCE) loss. We use the Adam optimizer with a learning rate of 0.0001 for 200 epochs. We use flipping and color-jitter augmentations while training. We use the mean IoU metric to report results. During evaluation, we perform 5-fold cross validation to find the best checkpoint using the validation sets and report the mean over folds. For DINO ViT based models we rearrange the patch 'key' features of the last layer back into a 3D tensor and use 3 upsampling layers, each of which consists of two 3\(\times\)3 kernel convolutions. We use a learning rate of 1E-5. Other details are the same as above.
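A minimal sketch of such a decoder head is given below; the channel widths, the single convolution per stage, and the `num_parts` value are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class PartDecoder(nn.Module):
    # Four upsampling stages, each followed by a convolution, mapping ResNet-50
    # features to per-pixel part logits. Channel widths are illustrative.
    def __init__(self, in_ch=2048, num_parts=16):   # num_parts is dataset dependent
        super().__init__()
        chs = [in_ch, 512, 256, 128, 64]
        blocks = []
        for c_in, c_out in zip(chs[:-1], chs[1:]):
            blocks += [nn.Upsample(scale_factor=2, mode='bilinear',
                                   align_corners=False),
                       nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(inplace=True)]
        self.decoder = nn.Sequential(*blocks)
        self.head = nn.Conv2d(chs[-1], num_parts, 1)

    def forward(self, feats):                        # (B, 2048, 8, 8) for 256x256 input
        return self.head(self.decoder(feats))

decoder = PartDecoder()
logits = decoder(torch.randn(2, 2048, 8, 8))         # -> (2, 16, 128, 128)
# Cross-entropy for PASCUB-style labels; BCEWithLogitsLoss would be used for the
# multi-label aircraft setting described above.
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 16, (2, 128, 128)))
```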
## 6 Results
We describe the results of evaluating the baselines and our method across different settings for fine-grained visual classification and few-shot part segmentation. In the following sections, we present a detailed analysis of various factors that affect the performance of baselines and our model.
### PARTICLE Improves Performance Consistently
Tab. 1 shows that our method improves performance across baselines. For each model, we compare PARTICLE to the ImageNet pre-trained SSL model and to the same model fine-tuned on the dataset using the objective of the underlying SSL method. We report the results of the best iteration to compare the maximum boost that PARTICLE can contribute; however, most of the improvement is obtained after a single iteration (Tab. 3). ResNet50 SSL models lag behind supervised ImageNet models for classification tasks, and PARTICLE fine-tuning goes a long way toward bridging this gap. DINO ViT, on the other hand, performs exceptionally well on fine-grained classification, even outperforming the ImageNet supervised CNNs. Yet, PARTICLE offers consistent improvements. For few-shot part segmentation, PARTICLE offers significant improvements over all baseline SSL models. We present results on an additional domain of Cars in the supplementary.
**Performance of DINO.** ImageNet pre-trained DINO is exceptionally good at fine-grained classification. It performs better than ImageNet pre-trained DetCon on classification tasks; however, the difference is not as large for the part segmentation tasks. We believe that this can be attributed to DINO's strong figure-ground decomposition and the structure of its feature space, which makes it effective for linear and nearest-neighbor classification [4, 20].
### Effect of Clustering Method
As described earlier, Fig. 2 shows a qualitative comparison of clusters obtained using the various representations described in § 3.1. Tab. 2 shows the quantitative performance of the various clustering methods on classification and segmentation tasks. Hypercolumn features from ImageNet pre-trained DetCon beat the performance of color + texture features. However, they lag behind the side-information oracle in the case of birds, since the weak supervision of keypoints and figure-ground masks results in better part discovery. This indicates that better part discovery methods could lead to further improvements in classification tasks.
### Effect of Iterative Training
We vary the number of outer iterations of our model from zero, i.e., the initialization, to three, i.e., three iterations of part discovery and representation learning over the entire dataset.
| **Architecture** | **Method** | **Caltech-UCSD Birds (Cls)** | **Birds (Seg)** | **FGVC Aircrafts (Cls)** | **OID Aircrafts (Seg)** | **Stanford Cars (Cls)** | **Car Parts (Seg)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet50 | Supervised ImageNet | 66.29 | 47.41 ± 0.88 | 46.46 | 54.39 ± 0.52 | 45.44 | 53.95 ± 0.71 |
| ResNet50 | MoCoV2 (ImageNet) | 28.92 | 46.08 ± 0.55 | 19.62 | 51.57 ± 0.98 | 15.79 | 51.93 ± 0.37 |
| ResNet50 | MoCoV2 _fine-tuned_ | 31.17 | 46.22 ± 0.70 | 23.99 | 52.65 ± 0.54 | 21.23 | 52.40 ± 0.99 |
| ResNet50 | PARTICLE _fine-tuned_ | **36.09** | **47.40 ± 1.06** | **29.13** | **54.74 ± 0.47** | **27.68** | **53.54 ± 0.81** |
| ResNet50 | DetCon (ImageNet) | 35.39 | 47.42 ± 0.92 | 35.55 | 53.62 ± 0.67 | 29.72 | 53.88 ± 0.75 |
| ResNet50 | DetCon _fine-tuned_ | 37.15 | 47.88 ± 1.18 | 40.74 | 56.26 ± 0.25 | 34.55 | 53.91 ± 0.73 |
| ResNet50 | PARTICLE _fine-tuned_ | **41.98** | **50.21 ± 0.85** | **44.13** | **58.99 ± 0.61** | **37.41** | **55.23 ± 0.50** |
| ViT S/8 | DINO (ImageNet) | 83.36 | 49.57 ± 1.26 | 72.37 | 61.73 ± 0.88 | 72.74 | 51.02 ± 0.65 |
| ViT S/8 | DINO _fine-tuned_ | 83.36 | 49.66 ± 0.98 | 72.37 | 61.68 ± 0.71 | 72.74 | 51.15 ± 0.88 |
| ViT S/8 | PARTICLE _fine-tuned_ | **84.15** | **51.40 ± 1.29** | **73.64** | **62.71 ± 0.56** | **73.89** | **52.75 ± 0.70** |

Table 1: **Performance on downstream tasks.** We present the performance boost that our approach offers over various pre-trained SSL methods with a ResNet-50 or ViT S/8 backbone. We show results for the Birds, Aircrafts and Cars datasets. We significantly boost classification accuracy for CNN based models. While DINO is already much better than CNN based models for fine-grained classification, we are still able to improve the performance using our method. The gap in segmentation performance between DINO ViT and DetCon/MoCo V2 is much less pronounced. Our method contributes a steady improvement over all baseline models for segmentation.
| **Method** | **CUB Cls** | **CUB Seg** | **FGVC Cls** | **OID Seg** |
| --- | --- | --- | --- | --- |
| Color+Texture | 37.15 | 47.88 | 40.74 | 56.26 |
| Hypercolumns | 40.88 | 49.23 | 43.99 | 58.95 |
| Side Information | 43.72 | 50.15 | 39.03 | 55.98 |

Table 2: **Effect of part discovery method.** We compare the performance of one iteration of PARTICLE over the ResNet50 model trained using DetCon. Hypercolumns lead to improved results compared to color and texture, and nearly match the performance obtained by clustering keypoints + figure-ground masks on birds. On airplanes, side information beyond figure-ground is lacking, and PARTICLE performs better.
Figure 4: **Effect of iterative training on clustering.** For the first bird example, the first iteration captures the boundaries of the wing, head and belly better. The second iteration introduces a new middle part.
Results are shown in Tab. 3. For both initializations we did not find significant improvements beyond the second iteration on Birds. On Aircrafts the improvements over iterations were smaller (also see Table 1, \(1\times\) vs. \(3\times\)). Fig. 4 shows how the clustering changes over iterations. To produce consistent clusters across images, i.e., to avoid the randomness of k-means, we initialize each successive k-means clustering with the previous partition and run k-means for 500 iterations.
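A minimal sketch of this warm-started clustering is shown below; computing the initial centers as the centroids of the previous partition is one reading of "initialize using the previous partition", and the random arrays stand in for the hypercolumns and previous labels of one image.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(64 * 64, 1856))      # hypercolumns of one image
prev_labels = rng.integers(0, 25, 64 * 64)    # partition from the previous round

# Centroids of the previous partition serve as the initial centers, so the new
# clustering stays aligned with the old one instead of being re-seeded randomly.
init_centers = np.stack([feats[prev_labels == c].mean(axis=0) for c in range(25)])
km = KMeans(n_clusters=25, init=init_centers, n_init=1, max_iter=500)
new_labels = km.fit_predict(feats)
```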
### Effect of Initialization
Fig. 5 compares the effect of initializing the weights with either MoCo V2 or DetCon ImageNet pre-trained weights. We compare performance on both classification and segmentation for the various clustering techniques. The initial DetCon model has higher performance than MoCo on both tasks. For classification, the boost observed follows the same trend for both initialization strategies. For part segmentation, the base DetCon ImageNet model again performs better than MoCo; however, the trend of the boost over the base model is not the same for both initializations: starting from a MoCo initialization the fine-tuned models do not see an adequate boost, whereas with a DetCon initialization the fine-tuned models see a significant boost over the base DetCon model.
### Comparison to ImageNet supervised CNNs
Tab. 1 shows that our ResNet50 based methods improve over ImageNet supervised models for few-shot part segmentation on all datasets. The ImageNet pre-trained SSL baselines are close to ImageNet supervised in the case of Birds and Cars, and slightly worse on Aircrafts. However, using our method leads to a significant boost over the pre-trained SSL methods. This once again suggests that the current CNN based SSL approaches are quite effective at learning parts, but are limited in their ability to recognize categories. The Aircrafts dataset has a larger domain gap from ImageNet, and our CNN based methods come closer to ImageNet supervised ResNet50 models there. Our linear evaluation score gets close to ImageNet supervised for Aircrafts (a gap of \(\sim\)2 points), unlike for Birds where there is still a gap of about \(\sim\)24 points. ImageNet already has a large number of bird classes and has been trained for classification, which gives it a large advantage on a fine-grained bird classification dataset. The improvement in part segmentation of our method over the ImageNet supervised ResNet-50 remains similar for all datasets.
### Efficiency of Various Methods
**CNNs.** Training MoCo is fastest since it performs image level contrast. Both DetCon and our method (one iteration) take the same amount of time which is less than \(2\times\) that of MoCo. Note that we train each baseline and our method for 600 epochs. Since we use relatively small datasets to train, our approach takes less than 11 hours on 8 2080TI GPUs for the first iteration. We train the next iterations only for 20 epochs which takes around 20 minutes on the same GPU setup (total of 40 minutes for 2 extra iterations).
**ViTs.** For the first iteration, we train for 100 epochs which takes less than 2 hours on 8 2080TI GPUs. For the next iteration we train for 60 epochs which takes about an hour in the same setting.
## 7 Conclusion
We show that clustering and contrasting parts obtained through ImageNet self-supervised networks is an effective way to adapt them to small and moderately sized fine-grained datasets without any supervision. While we observe significant improvements on part segmentation tasks, even outperforming supervised ImageNet ResNets, we also show consistent improvements over the significantly better ViT models. On the Airplanes dataset, where the domain gap relative to ImageNet is larger, our approach leads to larger gains. The analysis shows that current self-supervised models (including our own) are very effective at learning pose and parts. Moreover, conditioning on and contrasting the discovered parts allows the model to learn diverse localized representations, enabling better generalization to classification tasks. However, a big limitation of the approach is that it requires a good initial model to discover parts, and the approach may not generalize to significantly different domains. Future work will explore whether parts extracted from generic large-scale models lead to better guidance for part and feature learning, and will aim to characterize the effect of domain shifts on the effectiveness of transfer. Code has been released publicly here.
Acknowledgements.The project was funded in part by NSF grant #1749833 to Subhransu Maji. The experiments were performed on the University of Massachusetts GPU cluster funded by the Mass. Technology Collaborative.
Figure 5: **Effect of initialization and adaption.** The left panel shows the classification performance (Linear evaluation) while the right panel shows the part segmentation performance on the CUB dataset. In each panel we show the result of initializing the representation network using MoCo and DetCon, and various ways to obtain part segmentation via clustering. |
2309.04609 | A class of elliptic quasi-variational-hemivariational inequalities with
applications | In this paper we study a class of quasi-variational-hemivariational
inequalities in reflexive Banach spaces. The inequalities contain a convex
potential, a locally Lipschitz superpotential, and a solution-dependent set of
constraints. Solution existence and compactness of the solution set to the
inequality problem are established based on the Kakutani--Ky Fan--Glicksberg
fixed point theorem. Two examples of the interior and boundary semipermeability
models illustrate the applicability of our results. | S. Migorski, JC. Yao, SD. Zeng | 2023-09-08T21:55:03Z | http://arxiv.org/abs/2309.04609v1 | # A Class of Elliptic Quasi-Variational-Hemivariational Inequalities with Applications
Stanislaw Migorski1, Jen-Chih Yao2 and Shengda Zeng3
Footnote 1: College of Applied Mathematics, Chengdu University of Information Technology, Chengdu, 610225, Sichuan Province, P.R. China, and Jagiellonian University in Krakow, Chair of Optimization and Control, ul. Lojasiewicza 6, 30348 Krakow, Poland. Tel.: +48-12-6646666. E-mail address: [email protected].
Footnote 2: Center for General Education, China Medical University, Taichung, Taiwan. E-mail address: [email protected].
Footnote 3: Guangxi Colleges and Universities Key Laboratory of Complex System Optimization and Big Data Processing, Yulin Normal University, Yulin 537000, Guangxi, P.R. China, and Jagiellonian University in Krakow, Faculty of Mathematics and Computer Science, ul. Lojasiewicza 6, 30348 Krakow, Poland. Tel.: +86-18059034172. Corresponding author. E-mail address: [email protected].
**Abstract.** In this paper we study a class of quasi-variational-hemivariational inequalities in reflexive Banach spaces. The inequalities contain a convex potential, a locally Lipschitz superpotential, and a solution-dependent set of constraints. Solution existence and compactness of the solution set to the inequality problem are established based on the Kakutani-Ky Fan-Glicksberg fixed point theorem. Two examples of the interior and boundary semipermeability models illustrate the applicability of our results.
**Key words.** Variational-hemivariational inequality, variational inequality, Clarke subgradient, Mosco convergence, fixed point.
**2010 Mathematics Subject Classification.** 35L15, 35L86, 35L87.
## 1 Introduction
In this paper we study a quasi-variational-hemivariational inequality of elliptic type of the following form. Find \(u\in C\) such that \(u\in K(u)\) and
\[\langle Au-f,z-u\rangle+\varphi(z)-\varphi(u)+j^{0}(Mu;Mz-Mu)\geq 0\ \ \mbox{for all}\ \ z\in K(u). \tag{1}\]
Here, \(C\) is a subset of a reflexive Banach space \(V\), \(A\colon V\to V^{*}\) is a nonlinear operator, \(K\colon C\to 2^{C}\) is a set-valued mapping, \(M\colon V\to X\) is linear and compact, where \(X\) is a Hilbert space, \(\varphi\colon V\to\mathbb{R}\) is a convex function (potential), \(j\colon X\to\mathbb{R}\) is a locally Lipschitz potential where \(j^{0}(u;z)\) denotes the generalized directional derivative of \(j\) at point \(u\in X\) in the direction \(z\in X\), and \(f\in V^{*}\).
The motivation to study inequality (1) comes from modeling of physical phenomena for semipermeability problems of flow through porous media: monotone relations are described by variational inequalities [3, 8], and nonmonotone laws are governed by hemivariational inequalities [25, 31, 32, 33], see bibliographies therein. In these references, semipermeability on the boundary or in the domain is considered. Recently, various particular forms of inequality (1) have been treated for nonconvex semipermeability problems in [24, 28, 38] as well as for nonsmooth contact problems of mechanics of solids and fluids, see [22, 13, 14, 35, 36, 39].
To highlight the general form of our problem, we list the following particular cases of problem (1).
* If \(j=0\) and \(K\) is independent of the solution, then problem (1) reduces to the elliptic variational inequality of the first kind studied in [17, 35]: \[u\in K,\quad\langle Au,v-u\rangle+\varphi(v)-\varphi(u)\geq\langle f,v-u \rangle\quad\text{for all }v\in K.\]
* When \(j=0\) and \(K=V\), problem (1) takes the form of the elliptic variational inequality of the second kind investigated in [4, 17, 29, 35]: \[u\in V,\quad\langle Au,v-u\rangle+\varphi(v)-\varphi(u)\geq\langle f,v-u \rangle\quad\text{for all }v\in V.\]
* For \(j=0\), problem (1) reduces to the elliptic quasi-variational inequality treated in [23]: find \(u\in C\) such that \(u\in K(u)\) and \[\langle Au,v-u\rangle+\varphi(v)-\varphi(u)\geq\langle f,v-u\rangle\ \text{ for all }\ v\in K(u).\]
* When \(j=0\), \(K\) is independent of \(u\), and \(\varphi\equiv 0\), problem (1) becomes the elliptic variational inequality of the form \[u\in K,\quad\langle Au,v-u\rangle\geq\langle f,v-u\rangle\ \text{ for all }\ v\in K,\] which has been considered in [4, 5, 16, 17, 29].
* If \(\varphi=0\) and \(K=V\), problem (1) takes the form of the elliptic hemivariational inequality \[u\in V,\quad\langle Au,v\rangle+j^{0}(u;v)\geq\langle f,v\rangle\quad\text{ for all }v\in V,\] which has been studied in [31].
Elliptic variational and quasi-variational inequalities have been studied in [9, 15, 21], variational-hemivariational inequalities have been considered in [19, 20, 27, 37], while quasi-hemivariational inequalities have been treated only recently in [24, 39], and applications to implicit obstacle problems can be found in [23, 30].
We underline that under the relaxed monotonicity condition of the subgradient operator \(\partial j\) (see Remark 12 in Section 4), problem (1) has been studied only recently in [24] by the Kluge fixed point theorem applied to a set-valued variational selection
for the set of constraints. However, a question arises on how to prove the existence of a solution to the elliptic quasi-variational-hemivariational inequality (1) without the relaxed monotonicity condition on the subgradient operator \(\partial j\). The main aim of the current paper is to give a positive answer to this open question. Besides, the method applied in this paper is different from that used in [24]. More precisely, our approach is based on the Kakutani-Ky Fan-Glicksberg fixed point theorem and a convergence theorem for variational inequalities. Note that two classes of general semipermeability problems studied in Section 5 lead in their weak formulation to inequality (1) where the operator \(M\) is either the embedding operator or the trace operator. The second case for boundary semipermeability problems can not be treated by the approach used previously in [24].
Finally, we also stress that our results are applicable to a wide spectrum of problems met in contact mechanics, e.g., a nonlinear elastic contact problem with normal compliance condition with a unilateral constraint, and a contact problem with the Coulomb friction law in which the friction bound is supposed to depend on the normal displacement, studied, e.g., in [39]. For various interesting applications, we refer to [1, 2, 8, 10, 11, 12, 22, 35].
## 2 Notation and preliminary material
In this section we fix a basic notation and recall some concepts and facts we need in this paper, details can be found in [6, 7, 26].
Let \((Y,\|\cdot\|_{Y})\) be a Banach space, \(Y^{*}\) be its dual space and \(\langle\cdot,\cdot\rangle\) denote the duality bracket between \(Y^{*}\) and \(Y\). Given a locally Lipschitz function \(j\colon Y\to\mathbb{R}\), the generalized subgradient of \(j\) at \(x\in Y\) is defined by
\[\partial j(x)=\{\,x^{*}\in Y^{*}\mid\langle x^{*},v\rangle\leq j^{0}(x;v)\ \ \mbox{for all}\ \ v\in Y\,\},\]
where the generalized directional derivative of \(j\) at \(x\in Y\) in the direction \(v\in Y\) is given by
\[j^{0}(x;v)=\limsup_{y\to x,\ \lambda\downarrow 0}\frac{j(y+\lambda v)-j(y)}{ \lambda}.\]
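For instance, for \(Y=\mathbb{R}\) and the nonconvex function \(j(r)=-|r|\), a direct computation from these definitions gives
\[j^{0}(0;v)=\limsup_{y\to 0,\,\lambda\downarrow 0}\frac{-|y+\lambda v|+|y|}{\lambda}=|v|\quad\text{and}\quad\partial j(0)=[-1,1],\]
which illustrates that the generalized subgradient \(\partial j(0)\) is larger than the set \(\{-1,1\}\) of one-sided derivatives at the origin.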
Let \(\phi\colon Y\to\mathbb{R}\) be a convex function and \(x\in Y\). An element \(x^{*}\in Y^{*}\) is called a subgradient of \(\phi\) at \(x\) if and only if the following inequality holds
\[\phi(v)\geq\phi(x)+\langle x^{*},v-x\rangle\ \ \mbox{for all}\ \ v\in Y.\]
The set of all \(x^{*}\in Y^{*}\) that satisfy this inequality is called the subdifferential of \(\phi\) at \(x\), and it is denoted by \(\partial_{c}\phi(x)\).
We recall the basic definitions for single-valued operators. Let \(A\colon Y\to Y^{*}\). An operator \(A\) is called bounded if it maps bounded sets of \(Y\) into bounded sets of \(Y^{*}\). \(A\) is called monotone if
\[\langle Av_{1}-Av_{2},v_{1}-v_{2}\rangle\geq 0\ \mbox{for all}\ v_{1},v_{2}\in Y.\]
An operator \(A\) is called \(m\)-strongly monotone if
\[\langle Av_{1}-Av_{2},v_{1}-v_{2}\rangle\geq m\|v_{1}-v_{2}\|^{2}\text{ for all }v_{1},v_{2}\in Y.\]
\(A\) is called coercive if
\[\langle Av,v\rangle\geq\alpha(\|v\|)\|v\|\text{ for all }v\in Y,\]
where \(\alpha\colon\mathbb{R}_{+}\to\mathbb{R}\) is such that \(\alpha(s)\to+\infty\) as \(s\to+\infty\). An operator \(A\) is called pseudomonotone, if it is bounded and if \(u_{n}\to u\) weakly in \(Y\) and \(\limsup\langle Au_{n},u_{n}-u\rangle\leq 0\) implies
\[\langle Au,u-v\rangle\leq\liminf\langle Au_{n},u_{n}-v\rangle\text{ for all }v\in Y.\]
The space of linear bounded operators from a Banach space \(E\) to a Banach space \(F\) is denoted by \(\mathcal{L}(E,F)\), it is a Banach space endowed with the usual norm \(\|\cdot\|_{\mathcal{L}(E,F)}\). For a subset \(D\) of a normed space \(Y\), we write \(\|D\|_{Y}=\sup\{\|u\|_{Y}\mid u\in D\}\). The symbol \(Y_{w}\) is used for the space \(Y\) endowed with the weak topology, and \(2^{Y}\) stands for the set of all subsets of \(Y\). For a set-valued map \(F\colon X_{1}\to 2^{X_{2}}\) between topological spaces \(X_{1}\) and \(X_{2}\), the graph of \(F\) is the set \(\operatorname{Gr}(F)=\{(x,y)\in X_{1}\times X_{2}\mid y\in F(x)\}\).
We recall a definition of Mosco convergence, see [29].
Definition 1.: _Given a normed space \(Y\), a sequence \(\{C_{n}\}\) of closed and convex sets in \(Y\), is said to converge to a closed and convex set \(C\subset Y\) in the Mosco sense, denoted by \(C_{n}\ \xrightarrow{M}\ C\) as \(n\to\infty\), if we have_
* (m1) _if_ \(\{n_{k}\}\) _is a sequence of indices converging to_ \(\infty\)_,_ \(\{z_{k}\}\) _is a sequence such that_ \(z_{k}\in C_{n_{k}}\) _for every_ \(k\) _and_ \(z_{k}\to z\) _weakly in_ \(Y\)_, then_ \(z\in C\)_,_
* (m2) _for every_ \(z\in C\)_, there exists a sequence_ \(z_{n}\in C_{n}\) _with_ \(z_{n}\to z\) _in_ \(Y\)_._
There is an alternative definition of Mosco convergence defined in terms of Kuratowski limits [7, Section 4.7]. We end the section by recalling the Kakutani-Ky Fan-Glicksberg theorem for a reflexive Banach space, see e.g. [34, Theorem 2.6.7].
Theorem 2.: _Let \(Y\) be a reflexive Banach space and \(D\subseteq Y\) be a nonempty, bounded, closed and convex set. Let \(\Lambda\colon D\to 2^{D}\) be a set-valued map with nonempty, closed and convex values such that its graph is sequentially closed in \(Y_{w}\times Y_{w}\) topology. Then \(\Lambda\) has a fixed point._
## 3 Well-posedness result for variational inequalities
In this section, we formulate an abstract elliptic variational inequality with constraints for which we provide results on its unique solvability and convergence under the perturbation on the data. We will need such results in Section 4 to investigate a class of quasi-variational-hemivariational inequalities.
Let \(Y\) be a reflexive Banach space with a norm \(\|\cdot\|\), \(Y^{*}\) be its dual space, \(\langle\cdot,\cdot\rangle\) denote the duality brackets for the pair \(Y^{*}\) and \(Y\), and \(E\) be a nonempty, closed and convex subset of \(Y\).
Consider the following elliptic variational inequality of the first kind.
Problem 3.: _Find an element \(u\in E\) such that_
\[\langle Au-g,z-u\rangle+\varphi(z)-\varphi(u)\geq 0\ \ \mbox{for all}\ \ z\in E.\]
We impose the following hypotheses on the data of Problem 3.
\[\left\{\begin{array}{l}A\colon Y\to Y^{*}\mbox{ is pseudomonotone and }m\mbox{-strongly monotone,}\\ E\mbox{ is a nonempty, closed, convex subset of }Y,\\ \varphi\colon Y\to\mathbb{R}\mbox{ is convex and lower semicontinuous,}\\ g\in Y^{*}.\end{array}\right. \tag{2}\]
Lemma 4.: _Under the hypothesis \((\ref{eq:1})\), Problem 3 has a unique solution \(u\in E\)._
Proof.: It is a consequence of [18, Theorem 3.2]. We only note that if \(A\) is strongly monotone with constant \(m>0\), i.e.,
\[\langle Av_{1}-Av_{2},v_{1}-v_{2}\rangle\geq m\left\|v_{1}-v_{2}\right\|^{2} \ \mbox{for all}\ v_{1},v_{2}\in Y,\]
then \(A\) is coercive too:
\[\langle Av,v\rangle=\langle Av-A0,v\rangle+\langle A0,v\rangle\geq m\left\|v \right\|^{2}+\|A0\|_{Y^{*}}\|v\|\]
for all \(v\in Y\).
The following result is the well-known Minty formulation and it follows from [29, Lemma 4.1].
Lemma 5.: _Let \(A\colon Y\to Y^{*}\), \(E\subset Y\), \(\varphi\colon Y\to\mathbb{R}\) and \(g\in Y^{*}\)._
* _If_ \(A\) _is monotone, then any solution to the inequality_ \[u\in E:\ \langle Au-g,z-u\rangle+\varphi(z)-\varphi(u)\geq 0\ \ \mbox{for all}\ \ z\in E\] (3) _is also a solution to the following problem_ \[u\in E:\ \langle Az-g,z-u\rangle+\varphi(z)-\varphi(u)\geq 0\ \ \mbox{for all}\ \ z\in E.\] (4)
* _If_ \(A\) _is hemicontinuous,_ \(E\) _is convex and_ \(\varphi\) _is convex, then any solution of (_4_) is a solution to (_3_)._
We will show a result on the continuous dependence of the solution to Problem 3, denoted in short by \((P)\), on the data \((E,g)\). To this end, for \(n\in\mathbb{N}\) we consider the perturbed elliptic variational inequality, Problem 6, denoted by \((P_{n})\).
Problem 6.: _Find an element \(u_{n}\in E_{n}\) such that_
\[\langle Au_{n}-g_{n},z-u_{n}\rangle+\varphi(z)-\varphi(u_{n})\geq 0\ \ \mbox{for all}\ \ z\in E_{n}.\]
Also, we impose the following hypotheses.
\[\left\{\begin{array}{ll}(a)\ E_{n}\ \mbox{are nonempty, closed, and convex subsets of}\ Y\ \mbox{such that}\ E_{n}\ \stackrel{{ M}}{{\longrightarrow}}\ E,\\ (b)\ g_{n}\in Y^{*},\ g_{n}\to g\ \mbox{in}\ Y^{*}.\end{array}\right. \tag{5}\]
Proposition 7.: _Under hypotheses (2) and (5), for each \(n\in\mathbb{N}\), Problem \((P_{n})\) has a unique solution \(u_{n}\in E_{n}\) such that the sequence \(\{u_{n}\}\) converges in \(Y\), as \(n\to\infty\), to the unique solution \(u\in E\) of Problem (P)._
Proof.: It suffices to apply Lemma 4 with \(E=E_{n}\) and \(g=g_{n}\) for every \(n\in\mathbb{N}\), to deduce that Problems \((P_{n})\) and \((P)\) have unique solutions \(u_{n}\in E_{n}\) and \(u\in E\), respectively.
We show that the sequence \(\{u_{n}\}\subset E_{n}\) is bounded in \(Y\). From the condition (m2) of the Mosco convergence in Definition 1, there exists \(w_{n}\in E_{n}\) such that \(w_{n}\to u\) in \(Y\), as \(n\to\infty\). We choose \(z=w_{n}\in E_{n}\) in Problem \((P_{n})\) to get
\[\langle Au_{n},u_{n}-w_{n}\rangle\leq\langle g_{n},u_{n}-w_{n}\rangle+\varphi( w_{n})-\varphi(u_{n}).\]
Combining the latter with the \(m\)-strong monotonicity of \(A\) implies
\[m \|u_{n}-w_{n}\|^{2}\leq\langle Au_{n}-Aw_{n},u_{n}-w_{n}\rangle= \langle Au_{n},u_{n}-w_{n}\rangle-\langle Aw_{n},u_{n}-w_{n}\rangle\] \[\leq\langle g_{n},u_{n}-w_{n}\rangle+\varphi(w_{n})-\varphi(u_{n} )-\langle Aw_{n},u_{n}-w_{n}\rangle\] \[=\langle Aw_{n}-g_{n},w_{n}-u_{n}\rangle+\varphi(w_{n})-\varphi(u _{n}). \tag{6}\]
Since, by (2), the function \(\varphi\) is convex, lower semicontinuous and finite, it is continuous on \(Y\), see [7, Theorem 5.2.8], and hence \(\varphi(w_{n})\to\varphi(u)\), as \(n\to\infty\), and
\[|\varphi(w_{n})|\leq|\varphi(w_{n})-\varphi(u)|+|\varphi(u)|\leq 1+|\varphi(u)| \tag{7}\]
for sufficiently large \(n\). On the other hand, the function \(\varphi\) has an affine minorant, see [7, Proposition 5.2.25], that is, there exist \(l\in Y^{*}\) and \(b\in\mathbb{R}\) such that \(\varphi(z)\geq\langle l,z\rangle+b\) for all \(z\in Y\). Hence, we have
\[-\varphi(u_{n})\leq\|l\|_{Y^{*}}\|u_{n}\|+|b|. \tag{8}\]
Now, using (6), (7), and (8), we obtain
\[m\,\|u_{n}-w_{n}\|^{2}\leq\|Aw_{n}-g_{n}\|_{Y^{*}}\|u_{n}-w_{n}\|+1+|\varphi(u )|+\|l\|_{Y^{*}}\|u_{n}\|+|b|.\]
Next, since \(\{g_{n}\}\), \(\{w_{n}\}\) and \(\{Aw_{n}\}\) are bounded (recall that \(A\) is a bounded operator by the pseudomonotonicity), we get \(\|Aw_{n}-g_{n}\|_{Y^{*}}\leq c_{0}\) and \(\|w_{n}\|\leq c_{1}\) with \(c_{0}\), \(c_{1}>0\) independent of \(n\). Taking these inequalities into account, we obtain
\[m \|u_{n}-w_{n}\|^{2}\leq c_{0}\|u_{n}-w_{n}\|+1+|\varphi(u)|+\|l \|_{Y^{*}}\|u_{n}-w_{n}\|+\|l\|_{Y^{*}}\|w_{n}\|+|b|\] \[\leq(c_{0}+\|l\|_{Y^{*}})\|u_{n}-w_{n}\|+c_{1}\|l\|_{Y^{*}}+1+| \varphi(u)|+|b|,\]
\[m\left\|u_{n}-w_{n}\right\|^{2}\leq c_{2}\|u_{n}-w_{n}\|+c_{3},\]
with \(c_{2}\), \(c_{3}>0\) independent of \(n\). By the elementary inequality
\[x^{2}\leq\delta_{1}x+\delta_{2}\ \ \Longrightarrow\ \ x^{2}\leq\delta_{1}^{2}+2\delta_{2} \tag{9}\]
for all \(\delta_{1}\), \(\delta_{2}\), \(x>0\) (which follows from Young's inequality \(\delta_{1}x\leq\frac{x^{2}}{2}+\frac{\delta_{1}^{2}}{2}\)), we deduce that \(\{u_{n}-w_{n}\}\) is bounded in \(Y\). Consequently, \(\{u_{n}\}\) is bounded in \(Y\) independently of \(n\in\mathbb{N}\). By the reflexivity of \(Y\), there exist a subsequence of \(\{u_{n}\}\), denoted in the same way, and \(u_{0}\in Y\) such that
\[u_{n}\to u_{0}\ \ \mbox{weakly in}\ \ Y. \tag{10}\]
From the condition \((m1)\) of the Mosco convergence in Definition 1, it is clear that \(u_{0}\in E\).
Let \(z\in E\) be arbitrary. By the condition (m2) of the Mosco convergence, there exist two sequences \(\{w_{n}\}\) and \(\{z_{n}\}\) with
\[z_{n},w_{n}\in E_{n}\ \ \mbox{such that}\ \ z_{n}\to z\ \ \mbox{and}\ \ w_{n}\to u_{0}\ \ \mbox{in}\ \ Y,\ \mbox{as}\ n\to\infty. \tag{11}\]
We insert \(z=w_{n}\in E_{n}\) in \((P_{n})\) to get
\[\langle Au_{n},u_{n}-w_{n}\rangle\leq\langle g_{n},u_{n}-w_{n}\rangle+\varphi( w_{n})-\varphi(u_{n}).\]
Note that
\[\limsup_{n\to\infty}\langle Au_{n},u_{n}-w_{n}\rangle=\limsup_{n\to \infty}\langle Au_{n},u_{n}-u_{0}\rangle+\lim_{n\to\infty}\langle Au_{n},u_{0} -w_{n}\rangle\] \[=\limsup_{n\to\infty}\langle Au_{n},u_{n}-u_{0}\rangle.\]
Passing to the upper limit as \(n\to\infty\) in the inequality above implies
\[\limsup_{n\to\infty}\langle Au_{n},u_{n}-u_{0}\rangle\leq\lim_{n\to\infty} \langle g_{n},u_{n}-w_{n}\rangle+\lim_{n\to\infty}\varphi(w_{n})-\liminf_{n \to\infty}\varphi(u_{n})\leq 0,\]
where we have used the fact that \(\varphi\) is sequentially weakly lower semicontinuous. Combining the last inequality with (10), and the pseudomonotonicity of \(A\), we have
\[\left\{\begin{array}{l}Au_{n}\to Au_{0}\ \ \mbox{weakly in}\ \ Y^{*},\\ \langle Au_{0},u_{0}-v\rangle\leq\liminf_{n\to\infty}\langle Au_{n},u_{n}-v \rangle\ \ \mbox{for all}\ \ v\in Y.\end{array}\right. \tag{12}\]
Taking \(z=z_{n}\in E_{n}\) in \((P_{n})\), we have
\[\langle Au_{n},u_{n}-z_{n}\rangle\leq\langle g_{n},u_{n}-z_{n}\rangle+\varphi (z_{n})-\varphi(u_{n}),\]
which together with (11) and (12) yields
\[\langle Au_{0},u_{0}-z\rangle\] \[\leq\liminf_{n\to\infty}\langle Au_{n},u_{n}-z\rangle\leq\limsup _{n\to\infty}\langle Au_{n},u_{n}-z\rangle\] \[\qquad=\limsup_{n\to\infty}\langle Au_{n},u_{n}-z\rangle+\lim_{n \to\infty}\langle Au_{n},z-z_{n}\rangle=\limsup_{n\to\infty}\langle Au_{n},u_ {n}-z_{n}\rangle\] \[\leq\lim_{n\to\infty}\langle g_{n},u_{n}-z_{n}\rangle+\lim_{n\to \infty}\varphi(z_{n})-\liminf_{n\to\infty}\varphi(u_{n})\] \[\qquad\leq\langle g,u_{0}-z\rangle+\varphi(z)-\varphi(u_{0}).\]
Because \(z\in E\) is arbitrary, we deduce that \(u_{0}\in E\) is a solution to Problem \((P).\) By the uniqueness of solution to Problem \((P),\) we get
\[u_{0}=u.\]
Further, the convergence of the whole sequence follows by a contradiction argument.
Finally, we show the strong convergence of \(\{u_{n}\}\) to \(u\) in \(Y.\) We begin with the proof of the inequality
\[\limsup_{n\to\infty}\langle Au_{n},u_{n}-u\rangle\leq 0. \tag{13}\]
From the condition \((m2)\) of the Mosco convergence applied to \(u=u_{0},\) we find a sequence \(\{\eta_{n}\}\subset E_{n}\) such that \(\eta_{n}\to u\) in \(Y,\) as \(n\to\infty.\) Since \(u_{n}\in E_{n}\) solves Problem \((P_{n}),\) we choose \(z=\eta_{n}\in E_{n}\) in \((P_{n})\) to obtain
\[\langle Au_{n}-g_{n},\eta_{n}-u_{n}\rangle+\varphi(\eta_{n})-\varphi(u_{n}) \geq 0\ \ \mbox{for all}\ \ n\in\mathbb{N},\]
which entails
\[\langle Au_{n},u_{n}-u\rangle\leq\langle Au_{n},\eta_{n}-u\rangle+\langle g_{ n},u_{n}-\eta_{n}\rangle+\varphi(\eta_{n})-\varphi(u_{n}). \tag{14}\]
We observe the following convergences
* \(\langle Au_{n},\eta_{n}-u\rangle\to 0,\) since \(\eta_{n}-u\to 0\) in \(Y\), and \(A\) is a bounded operator,
* \(\langle g_{n},u_{n}-\eta_{n}\rangle\to 0,\) by (5)(b), and \(u_{n}-\eta_{n}\to 0\) weakly in \(Y\),
* \(\varphi(\eta_{n})\to\varphi(u),\) since \(\varphi\) is a continuous function,
* \(\limsup_{n\to\infty}(-\varphi(u_{n}))\leq\varphi(u),\) since \(\varphi\) is sequentially weakly lower semicontinuous.
Using (a)-(d) in (14), we deduce (13). By the \(m\)-strong monotonicity of \(A,\) inequality (13) and the fact that \(u_{n}\to u\) weakly in \(Y\), we have
\[m\,\limsup_{n\to\infty}\|u_{n}-u\|^{2}\leq m\,\limsup_{n\to \infty}\langle Au_{n}-Au,u_{n}-u\rangle\] \[\quad\leq m\,\limsup_{n\to\infty}\langle Au_{n},u_{n}-u\rangle+ \limsup_{n\to\infty}\langle Au,u-u_{n}\rangle\leq 0.\]
This proves that \(\|u_{n}-u\|\to 0,\) as \(n\to\infty.\) This completes the proof of the proposition.
## 4 Quasi-variational-hemivariational inequality
The goal of this section is to establish an existence theorem for the elliptic quasi-variational-hemivariational inequalities and demonstrate that the solution set is a compact set in \(V.\) Such class of problems represent elliptic variational-hemivariational inequalities with a constraint set depending on a solution.
Let \(V\) be a reflexive Banach space and \(V^{*}\) its dual space. The norm in \(V\) and the duality brackets for the pair \((V^{*},V)\) are denoted by \(\|\cdot\|\) and \(\langle\cdot,\cdot\rangle\), respectively. Further, let \(X\) be a Hilbert space with the norm \(\|\cdot\|_{X}\) and the inner product \(\langle\cdot,\cdot\rangle_{X}\). Given a set \(C\subset V\), operators \(A\colon V\to V^{*}\) and \(M\colon V\to X\), functions \(\varphi\colon V\to\mathbb{R}\) and \(j\colon X\to\mathbb{R}\), a multifunction \(K\colon C\to 2^{C}\), and \(f\in V^{*}\), we consider the following problem.
**Problem 8**.: _Find \(u\in C\) such that \(u\in K(u)\) and_
\[\langle Au-f,z-u\rangle+\varphi(z)-\varphi(u)+j^{0}(Mu;Mz-Mu)\geq 0\ \ \mbox{for all}\ \ z\in K(u).\]
To deliver the existence of a solution to Problem 8, we will need the following hypotheses on the data.
\(H(A)\): \(A\colon V\to V^{*}\) is an operator such that
1. \(A\) is pseudomonotone,
2. \(A\) is \(m\)-strongly monotone.
\(H(C)\): \(C\subset V\) is nonempty, closed and convex.
\(H(j)\): \(j\colon X\to\mathbb{R}\) is a locally Lipschitz function such that
\[\|\partial j(x)\|_{X}\leq\alpha+\beta\|x\|_{X}\ \ \mbox{for all}\ \ x\in X\]
with \(\alpha\), \(\beta\geq 0\).
\(H(K)\): \(K\colon C\to 2^{C}\) is a multifunction with nonempty, closed, convex values which is weakly Mosco continuous, i.e., for any \(\{v_{n}\}\subset V\) such that \(v_{n}\to v\) weakly in \(V\), one has \(K(v_{n})\ \stackrel{{ M}}{{\longrightarrow}}\ K(v)\), and there exists a bounded set \(Q\subset V\) such that \(K(u)\cap Q\neq\emptyset\) for all \(u\in C\).
\(H(M)\): \(M\colon V\to X\) is a linear and compact operator.
\(H(\varphi)\): \(\varphi\colon V\to\mathbb{R}\) is a convex, and lower semicontinuous function.
\(H(f)\): \(f\in V^{*}\).
\((H_{0})\): \(m>\beta\,\|M\|_{\mathcal{L}(V,X)}^{2}\).
**Theorem 9**.: _If hypotheses \(H(A)\), \(H(C)\), \(H(j)\), \(H(K)\), \(H(M)\), \(H(\varphi)\), \(H(f)\), and \((H_{0})\) hold, then Problem 8 has a solution._
Proof.: The proof will be done in several steps. The main idea is to apply the Kakutani-Ky Fan-Glicksberg fixed point theorem to an equivalent form of the problem.
**Step 1.** First we note that Problem 8 can be formulated in the following equivalent form.
Problem 10.: _Find \(u\in C\) such that \(u\in K(u)\) and there exists \(w\in\partial j(Mu)\), \(w\in X\) and_
\[\langle Au-f,z-u\rangle+\varphi(z)-\varphi(u)+\langle M^{*}w,z-u\rangle\geq 0 \ \ \mbox{for all}\ \ z\in K(u). \tag{15}\]
Here and in what follows, \(M^{*}\colon X^{*}\to V^{*}\) denotes the adjoint operator to \(M\).
Let \(u\in C\) be a solution to Problem 8. Using [26, Proposition 3.23(iii)], we have the following property
\[j^{0}(Mu;Mz-Mu)=\max\{\,\langle\zeta,Mz-Mu\rangle_{X}\mid\zeta\in\partial j(Mu )\,\}\]
for all \(z\in K(u)\). Thus, for each \(z\in K(u)\), there exists \(w_{z}\in\partial j(Mu)\) such that
\[j^{0}(Mu;Mz-Mu)=\langle w_{z},Mz-Mu\rangle_{X}=\langle M^{*}w_{z},z-u\rangle_{ X}.\]
So, for each \(z\in K(u)\) there exists \(w_{z}\in\partial j(Mu)\) such that
\[\langle Au-f,z-u\rangle+\varphi(z)-\varphi(u)+\langle M^{*}w_{z},z-u\rangle \geq 0. \tag{16}\]
Arguing as in the proof of [9, Proposition 3.3], we are able to find an element \(w\in\partial j(Mu)\), which is independent of \(z\in K(u)\), such that inequality (15) holds. Hence, we see that \(u\in C\) is a solution to Problem 10.
Conversely, let \(u\in C\) be a solution to Problem 10. Then there is \(w\in\partial j(Mu)\) which, by the definition of the subgradient, means that \(j^{0}(Mu,\eta)\geq\langle w,\eta\rangle_{X}\) for all \(\eta\in X\). Hence,
\[\langle M^{*}w,z-u\rangle_{X}=\langle w,Mz-Mu\rangle_{X}\leq j^{0}(Mu;Mz-Mu)\]
for all \(z\in K(u)\). Therefore \(u\in C\) solves Problem 8. In conclusion, Problems 8 and 10 are equivalent. In what follows we show existence of solutions to Problem 10.
**Step 2.** We obtain "a priori" estimate for the solutions to Problem 10. Let \(u\in C\) be a solution to Problem 10. So \(u\in K(u)\) and there exists \(w\in\partial j(Mu)\) such that inequality (15) holds. Since, by \(H(K)\), \(K(u)\cap Q\) is nonempty, there exists \(z_{0}\in K(u)\cap Q\) such that
\[\langle Au-f,z_{0}-u\rangle+\varphi(z_{0})-\varphi(u)+\langle M^{*}w,z_{0}-u \rangle\geq 0.\]
Hence
\[\langle Au-Az_{0},u-z_{0}\rangle\leq\langle Az_{0}-f+M^{*}w,z_{0}-u\rangle+ \varphi(z_{0})-\varphi(u).\]
From [7, Proposition 5.2.25], it is known that there are \(l\in Y^{*}\) and \(b\in\mathbb{R}\) such that \(\varphi(z)\geq\langle l,z\rangle+b\) for all \(z\in V\). Hence, we have \(-\varphi(u)\leq\|l\|_{Y^{*}}\|u\|+|b|\) which together with \(m\)-strong monotonicity of \(A\) implies
\[m\,\|u-z_{0}\|^{2}\leq(\|Az_{0}\|_{V^{*}}+\|f\|_{V^{*}}+\|M\|\|w\|_{X^{*}})\, \|u-z_{0}\|+|\varphi(z_{0})|+\|l\|_{V^{*}}\|u\|+|b|,\]
where \(\|M\|=\|M\|_{\mathcal{L}(V,X)}\). We use the growth condition in \(H(j)\) and the elementary inequality \(\|u\|\leq\|u-z_{0}\|+\|z_{0}\|\) to obtain
\[m\,\|u-z_{0}\|^{2}\leq(\|Az_{0}\|_{V^{*}}+\|f\|_{V^{*}}+\alpha\|M\|+\|l\|_{X^{ *}})\,\|u-z_{0}\|\]
\[+\beta\|M\|^{2}(\|u-z_{0}\|+\|z_{0}\|)\|u-z_{0}\|+|\varphi(z_{0})|+\|l\|_{V^{*} }\|z_{0}\|+|b|,\]
which implies
\[\left(m-\beta\|M\|^{2}\right)\,\|u-z_{0}\|^{2}\leq c_{1}\,\|u-z_{0}\|+c_{2},\]
where
\[c_{1} =\|Az_{0}\|_{V^{*}}+\|f\|_{V^{*}}+\alpha\|M\|+\|l\|_{X^{*}}+\beta\|M \|^{2}\|z_{0}\|,\] \[c_{2} =|\varphi(z_{0})|+\|l\|_{V^{*}}\|z_{0}\|+|b|.\]
By the smallness condition \((H_{0})\) and the elementary inequality (9), we deduce that \(\|u-z_{0}\|\) is bounded. So there are constants \(R_{1}\), \(R_{2}>0\) such that
\[\|u\|\leq R_{1}\quad\mbox{and}\quad\|Mu\|_{X}\leq\|M\|\|u\|\leq\|M\|R_{1}=:R_{2},\]
where \(R_{1}\) and \(R_{2}\) depend on the set \(Q\) but are independent of \(z_{0}\), since \(z_{0}\in Q\) and \(Q\) is bounded in \(V\). This completes the proof of the desired estimates.
**Step 3.** We now define a modification of the multivalued subgradient term \(\partial j\). We introduce \(F\colon X\to 2^{X}\) given by
\[F(z)=\partial j(P_{R_{2}}(z))\ \mbox{ for all }\ z\in X,\]
where \(P_{R_{2}}\colon X\to X\) is the \(R_{2}\)-radial retraction defined by
\[P_{R_{2}}(z)=\begin{cases}z,&\mbox{if }\|z\|_{X}\leq R_{2},\\ R_{2}\dfrac{z}{\|z\|_{X}},&\mbox{otherwise},\end{cases}\quad\mbox{ for }\ z\in X.\]
Hence, we have
\[F(z)=\begin{cases}\partial j(z),&\mbox{if }\|z\|_{X}\leq R_{2},\\ \partial j\Big{(}R_{2}\dfrac{z}{\|z\|_{X}}\Big{)},&\mbox{otherwise},\end{cases} \quad\mbox{ for }\ z\in X.\]
Note that \(F\) has the same properties as \(\partial j\) (nonempty, closed, weakly compact values, the graph of \(F\) is closed in \(X\times X_{w}\)-topology) and, moreover, by \(H(j)\), it follows
\[\|F(z)\|_{X}\leq\alpha+\beta R_{2}=:R\ \mbox{ for all }\ z\in X.\]
**Step 4.** Let \(D:=\{(v,w)\in C\times X\mid\|v\|\leq R_{1},\ \|w\|_{X}\leq R\}\). For each \((v,w)\in D\) fixed, we consider the following auxiliary problem:
\[P(v,w)\qquad\left\{\begin{array}{c}\mbox{find $u\in C$ such that $u\in K(v)$ and}\\ \quad\langle Au-f,z-u\rangle+\varphi(z)-\varphi(u)+\langle M^{*}w,z-u\rangle_{X}\geq 0\\ \qquad\mbox{ for all }\ z\in K(v).\end{array}\right.\]
Let \(p\colon D\subset C\times X\to C\) be the solution map of problem \(P(v,w)\) defined by
\[p(v,w)=u. \tag{17}\]
The map \(p\) is well defined since, for each \((v,w)\in D\), problem \(P(v,w)\) has a unique solution \(u\in C\). Indeed, problem \(P(v,w)\) can be equivalently written as
\[\left\{\begin{array}{c}\mbox{find $u\in C$ such that $u\in E$ \ and }\\ \quad\langle Au-g,z-u\rangle+\varphi(z)-\varphi(u)\geq 0\ \mbox{ for all }\ z\in E\end{array}\right.\]
with \(g=f-M^{*}w\) and \(E=K(v)\). From \(H(A)\), \(H(K)\), \(H(M)\), \(H(\varphi)\), and \(H(f)\), the latter has a unique solution, see Lemma 4.
Furthermore, we claim that \(p\colon D\subset C\times X\to C\) is continuous from \(V_{w}\times X_{w}\) to \(V\). To this end, let \((v_{n},w_{n})\in C\times X\), \(v_{n}\to v\) weakly in \(V\), and \(w_{n}\to w\) weakly in \(X\) as \(n\to\infty\). We show that \(u_{n}=p(v_{n},w_{n})\to p(v,w)=u\) in \(V\). Since, by the definition of \(p\), \(u_{n}\) is the unique solution to problem \(P(v_{n},w_{n})\), we have
\[P(v_{n},w_{n})\qquad\left\{\begin{array}{c}\mbox{find $u_{n}\in C$ such that $u_{n}\in K(v_{n})$ and }\\ \quad\langle Au_{n}-f,z-u_{n}\rangle+\varphi(z)-\varphi(u_{n})+\langle M^{*}w_ {n},z-u_{n}\rangle_{X}\geq 0\\ \qquad\mbox{ for all }\ z\in K(v_{n}).\end{array}\right.\]
Again, we remark that problem \(P(v_{n},w_{n})\) can be formulated as follows
\[\left\{\begin{array}{c}\mbox{find $u_{n}\in C$ \ such that $u_{n}\in E_{n}$ \ and }\\ \quad\langle Au_{n}-g_{n},z-u_{n}\rangle+\varphi(z)-\varphi(u_{n})\geq 0 \ \mbox{ for all }\ z\in E_{n}\end{array}\right.\]
with \(g_{n}=f-M^{*}w_{n}\in V^{*}\) and \(E_{n}=K(v_{n})\subset V\). Using the convergences \(v_{n}\to v\) weakly in \(V\) and \(w_{n}\to w\) weakly in \(X\), by the compactness of \(M^{*}\) and \(H(K)\), we get
\[g_{n}\to g\ \mbox{ in }\ V^{*},\] \[E_{n}=K(v_{n})\ \stackrel{{ M}}{{\longrightarrow}}\ K(v)=E.\]
By applying Proposition 7, we obtain that \(u_{n}\to u\) in \(V\), where \(u\in K(v)\) is the unique solution to problem \(P(v,w)\). This proves the claim.
**Step 5.** We define the multifunction \(\Lambda\colon D\to 2^{D}\) by
\[\Lambda(v,w)=\big{(}p(v,w),F(Mp(v,w))\big{)}\ \mbox{ for }\ (v,w)\in D. \tag{18}\]
The values of the map \(\Lambda\) stay in \(D\). In fact, if \((v,w)\in D\), then \(\|v\|\leq R_{1}\) and \(\|w\|_{X}\leq R\). Then \(\|p(v,w)\|=\|u\|\leq R_{1}\) by Step 2, and \(\|F(Mp(v,w))\|_{X}\leq\alpha+\beta\|M\|\|u\|\leq R\) by Step 3. This means that \((p(v,w),F(Mp(v,w)))\in D\). Further, the values of \(\Lambda\) are nonempty, closed and convex sets, by the analogous properties of the Clarke subgradient.
The next claim is that the graph of \(\Lambda\) is sequentially weakly closed in \(D\times D\). Let \((v_{n},w_{n})\in D\), \((\overline{v}_{n},\overline{w}_{n})\in\Lambda(v_{n},w_{n})\), \((v_{n},w_{n})\to(v,w)\) weakly in \(V\times X\), and \((\overline{v}_{n},\overline{w}_{n})\to(\overline{v},\overline{w})\) weakly in \(V\times X\). We show that \((\overline{v},\overline{w})\in\Lambda(v,w)\). By the definition of \(\Lambda\), we have
\[\overline{v}_{n}=p(v_{n},w_{n})\quad\mbox{and}\quad\overline{w}_{n}\in F(Mp(v _{n},w_{n})). \tag{19}\]
From the continuity of the map \(p\) (see Proposition 7) and continuity of \(M\), we get
\[p(v_{n},w_{n})\to p(v,w)\ \ \mbox{in}\ V\ \ \ \mbox{and}\ \ \ Mp(v_{n},w_{n})\to Mp(v,w)\ \ \mbox{in}\ X\]
which, together with (19), entails
\[\overline{v}=p(v,w)\ \ \ \mbox{and}\ \ \ \overline{w}\in F(Mp(v,w)).\]
The latter is a consequence of the closedness of the graph of \(F\) in \(X\times X_{w}\) topology. Hence \((\overline{v},\overline{w})\in\big{(}p(v,w),F(Mp(v,w))\big{)}=\Lambda(v,w)\). The claim is proved.
**Step 6.** We apply the Kakutani-Ky Fan-Glicksberg theorem with \(Y=V\times X\) and the map \(\Lambda\) given by (18). Thus, there exists \((v^{*},w^{*})\in D\) such that \((v^{*},w^{*})\in\Lambda(v^{*},w^{*})\). Hence, \(v^{*}=u^{*}\) and \(w^{*}\in F(Mu^{*})\), where \(u^{*}\in C\) satisfies \(u^{*}\in K(u^{*})\) and
\[\langle Au^{*}-f,z-u^{*}\rangle+\varphi(z)-\varphi(u^{*})+\langle M^{*}w^{*},z -u^{*}\rangle_{X}\geq 0\ \ \mbox{for all}\ \ z\in K(u^{*})\]
with \(w^{*}\in F(Mu^{*})\). Repeating the estimate of Step 2 for \(u^{*}\in C\), we have \(\|u^{*}\|\leq R_{1}\) and \(\|Mu^{*}\|_{X}\leq R_{2}\) which implies \(F(Mu^{*})=\partial j(Mu^{*})\). Hence, \(u^{*}\in C\) is such that \(u^{*}\in K(u^{*})\) and
\[\langle Au^{*}-f,z-u^{*}\rangle+\varphi(z)-\varphi(u^{*})+\langle M^{*}w^{*}, z-u^{*}\rangle_{X}\geq 0\ \ \mbox{for all}\ \ z\in K(u^{*})\]
with \(w^{*}\in\partial j(Mu^{*})\). Thus \(u^{*}\in C\) solves Problem 10. By Step 1, we conclude that \(u^{*}\in C\) is a solution to Problem 8. The proof is complete.
The following result concerns the compactness of the solution set.
Theorem 11.: _If the hypotheses of Theorem 9 hold, then the solution set to Problem 8 is a nonempty and compact subset of \(V\)._
Proof.: Let \(\mathbb{S}\) be the solution set to Problem 8. It is a nonempty set by Theorem 9. From the proof of Theorem 9, it follows that \(\mathbb{S}\subset p(D)\), where the solution map \(p\colon D\subset C\times X\to C\) is defined by (17) with
\[D:=\{(v,w)\in C\times X\ |\ \|v\|\leq R_{1},\ \|w\|_{X}\leq R\}.\]
We establish compactness of \(\mathbb{S}\) based on two claims below.
**Claim 1.** The set \(p(D)\) is compact in \(C\). It suffices to show that \(p(D)\) is sequentially compact, that is, any sequence extracted from \(p(D)\) contains a subsequence that converges to an element in this set. Let \(\{u_{n}\}\subset p(D)\). From the definition of \(p\), we find a sequence \(\{(v_{n},w_{n})\}\subset D\) such that \(u_{n}=p(v_{n},w_{n})\) for all \(n\in\mathbb{N}\).
Note that \(D\) is bounded, closed and convex in the reflexive Banach space \(V\times X\), hence it is sequentially weakly compact. So we may assume, by passing to a subsequence if necessary, that \((v_{n},w_{n})\to(v,w)\) weakly in \(V\times X\) with \((v,w)\in D\). Since \(p\) is sequentially continuous from \(V_{w}\times X_{w}\) to \(C\), we have
\[p(v_{n},w_{n})\to p(v,w)\ \ \mbox{in}\ \ V.\]
This means that \(u_{n}:=p(v_{n},w_{n})\) and \(u_{0}:=p(v,w)\) satisfy \(u_{n}\to u_{0}\) in \(V\), as \(n\to\infty\), and \(u_{0}\in p(D)\). Hence, the claim follows.
**Claim 2.** The set \(\mathbb{S}\) is sequentially closed in \(V\). Let \(\{u_{n}\}\subset\mathbb{S}\) be such that \(u_{n}\to\overline{u}\) in \(V\), as \(n\to\infty\) with \(\overline{u}\in V\). We will show that \(\overline{u}\in\mathbb{S}\). It is clear that \(\overline{u}\in C\). We have \(u_{n}\in C\), \(u_{n}\in K(u_{n})\) and
\[\langle Au_{n}-f,z-u_{n}\rangle+\varphi(z)-\varphi(u_{n})\] \[\quad+j^{0}(Mu_{n};Mz-Mu_{n})\geq 0\ \ \mbox{for all}\ \ z\in K(u_{n}). \tag{20}\]
Let \(w\in K(\overline{u})\) be arbitrary. Since \(u_{n}\in C\), by the condition \((m2)\) of the Mosco convergence \(K(u_{n})\ \stackrel{{ M}}{{\longrightarrow}}\ K(\overline{u})\), there exists \(w_{n}\in C\) such that \(w_{n}\in K(u_{n})\) and \(w_{n}\to w\) in \(V\), as \(n\to\infty\). We choose \(z=w_{n}\in K(u_{n})\) in (20) to get
\[\langle Au_{n}-f,w_{n}-u_{n}\rangle+\varphi(w_{n})-\varphi(u_{n})+j^{0}(Mu_{n} ;Mw_{n}-Mu_{n})\geq 0. \tag{21}\]
Next, we will pass to the limit in (21). The following convergences hold:
* \(Au_{n}\to A\overline{u}\) weakly in \(V^{*}\), which is a consequence of [26, Proposition 3.66] and the facts that \(A\) is a bounded operator and \(\lim_{n\to\infty}\langle Au_{n},u_{n}-\overline{u}\rangle=0\),
* \(Mu_{n}\to M\overline{u}\) in \(X\), by hypothesis \(H(M)\),
* \(\varphi(w_{n})-\varphi(u_{n})\to\varphi(w)-\varphi(\overline{u})\), since \(\varphi\) is a continuous function,
* \(\limsup_{n\to\infty}j^{0}(Mu_{n};Mw_{n}-Mu_{n})\leq j^{0}(M\overline{u};Mw-M \overline{u})\), since \(j^{0}(\cdot;\cdot)\) is upper semicontinuous, see [26, Proposition 3.23(ii)].
We employ the convergences (i)-(iv) and take the upper limit in the inequality (21) to obtain
\[\langle A\overline{u}-f,w-\overline{u}\rangle+\varphi(w)-\varphi(\overline{u} )+j^{0}(M\overline{u};Mw-M\overline{u})\geq 0\]
for all \(w\in K(\overline{u})\). Finally, since \(u_{n}\in K(u_{n})\), \(u_{n}\to\overline{u}\) in \(V\), by the condition \((m1)\), we deduce \(\overline{u}\in K(\overline{u})\). Hence \(\overline{u}\in\mathbb{S}\) and the claim is proved.
Since a closed subset of a compact set is compact, by Claims 1 and 2, we conclude that the solution set \(\mathbb{S}\) of Problem 8 is nonempty and compact subset of \(V\). The proof is complete.
We complete this section by providing a simple example of function \(j\) which satisfies \(H(j)\) and whose subgradient is not relaxed monotone. This means that Theorems 9 and 11 are applicable in this case while the results of [24] can not be used.
Remark 12.: _We recall that a locally Lipschitz function \(j\colon X\to\mathbb{R}\) satisfies the relaxed monotonicity condition, if_
\[\exists\,m\geq 0\ \ \mbox{such that}\ \ \langle\partial j(x_{1})-\partial j(x_{2}),x_{ 1}-x_{2}\rangle_{X}\geq-m\,\|x_{1}-x_{2}\|_{X}^{2} \tag{22}\]
_for all \(x_{1}\), \(x_{2}\in X\), see [26, Section 3.3]. For a convex function \(j\), condition (22) reduces to the monotonicity of the (convex) subdifferential, i.e., \(m=0\). Now, let \(j\colon\mathbb{R}\to\mathbb{R}\) be defined by_
\[j(r)=\begin{cases}0&\text{if}\ \ r<0,\\ \frac{r^{2}}{2}&\text{if}\ \ r\in[0,1),\\ 1&\text{if}\ \ r\geq 1,\end{cases}\qquad\text{for}\ \ r\in\mathbb{R}.\]
_It is clear that \(j\) is a locally Lipschitz, nonconvex function with_
\[\partial j(r)=\begin{cases}0&\text{if}\ \ r<0,\\ r&\text{if}\ \ r\in[0,1),\\ [0,1]&\text{if}\ \ r=1,\\ 0&\text{if}\ \ r>1\end{cases}\ \ \text{for}\ \ r\in\mathbb{R},\]
_and \(|\partial j(r)|\leq 1\) for all \(r\in\mathbb{R}\), so \(H(j)\) holds with \(\alpha=1\) and \(\beta=0\). Note that \(j\) satisfies (22) if and only if \(\partial\phi\) is a monotone graph, i.e.,_
\[\exists\,m\geq 0\ \ \text{such that}\ \ (\partial\phi(r)-\partial\phi(s))(r-s) \geq 0\ \ \text{for all}\ \ r,s\in\mathbb{R}, \tag{23}\]
_where \(\phi(r)=j(r)+\frac{m}{2}r^{2}\) for \(r\in\mathbb{R}\). We claim that \(j\) does not satisfy (23). Suppose, contrary to our claim, that (23) holds. Let_
\[r_{n}=1-\frac{1}{n},\ \ s_{n}=1+\frac{1}{n}\ \ \text{for}\ \ n\in\mathbb{N}.\]
_Then, there exists \(m\geq 0\) such that for all \(n\in\mathbb{N}\), we have \((\partial\phi(r_{n})-\partial\phi(s_{n}))(r_{n}-s_{n})\geq 0\). Hence, a short computation leads to \(n\leq m+1\), a contradiction. Note also that in this example, the smallness condition \((H_{0})\) is trivially satisfied._
## 5 Applications to semipermeability models
In this section we study two applications of elliptic quasi-variational-hemivariational inequalities to semipermeable media. The general inequalities of this kind incorporate both interior and boundary semipermeability. In both applications, we simultaneously treat monotone and nonmonotone relations described by subdifferential operators. In the first model the interior semipermeability law is governed by the subgradient of a locally Lipschitz potential while the heat flux on a part of the boundary is formulated as a monotone relation of the temperature. In the second model the boundary semipermeability condition is nonmonotone and involves unilateral constraints while the interior semipermeability is described by a subdifferential of a convex function. The weak formulations of both models turn out to be quasi-variational-hemivariational inequalities of elliptic type.
Let \(\Omega\) be a bounded domain of \(\mathbb{R}^{d}\) (that is, \(\Omega\) is an open, bounded and connected set) with Lipschitz continuous boundary \(\partial\Omega=\Gamma\). Let \(\nu\) denote the unit outward normal vector on \(\Gamma\) which exists a.e. on \(\Gamma\). We suppose the boundary consists of measurable and disjoint parts \(\Gamma_{1}\) and \(\Gamma_{2}\) such that \(m(\Gamma_{1})>0\) and \(\Gamma=\overline{\Gamma}_{1}\cup\overline{\Gamma}_{2}\).
The classical model for the heat conduction problem is described by the following boundary value problem.
Problem 13.: _Find a temperature \(u\colon\Omega\to\mathbb{R}\) such that \(u\in U(u)\) and_
\[-\mathrm{div}\,a(x,\nabla u(x))=g(x,u(x))\qquad\mathrm{in}\ \ \Omega \tag{24}\]
\[\left\{\begin{array}{ll}g(x,u(x))=g_{1}(x)+g_{2}(x,u(x)),&\qquad\mathrm{in} \ \ \Omega\\ -g_{2}(x,u(x))\in\partial h(x,u(x))&\qquad\mathrm{in}\ \ \Omega\\ \end{array}\right. \tag{25}\]
\[u(x)=0\qquad\mathrm{on}\ \ \Gamma_{1} \tag{26}\]
\[-\frac{\partial u(x)}{\partial\nu_{a}}\in\partial_{c}k(x,u(x))\qquad\mathrm{ on}\ \ \Gamma_{2}, \tag{27}\]
_where \(U\colon V\to 2^{V}\) is defined by_
\[U(u):=\{\,w\in V\mid r(w)\leq m(u)\,\}. \tag{28}\]
Here, the conormal derivative associated with the field \(a\) is given by
\[\frac{\partial u(x)}{\partial\nu_{a}}=a(x,\nabla u(x))\cdot\nu.\]
We will provide the variational formulation of Problem 13 within the framework of Section 4. We introduce the following space
\[V=\{\,v\in H^{1}(\Omega)\mid v=0\ \mathrm{on}\ \Gamma_{1}\,\}. \tag{29}\]
Since \(m(\Gamma_{1})>0\), on \(V\) we can consider the norm \(\|v\|_{V}=\|\nabla v\|_{L^{2}(\Omega;\mathbb{R}^{d})}\) for \(v\in V\) which is equivalent on \(V\) to the standard \(H^{1}(\Omega)\) norm due to Poincaré's inequality. By \(\gamma\colon V\to L^{2}(\Gamma)\) we denote the trace operator which is known to be linear, bounded and compact. Moreover, by \(\gamma v\) we denote the trace of an element \(v\in H^{1}(\Omega)\). In the sequel, we denote by \(i\colon V\to L^{2}(\Omega)\) the embedding operator from \(V\) to \(L^{2}(\Omega)\) and \(\|i\|=\|i\|_{\mathcal{L}(V,L^{2}(\Omega))}\) stands for its norm. We also set \(C=V\) and \(X=L^{2}(\Omega)\).
In order to study the variational formulation of Problem 13, we need the following hypotheses.
\[\left\{\begin{array}{ll}a\colon\Omega\times\mathbb{R}^{d}\to\mathbb{R}^{d} \ \mathrm{is\ such\ that}\\ &\mathrm{(a)}\ a(\cdot,\xi)\ \mathrm{is\ measurable\ on}\ \Omega\ \mathrm{for\ all}\ \xi\in\mathbb{R}^{d},\\ &\mathrm{and}\ a(x,0)=0\ \mathrm{for\ a.e.}\ x\in\Omega.\\ &\mathrm{(b)}\ a(x,\cdot)\ \mathrm{is\ continuous\ on}\ \mathbb{R}^{d}\ \mathrm{for\ a.e.}\ x\in\Omega.\\ &\mathrm{(c)}\ \|a(x,\xi)\|\leq m_{a}\left(1+\|\xi\|\right)\ \mathrm{for\ all}\ \ \xi\in\mathbb{R}^{d},\ \mathrm{a.e.}\ x\in\Omega\\ &\mathrm{with}\ m_{a}>0.\\ &\mathrm{(d)}\ (a(x,\xi_{1})-a(x,\xi_{2}))\cdot(\xi_{1}-\xi_{2})\geq\alpha_{a}\,\| \xi_{1}-\xi_{2}\|^{2}\\ &\mathrm{for\ all}\ \ \xi_{1},\xi_{2}\in\mathbb{R}^{d},\ \mathrm{a.e.}\ x\in\Omega\ \ \mathrm{with}\ \alpha_{a}>0.\end{array}\right. \tag{30}\]
\[\left\{\begin{array}{ll}h\colon\Omega\times\mathbb{R}\to\mathbb{R}\mbox{ is such that}\\ &\\ &\mbox{(a) $h(\cdot,r)$ is measurable on $\Omega$ for all $r\in\mathbb{R}$ and there}\\ &\\ &\mbox{exists $\overline{e}\in L^{2}(\Omega)$ such that $h(\cdot,\overline{e}(\cdot))\in L^{1}(\Omega)$.}\\ &\\ &\mbox{(b) $h(x,\cdot)$ is locally Lipschitz on $\mathbb{R}$,}\ \mbox{ for a.e. $x\in\Omega$.}\\ &\\ &\mbox{(c) there exist $\overline{c}_{0}$, $\overline{c}_{1}\geq 0$ such that}\\ &\\ &|\partial h(x,r)|\leq\overline{c}_{0}+\overline{c}_{1}|r|\mbox{ for all $r\in\mathbb{R}$, a.e. $x\in\Omega$.}\\ &\\ &\left\{\begin{array}{ll}k\colon\Gamma_{2}\times\mathbb{R}\to\mathbb{R} \mbox{ is such that}\\ &\\ &\mbox{(a) $k(\cdot,r)$ is measurable on $\Gamma_{2}$ for all $r\in\mathbb{R}$.}\\ &\\ &\mbox{(b) $k(x,\cdot)$ is convex and continuous on $\mathbb{R}$, a.e. $x\in\Omega$.}\\ &\\ &\mbox{(c) $x\mapsto k(x,v(x))$ belongs to $\,L^{1}(\Gamma_{2})$ \,for $\,v\in L^{2}(\Gamma_{2})$.}\\ &\\ &\left\{\begin{array}{ll}r\colon V\to\mathbb{R}\mbox{ is positively homogeneous, subadditive and}\\ &\\ &\mbox{lower semicontinuous,}\\ m\colon L^{2}(\Omega)\to\mathbb{R}\mbox{ is continuous such that}\\ &\\ &\rho:=\inf_{v\in L^{2}(\Omega)}m(v)>0\mbox{ and }r(0)\leq\rho,\\ &\\ g_{1}\in L^{2}(\Omega).\end{array}\right.\end{array}\right. \tag{33}\]
Using the standard procedure, we obtain the weak formulation of Problem 13 which is a quasi-variational-hemivariational inequality.
Problem 14.: _Find \(u\in V\) such that \(u\in U(u)\) and_
\[\int_{\Omega}a(x,\nabla u(x))\cdot\nabla(z(x)-u(x))\,dx+\int_{ \Gamma_{2}}\left(k(x,z(x))-k(x,u(x))\right)d\Gamma\] \[\quad+\int_{\Omega}h^{0}(x,u(x);z(x)-u(x))\,dx\geq\int_{\Omega}g_ {1}(x)(z(x)-u(x))\,dx\ \mbox{ for all }\ z\in U(u).\]
Theorem 15.: _Assume that conditions (30)-(33) are satisfied and the following smallness condition holds_
\[\overline{c}_{1}\sqrt{2}\left\|i\right\|^{2}<\alpha_{a}. \tag{34}\]
_Then, Problem 14 has a nonempty and compact solution set in \(V\)._
Proof.: We apply Theorems 9 and 11 with the following data: \(C=V\), \(X=L^{2}(\Omega)\), \(K(\cdot)=U(\cdot)\), \(f=g_{1}\) and
\[A\colon V\to V^{*}, \left\langle Au,v\right\rangle=\int_{\Omega}a(x,\nabla u(x)) \cdot\nabla v(x)\,dx\ \mbox{ for }\ u,v\in V, \tag{35}\] \[\varphi\colon V\to\mathbb{R}, \varphi(v)=\int_{\Gamma_{2}}k(x,v(x))\,d\Gamma\ \mbox{ for }\ v\in V,\] (36) \[j\colon X\to\mathbb{R}, j(w)=\int_{\Omega}h(x,w(x))\,dx\ \mbox{ for }\ w\in X,\] (37) \[M\colon V\to X, M=i\colon V\to X\ \mbox{ the embedding map.} \tag{38}\]
First, we note that by [26, Theorem 3.47] and hypothesis (31), \(j\colon X\to\mathbb{R}\) is a locally Lipschitz function such that
\[j^{0}(w;z)\leq\int_{\Omega}h^{0}(x,w(x);z(x))\,dx, \tag{39}\]
\[\partial j(w)\subset\int_{\Omega}\partial h(x,w(x))\,dx \tag{40}\]
for all \(w\), \(z\in X\). Under our notation, from (39) it is clear that any solution of Problem 8 is also a solution of Problem 14. Based on this fact, we are going to check the validity of the hypotheses \(H(A)\), \(H(C)\), \(H(j)\), \(H(K)\), \(H(M)\), \(H(\varphi)\), \(H(f)\), and \((H_{0})\) of Theorem 9 and show the existence of a solution to Problem 14.
Second, by virtue of the definition of \(A\), see (35), and conditions (30)(a)-(b), we can see that \(A\) is continuous. Besides, hypothesis (30)(d) reveals that \(A\) is strongly monotone with constant \(\alpha_{a}\). The boundedness of \(A\) can be verified directly by applying the Hölder inequality and (30)(c). This means that \(A\) is pseudomonotone (see [26, Proposition 3.69]). Thus \(H(A)\) holds.
Employing the growth condition (31)(c), inclusion (40), and the Hölder inequality, we deduce that \(j\) satisfies condition \(H(j)\) with \(\beta=\overline{c}_{1}\sqrt{2}\). From the definition of \(\varphi\), we can see that it is convex and continuous, which implies \(H(\varphi)\). Furthermore, condition \(H(M)\) holds by the well-known properties of the embedding operator. It is also obvious that \(H(C)\) and \(H(f)\) are satisfied.
Next, we shall prove that the multifunction \(K\) fulfills hypothesis \(H(K)\). Set \(Q=\{0_{V}\}\) which is a zero element of \(V\). Because of the condition \(r(0)\leq\rho:=\inf_{v\in L^{2}(\Omega)}m(v)\), for each \(v\in V\), we have
\[0_{V}\in K(v)\cap Q\ \ \mbox{for all}\ \ v\in V.\]
For any \(v\in V\) fixed, let \(u\), \(w\in K(v)\) and \(\lambda\in(0,1)\) be arbitrary. The convexity of \(r\) (since \(r\) is positively homogeneous and subadditive) entails
\[r(\lambda u+(1-\lambda)w)\leq\lambda r(u)+(1-\lambda)r(w)\leq\lambda m(v)+(1- \lambda)m(v)=m(v),\]
this means \(\lambda u+(1-\lambda)w\in K(v)\), i.e., \(K(v)\) is convex. Let \(\{u_{n}\}\subset K(v)\) be such that \(u_{n}\to u\) as \(n\to\infty\) for some \(u\in V\). The continuity of \(r\) implies
\[r(u)\leq\liminf_{n\to\infty}r(u_{n})\leq m(v).\]
Hence, \(K(v)\) is a closed set for each \(v\in V\). Thus, the multifunction \(K\colon V\to 2^{V}\) has nonempty, closed, and convex values.
Next, we fix a sequence \(\{v_{n}\}\subset V\) which is such that
\[v_{n}\to v\ \ \mbox{weakly in}\ V\ \mbox{as}\ n\to\infty \tag{41}\]
for some \(v\in V\). We shall verify that \(K(v_{n})\ \stackrel{{ M}}{{\longrightarrow}}\ K(v)\) by checking the conditions (m1) and (m2) of Definition 1. For the proof of (m1), let \(u\in K(v)\) be arbitrary
and we set \(u_{n}=\frac{m(v_{n})}{m(v)}u.\) Then, from the positive homogeneity of \(r\) and the condition \(\rho>0\), it follows
\[r(u_{n})=\frac{m(v_{n})}{m(v)}r(u)\leq m(v_{n}),\]
which gives \(u_{n}\in K(v_{n})\) for every \(n\in\mathbb{N}\). Since \(V\) embeds into \(X\) compactly, and \(m\) is continuous, a short calculation gives
\[\lim_{n\to\infty}\|u_{n}-u\|=\lim_{n\to\infty}\left\|\frac{m(v_{n})}{m(v)}u-u \right\|=\lim_{n\to\infty}\frac{|m(v_{n})-m(v)|}{m(v)}\|u\|=0,\]
and so, \(u_{n}\to u\) in \(V\) as \(n\to\infty\). This proves (m1). Next, we show the condition (m2). Let \(\{u_{n}\}\subset V\) be such that
\[u_{n}\in K(v_{n}),\text{ and }u_{n}\to u\text{ weakly in }V\text{ as }n\to\infty\]
for some \(u\in V\). The inequalities
\[r(u)\leq\liminf_{n\to\infty}r(u_{n})\leq\liminf_{n\to\infty}m(v_{n})=m(v),\]
imply \(u\in K(v)\), where we have used the weak lower semicontinuity of \(r\), the continuity of \(m\) and the compactness of the embedding of \(V\) into \(X\). Hence (m2) follows. This means that condition \(H(K)\) holds. Additionally, \((H_{0})\) is a consequence of (34).
Therefore, all hypotheses of Theorem 9 are verified. We are now in a position to invoke this theorem to conclude that the set of solutions to Problem 8 is nonempty. This means that Problem 14 has at least one solution. Moreover, arguing as in the proof of Theorem 11, we can demonstrate that the solution set of Problem 14 is compact in \(V\).
Example 16.: _Let \(r\colon V\to\mathbb{R}\) and \(m\colon L^{2}(\Omega)\to\mathbb{R}\) be defined by_
\[r(v):=\int_{\Omega}|\nabla v(x)|\varrho_{1}(x)\,dx\ \ \text{for all }\ v\in V,\]
_and_
\[m(v):=m_{0}+\int_{\Omega}|v(x)|\varrho_{2}(x)\,dx\ \ \text{for all }\ v\in L^{2}(\Omega),\]
_where \(\varrho_{1}\), \(\varrho_{2}\in L^{2}(\Omega)\), \(\varrho_{1}\), \(\varrho_{2}\geq 0\) and \(m_{0}>0\). It is not difficult to prove that the functions \(r\) and \(m\) satisfy the condition (33)._
We complete this section by presenting another mixed boundary value problem with a unilateral condition on a part of the boundary to which our abstract results of Section 3 could be applied. This model describes a boundary and interior semipermeability problem.
Let \(\Omega\) be a bounded domain of \(\mathbb{R}^{d}\), \(d=2\), \(3\) with Lipschitz continuous boundary \(\partial\Omega=\Sigma\) which consists of disjoint measurable parts \(\Sigma_{1}\), \(\Sigma_{2}\) and \(\Sigma_{3}\) such that \(\Sigma=\overline{\Sigma}_{1}\cup\overline{\Sigma}_{2}\cup\overline{\Sigma}_{3}\) and \(m(\Sigma_{1})>0\).
Problem 17.: _Find a function \(u\colon\Omega\to\mathbb{R}\) such that \(u\in U(u)\) and_
\[-\mathrm{div}\,a(x,\nabla u(x))+\partial_{c}p(x,u(x))\ni g_{1}(x) \text{in}\ \ \ \Omega,\] \[u=0 \text{on}\ \ \ \Sigma_{1},\] \[-\frac{\partial u(x)}{\partial\nu_{a}}\in\partial h_{2}(x,u(x)) \text{on}\ \ \Sigma_{2},\] \[\left\{\begin{array}{ll}u(x)\leq k_{2}(x),\\ \frac{\partial u(x)}{\partial\nu_{a}}\leq 0,&\text{on}\ \ \Sigma_{3},\\ \frac{\partial u(x)}{\partial\nu_{a}}(u(x)-k_{2}(x))=0,\end{array}\right.\]
_where \(U\colon C\to 2^{C}\) is defined by_
\[U(u):=\{\,w\in C\mid r(w)\leq m(u)\,\},\]
_and the set \(C\subset V\) is given by_
\[C:=\{\,v\in V\mid v(x)\leq k_{2}(x)\ \ \text{on}\ \ \Sigma_{3}\,\}.\]
We impose the following assumptions on the data.
\[\left\{\begin{array}{ll}h_{2}\colon\Sigma\times\mathbb{R}\to\mathbb{R}\ \text{is such that}\\ \text{(a)}\ h_{2}(\cdot,r)\ \text{is measurable on}\ \Sigma\ \text{for all}\ r\in\mathbb{R}\ \text{and there}\\ \text{exists}\ \overline{e}\in L^{2}(\Sigma)\ \text{such that}\ h_{2}(\cdot,\overline{e}(\cdot))\in L^{1}(\Sigma).\\ \text{(b)}\ h_{2}(x,\cdot)\ \text{is locally Lipschitz on}\ \mathbb{R},\ \ \text{for a.e.}\ x\in\Sigma.\\ \text{(c)}\ \text{there exist}\ \overline{c}_{0},\,\overline{c}_{1}\geq 0\ \text{such that}\\ \qquad|\partial h_{2}(x,r)|\leq\overline{c}_{0}+\overline{c}_{1}|r|\ \text{for all}\ r\in\mathbb{R},\ \text{a.e.}\ x\in\Sigma.\end{array}\right. \tag{42}\]
For Problem 18 we need the following notation. Let the space \(V\) be defined by (29), \(X=L^{2}(\Sigma)\), \(M=\gamma\colon V\to X\) be the trace operator, \(A\colon V\to V^{*}\) be an operator given by (35), \(f=g_{1}\), and \(K(\cdot)=U(\cdot)\). Let the functions \(\varphi\colon V\to\mathbb{R}\) and \(j\colon X\to\mathbb{R}\) be defined by
\[\varphi(v)=\int_{\Omega}p(x,v(x))\,dx\ \ \text{for}\ \ v\in V,\]
\[j(w)=\int_{\Sigma_{2}}h_{2}(x,w(x))\,d\Gamma\ \ \text{for}\ \ w\in X,\]
respectively. Under the above notation Problem 18 enters the abstract framework of Section 3. Arguing as in the proof of Theorem 15, we obtain the following result.
Theorem 19.: _Assume that conditions (30), (33), and (42)-(44) are satisfied and the following smallness condition holds_
\[\overline{c}_{1}\sqrt{2}\,\|\gamma\|^{2}<\alpha_{a}.\]
_Then, Problem 18 has a nonempty and compact solution set in \(V\)._
## Acknowledgements
Project is supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No. 823731 CONMECH, National Science Center of Poland under Preludium Project No. 2017/25/N/ST1/00611, NNSF of China Grant Nos. 12001478 and 12026256, and the Startup Project of Doctor Scientific Research of Yulin Normal University No. G2020ZK07. It is also supported by Natural Science Foundation of Guangxi Grant Nos. 2021GXNSFFA196004, 2020GXNSFBA297137 and 2018GXNSFAA281353, and by the Beibu Gulf University under project No. 2018KYQD06. The first author is also supported by the projects financed by the Ministry of Science and Higher Education of Republic of Poland under Grants Nos. 4004/GGPJII/H2020/2018/0 and 440328/PnH2/2019, and by the National Science Centre of Poland under Project No. 2021/41/B/ST1/01636.
|
2309.17113 | Meta-Path Learning for Multi-relational Graph Neural Networks | Existing multi-relational graph neural networks use one of two strategies for
identifying informative relations: either they reduce this problem to low-level
weight learning, or they rely on handcrafted chains of relational dependencies,
called meta-paths. However, the former approach faces challenges in the
presence of many relations (e.g., knowledge graphs), while the latter requires
substantial domain expertise to identify relevant meta-paths. In this work we
propose a novel approach to learn meta-paths and meta-path GNNs that are highly
accurate based on a small number of informative meta-paths. Key element of our
approach is a scoring function for measuring the potential informativeness of a
relation in the incremental construction of the meta-path. Our experimental
evaluation shows that the approach manages to correctly identify relevant
meta-paths even with a large number of relations, and substantially outperforms
existing multi-relational GNNs on synthetic and real-world experiments. | Francesco Ferrini, Antonio Longa, Andrea Passerini, Manfred Jaeger | 2023-09-29T10:12:30Z | http://arxiv.org/abs/2309.17113v2 | # Meta-Path Learning for Multi-relational Graph Neural Networks
###### Abstract
Existing multi-relational graph neural networks use one of two strategies for identifying informative relations: either they reduce this problem to low-level weight learning, or they rely on handcrafted chains of relational dependencies, called meta-paths. However, the former approach faces challenges in the presence of many relations (e.g., knowledge graphs), while the latter requires substantial domain expertise to identify relevant meta-paths. In this work we propose a novel approach to learn meta-paths and meta-path GNNs that are highly accurate based on a small number of informative meta-paths. Key element of our approach is a scoring function for measuring the potential informativeness of a relation in the incremental construction of the meta-path. Our experimental evaluation shows that the approach manages to correctly identify relevant meta-paths even with a large number of relations, and substantially outperforms existing multi-relational GNNs on synthetic and real-world experiments.
## 1 Introduction
Graph neural networks (GNNs) have emerged as a powerful framework for analyzing networked data [6; 8; 18; 24], enabling effective learning and representation of complex relationships in several real-world applications [2; 23; 31; 39]. Standard GNN approaches have mostly focused on homogeneous graphs [7; 30; 34], where all nodes and edges belong to a single type. However, many real-world graph datasets exhibit heterogeneity, with multiple types of nodes and relations [4; 22; 28].
Treating heterogeneous graphs as homogeneous and aggregating information uniformly across all relations is a suboptimal approach, as different relations can convey largely different semantic information about the nodes they connect. A simple and effective strategy to retain the rich semantic information present in heterogeneous graphs is relying on meta-paths, which are chains of relational dependencies (e.g., "actor -> acted in -> movie -> has genre"). The challenge lies in determining the relevant meta-paths in a given graph. Existing methods either rely on predefined meta-paths defined by domain experts [3; 5; 32], which are extremely expensive to collect, or learn "soft" meta-paths by learning to assign weights to relations [25; 37; 38], an approach that only works with few relations and fails to scale to knowledge graphs. Solutions conceived for mining meta-paths from knowledge graphs typically consider relations only, ignoring node features altogether [16; 33].
To overcome these limitations, we propose a novel approach to learn meta-paths and meta-path GNNs that are highly accurate based on a small number of informative meta-paths. Key to our approach is the formalization of a scoring function, inspired by the relational information gain principle [14], that evaluates the potential informativeness of a relation in the incremental construction of the meta-path. This allows to learn a Meta-Path Graph Neural Network (MP-GNN) in which different layers convey information from different relations while retaining node-specific features in the aggregation process.
The main contributions of this work can be summarized as follows:
* We propose a scoring function evaluating the potential informativeness of a relation in the meta-path construction.
* We introduce MP-GNN, a simple variant of the RGCN architecture, which effectively combines learned meta-paths and node features into a multi-relational graph processing architecture.
* We provide an extensive experimental evaluation on synthetic and real-world datasets showing how our approach substantially outperforms existing multi-relational GNNs when dealing with graphs with a large number of relations.
## 2 Related work
In recent research, meta-path mining has emerged as an effective approach for analyzing heterogeneous graphs, relying on frequency cutoffs and sequential pattern mining strategies to identify promising meta-paths [17, 26, 36]. In the field of neuro-symbolic reasoning for knowledge graph completion (KGC), various approaches use reinforcement learning-based algorithms to explore relation-paths and derive logical formulas [16, 33]. Other approaches [11, 12, 20], search for the most relevant meta-path using variants of random walk search. A major limitation of all these approaches is that they are incapable of accounting for node features in determining the relevance of a candidate meta-path, making them unusable in knowledge graph embedding scenarios.
In the field of heterogeneous graph embedding, several methods have been proposed to enhance node and graph embedding by incorporating meta-paths. These methods can be broadly categorized into two groups: those using predefined meta-paths and those learning meta-paths by weighting the contribution of different relations.
In the first group, Meta-path Aggregated Graph Neural Network [5] focuses on aggregating node features along predefined meta-paths using an attention mechanism, capturing diverse structural patterns. Heterogeneous Attention Network [32] introduces a hierarchical attention mechanism to handle heterogeneous graphs, enhancing performance and interpretability. GraphMSE [13] tackles the problem of information-rich meta-path selection by aggregating information using different meta-paths and adopting BFS (Breadth First Search) as meta-path expansion strategy. Meta-path extracted graph neural network [3] incorporates message passing and emphasizes interpretability and semantic relationships. However, these approaches require that meta-paths are provided beforehand, something which severely limits their adaptability.
In the second group, Relational Graph Convolutional Networks (RGCN) [25] capture relation-specific patterns with distinct trainable parameters for each edge type. R-HGNN [35] uses a dedicated graph convolution component for unique node representations from relation-specific graphs. RSHN [40] integrates Coarsened Line Graph Neural Network (CL-GNN) for enhanced embedding in large-scale heterogeneous graphs. Graph Transformer Networks (GTN) [37] learn task-specific multi-hop connections (i.e., meta-paths) for useful node representations. FastGTN [38] addresses GTN's high complexity by implicitly transforming graphs. HGN [15] employs GAT as a backbone for a simple yet effective model. HGT [9] uses node and edge-type dependent parameters for heterogeneous attention. MVHRE [19] enriches relational representations for link prediction using a multi-view network representation learning framework. While effective with a small number of candidate relations, these approaches' performance degrades as the number increases, as shown in our experimental evaluation.
## 3 Preliminary
In this section, we provide an overview of fundamental concepts of our approach.
**Heterogeneous graph.** A heterogeneous graph is a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T}_{v},\mathcal{T}_{e})\) where \(\mathcal{V}\) is the set of nodes or entities and \(\mathcal{E}\) is the set of edges. Each node \(v\) and edge \(e\) has a type, specified by the mapping functions \(\tau_{v}(v):\mathcal{V}\rightarrow\mathcal{T}_{v}\) and \(\tau_{e}(e):\mathcal{E}\rightarrow\mathcal{T}_{e}\). Moreover, each node \(v\) has a feature vector \(x_{v}\in\mathrm{I\!R}^{d}\).
**Meta-path.** A meta-path \(mp\) is a relation sequence defined on a heterogeneous graph \(\mathcal{G}\), denoted in the form \(\xrightarrow{r_{1}}\xrightarrow{r_{2}}...\xrightarrow{r_{L}}\), where \(r_{1},...,r_{L}\) are relation types and for each consecutive pair of relations \(\xrightarrow{r_{i}}\xrightarrow{r_{i+1}}\) the intersection between the valid node types that are the destination of \(\xrightarrow{r_{i}}\) and the valid node types that are the source of \(\xrightarrow{r_{i+1}}\) is non-empty. Note that this is a more general definition than
the one in [27], in that it allows multiple node types as sources and destinations of a given relation, consistently with what can be found in large general-purpose knowledge graphs.
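As a small illustration of this compatibility condition, the following sketch checks a candidate relation sequence against schema tables listing the admissible source and destination node types of each relation; the dictionaries and relation names are invented for the example and are not part of any library.

```
def is_valid_meta_path(relations, sources, destinations):
    """Check that consecutive relations share at least one compatible node type.

    sources[r] / destinations[r]: sets of node types allowed as source /
    destination of relation r (hypothetical schema tables, assumed given).
    """
    for r_cur, r_next in zip(relations, relations[1:]):
        # Destination types of r_i must intersect the source types of r_{i+1}.
        if not (destinations[r_cur] & sources[r_next]):
            return False
    return True

# Toy schema: "main_actor_in" goes Actor -> Movie, "directed_by" goes Movie -> Director.
sources = {"main_actor_in": {"Actor"}, "directed_by": {"Movie"}}
destinations = {"main_actor_in": {"Movie"}, "directed_by": {"Director"}}
print(is_valid_meta_path(["main_actor_in", "directed_by"], sources, destinations))  # True
```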
**RGCN layer.** The relational graph convolutional layer from [25] extends the standard convolution operation on graphs [10] to the multi-relational setting by assigning specific parameters for each relation type. The message passing update for node \(i\) at layer \(l\) is given by:
\[h_{i}^{(l+1)}=\sigma\left(W_{0}^{(l)}h_{i}^{(l)}+\sum_{r\in\mathcal{R}}\sum_{j \in\mathcal{N}_{i}^{r}}\frac{1}{c_{i,r}}W_{r}^{(l)}h_{j}^{(l)}\right) \tag{1}\]
where \(\mathcal{R}\) is the set of relations in the graph, \(\mathcal{N}_{i}^{r}\) is the set of \(r\)-neighbours of node \(i\) and \(c_{i,r}\) is a fixed or learnable normalizing parameter.
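This update is available off-the-shelf in PyTorch Geometric, the library used for the implementation in Section 5. The following minimal sketch, with toy tensor sizes chosen only for illustration, shows how a single such layer can be applied:

```
import torch
from torch_geometric.nn import RGCNConv

num_nodes, in_dim, out_dim, num_relations = 6, 16, 32, 3
x = torch.randn(num_nodes, in_dim)                       # node feature matrix
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])  # source/target node indices
edge_type = torch.tensor([0, 1, 2, 1])                   # relation type of each edge

# One relational convolution: a separate weight matrix per relation type,
# plus a self-loop weight, matching Eq. (1).
conv = RGCNConv(in_dim, out_dim, num_relations=num_relations)
h = torch.relu(conv(x, edge_index, edge_type))
print(h.shape)  # torch.Size([6, 32])
```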
## 4 MP-GNN learning
The goal of our approach is to learn relevant meta-paths that can serve as predictive features for the node classification task. 1 Differently from the approaches that use all relations at the same time by weighting each edge type contribution, we focus on finding the relevant chains of relations (i.e., meta-paths) beneficial for making accurate predictions. Note that, differently from what happens in purely relational settings [11, 12, 17, 20, 26, 36], we assume here that the informativeness of a meta-path also depends on the features of the nodes that are being traversed (which include the node type, but also node attributes and potentially pre-computed node embeddings). Our approach accounts for this aspect in mining relevant meta-paths. Meta-paths are constructed in a greedy, incremental approach using the idea of relational information gain [14] to score candidate extensions of an already constructed meta-path prefix. Consider the toy node classification task in Figure 1. To incrementally build the correct meta-path (bottom right in the legend), one has to realize that "Main actor in" is a better candidate than "Appeared in". Intuitively, our scoring function does this by assigning weights (i.e., pseudo-labels) to nodes reached by a candidate relation in such a way that the label of the target node can be inferred by propagating the pseudo-label of the neighbour. Figure 2 shows an example of weight assignment for the "Main actor in" and "Appeared in" relations, indicating a higher score for the former. However, these pseudo-labels only hint at the potential informativeness of the relation. Indeed, being a main actor in a movie is not enough to qualify as an award winning actor, even in the toy example of Figure 1. The movie should be a drama (node feature), and be directed by an award winning director. Whether this potential informativeness actually materializes is determined in the following steps, where the pseudo-labels become new prediction targets for the next extension of the meta-path under construction. Details of this method are described in Section 4.1.
Figure 1: A toy node classification problem. Node shapes indicate types, while node attributes and edge types are encoded as colors. The task consists in labelling actor nodes (pentagons, which do not have attributes). An \(Actor\) is labelled as positive if involved as main actor in a drama directed by an award-winning director.
Once a candidate meta-path has been extracted, it is used to build an MP-GNN in which each layer corresponds to a relation on the meta-path. Section 4.2 presents a formalization of this architecture, and shows how to extend it to account for multiple meta-paths. Finally, these ingredients are combined into an overall algorithm to jointly learn a meta-path and a corresponding MP-GNN. For the sake of readability, the algorithm is presented for the single meta-path case, but its extension to multiple meta-paths using a beam search is straightforward (we employed a beam size equal to three in the experiments). Note that this algorithm is designed to identify existential meta-path features, i.e., cases where the existence of an instance of a meta-path is informative for the class label. Adaptations and extensions where counts or proportions of meta-path realizations are the relevant feature are left for future work.
### Scoring function
The goal of the scoring function is that of providing a measure of informativeness of a relation towards predicting node labels. We start discussing the first iteration, i.e., identifying the first relation in the meta-path, and then show how the function can be adapted to deal with meta-path extension.
In the first iteration, the scoring function takes as input a list of nodes together with their target labels. Under the previously introduced existential quantification assumption, a candidate relation \(r\) is informative for the label of a node \(i\) if at least one of the neighbors \(\mathcal{N}_{i}^{\tau}\) of \(i\) according to \(r\) belongs to the ground-truth meta-path, and \(i\) has the right features (remember that the label is assumed to depend on the combination of the meta-path and the features of the nodes being traversed). This can be formalized as follows:
\[\tilde{y}_{i}^{r}=\Theta^{T}h_{i}^{(0)}\cdot\max_{j\in\mathcal{N}_{i}^{r}}w_{j} \tag{2}\]
Here \(\Theta\) is a learnable weight vector accounting for the contribution of the node features, while \(w_{j}\) is a learnable node weight that is set (close) to 1 if node \(j\) is predicted as belonging to the ground-truth meta-path, and (close to) zero otherwise. The score of \(r\) is computed by minimizing the MSE between the predicted and ground truth node labels over \(\Theta\) and \(\mathbf{w}\):
\[s_{r}=\min_{\Theta,\mathbf{w}}\frac{1}{N}\sum_{i=1}^{N}(\tilde{y}_{i}^{r}-y_{ i})^{2} \tag{3}\]
The relation with the minimum score is selected as the first relation of the meta-path.
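The text does not prescribe how the minimization in Eq. (3) is carried out; the sketch below makes one possible choice explicit. The sigmoid reparameterization of the node weights, the Adam optimizer, the 0/1 label encoding and the handling of nodes without \(r\)-neighbors are all assumptions of this illustration rather than details taken from the paper.

```
import torch

def score_relation(h0, labels, neighbors, steps=200, lr=0.1):
    """Approximate the score s_r of Eq. (3) for one candidate relation r.

    h0:        (N, d) tensor of initial node features.
    labels:    (N,) tensor with entries in {0, 1}.
    neighbors: list of LongTensors, neighbors[i] = r-neighbors of node i.
    """
    n, d = h0.shape
    theta = torch.zeros(d, requires_grad=True)       # feature weights (Eq. 2)
    w_logit = torch.zeros(n, requires_grad=True)     # node weights, squashed to (0, 1)
    opt = torch.optim.Adam([theta, w_logit], lr=lr)

    for _ in range(steps):
        w = torch.sigmoid(w_logit)
        preds = []
        for i in range(n):
            if len(neighbors[i]) == 0:
                preds.append(torch.zeros(()))        # no r-neighbor: nothing to propagate
            else:
                preds.append((h0[i] @ theta) * w[neighbors[i]].max())
        loss = torch.mean((torch.stack(preds) - labels.float()) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()
```

The candidate relation with the smallest returned value would then be kept, mirroring lines 4-7 of Algorithm 1.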
To explain how the scoring of the following relations in the meta-path works, it is important to remember that the weights \(\mathbf{w}\) represent a tentative assignment to neighbours as belonging or not-belonging to the ground-truth meta-path (i.e., their _potential informativeness_). Multiple potential assignments can be minimizers of Eq. 3. In the left panel of Figure 2, where relation \(r_{1}\) (green) is being scored, any minimizers of Eq. 3 requires \(w_{E}=1\) (to account for the positive label of node \(A\)) and \(W_{F}=0\) (to account for the negative label of node \(B\)). On the other hand, (0,1), (1,0) and (1,1) are all valid assignments to the \((W_{G},W_{A})\) pair. Indeed, the only constraint that the positive label of \(C\) enforces is that the bag \((W_{G},W_{A})\) contains at least one node with value 1, as happens in multi-instance classification settings [1]. We thus generate labelled bags of nodes for the following iteration(s) of meta-path construction, that will play the role of the node labels \(y\) in the initial iteration.
Figure 2: **First iteration:** the scoring function assigns a high score to the red (“Main actor in”) relation (left panel) by giving large weights to movies D and E, that are only connected to a positive node, and small weights to the other movie nodes. On the other hand, the green (“Appeared in”) relation (right panel) has low score, as no weight assignment can jointly explain the positive label of the A node and the negative label of the B node.
Positive bags are computed as follows:
\[B^{+}(i)=\{j\in\mathcal{N}_{i}^{r}\mid\nexists k:j\in\mathcal{N}_{k}^{r}\wedge y_{ k}=-1\} \tag{4}\]
where \(i\) is a positive-labelled node (\(y_{i}=+1\)). Negative bags, on the other hand, are singletons, i.e., given a negatively-labelled node \(j\), we create a negative bag \(B^{-}(k)=\{k\}\) for each of its neighbors \(k\in\mathcal{N}_{j}^{r}\). The informativeness of the new relation \(s\) (as extension of relation \(r\)) can now be computed in terms of its potential in predicting bag labels:
\[\tilde{y}_{B(i)}^{s}=\max_{j\in B(i)}\Bigl{(}\Theta^{T}h_{j}^{(0)}\cdot\max_{k \in\mathcal{N}_{j}^{r}}w_{k}\Bigr{)} \tag{5}\]
and obtained minimizing MSE at the bag-label level. See Figure 3 for a graphical representation of the components involved.
Once the next relation is selected, the procedure could in principle continue, by further expanding positive bags with a procedure analogous to Eq. 4, where \(i\) is itself replaced with a bag of nodes. However, this procedure ends up diluting information too much, so that the informativeness of relations becomes difficult to predict. Instead, we assign a positive label to a node within a bag if it is used to predict the positive label of the bag (Eq. 2) at least once out of \(M\) restarts with randomly initialized weights. See the Appendix for the details.
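For concreteness, a small sketch of this bag construction (Eq. 4, plus the singleton negative bags) could look as follows; the plain-list representation of neighborhoods is an assumption of the example, and the conversion of bags back into node pseudo-labels via the \(M\) restarts described above is not shown.

```
def build_bags(neighbors, labels):
    """Build labelled bags for the next scoring round.

    neighbors: list of lists, neighbors[i] = r-neighbors of node i (r = selected relation).
    labels:    list with entries +1 / -1 for the current targets.
    Positive bags follow Eq. (4); negative bags are singletons,
    one per neighbor of a negatively labelled node.
    """
    # Nodes that are r-neighbors of at least one negatively labelled node.
    blocked = {j for k, y in enumerate(labels) if y == -1 for j in neighbors[k]}

    positive_bags = [
        [j for j in neighbors[i] if j not in blocked]
        for i, y in enumerate(labels) if y == +1
    ]
    negative_bags = [
        [k] for j, y in enumerate(labels) if y == -1 for k in neighbors[j]
    ]
    return positive_bags, negative_bags
```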
### Mp-Gnn
In the single meta-path MP-GNN, a meta-path \(mp=r_{1},...,r_{L}\) induces a multi-relational GNN with \(L\) layers, that we denote by MP-GNN(\(mp\)). The first layer is associated to the last relation of the meta-path \(r_{L}\), and so on until the final layer which is associated with \(r_{1}\). The message passing update is formalized as follows:
\[h_{i}^{(l+1)}=\sigma\left(W_{0}^{(l)}h_{i}^{(l)}+\sum_{j\in\mathcal{N}_{i}^{r_{L-l+1}}}\frac{1}{|\mathcal{N}_{i}^{r_{L-l+1}}|}W^{(l)}h_{j}^{(l)}\right) \tag{6}\]
where \(l\) ranges from \(1\) to \(L\).
The definition can be generalized to deal with multiple meta-paths by concatenating embeddings coming from the different meta-paths:
\[h_{i}^{(l)}=\big{\|}_{k=1}^{K}h_{(i,k)}^{(l)} \tag{7}\]
where \(K\) is the number of meta-paths, \(h_{(i,k)}^{(l)}\) is the embedding of node \(i\) according to meta-path \(k\) computed using Eq. 6 and \(\|\) is the concatenation operator.
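A minimal sketch of this architecture for a single learned meta-path, written with PyTorch Geometric, is given below; using `GraphConv` with mean aggregation to realize the self-term plus averaged-neighbor update of Eq. 6 is a choice of this illustration, not necessarily the authors' implementation.

```
import torch
from torch_geometric.nn import GraphConv

class MPGNN(torch.nn.Module):
    """Sketch of MP-GNN for one meta-path (Eq. 6): layer l only aggregates over
    edges of relation r_{L-l+1}; GraphConv(aggr='mean') gives W_0 h_i + mean_j W h_j."""

    def __init__(self, meta_path, in_dim, hid_dim):
        super().__init__()
        self.meta_path = meta_path  # list of relation ids [r_1, ..., r_L]
        dims = [in_dim] + [hid_dim] * len(meta_path)
        self.convs = torch.nn.ModuleList(
            [GraphConv(dims[l], dims[l + 1], aggr='mean') for l in range(len(meta_path))])

    def forward(self, x, edge_index, edge_type):
        h = x
        for l, conv in enumerate(self.convs):
            rel = self.meta_path[len(self.meta_path) - 1 - l]  # r_{L-l+1}, with 0-based l
            mask = edge_type == rel
            h = torch.relu(conv(h, edge_index[:, mask]))
        return h
```

For \(K\) meta-paths, one would instantiate \(K\) such stacks and concatenate their node embeddings, as in Eq. 7.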
It is worth mentioning here that while this definition of MP-GNN is a straightforward adaptation of the RGCN architecture to deal with learned meta-paths, more complex architectures involving
Figure 3: **Second iteration**: The scoring function assigns a high score to the purple (“Directed by”) relation (left panel) by assigning a large weight to the M director, which is the only one connected to a positive bag, and small weights to the other directors. On the other hand, the pink (“Inspired”) relation (right panel) gets a low score as no weight assignment is compatible with the positive bag.
pre-defined meta-paths could in principle be employed [3; 5; 13; 32]. We opted for this simple choice in the paper so as to best single out the contribution of the scoring function in determining the performance of the resulting architecture.
### Overall algorithm
The overall algorithm for learning MP-GNN is outlined in Algorithm 1. The algorithm takes as inputs a heterogeneous graph \(\mathcal{G}\), a set of candidate relations \(\mathcal{R}\), a set of node-label pairs \(labels\) and a hyper-parameter \(L_{MAX}\) indicating the maximal length of candidate meta-paths. The algorithm repeatedly call the scoring function (Eq. 3) to score candidate relations and keeps track of the best scoring one. It then builds an MP-GNN with the current (partial) meta-path and trains it to predict node labels, using \(F_{1}\) score (computed on a validation set, omitted in the algorithm for the sake of compactness) as final meta-path evaluation metric. Note that this is the only "real" measure of meta-path quality, as the one computed by the scoring function is still a "potential" informativeness, that only fully materializes when the meta-path is embedded into an MP-GNN. The algorithm keeps track of the highest \(F_{1}\) meta-path prefix so far, and proceeds by generating labelled bags as described in Section 4.1 for the next round of relation scoring.
As previously mentioned, the algorithm is presented for the sake of simplicity in the single meta-path case. However, the actual implementation performs beam search on the space of meta-paths, retaining the \(K\) top-scoring ones according to Eq. 3 and concatenating their embeddings into the MP-GNN as per Eq. 7. Notice that in evaluating the resulting MP-GNN, meta-paths not contributing to increasing \(F_{1}\) are discarded, so as to retain only the informative meta-paths in the final architecture.
```
1:procedureLearnMP-GNN(\(\mathcal{G}\), \(\mathcal{R}\), \(labels\), \(L_{MAX}\))
2: Initialize \(mp^{*}\leftarrow[\ ]\), \(mp\leftarrow[\ ]\), \(F_{1}^{*}\gets 0\), \(target\gets labels\)
3:while\(|mp|<L_{MAX}\)do
4:for\(r\in\mathcal{R}\)do
5:\(s_{r}\leftarrow\textsc{score-relation}(\mathcal{G},target,r)\)\(\triangleright\) Equation 3
6:endfor
7:\(r^{*}\leftarrow\) best scoring relation
8:\(mp\gets mp,r^{*}\)
9:\(gnn\leftarrow\textsc{train}(\textsc{MP-GNN}(mp),\mathcal{G},labels)\)
10:\(F_{1}\leftarrow\textsc{test}(gnn)\)
11:if\(F_{1}>F_{1}^{*}\)then
12:\(mp^{*}\gets mp,\ F_{1}^{*}\gets F_{1}\)
13:endif
14:\(target\leftarrow\textsc{generate-bags}(target,r^{*})\)\(\triangleright\) Section 4.1
15:endwhile
16:return\(mp^{*}\)
17:endprocedure
```
**Algorithm 1:** LearnMP-GNN algorithm. \(\mathcal{G}\) is a heterogeneous graph, \(\mathcal{R}\) is the set of possible relations, \(labels\) is the initial set of node-label pairs and \(L_{MAX}\) is the maximal meta-path length.
## 5 Experimental results
Our experimental evaluation aims to answer the following research questions:
1. Can MP-GNN recover the correct meta-path for an increasing number of candidate relations?
2. Is MP-GNN competitive with existing approaches in real-world datasets with few relations?
3. Does MP-GNN outperform existing approaches in real-world datasets with many relations?
We compared MP-GNN with existing solutions that: 1) do not require to pre-specify relevant meta-paths, 2) can handle (possibly high-dimensional) node features. Given these requirements, we identified the following competitors:
* **RGCN**[25], a generalization of the GCN architecture to the multi-relational case, that employs a different matrix of parameters for each edge type.
* **GTN**[37] can convert an input graph into different meta-path graphs for specific tasks and learn node representations within these graphs.
* **FastGTN**[38], an efficient variant of GTN that avoids adjacency matrices multiplication for graph transformation.
* **R-HGNN**[35], employs a different convolution for each edge type. Finally combines different embeddings with a cross-relation message passing.
* **HGN**[15], utilizes GAT as backbone to design an extremely simple HGNN model.
We implemented MP-GNN using Pytorch Geometric [21], while the code of the competitors was taken from their respective papers. For MP-GNN we used Adam optimizer with a learning rate of 0.01. We set the maximum meta-path length \(L_{MAX}=4\) and the beam size \(K=3\). We used an 80/20/10 split between train, validation and test in all cases, with model selection performed on the validation set for all methods. We employed F1-macro score on the test set as evaluation metric to account for the unbalancing present in many of the datasets. The code is available at LINK.
In the following we report the experimental setting and the results we obtained in addressing each of the research questions under investigation. The statistics of the datasets used in the experiments are reported in the Appendix.
### Q1: MP-GNN consistently identifies the correct meta-path
In order to answer the first research question, we designed a controlled setting where the correct meta-path is known, and experiments can be run for an increasing number of candidate relations. We generated synthetic datasets where nodes are typed A or B, the number of relations \(|\mathcal{R}|\) varies in \(\{4,8,10,14\}\), and we also vary the number of shared relations, i.e., relations that can connect more than one pair of node types (e.g., \(A\xrightarrow{r}B\) and \(A\xrightarrow{r}A\)). The ground truth meta-path consists of a (valid) sequence of relations and nodes of a given type (e.g., \(x\xrightarrow{r_{1}}A\xrightarrow{r_{2}}B\), with \(x\) being a node of arbitrary type). Nodes are labelled as positive if found to be starting points of a ground-truth meta-path, and negative otherwise. We generated labelled datasets using ground-truth meta-paths of different lengths \(L\in\{2,3,4\}\). Details of the different settings can be found in the Appendix (Figure 7).
Figure 4 shows the \(F_{1}\) score for each model when varying the overall number of relations and the number of shared relations, for a ground-truth meta-path of length three. Darker cells correspond to higher \(F_{1}\) values. Results show that the performance of existing multi-relational GNN approaches is severely affected by the relational complexity of the graph, with RGCN and R-HGNN being more sensitive to the overall number of candidate relations and GTN and FastGTN struggling more with the number of shared relations, while HGN has poor performance in all settings, likely due to its lack of an explicit modelling of relation types. Conversely, MP-GNN consistently achieves optimal or quasi-optimal performance in all settings. Whenever \(F_{1}=1\), MP-GNN manages to perfectly recover the ground-truth meta-path, while values smaller than one are due to spurious relations being added at the end of the meta-path (which however have a limited impact on predictive performance).
Figure 5 shows results when increasing the relational complexity of the network _and_ the length of the meta-path characterizing the positive class. Each setting corresponds to an entry in the main diagonal of Figure 4, where we additionally varied the length of the meta-path from 2 to 4. Results show that GTN, FastGTN and HGN struggle in most settings, while RGCN and R-HGNN are competitive in the simplest settings (few relations and/or short meta-paths) but their performance quickly degrades when the size of the search space increases. Again, MP-GNN consistently achieves excellent performance in all settings, almost always perfectly recovering the ground-truth meta-path.
Figure 4: Synthetic setting: F1-score (the darker the better) as a function of the overall number of relations (rows) and the number of shared relations (columns).
### Q2: MP-GNN achieves state-of-the-art results on real-world datasets with few relations
The second set of experiments focuses on popular real-world benchmarks for multi-relational GNNs. In all cases the task is multi-class classification at the node level. We quickly summarize the characteristics of the benchmarks in the following: **IMDB**: a dataset extracted from the popular Internet Movie Database. It contains 3 types of nodes (movies (M), directors (D) and actors (A)) and uses the genres of movies as labels. **DBLP**: citation network where nodes are of paper (P), author (A) or conference (C) type, connected by edge types PA, AP, PC, CP, and the task is predicting the research area of authors. **ACM**: again a citation network, similar to the one of DBLP with conference nodes replaced by subject (S) nodes (and edge types replaced accordingly).
Table 1 (top) shows the \(F_{1}\) scores achieved by the different methods. As expected, all approaches achieve similar results, which are consistent with the ones observed in previous work [38]. Indeed, the number of relations is very limited (three for IMDB, four for DBLP and ACM) and, most importantly, no relations are shared among different node pair types, substantially restricting the number of candidate meta-paths. Still, MP-GNN achieves slightly better results, most likely thanks to its ability to select a minimal set of meta-paths, as shown in Table 1 (bottom).
### Q3: MP-GNN substantially outperforms competitors on real-world datasets with many relations
The last set of experiments aims to evaluate MP-GNN in a complex real-world setting characterized by a large set of relations, as typical of general-purpose knowledge graphs. We thus designed a set of node-classification tasks over **FB15K-237**[29], which is a large knowledge graph derived from Freebase. Each entity in the graph is associated with a text description, that we transformed into a bag-of-words representation of length 100 (retaining the most frequent words in the dataset). We identified as target labels all many-to-one relations that have from 2 to 20 possible destination types (to avoid having classes with too few examples). Examples include gender, event type and a number of currency-related relations. See the Appendix for the statistics of the datasets.
Table 2 reports \(F_{1}\) scores for the different methods. GTN and FastGTN have serious difficulties in learning reasonable models in all cases. Indeed, the imbalance in the class distribution, combined with the large pool of candidate relations to learn from, drives them to boil down to majority class
\begin{table}
\begin{tabular}{l l l l} \hline Model & DBLP & IMDB & ACM \\ \hline R-HGNN & 0.86 (\(\pm\)0.04) & **0.64** (\(\pm\)0.01) & 0.9 (\(\pm\)0.01) \\ HGN & **0.94** (\(\pm\)0.01) & 0.63 (\(\pm\)0.02) & 0.92 (\(\pm\)0.02) \\ RGCN & 0.91 (\(\pm\)0.01) & 0.6 (\(\pm\)0.01) & 0.9 (\(\pm\)0.02) \\ GTN & 0.9 (\(\pm\)0.01) & 0.62 (\(\pm\)0.01) & 0.91 (\(\pm\)0.01) \\ FastGTN & 0.92 (\(\pm\)0.00) & 0.63 (\(\pm\)0.01) & **0.93** (\(\pm\)0.00) \\ MP-GNN & **0.94** (\(\pm\)0.01) & **0.64** (\(\pm\)0.01) & **0.93** (\(\pm\)0.00) \\ \hline GTN/ & APCPA, & MAM, & PAP, \\ FastGTN & APAPA, & MDM, & PSP, \\ & APA & MDMM & \\ MP-GNN & APCPA, & MAM, & PAP, \\ & APAPA & MDM & PSP \\ \hline \hline \end{tabular}
\end{table}
Table 1: Few-relations datasets. **(Top)**: \(F_{1}\) scores, mean and std computed over five runs. Best results highlighted in bold. **(Bottom)**: learnt meta-paths for MP-GNN and GTN/FastGTN (which learn the very same meta-paths). Other baselines not reported as they do not explicitly extract meta-paths.
Figure 5: Synthetic setting: F1-score as a function of the ground-truth meta-path length, for an increasing complexity of the search space.
prediction in most cases. Despite the better performance of RGCN, HGN, and R-HGNN, they still exhibit substantially lower F1-scores compared to MP-GNN. Notably, MP-GNN is surpassed only by RGCN and R-HGNN in the “event” and “team sport” classification tasks, respectively. Figure 6 shows some examples of extracted meta-paths for two different classification tasks, namely predicting the currency of domestic tuition fees in educational institutions and predicting the sport a team is playing. In the former case, extracted meta-paths lead to the headquarters of the organization delivering the educational program, which clearly correlates with the currency being used. In the latter case, meta-paths include the league where the team is playing, which again carries information about the sport being played. Note that in both cases, node features are crucial in leveraging meta-path information, as there are not enough examples to generalize via, e.g., specific headquarter or sport league name. Indeed, an ablation experiment excluding node feature information (the typical setting of meta-path mining approaches [11, 17, 26]) shows that none of the methods manages to learn any sensible meta-path, always boiling down to learning majority class prediction rules (see Appendix 8). For the same reasons, plain meta-path mining fails to extract sensible meta-paths, resulting in poor performance (see Appendix 9 for the results using the popular PRA meta-path miner [11]).
Finally, to assess the computational efficiency of MP-GNN, we conducted a running time comparison, detailed in Appendix 10. Results show that our approach is comparable with that of the competitors on the synthetic and few relation (IMDB, DBLP, ACM) datasets. On the freebase tasks, which have a larger set of candidate relations, our approach is more expensive than (most) competitors, but these have substantially lower performance in terms of F1, with the fastest approaches (GTN and FastGTN) completely failing to learn anything sensible.
## 6 Conclusion
In this work we introduced a novel approach inspired by information theoretic principles to effectively learn meta-paths and meta-path based multi-relational GNNs in settings characterized by a large number of candidate relations combined with arbitrarily rich node features. Our experimental evaluation confirms the potential of the approach in recovering correct (in synthetic tasks) and informative (in real-world tasks) meta-paths despite the large number of candidate relations, a setting where existing multi-relational GNNs struggle to learn meaningful models.
Future work includes generalizing the approach to account for counts or proportions of meta-path realizations as relevant features, as well as more complex relational structures like meta-trees.
\begin{table}
\begin{tabular}{l l l l l l l} \hline Label & R-HGNN & HGN & RGCN & GTN & FastGTN & MP-GNN \\ \hline PNC & 0.72 & 0.68 & 0.74 & 0.33 & 0.33 & **0.83** \\ EDC & 0.6 & 0.75 & 0.71 & 0.12 & 0.12 & **0.96** \\ EIC & 0.63 & 0.65 & 0.73 & 0.12 & 0.12 & **0.8** \\ ELC & 0.47 & 0.74 & 0.72 & 0.12 & 0.15 & **0.78** \\ FBC & 0.45 & 0.48 & 0.42 & 0.14 & 0.14 & **0.61** \\ GNC & 0.8 & 0.74 & 0.82 & 0.19 & 0.19 & **0.90** \\ OC & 0.67 & 0.73 & 0.78 & 0.14 & 0.14 & **0.93** \\ G & 0.81 & 0.64 & 0.8 & 0.44 & 0.44 & **0.84** \\ TS & **0.67** & 0.53 & 0.62 & 0.09 & 0.09 & 0.63 \\ E & 0.89 & 0.8 & **0.98** & 0.07 & 0.07 & 0.96 \\ \hline \end{tabular}
\end{table}
Table 2: Many-relations dataset: F1-scores for the different node classification tasks on the FB15K-237 dataset. Results with standard deviations can be found in Table 7 in Appendix. See Table 3 in the Appendix for the meaning of the label acronyms.
Figure 6: Examples of learned meta-paths on two node classification tasks
## Acknowledgments
This research was supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation program under GA No 952215. AL acknowledges the support of the MUR PNRR project FAIR - Future AI Research (PE00000013) funded by the NextGenerationEU.
|
2301.13557 | On locating and neighbor-locating colorings of sparse graphs | A proper $k$-coloring of a graph $G$ is a \emph{neighbor-locating
$k$-coloring} if for each pair of vertices in the same color class, the two
sets of colors found in their respective neighborhoods are different. The
\textit{neighbor-locating chromatic number} $\chi_{NL}(G)$ is the minimum $k$
for which $G$ admits a neighbor-locating $k$-coloring. A proper
$k$-vertex-coloring of a graph $G$ is a \emph{locating $k$-coloring} if for
each pair of vertices $x$ and $y$ in the same color-class, there exists a color
class $S_i$ such that $d(x,S_i)\neq d(y,S_i)$. The locating chromatic number
$\chi_{L}(G)$ is the minimum $k$ for which $G$ admits a locating $k$-coloring.
Our main results concern the largest possible order of a sparse graph of given
neighbor-locating chromatic number. More precisely, we prove that if $G$ has
order $n$, neighbor-locating chromatic number $k$ and average degree at most
$2a$, where $2a\le k-1$ is a positive integer, then $n$ is upper-bounded by
$\mathcal{O}(a^2(k^{2a+1}))$. We also design a family of graphs of bounded
maximum degree whose order is close to reaching this upper bound. Our upper
bound generalizes two previous bounds from the literature, which were obtained
for graphs of bounded maximum degree and graphs of bounded cycle rank,
respectively. Also, we prove that determining whether $\chi_L(G)\le k$ and
$\chi_{NL}(G)\le k$ are NP-complete for sparse graphs: more precisely, for
graphs with average degree at most 7, maximum average degree at most 20 and
that are $4$-partite. We also study the possible relation between the ordinary
chromatic number, the locating chromatic number and the neighbor-locating
chromatic number of a graph. | Dipayan Chakraborty, Florent Foucaud, Soumen Nandi, Sagnik Sen, D K Supraja | 2023-01-31T11:11:58Z | http://arxiv.org/abs/2301.13557v3 | # New bounds and constructions for neighbor-locating colorings of graphs
###### Abstract
A proper \(k\)-coloring of a graph \(G\) is a _neighbor-locating \(k\)-coloring_ if for each pair of vertices in the same color class, the sets of colors found in their neighborhoods are different. The neighbor-locating chromatic number \(\chi_{NL}(G)\) is the minimum \(k\) for which \(G\) admits a neighbor-locating \(k\)-coloring. A proper \(k\)-coloring of a graph \(G\) is a _locating \(k\)-coloring_ if for each pair of vertices \(x\) and \(y\) in the same color-class, there exists a color class \(S_{i}\) such that \(d(x,S_{i})\neq d(y,S_{i})\). The locating chromatic number \(\chi_{L}(G)\) is the minimum \(k\) for which \(G\) admits a locating \(k\)-coloring. It follows that \(\chi(G)\leq\chi_{L}(G)\leq\chi_{NL}(G)\) for any graph \(G\), where \(\chi(G)\) is the usual chromatic number of \(G\).
We show that for any three integers \(p,q,r\) with \(2\leq p\leq q\leq r\) (except when \(2=p=q<r\)), there exists a connected graph \(G_{p,q,r}\) with \(\chi(G_{p,q,r})=p\), \(\chi_{L}(G_{p,q,r})=q\) and \(\chi_{NL}(G_{p,q,r})=r\). We also show that the locating chromatic number (resp., neighbor-locating chromatic number) of an induced subgraph of a graph \(G\) can be arbitrarily larger than that of \(G\).
Alcon _et al._ showed that the number \(n\) of vertices of \(G\) is bounded above by \(k(2^{k-1}-1)\), where \(\chi_{NL}(G)=k\) and \(G\) is connected (this bound is tight). When \(G\) has maximum degree \(\Delta\), they also showed that a smaller upper-bound on \(n\) of order \(k^{\Delta+1}\) holds. We generalize the latter by proving that if \(G\) has order \(n\) and at most \(an+b\) edges, then \(n\) is upper-bounded by a bound of the order of \(k^{2a+1}+2b\). Moreover, we describe constructions of such graphs which are close to reaching the bound.
**Keywords:** coloring, neighbor-locating coloring, neighbor-locating chromatic number.
## 1 Introduction
Taking cues from the concepts of location [15], neighbor-location [16] and locally identifying colorings [10], recently, two variants of graph coloring were introduced, namely, _locating coloring_[8] and _neighbor-locating coloring_[2, 4]. While the former concept has been well-studied since 2002 [4, 5, 6, 7, 8, 9, 13, 14, 17, 18, 19], our focus of study is the latter, which was introduced in 2014 in [4] under the name of _adjacency locating coloring_, renamed in 2020 in [2] and studied in a few papers since then [1, 3, 11, 12].
Throughout this article, we will use the standard terminologies and notations used in "Introduction to Graph Theory" by West [20].
Given a graph \(G\), a _(proper) \(k\)-coloring_ is a function \(f:V(G)\rightarrow\{1,2,\cdots,k\}\) such that \(f(u)\neq f(v)\) whenever \(u\) is adjacent to \(v\). The value \(f(v)\) is called the _color_ of \(v\). The _chromatic number_ of \(G\), denoted by \(\chi(G)\), is the minimum \(k\) for which \(G\) admits a \(k\)-coloring.
Given a (proper) \(k\)-coloring \(f\) of \(G\), its \(i^{th}\) color class is the collection \(S_{i}\) of vertices that have received the color \(i\). The distance between a vertex \(x\) and a set \(S\) of vertices is given by \(d(x,S)=\min\{d(x,y):y\in S\}\), where \(d(x,y)\) is the number of edges in a shortest path connecting \(x\) and \(y\). Two vertices \(x\) and \(y\) are _metric-distinguished_ with respect to \(f\) if either \(f(x)\neq f(y)\) or \(d(x,S_{i})\neq d(y,S_{i})\) for some color class \(S_{i}\). A (proper) \(k\)-coloring \(f\) of \(G\) is a _locating \(k\)-coloring_ if any two distinct vertices are metric-distinguished with respect to \(f\). The _locating chromatic number_ of \(G\), denoted by \(\chi_{L}(G)\), is the minimum \(k\) for which \(G\) admits a locating \(k\)-coloring.
Given a (proper) \(k\)-coloring \(f\) of \(G\), suppose that a neighbor \(y\) of a vertex \(x\) belongs to the color class \(S_{i}\). In such a scenario, we say that \(i\) is a _color-neighbor_ of \(x\) (with respect to \(f\)). The set of all color-neighbors of \(x\) is denoted by \(N_{f}(x)\). Two vertices \(x\) and \(y\) are _neighbor-distinguished_ with respect to \(f\) if either \(f(x)\neq f(y)\) or \(N_{f}(x)\neq N_{f}(y)\). A (proper) \(k\)-coloring \(f\) is a _neighbor-locating \(k\)-coloring_ if any two distinct vertices are neighbor-distinguished. The _neighbor-locating chromatic number_ of \(G\), denoted by \(\chi_{NL}(G)\), is the minimum \(k\) for which \(G\) admits a neighbor-locating \(k\)-coloring.
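To make the definition concrete, the following small sketch (using the networkx library, which is merely a convenience of this illustration) checks whether a given vertex coloring is neighbor-locating:

```
import networkx as nx

def is_neighbor_locating(G: nx.Graph, coloring: dict) -> bool:
    """Check whether `coloring` (vertex -> color) is a neighbor-locating coloring of G."""
    # Properness: adjacent vertices must receive different colors.
    if any(coloring[u] == coloring[v] for u, v in G.edges):
        return False
    # Same-colored vertices must see different sets of colors in their neighborhoods.
    seen = set()
    for v in G.nodes:
        key = (coloring[v], frozenset(coloring[u] for u in G.neighbors(v)))
        if key in seen:
            return False
        seen.add(key)
    return True

# The star K_{1,3}: with only 3 colors two leaves share both color and color-neighborhood.
star = nx.star_graph(3)  # center 0, leaves 1, 2, 3
print(is_neighbor_locating(star, {0: 1, 1: 2, 2: 3, 3: 4}))  # True
print(is_neighbor_locating(star, {0: 1, 1: 2, 2: 2, 3: 3}))  # False
```

The second call fails because two leaves of the star receive the same color and see the same set of colors in their neighborhoods, in line with the fact recalled later that \(K_{1,q-1}\) requires \(q\) colors.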
Observe that a neighbor-locating coloring is, in particular, a locating coloring as well. Therefore, we have the following obvious relation among the three parameters [2]:
\[\chi(G)\leq\chi_{L}(G)\leq\chi_{NL}(G).\]
Note that for complete graphs, all three parameters have the same value, that is, equality holds in the above relation. Nevertheless, the difference between the pairs of values of parameters \(\chi(\cdot),\chi_{NL}(\cdot)\) and \(\chi_{L}(\cdot),\chi_{NL}(\cdot)\), respectively, can be arbitrarily large. Moreover, it was proved that for any pair \(p,q\) of integers with \(3\leq p\leq q\), there exists a connected graph \(G_{1}\) with \(\chi(G_{1})=p\) and \(\chi_{NL}(G_{1})=q\)[2] and a connected graph \(G_{2}\) with \(\chi_{L}(G_{2})=p\) and \(\chi_{NL}(G_{2})=q\)[12]. The latter of the two results positively settled a conjecture posed in [2]. We strengthen these results by showing that for any three integers \(p,q,r\) with \(2\leq p\leq q\leq r\), there exists a connected graph \(G_{p,q,r}\) with \(\chi(G_{p,q,r})=p\), \(\chi_{L}(G_{p,q,r})=q\) and \(\chi_{NL}(G_{p,q,r})=r\), except when \(2=p=q<r\).
One fundamental difference between coloring and locating coloring (resp., neighbor-locating coloring) is that the restriction of a coloring of \(G\) to an (induced) subgraph \(H\) is necessarily a coloring, whereas the analogous property is not true for locating coloring
(resp., neighbor-locating coloring). Interestingly, we show that the locating chromatic number (resp., neighbor-locating chromatic number) of an induced subgraph \(H\) of \(G\) can be arbitrarily larger than that of \(G\).
Alcon _et al._[2] showed that the number \(n\) of vertices of \(G\) is bounded above by \(k(2^{k-1}-1)\), where \(\chi_{NL}(G)=k\) and \(G\) has no isolated vertices, and this bound is tight. This exponential bound is reduced to a polynomial one when \(G\) has maximum degree \(\Delta\), indeed it was further shown in [2] that the upper-bound \(n\leq k\sum_{j=1}^{\Delta}{k-1\choose j}\) holds (for graphs with no isolated vertices and when \(\Delta\leq k-1\)). It was left open whether this bound is tight. The _cycle rank_\(c\) of a graph \(G\), denoted by \(c(G)\), is defined as \(c(G)=|E(G)|-n(G)+1\). Alcon _et al._[3] gave the upper bound \(n\leq\frac{1}{2}(k^{3}+k^{2}-2k)+2(c-1)\) for graphs of order \(n\), neighbor-locating chromatic number \(k\) and cycle rank \(c\). Further, they also obtained tight upper bounds on the order of trees and unicyclic graphs in terms of the neighbor-locating chromatic number [3], where a unicyclic graph is a connected graph having exactly one cycle.
As a connected graph with cycle rank \(c\) and order \(n\) has \(n+c-1\) edges and a graph of order \(n\) and maximum degree \(\Delta\) has at most \(\frac{\Delta}{2}n\) edges, the two latter bounds can be seen as two approaches for studying the neighbor-locating coloring for sparse graphs. We generalize this approach by studying graphs with given average degree, or in other words, graphs of order \(n\) having at most \(an+b\) edges for some constants \(a,b\) (such graphs have average degree \(2a+2b/n\)). For such graphs, we prove the upper bound \(n\leq 2b+k\sum\limits_{i=1}^{2a}(2a+1-i){k-1\choose i}\). Furthermore, we show that this bound is asymptotically tight, by a construction of graphs with \(an+b\) edges (where \(2a\) is any positive integer and \(2b\) any integer) and neighbor-locating chromatic number \(\Theta(k)\), whose order is \(\Theta(k^{2a+1})\). Moreover, when \(b=0\), the graphs can be taken to have maximum degree \(2a\). This implies that our bound and the one from [2] are roughly tight.
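For a quick numerical feel of the new bound, it can be evaluated directly; the snippet below (plain Python, provided only as an illustration) computes the stated upper bound for given \(k\), \(a\) and \(b\):

```
from math import comb

def order_upper_bound(k, a, b):
    """Upper bound on the order n of a graph with chi_NL(G) = k and at most a*n + b edges:
    n <= 2b + k * sum_{i=1}^{2a} (2a + 1 - i) * C(k-1, i)."""
    return 2 * b + k * sum((2 * a + 1 - i) * comb(k - 1, i) for i in range(1, 2 * a + 1))

# Example: k = 5 colors, average degree roughly 4 (a = 2), b = 0.
print(order_upper_bound(5, 2, 0))  # 215
```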
In Section 2, we study the connected graphs with prescribed values of chromatic number, locating chromatic number and neighbor-locating chromatic number. We also study the relation between the locating chromatic number (resp., neighbor-locating chromatic number) of a graph and its induced subgraphs. Finally, in Section 3 we study the density of graphs having bounded neighbor-locating chromatic number.
## 2 Gap between \(\chi(G)\), \(\chi_{L}(G)\) and \(\chi_{NL}(G)\)
The first result we would like to prove involves three different parameters, namely, the chromatic number, the locating chromatic number, and the neighbor-locating chromatic number.
**Theorem 2.1**.: _For all \(2\leq p\leq q\leq r\), except when \(p=q=2\) and \(r>2\), there exists a connected graph \(G_{p,q,r}\) satisfying \(\chi(G_{p,q,r})=p\), \(\chi_{L}(G_{p,q,r})=q\), and \(\chi_{NL}(G_{p,q,r})=r\)._
Proof.: First of all, let us assume that \(p=q=r\). In this case, for \(G_{p,q,r}=K_{p}\), it is trivial to note that \(\chi(G_{p,q,r})=\chi_{L}(G_{p,q,r})=\chi_{NL}(G_{p,q,r})=p\). This completes the case when \(p=q=r\).
Second of all, let us handle the case when \(p<q=r\). If \(2=p<q=r\), then take \(G_{p,q,r}=K_{1,q-1}\). Therefore, we have \(\chi(G_{p,q,r})=2\) as it is a bipartite graph, and it is known that \(\chi_{L}(G_{p,q,r})=\chi_{NL}(G_{p,q,r})=q\)[2, 8].
If \(3\leq p<q=r\), then we construct \(G_{p,q,r}\) as follows: start with a complete graph \(K_{p}\), on vertices \(v_{0},v_{1},\cdots,v_{p-1}\), take \((q-1)\) new vertices \(u_{1},u_{2},\cdots,u_{q-1}\), and make them adjacent to \(v_{0}\). It is trivial to note that \(\chi(G_{p,q,r})=p\) in this case. Moreover, note that we need to assign \(q\) distinct colors to \(v_{0},u_{1},u_{2},\cdots,u_{q-1}\) under any locating or neighbor-locating coloring. On the other hand, \(f(v_{i})=i+1\) and \(f(u_{j})=j+1\) is a valid locating \(q\)-coloring as well as neighbor locating \(q\)-coloring of \(G_{p,q,r}\). Thus we are done with the cases when \(p<q=r\).
Thirdly, we are going to consider the case when \(p=q<r\). If \(3=p=q<r\), then let \(G_{p,q,r}=C_{n}\) where \(C_{n}\) is an odd cycle of suitable length, that is, a length which will imply \(\chi_{NL}(C_{n})=r\). It is known that such a cycle exists [1, 4]. As we know that \(\chi(G_{p,q,r})=3\), \(\chi_{L}(G_{p,q,r})=3\)[8], and \(\chi_{NL}(G_{p,q,r})=r\)[1, 4], we are done.
If \(4\leq p=q<r\), then we construct \(G_{p,q,r}\) as follows: start with a complete graph \(K_{p}\) on vertices \(v_{0},v_{1},\cdots,v_{p-1}\), and an odd cycle \(C_{n}\) on vertices \(u_{0},u_{1},\cdots,u_{n-1}\), and identify the vertices \(v_{0}\) and \(u_{0}\). Moreover, the odd cycle \(C_{n}\) is taken of a suitable length, that is, a length which ensures \(\chi_{NL}(C_{n})=r\). It is known that such a cycle exists [1, 4]. Notice that \(\chi(G_{p,q,r})=p\). A locating coloring \(f\) can be assigned to \(G_{p,q,r}\) as follows: \(f(v_{i})=i+1\), \(f(u_{j})=a\) for odd integers \(1\leq j\leq n-1\) and \(f(u_{l})=b\) for even integers \(2\leq l\leq n-1\), where \(a,b\in\{2,3,\ldots,p\}\). A vertex \(v_{i}\in K_{p}\) (other than \(v_{0}\)) and a vertex \(u_{j}\in C_{n}\) such that \(f(v_{i})=f(u_{j})\) are metric-distinguished with respect to \(f\) since \(d(v_{i},S_{l})=1\neq d(u_{j},S_{l})\) for at least one \(l\in\{2,3,\ldots,p\}\setminus\{a,b\}\). Thus, \(\chi_{L}(G_{p,q,r})=p\). On the other hand, the neighborhood of each vertex of the cycle \(C_{n}\) (as a subgraph of \(G_{p,q,r}\)) is the same as in the induced cycle, except for the vertex \(v_{0}=u_{0}\). Thus, we will need at least \(r\) colors to color \(C_{n}\) while it is contained inside \(G_{p,q,r}\) as a subgraph. Assign a neighbor-locating coloring \(c\) to \(G_{p,q,r}\) as follows: assign \(p\) distinct colors to the complete graph \(K_{p}\). Use \(p\) colors from \(K_{p}\) and \(r-p\) new colors to color the odd cycle \(C_{n}\). A vertex \(v_{i}\in K_{p}\) (other than \(v_{0}\)) and a vertex \(u_{j}\in C_{n}\) such that \(c(v_{i})=c(u_{j})\) are neighbor-distinguished with respect to \(c\), since \(v_{i}\) has \(p-1\) distinct color neighbors whereas \(u_{j}\) can have at most two distinct color neighbors. Hence \(\chi_{NL}(G_{p,q,r})=r\). Thus, we are done in this case also.
Finally, we come to the case when \(p<q<r\). If \(2=p<q<r\), then we refer the reader to [12] for this case.
If \(3=p<q<r\), then we start with an odd cycle \(C_{n}\) on vertices \(v_{1},v_{2},\cdots,v_{n}\), where \(k=\frac{r(r-1)(r-2)}{2}\), \(n=k\) if \(k\) is odd and \(n=k-1\) if \(k\) is even. It is known that \(\chi_{NL}(C_{n})=r\) from [1, 4]. Take \(q-1\) new vertices \(u_{1},u_{2},\cdots,u_{q-1}\) and make all of them adjacent to \(v_{1}\). The graph so obtained is \(G_{p,q,r}\). It is trivial to note that \(\chi(G_{p,q,r})=3\) in this case. Note that we need to assign \(q\) distinct colors to \(v_{1},u_{1},u_{2},\cdots,u_{q-1}\) under any locating or neighbor-locating coloring. Now, we assign a locating coloring \(f\) to \(G_{p,q,r}\) as follows: \(f(v_{1})=1\), \(f(v_{i})=2\) for all even integers \(2\leq i\leq n\), \(f(v_{j})=3\) for all odd integers \(3\leq j\leq n\), \(f(u_{s})=s+1\) for all \(1\leq s\leq q-1\). This gives us \(\chi_{L}(G_{p,q,r})=q\).
On the other hand, the neighborhood of each vertex of the cycle \(C_{n}\) (as a subgraph of \(G_{p,q,r}\)) is the same as in the induced cycle, except for the vertex \(v_{1}\). Thus, we will need at least \(r\) colors to color \(C_{n}\) while it is contained inside \(G_{p,q,r}\) as a subgraph. Assign a neighbor-locating coloring \(c\) to \(G_{p,q,r}\) as follows: assign a neighbor-locating \(r\)-coloring to the odd cycle \(C_{n}\) such that each vertex has two distinct color neighbors in the case \(n=k\), and all vertices except two vertices, say \(v_{i}\) and \(v_{j}\), have two distinct color neighbors in the case \(n=k-1\) (refer to [1] for such a neighbor-locating \(r\)-coloring). Assign distinct colors to the \(q-1\) leaf vertices by choosing any \(q-1\) colors from the \(r\) colors given to the cycle \(C_{n}\) (except \(c(v_{i})\) and \(c(v_{j})\) in the case \(n=k-1\)). A vertex \(v_{i}\) (\(i\neq 1\)) of the cycle and a leaf vertex \(u_{j}\) such that \(c(v_{i})=c(u_{j})\) are neighbor-distinguished, since \(v_{i}\) has two distinct color neighbors whereas \(u_{j}\) has only one color neighbor. Hence we have \(\chi_{NL}(G_{p,q,r})=r\).
If \(4\leq p<q<r\), then we start with a path \(P_{n}\) on \(n\) vertices, where \(n=\frac{r(r-1)(r-2)}{2}\). It is known that \(\chi_{NL}(P_{n})=r\) from [1, 4]. Let \(P_{n}=u_{0}u_{1}\cdots u_{n-1}\). Now let us take a complete graph on \(p\) vertices \(v_{0},v_{1},\cdots,v_{p-1}\). Identify the two graphs at \(u_{0}\) and \(v_{0}\) to obtain a new graph. Furthermore, take \((q-2)\) independent vertices \(w_{1},w_{2},\cdots,w_{q-2}\) and make them adjacent to \(u_{n-2}\). The graph so obtained is \(G_{p,q,r}\). It is trivial that \(\chi(G_{p,q,r})=p\). Note that under any locating or neighbor-locating coloring, \(q\) distinct colors have to be given to the vertices \(u_{n-2},w_{1},w_{2},\cdots,w_{q-2},u_{n-1}\). Now, define a locating coloring \(f\) of \(G_{p,q,r}\) as follows: \(f(u_{0})=f(u_{n-3})=1\), \(f(u_{i})=2\) for all odd integers \(1\leq i\leq n-2\), \(f(u_{j})=3\) for all even integers \(2\leq j\leq n-2\), \(f(u_{n-2})=2,f(u_{n-1})=3\) if \(n\) is even (\(f(u_{n-2})=3,f(u_{n-1})=2\) if \(n\) is odd), assign colors \(1,4,5,6,\ldots,q\) to the leaf vertices, and set \(f(v_{s})=s+1\) for all \(1\leq s\leq p-1\). Thus, \(\chi_{L}(G_{p,q,r})=q\).
Moreover, the neighborhood of each vertex of the path \(P_{n}\) (as a subgraph of \(G_{p,q,r}\)) is the same as in the induced path, except for the vertices \(u_{0}\) and \(u_{n-2}\). Thus, we will need at least \(r\) colors to color \(P_{n}\) while it is contained inside \(G_{p,q,r}\) as a subgraph. Assign a neighbor-locating coloring \(c\) to \(G_{p,q,r}\) as follows: assign a neighbor-locating \(r\)-coloring to the path \(P_{n}\) such that each vertex (except the end vertices \(u_{0}\) and \(u_{n-1}\)) has two distinct color neighbors (refer to [1] for such a neighbor-locating \(r\)-coloring). Choose any \(p-1\) distinct colors from the \(r\) colors (except \(c(u_{0})\)) used in the neighbor-locating \(r\)-coloring of \(P_{n}\) and assign them to the remaining \(p-1\) vertices of the complete graph. Assign distinct colors to the \(q-2\) leaf vertices by choosing any \(q-2\) colors from the \(r\) colors of \(P_{n}\) except the colors \(c(u_{0})\), \(c(u_{n-2})\) and \(c(u_{n-1})\). A vertex \(u_{i}\) (\(i\neq 0,n-2,n-1\)) in the path and a leaf vertex \(w_{j}\) such that \(c(u_{i})=c(w_{j})\) are neighbor-distinguished since \(u_{i}\) has two distinct color neighbors whereas \(w_{j}\) has only one color neighbor. Hence, we have \(\chi_{NL}(G_{p,q,r})=r\).
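The small building blocks used in these constructions (stars, short paths and cycles) can be checked by brute force. The following Python sketch — an illustrative aside, not part of the proofs — tests whether a coloring is neighbor-locating and computes \(\chi_{NL}\) for tiny graphs such as the star \(K_{1,q-1}\) used above.

```python
from itertools import product

def is_nl_coloring(adj, colors):
    """Return True if `colors` (dict vertex -> color) is a proper coloring of the
    graph `adj` (dict vertex -> set of neighbors) in which any two vertices with
    the same color have distinct sets of colors in their neighborhoods."""
    for u, nbrs in adj.items():
        if any(colors[u] == colors[v] for v in nbrs):
            return False            # not a proper coloring
    seen = set()
    for u, nbrs in adj.items():
        key = (colors[u], frozenset(colors[v] for v in nbrs))
        if key in seen:
            return False            # two same-colored vertices are not neighbor-distinguished
        seen.add(key)
    return True

def nl_chromatic_number(adj):
    """Brute-force chi_NL of a small graph (exponential, only for tiny examples)."""
    verts = sorted(adj)
    for k in range(1, len(verts) + 1):
        for assignment in product(range(k), repeat=len(verts)):
            if is_nl_coloring(adj, dict(zip(verts, assignment))):
                return k

# The star K_{1,3} (center 0, leaves 1..3) needs q = 4 colors, as used in the proof.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(nl_chromatic_number(star))    # -> 4
```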
Furthermore, we show that, unlike the case of chromatic number, an induced subgraph can have an arbitrarily higher locating chromatic number (resp., neighbor-locating chromatic number) than that of the graph.
**Theorem 2.2**.: _For every \(k\geq 0\), there exists a graph \(G_{k}\) having an induced subgraph \(H_{k}\) such that \(\chi_{L}(H_{k})-\chi_{L}(G_{k})=k\) and \(\chi_{NL}(H_{k})-\chi_{NL}(G_{k})=k\)._
Proof.: The graph \(G_{k}\) is constructed as follows. We start with \(2k\) independent vertices \(a_{1},a_{2},\cdots,a_{2k}\) and \(k\) disjoint edges \(b_{1}b_{1}^{\prime},b_{2}b_{2}^{\prime},\cdots,b_{k}b_{k}^{\prime}\). After that we make all the above mentioned vertices adjacent to a special vertex \(v\) to obtain our graph \(G_{k}\). Notice that
\(v\) and the \(a_{i}\)s must all receive distinct colors under any locating coloring or neighbor-locating coloring. On the other hand, the coloring \(f\) given by \(f(v)=1\), \(f(a_{i})=i+1\), \(f(b_{i})=2i+1\), and \(f(b_{i}^{\prime})=2i\) is indeed a locating coloring as well as a neighbor-locating coloring of \(G_{k}\). Hence we have \(\chi_{L}(G_{k})=\chi_{NL}(G_{k})=(2k+1)\).
Now take \(H_{k}\) as the subgraph induced by \(v\), \(a_{i}\)s and \(b_{i}\)s. It is the graph \(K_{1,3k}\). Hence we have \(\chi_{L}(H_{k})=\chi_{NL}(H_{k})=(3k+1)\)[2, 8].
This completes the proof.
## 3 Bounds and constructions for sparse graphs
In this section, we study the density of graphs having bounded neighbor-locating chromatic number.
### Bounds
The first result of this section provides an upper bound on the number of vertices of a graph with bounded average degree in terms of its neighbor-locating chromatic number. In particular, it shows that for such graphs the number of vertices is bounded above by a polynomial function of \(\chi_{NL}(G)\).
**Theorem 3.1**.: _Let \(G\) be a connected graph on \(n\) vertices and \(m\) edges such that \(m\leq an+b\), where \(2a\) is a positive integer and \(2b\) is an integer. If \(\chi_{NL}(G)=k\), then_
\[n\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{i}.\]
_In particular, any graph whose order attains the upper bound must have maximum degree \(2a+1\) and exactly \(k\binom{k-1}{i}\) vertices of degree \(i\)._
Proof.: Let \(D_{i}\) and \(d_{i}\) denote the set and the number of vertices in \(G\) having degree equal to \(i\), respectively, and let \(D_{i}^{+}\) and \(d_{i}^{+}\) denote the set and the number of vertices in \(G\) having degree at least \(i\), for all \(i\geq 1\). Using the handshaking lemma, we know that
\[\sum_{v\in V(G)}deg(v)=2|E(G)|=2m\leq 2(an+b).\]
Notice that, as \(G\) is connected, and hence does not have any vertex of degree \(0\), it is possible to write
\[\sum_{v\in V(G)}deg(v)=\sum_{i=1}^{2a}i\cdot d_{i}+\sum_{v\in D_{2a+1}^{+}}deg (v).\]
Moreover, the number of vertices of \(G\) can be expressed as
\[n=(d_{1}+d_{2}+\cdots+d_{2a})+d_{2a+1}^{+}=d_{2a+1}^{+}+\sum_{i=1}^{2a}d_{i}.\]
Therefore, combining the above equations and inequalities, we have
\[\sum_{i=1}^{2a}i\cdot d_{i}+\sum_{v\in D_{2a+1}^{+}}deg(v)\leq 2b+2a\left(d_{2a+1} ^{+}+\sum_{i=1}^{2a}d_{i}\right)\]
which implies
\[d_{2a+1}^{+}\leq\sum_{v\in D_{2a+1}^{+}}(deg(v)-2a)\leq\left(\sum_{v\in D_{2a+1 }^{+}}deg(v)\right)-2ad_{2a+1}^{+}\leq 2b+\sum_{i=1}^{2a}(2a-i)d_{i}\]
since there are exactly \(d_{2a+1}^{+}\) terms in the summation \(\sum_{v\in D_{2a+1}^{+}}(deg(v)-2a)\) where each term is greater than or equal to \(1\), as \(deg(v)\geq 2a+1\) for all \(v\in D_{2a+1}^{+}\).
Let \(f\) be any neighbor-locating \(k\)-coloring of \(G\). Consider an ordered pair \((f(u),N_{f}(u))\), where \(u\) is a vertex having degree at most \(s\). Thus, \(u\) may receive one of the \(k\) available colors, while its color neighborhood may consist of at most \(s\) of the remaining \((k-1)\) colors. Thus, there are at most \(k\sum\limits_{i=1}^{s}\binom{k-1}{i}\) choices for the ordered pair \((f(u),N_{f}(u))\). Since for any two vertices \(u,v\) of degree at most \(s\) the ordered pairs \((f(u),N_{f}(u))\) and \((f(v),N_{f}(v))\) must be distinct, we have
\[\sum_{i=1}^{s}d_{i}\leq k\sum_{i=1}^{s}\binom{k-1}{i}.\]
Using the above relation, we can show that
\[\sum_{i=1}^{2a}(2a+1-i)d_{i}=\sum_{s=1}^{2a}\left(\sum_{i=1}^{s}d_{i}\right) \leq\sum_{s=1}^{2a}\left(k\sum_{i=1}^{s}\binom{k-1}{i}\right)=k\sum_{i=1}^{2a} (2a+1-i)\binom{k-1}{i}.\]
As
\[\sum_{i=1}^{2a}(2a+1-i)d_{i}=\sum_{i=1}^{2a}d_{i}+\sum_{i=1}^{2a}(2a-i)d_{i} \text{ and }d_{2a+1}^{+}\leq 2b+\sum_{i=1}^{2a}(2a-i)d_{i},\]
we have
\[n=d_{2a+1}^{+}+\sum_{i=1}^{2a}d_{i}\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{ i}.\]
This completes the first part of the proof.
For the proof of the second part of the theorem, we notice that if the order of a graph \(G^{*}\) attains the upper bound, then equality holds in all of the above inequalities. In particular, we must have \(d_{2a+1}^{+}=\sum_{v\in D_{2a+1}^{+}}(deg(v)-2a)\), which implies that \(G^{*}\) cannot have a vertex of degree more than \(2a+1\). Moreover, we also have the following equality.
\[\sum_{i=1}^{s}d_{i}=k\sum_{i=1}^{s}\binom{k-1}{i}\text{ for }s=1,2,\ldots,2a+1.\]
This proves that \(G^{*}\) has exactly \(k\binom{k-1}{i}\) vertices of degree \(i\).
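For illustration, the bound of Theorem 3.1 is easy to evaluate numerically; the following small Python helper (a sketch, not part of the proof, with the parameters passed as the integers \(2a\) and \(2b\)) does so.

```python
from math import comb

def nl_order_bound(two_a, two_b, k):
    """Upper bound of Theorem 3.1 on the order n of a connected graph with
    m <= a*n + b edges and chi_NL(G) = k, passed via the integers 2a and 2b."""
    return two_b + k * sum((two_a + 1 - i) * comb(k - 1, i) for i in range(1, two_a + 1))

# Example: graphs with m <= n - 1 edges (2a = 2, 2b = -2), i.e. trees.
for k in (3, 4, 5):
    print(k, nl_order_bound(2, -2, k))
```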
Next we are going to present some immediate corollaries of Theorem 3.1. A _cactus_ is a connected graph in which no two cycles share a common edge.
**Corollary 3.2**.: _Let \(G\) be a cactus on \(n\) vertices and \(m\) edges. If \(\chi_{NL}(G)=k\), then_
\[n\leq\frac{k^{4}+11k^{2}-12k-18}{6}.\]
_Moreover, if the cactus has exactly \(t\) cycles, then we have_
\[n\leq 2(t-1)+\frac{k^{3}+k^{2}-2k}{2}.\]
Proof.: Observe that \(G\) has at most \(\frac{3(n-1)}{2}\) edges. So, by substituting \(a=\frac{3}{2}\) and \(b=-\frac{3}{2}\) in the bound for \(n\) established in Theorem 3.1, we have
\[n\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{i} =-3+k\sum_{i=1}^{3}(4-i)\binom{k-1}{i}\] \[=-3+3k\binom{k-1}{1}+2k\binom{k-1}{2}+k\binom{k-1}{3}\] \[=\frac{k^{4}+11k^{2}-12k-18}{6}.\]
Note that, if the cactus \(G\) has exactly \(t\) cycles, then \(G\) has exactly \((n+t-1)\) edges. Hence, substituting \(a=1\) and \(b=(t-1)\) in the bound for \(n\) established in Theorem 3.1, we have
\[n\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{i} =2(t-1)+k\sum_{i=1}^{2}(3-i)\binom{k-1}{i}\] \[=2(t-1)+2k\binom{k-1}{1}+k\binom{k-1}{2}\] \[=2(t-1)+\frac{k^{3}+k^{2}-2k}{2}.\]
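The two substitutions in the proof of Corollary 3.2 can be verified symbolically; the following sympy sketch (an aside, not part of the proof) reproduces both expressions.

```python
import sympy as sp

k, t = sp.symbols('k t', positive=True)

def C(n, r):
    """Binomial coefficient C(n, r) for symbolic n and a small integer r."""
    num = sp.Integer(1)
    for j in range(r):
        num *= (n - j)
    return num / sp.factorial(r)

def bound(two_a, two_b):
    """Theorem 3.1 bound, written in terms of the integers 2a and 2b."""
    return two_b + k * sum((two_a + 1 - i) * C(k - 1, i) for i in range(1, two_a + 1))

# Cactus: m <= (3n - 3)/2, i.e. 2a = 3 and 2b = -3.
print(sp.expand(bound(3, -3)))           # k**4/6 + 11*k**2/6 - 2*k - 3 = (k^4 + 11k^2 - 12k - 18)/6
# Cactus with exactly t cycles: m = n + t - 1, i.e. 2a = 2 and 2b = 2(t - 1).
print(sp.expand(bound(2, 2 * (t - 1))))  # k**3/2 + k**2/2 - k + 2*t - 2
```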
A graph is _\(t\)-degenerate_ if every subgraph of it has a vertex of degree at most \(t\).
**Corollary 3.3**.: _Let \(G\) be a \(t\)-degenerate graph on \(n\) vertices and \(m\) edges. If \(\chi_{NL}(G)=k\), then_
\[n\leq k\sum_{i=1}^{2t}(2t+1-i)\binom{k-1}{i}-t(t+1).\]
Proof.: Observe that the number of edges in a \(t\)-degenerate graph is \(m\leq tn-\frac{t(t+1)}{2}\). Substituting \(a=t\) and \(b=-\frac{t(t+1)}{2}\) in the bound for \(n\) established in Theorem 3.1, we have
\[n\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{i}=-t(t+1)+k\sum_{i=1}^{2t}(2t+1-i )\binom{k-1}{i}.\]
A planar graph is \(5\)-degenerate, thus using the above corollary, one can obtain an upper bound on \(|V(G)|\) for a planar graph \(G\). However, since \(|E(G)|\leq 3|V(G)|-6\), we are able to obtain a better bound.
**Corollary 3.4**.: _Let \(G\) be a planar graph on \(n\) vertices and \(m\) edges. If \(\chi_{NL}(G)=k\), then_
\[n\leq k\sum_{i=1}^{6}(7-i)\binom{k-1}{i}-12.\]
Proof.: Note that the number of edges in a planar graph is at most \(3n-6\). Substituting \(a=3\) and \(b=-6\) in the bound for \(n\) established in Theorem 3.1, we have
\[n\leq 2b+k\sum_{i=1}^{2a}(2a+1-i)\binom{k-1}{i}=-12+k\sum_{i=1}^{6}(7-i)\binom{k -1}{i}.\]
### Tightness
Next we show the asymptotic tightness of Theorem 3.1. To that end, we will prove the following result.
**Theorem 3.5**.: _Let \(2a\) be a positive integer and let \(2b\) be an integer. Then, there exists a graph \(G\) on \(n\) vertices and \(m\) edges satisfying \(m\leq an+b\) such that \(n=\Theta(k^{2a+1})\) and \(\chi_{NL}(G)=\Theta(k)\). Moreover, when \(b=0\), \(G\) can be taken to be of maximum degree \(2a\)._
The proof of this theorem is constructive and is contained within a number of observations and lemmas; the constructions depend on particular partial colorings. We therefore present in the following a series of graph constructions, their particular colorings and their structural properties, together with the supporting observations and lemmas.
**Lemma 3.6**.: _Let us consider a \((p\times q)\) matrix whose \(ij^{th}\) entry is \(m_{i,j}\), where \(p<q\). Let \(M\) be a complete graph whose vertices are the entries of the matrix. Then there exists a matching of \(M\) satisfying the following conditions:_
1. _The endpoints of an edge of the matching are from different columns._
2. _Let_ \(e_{1}\) _and_ \(e_{2}\) _be two edges of the matching. If one endpoint of_ \(e_{1}\) _and_ \(e_{2}\) _are from the same column, then the other endpoints of them must belong to distinct columns._
3. _The matching saturates all but at most one vertex of_ \(M\) _per column._
Proof.: Consider the permutation \(\sigma=(1\ 2\ \cdots q)\). The matching consists of edges of the type \(m_{(2i-1),j}m_{2i,\sigma^{i}(j)}\) for all \(i\in\{1,2,\cdots,\lfloor\frac{p}{2}\rfloor\}\) and \(j\in\{1,2,\cdots,q\}\). We will show that this matching satisfies all listed conditions.
Observe that a typical edge of the matching is of the form \(m_{(2i-1),j}m_{2i,\sigma^{i}(j)}\). Since \(1\leq i\leq\lfloor\frac{p}{2}\rfloor<q\) and \(\sigma\) is a cyclic permutation of order \(q\), we have \(\sigma^{i}(j)\neq j\); hence the two endpoints of such an edge lie in different columns, and condition \((i)\) of the statement is verified.
Suppose that there are two edges of the type \(m_{(2i-1),j}m_{2i,\sigma^{i}(j)}\) and \(m_{(2i^{\prime}-1),j^{\prime}}m_{2i^{\prime},\sigma^{i^{\prime}}(j^{\prime})}\). If \(m_{(2i-1),j}\) and \(m_{(2i^{\prime}-1),j^{\prime}}\) are from the same column, that is, \(j=j^{\prime}\), then we must have \(i\neq i^{\prime}\) as they are different vertices. Thus \(\sigma^{i}(j)\neq\sigma^{i^{\prime}}(j)=\sigma^{i^{\prime}}(j^{\prime})\) as \(i\neq i^{\prime}\). If \(m_{(2i-1),j}\) and \(m_{2i^{\prime},\sigma^{i^{\prime}}(j^{\prime})}\) are from the same column, then we have \(j=\sigma^{i^{\prime}}(j^{\prime})\). Moreover, if we have \(j^{\prime}=\sigma^{i}(j)\), then it will imply that
\[j=\sigma^{i^{\prime}}(\sigma^{i}(j))=\sigma^{i+i^{\prime}}(j).\]
This is only possible if \(q|(i+i^{\prime})\), which is not possible as \(i,i^{\prime}\in\{1,2,\cdots,\lfloor\frac{p}{2}\rfloor\}\). Therefore, we have verified condition \((ii)\) of the statement.
Notice that, the matching saturates all the vertices of \(M\) when \(p\) is even, whereas it saturates all except the vertices in the \(p^{th}\) row of the matrix when \(p\) is odd. This verifies condition \((iii)\) of the statement.
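The permutation-based matching used in this proof can be checked computationally for small \(p\) and \(q\); the following Python sketch (illustrative only) builds the matching and verifies the three conditions.

```python
from itertools import combinations

def lemma_matching(p, q):
    """Matching of Lemma 3.6 on a p x q array (entries labelled (row, column)),
    built from sigma = (1 2 ... q): pair m[2i-1][j] with m[2i][sigma^i(j)]."""
    sigma_i = lambda j, i: ((j - 1 + i) % q) + 1
    return [((2 * i - 1, j), (2 * i, sigma_i(j, i)))
            for i in range(1, p // 2 + 1) for j in range(1, q + 1)]

def check(p, q):
    m = lemma_matching(p, q)
    # (i) the endpoints of every edge lie in different columns
    assert all(x[1] != y[1] for x, y in m)
    # (ii) two edges incident to the same column leave it towards distinct columns
    for e1, e2 in combinations(m, 2):
        for x in e1:
            for y in e2:
                if x[1] == y[1]:
                    other1 = next(v[1] for v in e1 if v != x)
                    other2 = next(v[1] for v in e2 if v != y)
                    assert other1 != other2
    # (iii) all vertices are saturated except, when p is odd, the last row
    assert len({v for e in m for v in e}) == (p - p % 2) * q
    return True

print(check(4, 5), check(5, 7))    # -> True True
```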
**Corollary 3.7**.: _Let \(G\) be a graph with an independent set \(M\) of size \((p\times q)\), where \(M=\{m_{ij}:1\leq i\leq p,1\leq j\leq q\}\) and \(p<q\). Moreover, let \(\phi\) be a \((k^{\prime}+q)\)-coloring of \(G\) satisfying the following conditions:_
1. \(k^{\prime}+1\leq\phi(x)\leq k^{\prime}+q\) _if and only if_ \(x\in M\)_,_
2. \(x\) _and_ \(y\) _are neighbor-distinguished unless both belong to_ \(M\)_,_
3. \(\phi(m_{ij})=k^{\prime}+j\)_._
_Then it is possible to find a spanning supergraph \(G^{\prime}\) of \(G\), by adding a matching between the vertices of \(M\), which makes \(\phi\) a neighbor-locating \((k^{\prime}+q)\)-coloring of \(G^{\prime}\)._
Proof.: First of all build a matrix whose \(ij^{th}\) entry is the vertex \(m_{ij}\). After that, build a complete graph whose vertices are entries of this matrix. Now using Lemma 3.6, we can find a matching of this complete graph that satisfies the three conditions mentioned in the statement of Lemma 3.6. We construct \(G^{\prime}\) by including exactly the edges corresponding to the edges of the matching, between the vertices of \(M\). We want to show that after adding these edges and obtaining \(G^{\prime}\), indeed \(\phi\) is a neighbor-locating \((k^{\prime}+q)\)-coloring of \(G^{\prime}\).
Notice that by the definition of \(\phi\), \((k^{\prime}+q)\) colors are used. So it is enough to show that the vertices of \(G^{\prime}\) are neighbor-distinguished with respect to \(\phi\). To be precise, it is enough to show that two vertices \(x,y\) from \(M\) are neighbor-distinguished with respect to \(\phi\) in \(G^{\prime}\). If \(\phi(x)=\phi(y)\), then they must have different color-neighborhood inside \(M\) according to the conditions of the matching. This is enough to make \(x,y\) neighbor-distinguished.
Now we are ready to present our iterative construction. However, given its involved nature, we need some specific nomenclature to describe it. For convenience, we list below the points describing the whole construction.
1. An \(i\)_-triplet_ is a 3-tuple of the type \((G_{i},\phi_{i},X_{i})\) where \(G_{i}\) is a graph, \(\phi_{i}\) is a neighbor-locating \((ik)\)-coloring of \(G_{i}\), \(X_{i}\) is a set of \((i+1)\)-tuples of vertices of \(G_{i}\), each having non-repeating elements. Also, two \((i+1)\)-tuples from \(X_{i}\) do not have any entries in common.
2. Let us describe the 1-triplet \((G_{1},\phi_{1},X_{1})\) explicitly. Here \(G_{1}\) is the path \(P_{t}=v_{1}v_{2}\cdots v_{t}\) on \(t\) vertices where \(t=4\left\lfloor\frac{k^{2}(k-1)}{8}\right\rfloor\). As \[\frac{(k-1)^{2}(k-2)}{2}<4\left\lfloor\frac{k^{2}(k-1)}{8}\right\rfloor\leq \frac{k^{2}(k-1)}{2},\] we must have \(\chi_{NL}(P_{t})=k\) (see [1]). Let \(\phi_{1}\) be any neighbor-locating \(k\)-coloring of \(G_{1}\) and \[X_{1}=\{(v_{i-1},v_{i+1}):i\equiv 2,3\pmod{4}\}.\]
3. Suppose an \(i\)-triplet \((G_{i},\phi_{i},X_{i})\) is given. We will (partially) describe a way to construct an \((i+1)\)-triplet from it. To do so, first we will construct an intermediate graph \(G^{\prime}_{i+1}\) as follows: for each \((i+1)\)-tuple \((x_{1},x_{2},\cdots,x_{i+1})\in X_{i}\) we will add a _new vertex_\(x_{i+2}\) adjacent to each vertex from the \((i+1)\)-tuple. Moreover, \((x_{1},x_{2},\cdots,x_{i+1},x_{i+2})\) is designated as an \((i+2)\)-tuple in \(G^{\prime}_{i+1}\). After that, we will take \(k\) copies of \(G^{\prime}_{i+1}\) and call this so-obtained graph as \(G^{\prime\prime}_{i+1}\). Furthermore, we will extend \(\phi_{i}\) to a function \(\phi_{i+1}\) by assigning the color \((ik+j)\) to the new vertices from the \(j^{th}\) copy of \(G^{\prime}_{i+1}\). The copies of the \((i+2)\)-tuples are the \((i+2)\)-tuples of \(G^{\prime\prime}_{i+1}\).
4. Consider the \((i+1)\)-triplet \((G^{\prime\prime}_{i+1},\phi_{i+1},X^{\prime\prime}_{i+1})\) where \(X^{\prime\prime}_{i+1}\) denotes the set of all \((i+2)\)-tuples of \(G^{\prime\prime}_{i+1}\). The _color of an \((i+2)\)-tuple_\((x_{1},x_{2},\cdots,x_{i+2})\) is the set \[C((x_{1},x_{2},\cdots,x_{i+2}))=\{\phi_{i}(x_{1}),\phi_{i}(x_{2}),\cdots,\phi_ {i}(x_{i+2})\}.\] Let us partition the set of new vertices based on the colors used on the elements (all but the last one) of the \((i+2)\)-tuple of which it is (uniquely) part of. To be explicit, the last elements of two \((i+2)\)-tuples are in the same partition if and only if they have the same color neighbors. Let this partition be denoted by \(X_{i1},X_{i2},\cdots,X_{is_{i}}\), for some integer \(s_{i}\).
5. First fix a partition \(X_{ir}\) of \(X_{i}\). Next construct a matrix with its \(\ell^{th}\) column having vertices from \(X_{ir}\) as its entries if they are also from the \(\ell^{th}\) copy of \(G^{\prime}_{i+1}\) in \(G^{\prime\prime}_{i+1}\). Thus the matrix is a \((p\times q)\) matrix where \(p=|X_{ir}|\) and \(q=k\). We are going to show that, \(p<q\). However, for convenience, we will defer it to a later part (Lemma 3.8).
6. Let us delete all the new vertices from \(G^{\prime\prime}_{i+1}\) except for the ones in \(X_{ir}\). This graph has the exact same properties of the graph \(G\) from Corollary 3.7 where \(X_{ir}\) plays the role of the independent set \(M\). Thus it is possible to add a matching and extend the coloring (like in Corollary 3.7). We do that for each value of \(r\) and add the corresponding matching to our graph \(G^{\prime\prime}_{i+1}\). After adding all such matchings, the graph we obtain is \(G_{i+1}\).
**Lemma 3.8**.: _We have \(|X_{ir}|<k\), where \(X_{ir}\) is as in Item(v) of the above list._
Proof.: We show that the set of new vertices having the same color neighbors in \(G_{1}\) is strictly less than \(k\) as follows:
Any vertex (other than the end vertices) in \(G_{1}\) has two color neighbors say \(i\) and \(j\) (\(i\) may or may not be equal to \(j\)). Having fixed the two color neighbors, this vertex will have at most \(k-1\) choices of colors. This implies that there can be at most \(k-1\) new vertices with given color neighbors, which would also ensure that the set of new vertices having the same color neighbors in \(G_{1}\) (that is the set of \(2\)-tuples having the same color in \(G_{1}\)) is strictly less than \(k\).
The general case then follows by induction.
**Lemma 3.9**.: _The function \(\phi_{i+1}\) is a neighbor-locating coloring of \(G_{i+1}\)._
Proof.: The function \(\phi_{i+1}\) is constructed from \(\phi_{i}\) alongside the construction of the graph \(G_{i+1}\) from \(G_{i}\). In this construction we use exactly the steps of Corollary 3.7. Thus, the newly colored vertices become neighbor-distinguished in \(G_{i+1}\) under \(\phi_{i+1}\).
The above two lemmas validate the correctness of the iterative construction of the graphs \(G_{i}\). However, it remains to show how the graphs \(G_{i}\) help us prove our result. To do so, let us prove certain properties of the \(G_{i}\).
**Lemma 3.10**.: _The graph \(G_{i}\) is not a regular graph and has maximum degree \((i+1)\)._
Proof.: As we have started with a path, our \(G_{1}\) has maximum degree \(2\) and is not regular. In the iteration step for constructing the graph \(G_{i+1}\) from \(G_{i}\), the degree of an old vertex (or its copy) can increase at most by \(1\), while a new vertex of \(G_{i+1}\) is adjacent to exactly \((i+1)\) old vertices and at most one new vertex. Hence, a new vertex in \(G_{i+1}\) can have degree at most \((i+2)\). Therefore, the proof is done by induction.
Figure 1: Construction of \(G_{2}\) from \(G_{1}=P_{24}\). Here \(\chi_{NL}(P_{24})=4\); the red and blue edges are the two sets of newly added matchings.
Finally, we are ready to prove Theorem 3.5.
Proof of Theorem 3.5.: Given \(a\) and \(b\), to build the example that will prove the theorem, one can consider \(G=G_{2a+1}\).
**Acknowledgements:** This work is partially supported by the following projects: "MA/ IFCAM/18/39", "SRG/2020/001575", "MTR/2021/000858" and "NBHM/RP-8 (2020)/ Fresh". Research by the first and second authors is partially sponsored by a public grant overseen by the French National Research Agency as part of the "Investissements d'Avenir" through the IMobS3 Laboratory of Excellence (ANR-10-LABX-0016), the IDEX-ISITE initiative CAP 20-25 (ANR-16-IDEX-0001) and the ANR project
GRALMECO (ANR-21-CE48-0004-01).
|
2306.17796 | Entropy Product Function and Central charges in NUT Geometry | We define an \emph{entropy product function}~(EPF) for
Taub-Newman-Unti-Tamburino~(TNUT) black hole~(BH) following the prescription
suggested by Wu et al.~\cite{wu} ~[PRD 100, 101501(R) (2019)]. The prescription
argues that a generic four-dimensional TNUT spacetime might be expressed in
terms of three or four different types of thermodynamic hairs. They can be
defined as the Komar mass~($M=m$), the angular momentum~($J_{n}=mn$), the
gravitomagnetic charge ($N=n$), the dual~(magnetic) mass $(\tilde{M}=n)$.
Taking this prescription and using the \emph{EPF}, we derive the \emph{central
charges} of dual CFT~(conformal field theory) via Cardy's formula. Remarkably,
we \emph{find} that for TNUT BH there exists a relation between the
\emph{central charges and EPF} as $c=6\left(\frac{\partial {\cal F}}{\partial
{\cal N}_{i}}\right)$, where ${\cal F}$ is EPF and ${\cal N}_{i}$ is one of the
integer-valued charges i.e. the NUT charges~($N$) or any new conserved
charges~($J_{N}$). We reverify these results by calculating the exact values of
different thermodynamic parameters. We define the EPF~${\cal F}$ from the first
law of thermodynamics of both horizons. Moreover, we write the first laws of
both the horizons for left-moving and right-moving sectors. Introducing the
B\'{e}zout's identity, we show that for TNUT BH one can generate more
holographic descriptions described by a pair of integers $(a,b)$. More
holographic pictures have a great significance in understanding the holographic
nature of quantum gravity. Furthermore, using the \emph{EPF} we derive the
central charges for Reissner-Nordstr\"{o}m-NUT~(RNNUT) BH, Kerr-Taub-NUT~(KNUT)
BH and Kerr-Newman-NUT~(KNNUT) BH. Finally, we prove that they are equal in
both sectors provided that the EPF is mass-independent~(or universal). | Parthapratim Pradhan | 2023-06-30T16:53:21Z | http://arxiv.org/abs/2306.17796v1 | # Entropy Product Function and Central charges in NUT Geometry
###### Abstract
We define an _entropy product function_ (EPF) for Taub-Newman-Unti-Tamburino (TNUT) black hole (BH) following the prescription suggested by Wu et al. [1] [PRD 100, 101501(R) (2019)]. The prescription argues that a generic four-dimensional TNUT spacetime might be expressed in terms of three or four different types of thermodynamic hairs. They can be defined as the Komar mass (\(M=m\)), the angular momentum (\(J_{n}=mn\)), the gravitomagnetic charge (\(N=n\)), the dual (magnetic) mass (\(M=n\)). Taking this prescription and using the _EPF_, we derive the _central charges_ of dual CFT (conformal field theory) via Cardy's formula. Remarkably, we _find_ that for TNUT BH there exists a relation between the _central charges and EPF_ as \(c=6\left(\frac{\partial F}{\partial N_{i}}\right)\), where \(\mathcal{F}\) is EPF and \(\mathcal{N}_{i}\) is one of the integer-valued charges i.e. the NUT charges (\(N\)) or any new conserved charges (\(J_{N}\)). We reverify these results by calculating the exact values of different thermodynamic parameters. We define the EPF \(\mathcal{F}\) from the first law of thermodynamics of both horizons. Moreover, we write the first laws of both the horizons for left-moving and right-moving sectors. Introducing the Bezout's identity, we show that for TNUT BH one can generate more holographic descriptions described by a pair of integers \((a,b)\). More holographic pictures have a great significance towards understanding the holographic nature of quantum gravity. Furthermore, using the _EPF_ we derive the central charges for Reissner-Nordstrom-NUT (RNNUT) BH, Kerr-Taub-NUT (KNUT) BH and Kerr-Newman-NUT (KNNUT) BH. Finally we prove that they are equal in both sectors provided that the EPF is mass-independent (or universal).
## 1 Introduction
The Kerr/CFT correspondence [2] is a great discovery in BH physics for understanding the entropy of a BH. The macroscopic entropies of BHs have been reproduced exactly from the counting of microscopic states in the dual 2D CFT via the Cardy formula. It has been suspected for many years that the microscopic origin of the BH entropy could be holographically encoded in a 2D CFT, particularly after the successful counting of the Bekenstein-Hawking entropies of extremal BHs in string theory [3].
One aspect is that the greybody factors for rotating black holes in four dimensions were derived by Cvetic and Larsen [4] to study the Hawking radiation. The string excitations in the two sectors, i.e. the left-moving and right-moving sectors, generate two distinct contributions to the entropy. Similarly, the emission spectrum of the string is characterized by independent left-moving and right-moving temperatures. Consequently, the appearance of two independent sets of thermodynamic variables is related to the presence of the event and Cauchy horizons. For instance, the two contributions to the entropy are proportional to the sum and the difference of the event and Cauchy horizon areas, respectively. This geometric interpretation of the thermodynamic variables gives access to features of the underlying microscopic theory.
On the other hand, the solution of the wave equation in the most general spacetime background was derived to compute greybody factors in four and five dimensions [5]. Interestingly, the wave equation has an exact symmetry that interchanges the Cauchy (or inner) and event (or outer) horizons.
Importantly, this symmetry is discrete in nature. The modes in the vicinity of the event horizon give rise to the Hawking radiation, with inverse Hawking temperature \(T_{H}^{-1}=\frac{T_{L}+T_{R}}{2T_{R}T_{L}}\). Similarly, the modes in the vicinity of the Cauchy horizon give rise to a characteristic temperature \(T_{C}\) with \(T_{C}^{-1}=\frac{T_{L}-T_{R}}{2T_{R}T_{L}}\). The temperatures \(T_{R}\) and \(T_{L}\) that appear in these formulas agree precisely with those that follow from thermodynamics.
One of the interesting features in the holographic description of the KN class of BHs is that the central charges of the dual CFT are independent of the mass of the BHs. This attribute could be related to the fact that the entropy products of \({\cal H}^{\pm}\) of these BHs are also mass-independent.
In the context of general relativity, it has been examined that for a four-dimensional Kerr-Newman BH [6], the entropy product of \({\cal H}^{\pm}\) i.e.
\[{\cal S}_{+}{\cal S}_{-} = 4\pi^{2}\left(J^{2}+\frac{Q^{4}}{4}\right) \tag{1}\]
is mass-independent. On the other hand in the context of string theory, it has been argued that for a BPS class of BHs, the entropy product of \({\cal H}^{\pm}\) should be quantized as [7]
\[{\cal S}_{+}{\cal S}_{-} = (2\pi)^{2}\left[\sqrt{N_{1}}+\sqrt{N_{2}}\right]\left[\sqrt{N_{ 1}}-\sqrt{N_{2}}\right]=(2\pi)^{2}N,\ N\in\mathbb{N},N_{1}\in\mathbb{N},N_{2} \in\mathbb{N}. \tag{2}\]
It is a well-established fact that the thermodynamics method 1 is a powerful mechanism to determine the dual CFT in the Kerr/CFT correspondence [9]. In the case of the Kerr/CFT correspondence it has been proved that the universal information of the dual CFT, including the dual temperatures and the central charges of both the left-moving and right-moving sectors, is fully encoded in the thermodynamics of the event (or outer) horizon (EH) (\({\cal H}^{+}\)) and the Cauchy (or inner) horizon (CH) (\({\cal H}^{-}\)) of the Kerr-Newman (KN) BH. Also, it has been suggested in [9] that the thermodynamics method is a universal way to determine the dual holographic picture of a BH.
Footnote 1: The conventional way to determine the central charges of the dual CFT is by using the asymptotic symmetry group (ASG) of the near-horizon geometry of the extremal BH, in either the BBC (Barnich-Brandt-Compère) formalism [2] or, equivalently, the stretched horizon formalism [28].
For KN BH [9] it has been explicitly proved that the first laws of BH thermodynamics satisfied for EH and CH horizons as
\[dM = \pm T_{\pm}\,d{\cal S}_{\pm}+\Omega_{\pm}\,dJ+\Phi_{\pm}\,dQ. \tag{3}\]
where \(T_{\pm}\), \(\Omega_{\pm}\) and \(\Phi_{\pm}\) represent the Hawking temperature, the angular velocity and the electric potential of \({\cal H}^{\pm}\), calculated on the horizons of the KN BH. Under the exchange \(r_{+}\leftrightarrow r_{-}\), using the symmetry between the outer and inner horizons \(r_{\pm}\), one gets the Hawking temperature of the inner horizon
\[T_{-} = -T_{+}\Big{|}_{r_{+}\leftrightarrow r_{-}} \tag{4}\]
while the other quantites are changed under the symmetry of \(r_{\pm}\) as
\[{\cal S}_{-} = {\cal S}_{+}\Big{|}_{r_{+}\leftrightarrow r_{-}},\ \Omega_{-}=\Omega_{+}\Big{|}_{r_{+} \leftrightarrow r_{-}},\ \Phi_{-}=\Phi_{+}\Big{|}_{r_{+}\leftrightarrow r_{-}} \tag{5}\]
Another universal relation exists for the KN BH:
\[T_{+}{\cal S}_{+}=T_{-}{\cal S}_{-} \tag{6}\]
which implies that the entropy product is _mass-independent_.
In recent times, it has been proposed [1] that a generic four-dimensional TNUT spacetime has four thermodynamical hairs. They could be defined as the Komar mass (\(M=m\)), the angular momentum (\(J_{n}=mn\)), the gravitomagnetic charge (\(N=n\)), and/or the dual (magnetic) mass (\(\hat{M}=n\)). Using this formalism, _in this work_ we prove that there exists a relation between the _EPF and the central charges of dual CFT for TNUT BH_. First, we define the EPF from the first law of BH thermodynamics for both the horizons and then, using this function, we determine the central charges of the dual CFT. Moreover, we calculate the thermodynamic parameters in the left-moving and right-moving sectors. Furthermore, we compute the first laws of thermodynamics in the left-moving and right-moving sectors. Finally, we examine the holographic picture of the TNUT BH. Since the BH carries the NUT parameter \((N)\)2 and the new conserved charge \((J_{N})\), there exist two elementary holographic pictures, which we call the \(N\) picture and the \(J_{N}\) picture.
Footnote 2: NUT charge should be considered as a thermodynamic variable since it is independently varied in the full cohomogeneity of first law in BH thermodynamics.
Like the four-dimensional dyonic RN BH, here in the context of the _NUT class of BHs_ we prove that there exists a relation between the EPF (\(\mathcal{F}\)) and the central charges \((c)\) as
\[c^{i}_{L,R}=6\left(\frac{\partial\mathcal{F}}{\partial\mathcal{N}_{i}}\right) \tag{7}\]
where \(\mathcal{N}_{i}\) is one of the integer-valued charges appearing in the first laws, and can be the angular momentum or another conserved charge. For instance, for the RN BH, the EPF [9] is defined as
\[\mathcal{F}=\frac{\mathcal{S}_{+}\mathcal{S}_{-}}{4\pi^{2}}=\frac{Q^{4}}{4}. \tag{8}\]
Since the EPF is mass-independent, we have \(T_{+}\mathcal{S}_{+}=T_{-}\mathcal{S}_{-}\). Therefore one can derive the central charges in the \(Q\) picture by using Eq. (7) as
\[c^{Q}_{L} = 6Q^{3} \tag{9}\] \[c^{Q}_{R} = 6Q^{3} \tag{10}\]
They are equal for the left-moving and right-moving sectors, that is, \(c^{Q}_{L}=c^{Q}_{R}\). The equality of the central charges means that the EPF is universal (mass-independent). This result is in complete agreement with the central charges derived by the asymptotic symmetry group (ASG) analysis [27], where they were obtained by uplifting the four-dimensional compactification solution to a five-dimensional solution.
In the literature [15, 16, 17], it has been shown that the area (or entropy) products in NUT geometry, i.e. for the TNUT BH, KTNUT BH and KNTNUT BH, have "mass-dependent" characteristics. Secondly, the first laws of thermodynamics in the left-moving and right-moving sectors are not satisfied in the form of Eq. (30). In that situation, the EPF is mass-dependent, which means
\[T_{+}\mathcal{S}_{+}\neq T_{-}\mathcal{S}_{-} \tag{11}\]
Moreover, we cannot read off the information of the dual CFT, which means we cannot derive the exact left-moving and right-moving temperatures of the dual CFT. Furthermore, the central charges of the left-moving and right-moving sectors are not equal, i.e.
\[c_{L}\neq c_{R} \tag{12}\]
However, incorporating the formalism stated in [1], we are able to define the EPF for the NUT class of BHs because the entropy product of \(\mathcal{H}^{\pm}\) is universal, i.e. mass-independent [20]. In this work we will consider the case \(J_{N}=MN\); the case \(J_{N}\neq MN\) has already been discussed in the works [15, 17, 16]. Moreover, in this work we derive the central charges from the entropy product function and compare them with the results obtained from the ASG analysis for the NUT class of BHs.
There are two ways one can compute the central charges. The first is via the ASG of the near-horizon geometry of the extremal BH, and the second is via the EPF method, i.e. the thermodynamics method. In this work we will particularly emphasize the EPF method. Also, we will see that the _EPF method is more convenient for deriving the central charges than the ASG method_.
The outline of the Letter is as follows. In the next section (2), we derive the relation between the EPF and the central charges for the NUT class of BHs, i.e. the TNUT BH. In Sec. (3), we reverify the central charges by direct calculation. In Sec. (4), we derive the central charges for the RNNUT BH by using the EPF. In Sec. (5), we derive the central charges for the KNUT BH by using the EPF. In Sec. (6), we derive the central charges for the KNNUT BH by using the EPF. In Sec. (7), we give our conclusions. In Appendix-A (Sec. 8), we evaluate the central charges from the EPF for the RN BH, Kerr BH and KN BH. In Appendix-B (Sec. 9), we provide the calculation of the central charges by using the near-horizon geometry of the extremal KN BH.
## 2 TNUT BH and EPF
The Lorentzian TNUT BH [18, 19] is a solution of the Einstein equation. The metric is defined as
\[ds^{2} = -{\cal A}\,\left(dt+2n\cos\theta d\phi\right)^{2}+\frac{dr^{2}}{ \cal A}+\left(r^{2}+n^{2}\right)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right)\, \tag{13}\]
where the function \({\cal A}\) is
\[{\cal A} = \frac{1}{r^{2}+n^{2}}\left[r^{2}-n^{2}-2mr\right] \tag{14}\]
Under the new prescription [1] for NUT class of BHs, the global conserved charges computed via conformal completion method [8] could be defined as
\[{\rm Komar\ mass:}\ m = M\] \[{\rm Angular\ momentum:}\,m\,n = J_{n}\] \[{\rm Gravitomagnetic\ charge:}\ n = N \tag{15}\] \[{\rm Dual\ (magnetic)\ mass:}\ \tilde{M} = n. \tag{16}\]
It means that NUT charge is a thermodynamic multihair. It further implies that simultaneously it has both rotation-like and electromagnetic charge-like properties.
Taking the effect of Eq. (16), the metric can be written as
\[ds^{2} = -{\cal A}\,\left(dt+2N\,\cos\theta d\phi\right)^{2}+\frac{dr^{2}} {\cal A}+\left(r^{2}+N^{2}\right)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right)\, \tag{17}\]
where the function \({\cal A}\) is given by
\[{\cal A} = \frac{1}{r^{2}+N^{2}}\left(r^{2}-N^{2}-2Mr\right) \tag{18}\]
The BH horizons are at
\[r_{\pm}=M\pm\sqrt{M^{2}+N^{2}}\ {\rm and}\ r_{+}>r_{-} \tag{19}\]
\(r_{+}\) is called EH and \(r_{-}\) is called CH. Thus the entropy [20, 21] of the BH is computed under the formalism stated in Eq. (16)
\[{\cal S}_{\pm} = 2\pi\left[M^{2}+N^{2}\pm\sqrt{M^{4}+J_{N}^{2}}\right]. \tag{20}\]
One can compute the BH temperature by defining the partition function of a well-defined canonical ensemble. A BH solution carrying a topological charge should also be described by BH thermodynamics; this approach was initiated by Gibbons and Hawking in 1977 [22]. They proposed that the partition function of a canonical ensemble for BHs should be calculated from its Euclidean action in the form of the gravitational path integral. It is described by
\[{\cal Z} = e^{-\beta F}=\int D[g]e^{-\frac{I}{h}}\sim e^{-\frac{I}{h}}, \tag{21}\]
where \(I\) and \(F\) are the Euclidean action and the free energy of the BH. The period \(\beta\) of the Euclidean time is the inverse of the BH temperature, i.e. \(\beta=\frac{1}{T_{+}}\). For the Taub-NUT BH and for both the horizons \({\cal H}^{\pm}\), \(\frac{I_{\pm}}{\beta_{\pm}}=F_{\pm}=\frac{m}{2}\), where \(\beta_{\pm}=\frac{1}{T_{\pm}}\) is the period of the Euclidean time coordinate [23].
In our earlier work [21], we proved that mass parameter can be expressed as a function of area (or entropy), new conserved charges and NUT parameter for both the horizons i.e. \(M=M({\cal S}_{\pm},J_{N},N)\). Then the first law of thermodynamics is satisfied for both the horizons as
\[dM = T_{+}d{\cal S}_{+}+\omega_{+}dJ_{N}+\psi_{+}dN, \tag{22}\] \[= -T_{-}d{\cal S}_{-}+\omega_{-}dJ_{N}+\psi_{-}dN \tag{23}\]
where
\[T_{\pm} = \frac{r_{\pm}-M}{2\pi\left(r_{\pm}^{2}+N^{2}\right)},\ \omega_{\pm}=\frac{N}{r_{\pm}^{2}+N^{2}},\ \psi_{\pm}=-\frac{2N\,r_{\pm}}{r_{\pm}^{2}+N^{2}} \tag{24}\]
As for the KN BH, there exists a symmetry for the NUT class of BHs between the following physical quantities under exchange of the two physical horizons:
\[T_{-} = -T_{+}|_{r_{+}\leftrightarrow r_{-}} \tag{25}\] \[{\cal S}_{-} = {\cal S}_{+}|_{r_{+}\leftrightarrow r_{-}}\] (26) \[\omega_{-} = \omega_{+}|_{r_{+}\leftrightarrow r_{-}}\] (27) \[\psi_{-} = \psi_{+}|_{r_{+}\leftrightarrow r_{-}} \tag{28}\]
This indicates that if the first law of thermodynamics is satisfied on the event horizon, then it must be satisfied on the Cauchy horizon. Using Eq. (22) and Eq. (23) we find
\[d\left({\cal S}_{+}{\cal S}_{-}\right)=\left(\frac{{\cal S}_{-}T_{-}-{\cal S}_{+}T_{+}}{T_{+}T_{-}}\right)\,dM+\left(\frac{{\cal S}_{+}T_{+}\omega_{-}-{\cal S}_{-}T_{-}\omega_{+}}{T_{+}T_{-}}\right)\,dJ_{N}+\left(\frac{{\cal S}_{+}T_{+}\psi_{-}-{\cal S}_{-}T_{-}\psi_{+}}{T_{+}T_{-}}\right)\,dN\,. \tag{29}\]
It can be rewritten as
\[d{\cal F}=\left(\frac{{\cal S}_{-}T_{-}-{\cal S}_{+}T_{+}}{4\pi^{2}\,T_{+}T_{-}}\right)\,dM+\left(\frac{{\cal S}_{+}T_{+}\omega_{-}-{\cal S}_{-}T_{-}\omega_{+}}{4\pi^{2}\,T_{+}T_{-}}\right)\,dJ_{N}+\left(\frac{{\cal S}_{+}T_{+}\psi_{-}-{\cal S}_{-}T_{-}\psi_{+}}{4\pi^{2}\,T_{+}T_{-}}\right)\,dN\,. \tag{30}\]
where
\[{\cal F} = \frac{{\cal S}_{+}{\cal S}_{-}}{4\pi^{2}} \tag{31}\]
This \({\cal F}\) is defined as _EPF_ for TNUT BH. We first defined the EPF from the first law of thermodynamics of both the horizons. Again we know for TNUT BH the entropy product of \({\cal H}^{\pm}\)[20] is
\[\frac{{\cal S}_{+}{\cal S}_{-}}{4\pi^{2}} = J_{N}^{2}+N^{4} \tag{32}\]
Thus
\[{\cal F} = J_{N}^{2}+N^{4} \tag{33}\]
where \({\cal F}\) is a function of \(J_{N}\) and \(N\). The term _EPF_ was first introduced in [11, 10] for the investigation of the holographic pictures of the four-dimensional dyonic RN BH. Since \({\cal F}\) is independent of the mass parameter, Eq. (30) implies that
\[T_{+}{\cal S}_{+} = T_{-}{\cal S}_{-}. \tag{34}\]
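As a quick consistency check (not part of the original derivation), the entropy product (33) and the relation (34) can be verified symbolically from the explicit horizon radii; a minimal sympy sketch:

```python
import sympy as sp

M, N = sp.symbols('M N', positive=True)
rp = M + sp.sqrt(M**2 + N**2)                              # r_+
rm = M - sp.sqrt(M**2 + N**2)                              # r_-
Sp, Sm = sp.pi * (rp**2 + N**2), sp.pi * (rm**2 + N**2)    # S_+, S_-
Tp = (rp - M) / (2 * sp.pi * (rp**2 + N**2))               # T_+
Tm = (M - rm) / (2 * sp.pi * (rm**2 + N**2))               # T_- taken positive, cf. Eq. (25)

print(sp.expand(Sp * Sm / (4 * sp.pi**2)))   # -> M**2*N**2 + N**4, i.e. J_N**2 + N**4 (Eq. (33))
print(sp.simplify(Tp * Sp - Tm * Sm))        # -> 0, verifying Eq. (34)
```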
To derive the Smarr formula for TNUT BH first we have to consider the dimensions of the following thermodynamic parameters:
\[[M]=[N]=L,\ [{\cal S}_{+}]=[{\cal S}_{-}]=[J_{N}]=L^{2} \tag{35}\]
Thus, taking \(({\cal S}_{+},J_{N},N)\) as independent parameters, \(M\) is a homogeneous function of \(({\cal S}_{+}^{\frac{1}{2}},J_{N}^{\frac{1}{2}},N)\) of degree 1. Then Euler's homogeneous function theorem indicates that
\[M = 2{\cal S}_{+}\frac{\partial M}{\partial{\cal S}_{+}}+2J_{N}\frac {\partial M}{\partial J_{N}}+N\frac{\partial M}{\partial N}. \tag{36}\]
Hence the Smarr formula is
\[M = 2\left(T_{+}{\cal S}_{+}+\omega_{+}J_{N}\right)+\Psi_{+}N. \tag{37}\]
Similarly, the Smarr formula for the Cauchy horizon is given by
\[M = 2\left(-T_{-}{\cal S}_{-}+\omega_{-}J_{N}\right)+\Psi_{-}N. \tag{38}\]
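The Smarr formula (37) can likewise be verified symbolically with the horizon quantities of Eq. (24); the following sympy sketch is an illustrative check, not part of the derivation.

```python
import sympy as sp

M, N = sp.symbols('M N', positive=True)
rp = M + sp.sqrt(M**2 + N**2)                     # event horizon radius
S = sp.pi * (rp**2 + N**2)                        # S_+
T = (rp - M) / (2 * sp.pi * (rp**2 + N**2))       # T_+
w = N / (rp**2 + N**2)                            # omega_+
Psi = -2 * N * rp / (rp**2 + N**2)                # psi_+
JN = M * N                                        # J_N = M N

print(sp.simplify(2 * (T * S + w * JN) + Psi * N - M))   # -> 0, verifying Eq. (37)
```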
Using the Smarr formulae for the event horizon and Cauchy horizons (37) and (38), we can see that (29) implies that
\[{\rm d}\ln({\cal F}) = \frac{(\omega_{-}-\omega_{+}){\rm d}J_{N}+(\Psi_{-}-\Psi_{+}){\rm d }N}{2\pi^{2}(\omega_{-}-\omega_{+})J_{N}+\pi^{2}N(\Psi_{-}-\Psi_{+})}. \tag{39}\]
When \(J_{N}\neq 0\) and \(N\neq 0\), in general we can write
\[\frac{\omega_{-}-\omega_{+}}{\Psi_{-}-\Psi_{+}} = g(J_{N},N), \tag{40}\]
where \(g(J_{N},N)\) is some unknown function of \((J_{N},N)\). Thus we have
\[{\rm d}\ln({\cal F})=\frac{g\,{\rm d}J_{N}+{\rm d}N}{2\pi^{2}g\,J_{N}+N\pi^{2}}. \tag{41}\]
Consistency requires that
\[\partial_{N}\frac{g}{2J_{N}g+N}=\partial_{J_{N}}\frac{1}{2J_{N}\,g+N}, \tag{42}\]
which leads to
\[2J_{N}\partial_{J_{N}}\,g+N\partial_{N}g=-g. \tag{43}\]
This means that \(g\) is the homogeneous function of \((J_{N}^{\frac{1}{2}},N)\) with degree \(-1\), thus we may write \(g(J_{N},N)=N^{-1}f(N^{2}/J_{N})\). Integrating (41) we get
\[{\cal F}\propto J_{N}^{2}\,f(N^{2}/J_{N}), \tag{44}\]
where
\[\ln f(x)=\int_{0}^{x}\frac{{\rm d}y}{2f(y)+y}. \tag{45}\]
Hence the EPF \({\cal F}\) is the homogeneous function of \((J_{N},N^{2})\) with degree 2, thus quasi-homogeneous function of \((J_{N},N)\).
Following Refs. [4, 5], we can define the thermodynamic parameters in the left-moving and right-moving sectors for TNUT BH as
\[T_{L,R} = \frac{T_{+}T_{-}}{T_{-}\pm T_{+}},\] \[{\cal S}_{L,R} = \frac{{\cal S}_{+}\pm{\cal S}_{-}}{2},\] \[\omega_{L,R} = \frac{T_{-}\,\omega_{+}\pm T_{+}\,\omega_{-}}{2(T_{-}\pm T_{+})}\] \[\psi_{L,R} = \frac{T_{-}\,\psi_{+}\pm T_{+}\,\psi_{-}}{2(T_{-}\pm T_{+})} \tag{46}\]
It was demonstrated in Refs. [4, 5] that the first laws of thermodynamics for the left-moving and right-moving sectors are satisfied separately in the context of rotating BHs in four dimensions, and also that there exist separate Smarr formulae for the two sectors. Here, we _show_ that such relations exist for the _NUT class of BHs_ also. Thus, in terms of the left-moving and right-moving modes of the dual CFT, the first law of BH thermodynamics can be rewritten as
\[\frac{dM}{2} = T_{L}\,d{\cal S}_{L}+\omega_{L}\,dJ_{N}+\psi_{L}\,dN \tag{47}\] \[= T_{R}\,d{\cal S}_{R}+\omega_{R}\,dJ_{N}+\psi_{R}\,dN\.\]
Now we shall keep \(J_{N}\) as an invariant quantity; taking a perturbation of the type \((dN,dJ_{N})=dN(1,0)\), we then get from the first law
\[dN=\frac{T_{L}}{\psi_{R}-\psi_{L}}\,d{\cal S}_{L}-\frac{T_{R}}{ \psi_{R}-\psi_{L}}\,d{\cal S}_{R}. \tag{48}\]
From the above computations, we can read the information of the dual CFT. Also, we can predict two important facts:
(a) \({\cal S}_{L}\) and \({\cal S}_{R}\) in Eq. (46) are the exact entropies of the left and right moving sectors of dual CFT [24, 4, 5, 25].
(b) When we keep \(J_{N}\) fixed, we get the \(N\) picture, i.e. the NUT picture. There exists the first law of thermodynamics
\[dN = T_{L}^{N}d{\cal S}_{L}-T_{R}^{N}d{\cal S}_{R}\,. \tag{49}\]
where \(T_{L}^{N}\) and \(T_{R}^{N}\) are the exact left-moving and right-moving temperatures of the dual CFT. It was first reported in [9, 11] for four dimensional KN BH.
Let us assume that the CFT entropies can be reproduced by Cardy's formula
\[{\cal S}_{L} = \frac{\pi^{2}}{3}c_{L}^{N}T_{L}^{N}. \tag{50}\]
and
\[{\cal S}_{R} = \frac{\pi^{2}}{3}c_{R}^{N}T_{R}^{N}. \tag{51}\]
from which we can derive the central charges. It can easily be proved that if the entropy product function is mass-independent, then the left-moving and right-moving sector central charges must be equal [9], i.e.
\[c_{L}^{N}=c_{R}^{N} \tag{52}\]
Putting \({\cal S}_{+}={\cal S}_{L}+{\cal S}_{R}\) and \({\cal S}_{-}={\cal S}_{L}-{\cal S}_{R}\) in Eq. (30) and taking variations on both sides of the equation while keeping \(J_{N}\) constant, we obtain
\[\left(\frac{\partial{\cal F}}{\partial N}\right)dN = \frac{{\cal S}_{L}\,d{\cal S}_{L}-{\cal S}_{R}\,d{\cal S}_{R}}{2 \pi^{2}} \tag{53}\]
Taking advantage of Eq. (50), Eq. (51), Eq. (52) and Eq. (49), we get the central charges
\[c_{L}^{N} = 6\left(\frac{\partial{\cal F}}{\partial N}\right)=24N^{3} \tag{54}\] \[c_{R}^{N} = 6\left(\frac{\partial{\cal F}}{\partial N}\right)=24N^{3} \tag{55}\]
These are the central charges of the dual CFT in the NUT picture, and we have proved that they are equal. Similarly, keeping \(N\) constant, we find the \(J_{N}\) picture, in which the dual CFT has the central charges
\[c_{L}^{J_{N}} = 6\left(\frac{\partial{\cal F}}{\partial J_{N}}\right)=12J_{N} \tag{56}\] \[c_{R}^{J_{N}} = 6\left(\frac{\partial{\cal F}}{\partial J_{N}}\right)=12J_{N} \tag{57}\]
From the above analysis, remarkably we obtain a relation between the EPF and the central charges as
\[c_{L}^{i} = 6\left(\frac{\partial {\cal F}}{\partial {\cal N}_{i}}\right) \tag{58}\] \[c_{R}^{i} = 6\left(\frac{\partial {\cal F}}{\partial {\cal N}_{i}}\right) \tag{59}\]
This is the master formula of this work. Here \({\cal N}_{i}\) is one of the integer-valued charges appearing in the first laws; it can be the NUT charge or the new conserved charge \(J_{N}\). Thus
\[c_{L}^{i} = c_{R}^{i} \tag{60}\]
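The master formula can be checked directly by differentiating the EPF (33); a short illustrative sympy computation:

```python
import sympy as sp

JN, N = sp.symbols('J_N N', positive=True)
F = JN**2 + N**4                # EPF of the TNUT BH, Eq. (33)

print(6 * sp.diff(F, N))        # -> 24*N**3   (N picture, Eqs. (54)-(55))
print(6 * sp.diff(F, JN))       # -> 12*J_N    (J_N picture, Eqs. (56)-(57))
```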
It must be mentioned that the Eq. (47) indicates how the BH responds to various types of perturbations. If we could consider the perturbations carrying only NUT charges, or more precisely \((dN,dJ_{N})=dN(1,0)\), the corresponding first law [Eq. 49] gives the exact left-moving and right-moving temperatures, \(T_{L}^{N}\) and \(T_{R}^{N}\) of dual CFT and then the central charges \(c_{L,R}^{N}\) in the NUT picture.
On the other side, if we consider the perturbations carrying only new conserved charges, \(J_{N}\) or more precisely \((dN,dJ_{N})=dJ_{N}(0,1)\), the first law gives the exact left-moving and right-moving temperatures, \(T_{L}^{J_{N}}\) and \(T_{R}^{J_{N}}\) of dual CFT in the \(J_{N}\) picture and then the central charges \(c_{L,R}^{J_{N}}\).
Now, if we consider the perturbations simultaneously, i.e. \((dN,dJ_{N})=d{\cal N}(a,b)\) with \(a\), \(b\) two coprime integers, then from the first laws we can write
\[\frac{dM}{2} = T_{L}\,d{\cal S}_{L}+(a\,\omega_{L}\,dJ_{N}+b\,\psi_{L}\,dN) \tag{61}\] \[= T_{R}\,d{\cal S}_{R}+(a\,\omega_{R}\,dJ_{N}+b\,\psi_{R}\,dN)\;\;.\]
Using a similar procedure we get the dual picture i.e. \((a,b)\) picture, in which the CFT is that of the central charges
\[c_{L}^{(a,b)} = a\,c_{L}^{N}+b\,c_{L}^{J_{N}} \tag{62}\] \[c_{R}^{(a,b)} = a\,c_{R}^{N}+b\,c_{R}^{J_{N}} \tag{63}\]
Now we will introduce Bézout's identity, which states that for every pair of coprime integers \(a\), \(b\) there exists another pair of coprime integers \(c\), \(d\) such that \(ad-bc=1\). Thus the \((a,b)\) picture should be viewed as being generated from the two elementary (1,0) and (0,1) pictures by an \(SL(2,Z)\) transformation:
\[\left(\begin{array}{c}c_{L,R}^{(a,b)}\\ c_{L,R}^{(c,d)}\end{array}\right)=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\left(\begin{array}{c}c_{L,R}^{(1,0)}\\ c_{L,R}^{(0,1)}\end{array}\right),\;\;\;\;\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in SL(2,Z).\]
So, for the NUT class of BHs, we can generate more holographic descriptions labeled by a pair of coprime integers \((a,b)\). These pictures are built on the two elementary pictures and are related to each other by \(SL(2,Z)\) duality. More holographic pictures point towards multiple holographic duals, which play a key role in understanding the quantum nature of gravity.
## 3 Exact calculation of central charges for TNUT BH
In the previous section we derived the central charges by using EPF. In the present section we will reverify the above result by direct calculation in both sectors. To derive the microscopic entropy via the Cardy formula we have to calculate the following important thermodynamic parameters in left-moving sectors and right-moving sectors:
\[T_{L} = \frac{1}{4\pi\left(r_{+}+r_{-}\right)},\,\,\,T_{R}=\frac{1}{4\pi\left(r_{+}-r_{-}\right)}\] \[{\cal S}_{L} = \frac{\pi(r_{+}-r_{-})^{2}}{2},\,\,\,{\cal S}_{R}=\frac{\pi(r_{+}^{2}-r_{-}^{2})}{2}\] \[\omega_{L} = 0,\,\,\,\omega_{R}=\frac{N}{(r_{+}-r_{-})^{2}}\] \[\Psi_{L} = -\frac{N}{r_{+}+r_{-}},\,\,\,\Psi_{R}=-\frac{N(r_{+}+r_{-})}{(r_{+}-r_{-})^{2}}\,\,. \tag{64}\]
There exist two holographic pictures for the TNUT BH, namely the \(N\) picture and the \(J_{N}\) picture. Now we have to calculate the dimensionless temperatures of the left-moving and right-moving sectors of the dual CFT in the \(J_{N}\) picture. They are found to be
\[T_{L}^{J_{N}} = \frac{1}{4\pi N}\frac{(r_{+}-r_{-})^{2}}{(r_{+}+r_{-})}\,\,. \tag{65}\]
and
\[T_{R}^{J_{N}} = \frac{1}{4\pi N}\left(r_{+}-r_{-}\right)\,\,. \tag{66}\]
These are exactly the microscopic temperatures of the dual CFT in the TNUT spacetime.
Now we are ready to determine the central charges in left and right moving sectors of the TNUT/CFT correspondence via the Cardy formula
\[{\cal S}_{L}^{J_{N}} = \frac{\pi^{2}}{3}c_{L}^{J_{N}}T_{L}^{J_{N}},\,\,\,{\cal S}_{R}^{J _{N}}=\frac{\pi^{2}}{3}c_{R}^{J_{N}}T_{R}^{J_{N}}\,\,. \tag{67}\]
Hence the central charges of dual CFT becomes
\[c_{L}^{J_{N}} = 12J_{N},\,\,\,c_{R}^{J_{N}}=12J_{N}\,\,. \tag{68}\]
This implies that the central charges of the left-moving and right-moving sectors of the dual CFT are the same for the TNUT BH. This is a remarkable result for the _TNUT BH_, and it is possible only due to the introduction of the new conserved charge \(J_{N}\). The result is exactly the same as in the case of the Kerr BH [13] and the KN BH [9]. This observation tells us that the TNUT BH is dual to a 2D CFT with \(c_{L}^{J_{N}}=c_{R}^{J_{N}}=12J_{N}\) at temperatures \((T_{L}^{J_{N}},T_{R}^{J_{N}})\) for each value of \(M\) and \(J_{N}\).
Analogously, for the \(N\) picture, the dimensionless temperatures of the left-moving and right-moving sectors of the dual CFT are
\[T_{L}^{N} = \frac{(r_{+}-r_{-})^{2}}{16\pi N^{3}}\,\,. \tag{69}\]
and
\[T_{R}^{N} = \frac{\left(r_{+}^{2}-r_{-}^{2}\right)}{16\pi N^{3}}\,\,. \tag{70}\]
These are exactly the microscopic temperatures of the dual CFT in the \(N\) picture for the TNUT spacetime. Therefore the central charges of the dual CFT are computed to be
\[c_{L}^{N} = 24N^{3},\,\,\,c_{R}^{N}=24N^{3}. \tag{71}\]
These results are in complete agreement with the results obtained in the previous section. With the above thermodynamic parameters one can easily check that for the TNUT BH
\[\frac{{\cal S}_{L}}{{\cal S}_{R}}=\frac{T_{L}}{T_{R}} \tag{72}\]
which is indeed satisfied. Here \({\cal S}_{L,R}\) are the left- and right-moving entropies of the dual CFT, and \(T_{L,R}\) are the corresponding CFT temperatures. Now we will give more examples of derivations of central charges by using the _EPF_. To do that we first consider the RN-NUT BH.
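Before moving on, the sketch below (ours, not part of the original derivation) provides a symbolic cross-check of the TNUT results above. It assumes the horizon locations \(r_{\pm}=M\pm\sqrt{M^{2}+N^{2}}\), which is consistent with the \(Q\to 0\) limit of the RN-NUT horizons quoted in the next section, and verifies that the Cardy formula applied to Eqs. (64)-(70) reproduces \(c=12J_{N}\) and \(c=24N^{3}\), together with the ratio identity of Eq. (72).

```python
# Symbolic cross-check of the TNUT Cardy-formula results, assuming
# r_pm = M +/- sqrt(M^2 + N^2) for the horizon locations.
import sympy as sp

M, N = sp.symbols('M N', positive=True)
rp = M + sp.sqrt(M**2 + N**2)
rm = M - sp.sqrt(M**2 + N**2)

# left/right entropies and temperatures, Eq. (64)
S_L = sp.pi * (rp - rm)**2 / 2
S_R = sp.pi * (rp**2 - rm**2) / 2
T_L = 1 / (4 * sp.pi * (rp + rm))
T_R = 1 / (4 * sp.pi * (rp - rm))

# dimensionless CFT temperatures, Eqs. (65)-(66) and (69)-(70)
T_L_JN = (rp - rm)**2 / (4 * sp.pi * N * (rp + rm))
T_R_JN = (rp - rm) / (4 * sp.pi * N)
T_L_N = (rp - rm)**2 / (16 * sp.pi * N**3)
T_R_N = (rp**2 - rm**2) / (16 * sp.pi * N**3)

cardy = lambda S, T: sp.simplify(3 * S / (sp.pi**2 * T))
print(cardy(S_L, T_L_JN), cardy(S_R, T_R_JN))  # 12*M*N in both sectors, i.e. 12*J_N
print(cardy(S_L, T_L_N), cardy(S_R, T_R_N))    # 24*N**3 in both sectors
print(sp.simplify(S_L / S_R - T_L / T_R))      # 0, i.e. S_L/S_R = T_L/T_R
```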
## 4 Central charges for RN-NUT BH from EPF
For RN-NUT BH, the metric function \({\cal A}\) should be
\[{\cal A} = \frac{1}{r^{2}+N^{2}}\left(r^{2}-N^{2}-2Mr+Q^{2}\right) \tag{73}\]
where \(Q\) is the purely electric charge. Now the BH horizons are located at
\[r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}+N^{2}} \tag{74}\]
Now the entropy [20, 21] of \({\cal H}^{\pm}\) is computed under the criterion stated in Eq. (16)
\[{\cal S}_{\pm} = \pi\left[2\left(M^{2}+N^{2}\right)-Q^{2}\pm 2\sqrt{M^{4}-M^{2}Q^{2 }+J_{N}^{2}}\right]. \tag{75}\]
The EPF for this BH is
\[{\cal F} = J_{N}^{2}+\left(\frac{Q^{2}}{2}-N^{2}\right)^{2} \tag{76}\]
For the \(J_{N}\) picture, we get the central charges for the left-moving and right-moving sectors
\[c_{L}^{J_{N}} = 6\left(\frac{\partial{\cal F}}{\partial J_{N}}\right)=12J_{N} \tag{77}\] \[c_{R}^{J_{N}} = 6\left(\frac{\partial{\cal F}}{\partial J_{N}}\right)=12J_{N} \tag{78}\]
This proves that the central charges are equal in both sectors, i.e.
\[c_{L}^{J_{N}} = c_{R}^{J_{N}} \tag{79}\]
Similarly, for the \(Q\) picture, we get equal central charges for the left-moving and right-moving sectors
\[c_{L}^{Q} = 12Q\left(\frac{Q^{2}}{2}-N^{2}\right) \tag{80}\] \[c_{R}^{Q} = 12Q\left(\frac{Q^{2}}{2}-N^{2}\right) \tag{81}\]
Analogously, for the \(N\) picture, the equal central charges for the left-moving and right-moving sectors are
\[c_{L}^{N} = 24N\left(N^{2}-\frac{Q^{2}}{2}\right) \tag{82}\] \[c_{R}^{N} = 24N\left(N^{2}-\frac{Q^{2}}{2}\right) \tag{83}\]
Alternatively, we could say that the equality of the central charges means the EPF is universal (mass-independent).
## 5 Central charges for KNUT BH from EPF
The metric of KNUT BH is
\[ds^{2} = -\frac{\Delta_{r}}{\rho^{2}}\,\left[dt-Xd\phi\right]^{2}+\frac{\sin^ {2}\theta}{\rho^{2}}\,\left[(r^{2}+a^{2}+N^{2})\,d\phi-a\,dt\right]^{2}+\rho^{ 2}\,\left[\frac{dr^{2}}{\Delta_{r}}+d\theta^{2}\right]. \tag{84}\]
where
\[a \equiv \frac{J}{M},\,\rho^{2}\equiv r^{2}+(N+a\,\cos\theta)^{2} \tag{85}\] \[\Delta_{r} \equiv r^{2}-2Mr+a^{2}-N^{2}\] (86) \[X \equiv a\,\sin^{2}\theta-2N\,\cos\theta. \tag{87}\]
and the global conserved charges are the Komar mass \(M\), angular momentum \(J=aM\) and gravitomagnetic charge or dual mass or NUT parameter \(N\).
The horizons are located at
\[r_{\pm} = M\pm\sqrt{M^{2}-a^{2}+N^{2}} \tag{88}\]
Introducing \(J_{N}=MN\), the entropy of \({\cal H}^{\pm}\) is derived to be [20, 21]
\[{\cal S}_{\pm} = 2\pi\left[(M^{2}+N^{2})\pm\sqrt{M^{4}+J_{N}^{2}-J^{2}}\right]. \tag{89}\]
The EPF for KNUT BH is defined as
\[{\cal F} = J^{2}+J_{N}^{2}+N^{4} \tag{90}\]
Hence, for the \(J\) picture, we find equal central charges for the left-moving and right-moving sectors
\[c_{L}^{J} = 12J \tag{91}\] \[c_{R}^{J} = 12J \tag{92}\]
Similarly, for the \(J_{N}\) picture, we find equal central charges for the left-moving and right-moving sectors
\[c_{L}^{J_{N}}=c_{R}^{J_{N}}=12J_{N} \tag{93}\]
Analogously, for the \(N\) picture, the equal central charges for the left-moving and right-moving sectors are
\[c_{L}^{N}=c_{R}^{N}=24N^{3} \tag{94}\]
## 6 Central charges for KNNUT BH from EPF
The metric for KNNUT BH can be written as
\[ds^{2} = -\frac{\Delta_{r}}{\rho^{2}}\,\left[dt-Xd\phi\right]^{2}+\frac{ \sin^{2}\theta}{\rho^{2}}\,\left[(r^{2}+a^{2}+N^{2})\,d\phi-a\,dt\right]^{2}+ \rho^{2}\,\left[\frac{dr^{2}}{\Delta_{r}}+d\theta^{2}\right]. \tag{95}\]
where
\[a \equiv \frac{J}{M},\,\rho^{2}\equiv r^{2}+(N+a\,\cos\theta)^{2} \tag{96}\] \[\Delta_{r} \equiv r^{2}-2Mr+a^{2}+Q^{2}-N^{2}\] (97) \[X \equiv a\,\sin^{2}\theta-2N\,\cos\theta. \tag{98}\]
and here the conserved charges are Komar mass \(M\), angular momentum \(J=aM\), gravito-electric charge \(Q\) and the gravitomagnetic charge or dual (magnetic) mass or NUT charge \(N\). The BH horizons are
\[r_{\pm} \equiv M\pm\sqrt{M^{2}-a^{2}-Q^{2}+N^{2}} \tag{99}\]
Incorporating \(J_{N}=MN\), the entropy of \({\cal H}^{\pm}\) is derived as [20, 21]
\[{\cal S}_{\pm} = 2\pi\left[2(M^{2}+N^{2})-Q^{2}\pm 2\sqrt{M^{4}+J_{N}^{2}-J^{2}-M^{ 2}Q^{2}}\right]. \tag{100}\]
The EPF for KNNUT BH is defined to be
\[{\cal F} = \left[J^{2}+J_{N}^{2}+\left(\frac{Q^{2}}{2}-N^{2}\right)^{2}\right] \tag{101}\]
Hence we find equal central charges for the left-moving and right-moving sectors in the \(J\) picture
\[c_{L}^{J}=c_{R}^{J}=12J=12aM=12a\sqrt{a^{2}+Q^{2}-N^{2}} \tag{102}\]
These central charges in the \(J\) picture are in complete agreement with the central charges derived by using the Brown-Henneaux technique, which makes use of the asymptotic symmetry group (ASG) [26] (briefly derived in Appendix C). Similarly we find equal central charges for the left-moving and right-moving sectors in the \(J_{N}\) picture
\[c_{L}^{J_{N}}=c_{R}^{J_{N}}=12J_{N} \tag{103}\]
Analogously, in the \(N\) picture, the equal central charges for the left-moving and right-moving sectors are
\[c_{L}^{N}=c_{R}^{N}=24N\left(N^{2}-\frac{Q^{2}}{2}\right) \tag{104}\]
Finally, for the \(Q\) picture, we get equal central charges for the left-moving and right-moving sectors
\[c_{L}^{Q}=c_{R}^{Q}=12Q\left(\frac{Q^{2}}{2}-N^{2}\right) \tag{105}\]
The above analysis tells us that using the EPF one can derive central charges for various pictures of conserved charges. It is also remarkable that equal central charges mean the _EPF is mass-independent_, while unequal central charges mean the _EPF is mass-dependent_. This is another way one can verify whether the entropy product of \({\cal H}^{\pm}\) is mass-independent or not.
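As a quick illustration of this statement, the snippet below (ours, assuming a standard computer-algebra package) differentiates the KNNUT EPF of Eq. (101) and reproduces the central charges quoted in Eqs. (102)-(105) for all four pictures.

```python
# Check that c = 6 * dF/d(charge) applied to the KNNUT EPF reproduces Eqs. (102)-(105).
import sympy as sp

J, JN, Q, N = sp.symbols('J J_N Q N', positive=True)
F = J**2 + JN**2 + (Q**2 / 2 - N**2)**2

print(6 * sp.diff(F, J))             # 12*J                      (J picture)
print(6 * sp.diff(F, JN))            # 12*J_N                    (J_N picture)
print(sp.factor(6 * sp.diff(F, N)))  # 12*N*(2*N**2 - Q**2) = 24*N*(N**2 - Q**2/2)
print(sp.factor(6 * sp.diff(F, Q)))  # 6*Q*(Q**2 - 2*N**2)  = 12*Q*(Q**2/2 - N**2)
```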
## 7 Conclusions
It was proposed that a generic four-dimensional TNUT BH could be explicitly expressed in terms of three or four different types of thermodynamic hairs, defined as the Komar mass (\(M=m\)), the angular momentum (\(J_{n}=mn\)), the gravitomagnetic charge (\(N=n\)), and/or the dual (magnetic) mass (\(\tilde{M}=n\)). In this context, we defined the EPF for a TNUT BH. Using this feature and the above proposal, we evaluated the _central charges_ of the dual CFT by virtue of Cardy's formula.
Remarkably, for the NUT class of BHs, we proved that there exists a _universal relation between the central charges and the EPF_, namely \(c=6\left(\frac{\partial{\cal F}}{\partial{\cal W}_{i}}\right)\). We first showed that the entropy product function can be derived by using the first law of thermodynamics for both horizons. Also, we computed the various thermodynamic parameters in the left-moving and right-moving sectors and reverified the results obtained by using the EPF. Moreover, we examined the first laws that are satisfied on both horizons in the left-moving and right-moving sectors.
Furthermore, we introduced Bezout's identity and showed that for the NUT class of BHs we can generate more holographic pictures, described by a pair of coprime integers \((a,b)\). Again, the existence of more holographic pictures for the NUT class of BHs points toward understanding the holographic nature of quantum gravity. At the end we showed that using the EPF one can derive central charges for various BHs of the NUT class. Remarkably, we found that the central charges are equal in both sectors, and this immediately indicates that the _EPF is mass-independent_. This is another way one can verify whether the entropy product is mass-independent or not. Finally, we showed that the EPF method is more convenient for deriving the central charges than the ASG method.
## 8 Appendix A
In this appendix section, we will calculate the central charges from the EPF for the four-dimensional RN BH, Kerr BH and KN BH, and contrast them with the central charges derived from the near-horizon geometry of the corresponding extremal BHs by using the asymptotic symmetry group (ASG) introduced by Brown and Henneaux [12].
1) _Kerr BH_
The EPF for Kerr BH is
\[{\cal F} = J^{2} \tag{106}\]
The equal central charges for the left-moving and right-moving sectors obtained in the \(J\) picture are
\[c_{L}^{J} = 12J \tag{107}\] \[c_{R}^{J} = 12J \tag{108}\]
This result is in complete agreement with the central charges obtained by the ASG analysis [2].
2) _KN BH_
The EPF for KN BH is given by
\[{\cal F} = J^{2}+\frac{Q^{4}}{4} \tag{109}\]
Hence for the \(J\) picture we find that the central charges for the left-moving and right-moving sectors are
\[c_{L}^{J} = 12J \tag{110}\] \[c_{R}^{J} = 12J \tag{111}\]
These results are in complete agreement with the central charges obtained by the ASG analysis in Eq. (116). Similarly, for the \(Q\) picture, one finds equal central charges for the left-moving and right-moving sectors
\[c_{L}^{Q} = 6Q^{3} \tag{112}\] \[c_{R}^{Q} = 6Q^{3} \tag{113}\]
Similarly, these results are also in complete agreement with the central charges obtained by the ASG analysis [13].
## 9 Appendix B: Calculation of Central charges by using Near-Horizon Geometry of Extremal KN BH
In this appendix section, we will review how to derive the central charges of the extremal KN BH in the near-horizon limit by using the ASG analysis, following the work of Hartman [13]. To do that we have to consider the near-horizon metric of an extremal, stationary, rotationally symmetric BH of the following form
\[ds^{2} = \Upsilon\left[-r^{2}\,dt^{2}+\frac{dr^{2}}{r^{2}}+\zeta\,d\theta^{2} \right]+\chi\left(d\phi+kr\,dt\right)^{2} \tag{114}\]
where
\[\Upsilon=\rho_{+}^{2},\,\zeta=1,\,\chi=\frac{(r_{+}^{2}+a^{2})^{2}\sin^{2} \theta}{\rho_{+}^{2}}\]
and we have defined
\[\rho_{+}^{2}=r_{+}^{2}+a^{2}\cos^{2}\theta,\,k=\frac{2ar_{+}}{r_{+}^{2}+a^{2} },\,r_{+}=m\]
Then the central charge in the \(J\) picture is derived to be
\[c_{L} = 3k\int_{0}^{\pi}\sqrt{\Upsilon\,\zeta\,\chi}\,d\theta \tag{115}\] \[= 12ar_{+}=12am=12J=12a\sqrt{a^{2}+Q^{2}} \tag{116}\]
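The angular integral in Eq. (115) can be checked symbolically; in the sketch below (ours), we use the fact that \(\sqrt{\Upsilon\,\zeta\,\chi}=(r_{+}^{2}+a^{2})\sin\theta\), since the \(\rho_{+}^{2}\) factors cancel and \(\sin\theta\geq 0\) on \([0,\pi]\).

```python
# Symbolic evaluation of c_L = 3k * Integral_0^pi sqrt(Upsilon*zeta*chi) dtheta
import sympy as sp

a, rplus, theta = sp.symbols('a r_plus theta', positive=True)
k = 2 * a * rplus / (rplus**2 + a**2)
integrand = (rplus**2 + a**2) * sp.sin(theta)   # = sqrt(Upsilon * zeta * chi)
cL = 3 * k * sp.integrate(integrand, (theta, 0, sp.pi))
print(sp.simplify(cL))                          # 12*a*r_plus, i.e. c_L = 12*a*r_+ = 12*J
```

The same cancellation occurs for the KN-NUT case in Appendix C, with \(r_{+}^{2}+a^{2}\) replaced by \(r_{+}^{2}+(a+n)^{2}\), again yielding \(c_{L}=12ar_{+}\).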
## 10 Appendix C: Calculation of Central charges by using Near-Horizon Geometry of Extremal KN-NUT BH
In this appendix section, we will review how to derive the central charges of the extremal KN-NUT BH in the near-horizon limit by using the ASG analysis, following the work [26]. To proceed we have to consider the near-horizon metric of an extremal, stationary, rotationally symmetric BH of the following form
\[ds^{2} = \Upsilon\left[-y^{2}\,d\tau^{2}+\frac{dy^{2}}{y^{2}}+\zeta\,d \theta^{2}\right]+\chi\left(d\phi+p\,y\,d\tau\right)^{2} \tag{117}\]
where
\[\Upsilon=\rho_{+}^{2},\,\zeta=1,\,\chi=\frac{[r_{+}^{2}+(a+n)^{2}]^{2}\sin^{2} \theta}{\rho_{+}^{2}}\]
and we have defined
\[\rho_{+}^{2}=r_{+}^{2}+(n+a\cos\theta)^{2},\,p=\frac{2ar_{+}}{r_{+}^{2}+(a+n) ^{2}},\,r_{+}=m\]
Then the central charge in the \(J\) picture is derived to be
\[c_{L} = 3p\int_{0}^{\pi}\sqrt{\Upsilon\,\zeta\,\chi}\,d\theta \tag{118}\] \[= 12ar_{+}=12am=12J \tag{119}\]
|
2309.11562 | The Quintuplet Annihilation Spectrum | We extend the Effective Field Theory of Heavy Dark Matter to arbitrary odd
representations of SU(2) and incorporate the effects of bound states. This
formalism is then deployed to compute the gamma-ray spectrum for a 5 of SU(2):
quintuplet dark matter. Except at isolated values of the quintuplet mass, the
bound state contribution to hard photons with energy near the dark-matter mass
is at the level of a few percent compared to that from direct annihilation.
Further, compared to smaller representations, such as the triplet wino, the
quintuplet can exhibit a strong variation in the shape of the spectrum as a
function of mass. Using our results, we forecast the fate of the thermal
quintuplet, which has a mass of $\sim$13.6 TeV. We find that existing H.E.S.S.
data should be able to significantly test the scenario, however, the final word
on this canonical model of minimal dark matter will likely be left to the
Cherenkov Telescope Array (CTA). | Matthew Baumgart, Nicholas L. Rodd, Tracy R. Slatyer, Varun Vaidya | 2023-09-20T18:02:11Z | http://arxiv.org/abs/2309.11562v2 | # The Quintuplet Annihilation Spectrum
###### Abstract
We extend the Effective Field Theory of Heavy Dark Matter to arbitrary odd representations of SU(2) and incorporate the effects of bound states. This formalism is then deployed to compute the gamma-ray spectrum for a \(\mathbf{5}\) of SU(2): quintuplet dark matter. Except at isolated values of the quintuplet mass, the bound state contribution to hard photons with energy near the dark-matter mass is at the level of a few percent compared to that from direct annihilation. Further, compared to smaller representations, such as the triplet wino, the quintuplet can exhibit a strong variation in the shape of the spectrum as a function of mass. Using our results, we forecast the fate of the thermal quintuplet, which has a mass of \(\sim\)13.6 TeV. We find that existing H.E.S.S. data should be able to significantly test the scenario, however, the final word on this canonical model of minimal dark matter will likely be left to the Cherenkov Telescope Array (CTA).
###### Contents
* 1 Introduction
* 2 Direct Annihilation
* 2.1 Review: An EFT for the endpoint spectrum from DM annihilation
* 2.2 Extension to the quintuplet
* 3 Bound State Formation
* 3.1 Generators and the potential
* 3.2 Bound state formation through emission of a photon
* 3.3 Bound state formation through emission of \(W\) and \(Z\) bosons
* 3.4 Capture rate results
* 4 Bound State Annihilation
* 4.1 The decay cascade
* 4.2 Operators for bound state decay
* 4.3 Wavefunction factors for bound state annihilation
* 4.4 Bound state decay rate into SM particles
* 4.5 Bound state transitions
* 4.6 Key points from the bound state decay calculation
* 5 The Combined Photon Spectrum and Numerical Results
* 5.1 Predictions for the spectrum and rate of photon production
* 5.2 Uncertainty associated with the velocity distribution of dark matter
* 5.3 Estimating the experimental sensitivity to quintuplet DM
* 6 Conclusions
* A Quintuplet Dark Matter: A Brief Review
* B Operators for higher-\(L\) Bound State annihilation
* C Unstable Particle Effective Theory
* D Proof of the Wilson Line Identity
* E Subtle Signs in the Bound-state Formation and Decay Calculations
* F Analytic Approximate Results for Annihilation and Bound-state Formation
## 1 Introduction
A fundamental difficulty in the search for the particle nature of dark matter (DM) is the enormous array of well motivated candidates that exist. There are many ways to forge a path through the vast model space. One guiding principle is a bottom-up notion of simplicity--a preference is given for the minimal modification to the Standard Model (SM) consistent with observations. As originally emphasized in Ref. [1], with the mantra of minimality, few models are simpler than quintuplet DM. Within this model, the SM is augmented by a single new field, a Majorana fermion transforming in the \(\mathbf{5}\) or quintuplet representation of SU(2), and as a gauge singlet under SU(3)\(\times\)U(1). This representation sits at a "sweet spot." It is large enough that no additional symmetries are needed to make it cosmologically stable, with decay operators only appearing at dimension-6. Simultaneously, it is small enough that the SU(2) Landau pole remains above the GUT scale. After electroweak symmetry breaking, the five Majorana fermions of the quintuplet reorganize themselves into a neutral Majorana fermion, \(\chi^{0}\) - the DM candidate - as well as singly and doubly-charged Dirac fermions, \(\chi^{+}\) and \(\chi^{++}\), which will play important roles in the phenomenology. The coupling between the DM and the SM is fixed by measurements of the electroweak coupling \(\alpha_{W}\), leaving the DM mass, \(M_{\chi}\), as the single free parameter in the model. Assuming a conventional thermal origin of the DM, even the mass becomes fixed to \(M_{\chi}=13.6\pm 0.8\) TeV [2; 3].1 However, if the quintuplet represents only a fraction of the DM or was produced through a non-standard cosmology, a wider range of TeV-scale masses becomes possible. A brief review of quintuplet DM and our conventions for it is provided in App. A.
Footnote 1: This value includes the contribution of bound states to the relic abundance. The value without these effects is closer to 9.6 TeV [4]. The value is, however, computed with the LO rather than NLO potentials of Refs. [5; 6] that we will make use of throughout, as described in Sec. 3.1.
The question of whether the quintuplet is the DM of our Universe can be probed through indirect detection searches for its annihilation signal, \(\chi^{0}\chi^{0}\to\gamma+X\). Here, \(\gamma\) represents the experimentally detected hard photon with energy comparable to \(M_{\chi}\), and \(X\) represents additional undetected radiation. In this context, we consider a hard photon to be one with energy near the mass of the DM, so \(E_{\gamma}\sim M_{\chi}\), and these photons are of particular interest as they give a line-like signature in the photon spectrum at the multi-TeV scale, which is not expected from any plausible astrophysical backgrounds.2 There are reasons to be optimistic that the coming decade may bring with it the detection of such TeV photons from DM annihilation, primarily on account of the rapid progress in the experimental program of searching for TeV-PeV photons. Building on the successes of DM searches with ongoing air and water Cherenkov telescopes such as HAWC [8; 9; 10; 11; 12; 13], H.E.S.S. [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], VERITAS [26; 27; 28; 29],
MAGIC [30; 31; 32; 33; 34; 35], and LHAASO [36; 37], the next ten years should see both substantial advances in sensitivity at existing experiments (see _e.g._ Refs. [38; 39]), and new facilities such as the Cherenkov Telescope Array (CTA) [40; 41; 42; 43; 44] and Southern Wide-Field Gamma-Ray Observatory (SWGO) [45; 46]. For a recent review, see Ref. [47]. The data these instruments will collect raises the exciting possibility that a signal from a heavy multi-TeV thermal DM candidate, such as the quintuplet, could be around the corner.
The central focus of the present work is to take this possibility seriously, and therefore derive a precise theoretical prediction for the photon spectrum from quintuplet annihilation. Even though the quintuplet represents at most a single-parameter model characterized solely by its mass, there are a number of effects that must be carefully accounted for in order to produce an accurate spectrum. Arguably the most important of these is the fact that as two \(\chi^{0}\) approach one another, once they come within a distance \(r\sim m_{W}^{-1}\), they can exchange virtual electroweak bosons, and thereby experience a potential that can significantly perturb their initial wavefunctions. This effect is referred to as Sommerfeld enhancement, and can modify the expected annihilation rate by orders of magnitude [48; 49; 50; 51; 52]. Further, the problem involves several hierarchies of scale, which necessitates an effective field theory treatment. The large hierarchy between the DM mass and the electroweak scale, \(M_{\chi}\gg m_{W}\), manifests itself through large Sudakov double logarithms, \(\alpha_{W}\ln^{2}(M_{\chi}/m_{W})\), which lead to a breakdown in naive perturbation theory. Perturbative control can be restored by resumming these logarithms using the techniques of effective field theory (EFT), in particular one built using Soft-Collinear Effective Theory (SCET) [53; 54; 55; 52]. This solution has previously been implemented in heavy dark-matter models of neutralinos, including the wino and higgsino [56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. An additional source of large logs occurs due to our insistence on hard photons near the endpoint of the spectrum, with \(E_{\gamma}\sim M_{\chi}\). More carefully, we focus on photons with \(z=E_{\gamma}/M_{\chi}\), such that \((1-z)\ll 1\). This kinematic restriction gives rise to large terms of the form \(\ln(1-z)\), which can also be resummed as shown in, for instance, Ref. [62]. In summary, the effects of Sommerfeld enhancement, electroweak Sudakov logarithms, and endpoint contributions have been incorporated in a number of scenarios of neutralino DM. For instance, Refs. [62; 64] developed an EFT framework for including all of these effects in the case of wino-like DM, where DM is a \(\mathbf{3}\) or triplet of SU(2), allowing the spectrum to be computed to next-to-leading logarithmic (NLL) accuracy. However, the power of EFT and factorization is that many aspects of those calculations should not depend on the specific DM representation, so that results for the wino - an SU(2) triplet - should lift to the quintuplet and in fact, as we will discuss, to any higher representation. In Sec. 2 of the present work we will demonstrate this explicitly, and in particular compute the NLL quintuplet direct annihilation spectrum.
The quintuplet also represents an opportunity to extend this formalism to incorporate an additional (potentially) important source of photons: bound state formation and decay. The starting point for these bound states is similar to that of the Sommerfeld enhancement to annihilation. As there, once the initial \(\chi^{0}\) pair comes within \(r\sim m_{W}^{-1}\), they can exchange a \(W\)-boson and thereby convert into a \(\chi^{+}\chi^{-}\) pair. This pair of charged particles can now emit a photon, capturing into a bound state, which can be thought of as the DM quintuplet analogue
of positronium. At higher DM masses, it is also possible to emit a \(W\) boson and capture into a charged bound state, and further on-shell \(Z\) boson emission becomes a significant contributor to the formation of neutral bound states. All of these bound states are generally unstable and may decay either to lighter bound states in the spectrum, or directly to SM particles. In principle the contribution of bound states could also be important for the wino, however, as shown in Ref. [68] it is irrelevant for present-day indirect detection (albeit possibly relevant in the early Universe, see for example Ref. [2]). For the thermal wino, there is only one bound state present in the spectrum, and capture to it via emission of a dipole photon is forbidden by spin statistics. For heavier winos or wino-like particles (with non-thermal histories to prevent overclosure of the Universe), there is a non-zero rate for capture to a spectrum of bound states via dipole photon emission (and at sufficiently high masses, \(W\) and \(Z\) emission). However, dimensionless numerical factors arising from the wavefunction overlap integral render these rates small compared to direct annihilation, even in the limit of very heavy DM where SU(2) can be treated as approximately unbroken.
For the quintuplet, it was already anticipated in Ref. [68] that the higher preferred mass would allow for a more complicated spectrum of bound states, and the suppression from dimensionless factors should not be as severe. Furthermore, previous studies of quintuplet DM have found that bound state formation plays a key role in setting the abundance of DM in the early Universe [2] and can be non-negligible in indirect searches for quintuplet annihilation in the Milky Way halo [69]. In the present work, we apply our SCET-based formalism to precisely compute contributions to the annihilation signal from bound state formation followed by decay. The existence of bound states in the quintuplet spectrum roughly induces the following independent (_i.e._ non-interfering) contribution to the annihilation spectrum,
\[\frac{d\sigma}{dz}=\sum_{B}\sigma(\chi^{0}\chi^{0}\to B+X_{\text{us}}) \frac{1}{\Gamma_{B}}\frac{d\Gamma_{B\rightarrow\gamma+X}}{dz}, \tag{1}\]
where the sum is over the set of bound states in the theory, \(\sigma(\chi^{0}\chi^{0}\to B+X_{\text{us}})\) is the production cross section for the bound state which happens via the emission of an unobserved ultra-soft particle \(X_{\text{us}}=\gamma,W,Z\), \(\Gamma_{B}\) is the total decay width for the bound state, and \(d\Gamma_{B\rightarrow\gamma+X}/dz\) is the differential decay rate to a measured photon carrying energy \(zM_{\chi}\). This result is essentially a consequence of the narrow width approximation; we will justify it in App. C. Taking Eq. (1) as given, the problem of introducing the bound state contribution is reduced to computing three ingredients. The first of these is the capture cross section, \(\sigma(\chi^{0}\chi^{0}\to B+X_{\text{us}})\), or heuristically the probability for a given bound state to form. We compute this using first-order perturbation theory in non-relativistic quantum mechanics, following Refs. [68; 70], with an extension to handle bound states produced by \(W/Z\) emission instead of photon emission. The second of these is determining the fully inclusive decay rate for the bound state, \(\Gamma_{B}\). To obtain this result we need both the decay rates for excited states to transition to lower-energy states (calculated using perturbation theory, similar to the initial capture process) and the rates for decay via annihilation into SM particles.
The final ingredient is to compute the photon spectrum of the bound state decay. As emphasized above we are primarily interested in the spectrum of hard photons, with \(E_{\gamma}\sim M_{B}/2\). This implies that the photons are kinematically restricted to the endpoint region, and this will allow us to draw on the SCET-based machinery for calculating endpoint spectra developed in Refs. [62; 64]. For this final step, we will also need to consider additional operators mediating the hard process that are not needed for the direct annihilation process. The reason for this is that the decaying bound states may have different spin and angular momentum quantum numbers than those configurations that dominate the direct annihilation.
Our calculation of the aforementioned bound state effects is divided between two sections: in Sec. 3 we compute the formation rates, while in Sec. 4 we determine the fate of each bound state, accounting for transitions between bound states and their ultimate decay to SM particles. When these results are combined, we observe that the contribution of the bound state annihilation to the hard photon spectrum is only a few percent of that from direct annihilation at most masses. The contribution from bound state annihilation is small for a number of reasons: firstly, bound-state formation from a \(\chi^{0}\chi^{0}\) initial state with \(L=0\) requires capture into an excited state, which is generically suppressed by a wavefunction overlap factor, and the transition from the \(L=0\) state to the lowest-lying \(L=1\) states also has an accidentally small numerical prefactor in the cross section--for the wino, this coefficient is zero in the limit of unbroken SU(2). If we instead consider \(L>0\) initial states, these contributions are velocity suppressed due to the small non-relativistic speed of DM in our Milky Way Galaxy. In addition, odd-\(L\) initial states must have \(S=1\) to ensure the asymmetry of the \(\chi^{0}\chi^{0}\) wavefunction. They thus give rise to \(S=1\) bound states (ignoring spin-flip transitions, which are suppressed); we find that such bound states have power-suppressed contributions to the endpoint photon spectrum. All these effects are discussed in App. F, where we also show that in the limit of high DM mass and large representation size, the ratio of bound-state formation to direct annihilation is expected to decrease further for larger representations (in the context of indirect detection of hard gamma rays).
In Sec. 5 we combine the pieces to obtain the full quintuplet endpoint spectrum and annihilation cross section as a function of DM mass. As for the wino, the annihilation cross section exhibits a rich structure and rapid variation associated with near-zero energy bound states, characteristic of the Sommerfeld enhancement. Unlike the wino, however, we also see a strong variation in the shape of the spectrum itself: the energy distributions of photons resulting from a quintuplet annihilation can depend sensitively on its exact mass. Using these results, we estimate the sensitivity of existing H.E.S.S. data to the thermal quintuplet, finding that for commonly adopted DM profiles in the Milky Way, the signal should already be either observable or in tension. If we adopt a more conservative DM profile, however, our estimate is that the final word on the quintuplet will await the data that will soon be collected by CTA. Finally, our conclusions are presented in Sec. 6, with several extended discussions and details relegated to appendices.
## 2 Direct Annihilation
While bound states represent a novel addition to the spectrum for the case of the quintuplet, direct annihilation via \(\chi^{0}\chi^{0}\to\gamma+X\) remains the dominant contribution to the hard photon spectrum for most masses, and we will compute it in this section. To do so we will draw on the EFT formalism developed to compute the leading log (LL) spectrum in Ref. [62], and then extended to NLL in Ref. [64]. The formalism there was applied to the wino, yet as emphasized in those references the approach can be readily extended to other DM candidates, especially to cases where the DM is simply charged under SU(2). This section represents an explicit demonstration of that claim. We begin in Sec. 2.1 by briefly reviewing the framework developed in Refs. [62; 64]. In doing so, we will focus on the aspects of the formalism that we will generalize to make it clear how to extend the calculation to additional SU(2) representations of DM, and we defer to those references for a complete discussion of all the relevant ingredients. Having done this, in Sec. 2.2 we will then demonstrate explicitly how the calculation can be extended to the quintuplet, and provide results for the LL and NLL spectrum.
Before moving into the details, however, let us already demonstrate the importance of performing the NLL resummation, and the precision obtained by doing so. In the left of Fig. 1 we show a comparison of the line cross section computed to NLL accuracy, with the associated uncertainties, compared to the result if we had only performed a tree level computation of the
Figure 1: A demonstration of the importance of performing the NLL resummation of the direct quintuplet annihilation cross-section. (Left) A comparison of the line cross-section between the NLL resummed result and that obtained using only the tree-level calculation combined with the Sommerfeld enhancement. At the thermal mass, \(M_{\chi}=13.6\) TeV, the resummed cross-section is more than a factor of four smaller. (Right) The cross section for the thermal quintuplet integrated over \(z\in[z_{\rm cut},1]\). A significant reduction in the theoretical uncertainty is obtained for the NLL result. Further, we see an increase in the cross section in the range appropriate for the H.E.S.S. energy resolution, which arises due to the presence of endpoint photons. (The right figure can be directly contrasted to that of the wino, presented in Ref. [64].) Both figures consider solely the contribution from direct annihilation; incorporating the bound state contributions will be a major focus of this work.
rates, augmented with the Sommerfeld enhancement determined from the NLO potentials of Ref. [71] (discussed further in Sec. 3.1). Note, the line cross section is the annihilation rate to a two-photon final state, specifically the rate for \(\chi^{0}\chi^{0}\to\gamma\gamma\) + half the rate for \(\chi^{0}\chi^{0}\to\gamma Z\), with the photons having an energy \(E_{\gamma}=M_{\chi}\). We see that at larger masses the difference between the two methods can be significant. The NLL result is already a factor of four smaller than the tree level approximation at the thermal mass, and by 100 TeV the difference is more than an order of magnitude. The EFT formalism also incorporates endpoint photons (described in detail below) that have \(E_{\gamma}\lesssim M_{\chi}\), which, given the finite energy resolution of IACTs like H.E.S.S., are indistinguishable from the line. This is shown on the right of Fig. 1, where we plot the integrated cumulative cross section down to a given \(z_{\rm cut}\), which is almost a factor of two larger within the H.E.S.S. resolution as compared to the value for \(z_{\rm cut}=1\). In addition, this figure demonstrates the considerable reduction in theoretical uncertainty of the cross section obtained at NLL.
### Review: An EFT for the endpoint spectrum from DM annihilation
As described in the introduction, a precise prediction of the hard photon spectrum arising from the annihilation of heavy DM charged under SU(2) mandates an accounting of a number of physical effects. The benefit of approaching this problem using the EFT formalism reviewed in this section is that the various effects will factorize; heuristically, we will be able to separate the physics associated with different scales into objects that can be calculated independently.
To reiterate, the calculation we are interested in is the spectrum, \(d\sigma/dz\), of hard photons resulting from DM annihilations, where "hard" means that we are interested in photons carrying away large energy fractions \(z=E_{\gamma}/M_{\chi}\sim 1\). The starting point is two incoming neutral DM particles, \(\chi^{0}\), which are asymptotically momentum eigenstates described by plane waves. These states, initially with momenta \(p_{1,2}\sim M_{\chi}v\sim 10^{-3}M_{\chi}\) (in the Milky Way's halo) will eventually be within a distance \(r\sim m_{W}^{-1}\), at which point they will experience an interaction potential that perturbs their wave-functions away from plane waves. The perturbed wave-functions can yield a significantly enhanced probability for the particles to have a separation \(r\sim M_{\chi}^{-1}\), where the hard annihilation occurs, and thereby provide a large boost to the cross section. Restricting our attention to the case where \(M_{\chi}\gg m_{W}\), the Sommerfeld enhancement occurs on a parametrically larger distance scale than the annihilation, and so we can factorize it out from the cross section as follows,3
Footnote 3: For a more detailed discussion of this factorization, we refer to Refs. [57; 62; 71].
\[\frac{d\sigma}{dz}=\sum_{a^{\prime}b^{\prime}ab}F^{a^{\prime}b^{\prime}ab} \frac{d\hat{\sigma}^{a^{\prime}b^{\prime}ab}}{dz}. \tag{1}\]
Here \(a^{\prime}b^{\prime}ab\) are adjoint SU(2) indices, and for the wino we can write
\[F^{a^{\prime}b^{\prime}ab}=\Big{\langle}(\chi^{0}\chi^{0})_{S}\Big{|}(\chi_{v }^{a^{\prime}T}i\sigma_{2}\chi_{v}^{b^{\prime}})^{\dagger}\Big{|}0\Big{\rangle} \Big{\langle}0\Big{|}(\chi_{v}^{aT}i\sigma_{2}\chi_{v}^{b})\Big{|}(\chi^{0} \chi^{0})_{S}\Big{\rangle}. \tag{2}\]
Here \(\chi_{v}\) is the field describing the non-relativistic DM, and the label \(v\) appears just as for a heavy quark in Heavy Quark Effective Theory (HQET). The non-relativistic DM effective theory that governs the dynamics of the field is reviewed in Ref. [62], and as shown there the above expressions for \(F^{a^{\prime}b^{\prime}ab}\) can be directly related to conventional Sommerfeld enhancement factors, as we now review. In the broken phase, the triplet wino is described by a neutral Majorana fermion \(\chi^{0}\), and a heavier charged Dirac fermion \(\chi^{\pm}\). DM annihilations proceed through the neutral states, and to ensure the antisymmetry of the state, at lowest order in the DM velocity (\(s\)-wave), the initial state must be a spin singlet, which we represent through the notation \((\chi^{0}\chi^{0})_{S}\). From this initial state, Eq. (2) describes the fact that through the exchange of electroweak bosons, not only will the incident wave-functions be perturbed, but further there are two final states that the system could evolve into by the time the hard process is initiated, \(\chi^{0}\chi^{0}\) or \(\chi^{+}\chi^{-}\). In Ref. [62] these matrix elements were determined as,
\[\begin{split}\Big{\langle}0\Big{|}(\chi_{v}^{0T}i\sigma_{2}\chi _{v}^{0})\Big{|}(\chi^{0}\chi^{0})_{S}\Big{\rangle}=&\,4\sqrt{2 }M_{\chi}s_{00},\\ \Big{\langle}0\Big{|}(\chi_{v}^{+T}i\sigma_{2}\chi_{v}^{-}) \Big{|}(\chi^{0}\chi^{0})_{S}\Big{\rangle}=&\,4M_{\chi}s_{0\pm}. \end{split} \tag{3}\]
Here \(s_{00}\) and \(s_{0\pm}\) are the Sommerfeld factors that need to be computed, and note that if the Sommerfeld effect were neglected, they would take the values \(s_{00}=1\) and \(s_{0\pm}=0\). We emphasize again that the above expressions only hold for the wino; for the quintuplet, we would also need to account for the presence of the doubly-charged states.
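To give a sense of how such Sommerfeld factors are obtained in practice, the sketch below (ours, purely illustrative) solves the \(s\)-wave radial Schrodinger equation for a single attractive Yukawa channel and extracts the enhancement of the wavefunction at the origin. The actual calculation solves the coupled-channel problem with the full electroweak potential; the coupling, mass, and velocity used here are indicative values only, so the printed number should not be read as the physical enhancement.

```python
# Single-channel Sommerfeld-factor sketch: solve u'' = (2*mu*V - p^2) u with
# V(r) = -alpha * exp(-mW*r)/r, u(0) = 0, u'(0) = 1, and compare the asymptotic
# amplitude to the free solution (natural units, r in GeV^-1).
import numpy as np
from scipy.integrate import solve_ivp

alpha, mW = 0.033, 80.4          # illustrative coupling and mediator mass [GeV]
Mchi, v = 13.6e3, 1e-3           # DM mass [GeV] and relative velocity
mu = Mchi / 2                    # reduced mass of the two-body system
p = mu * v                       # relative momentum

def rhs(r, y):
    u, du = y
    V = -alpha * np.exp(-mW * r) / r
    return [du, (2 * mu * V - p**2) * u]

r0 = 1e-8
rmax = 30.0 / min(p, mW)                       # well outside the potential, several oscillations
sol = solve_ivp(rhs, (r0, rmax), [r0, 1.0], rtol=1e-8, atol=1e-12)
u, du = sol.y[:, -1]
A = np.sqrt(u**2 + (du / p)**2)                # amplitude of u ~ A sin(p r + delta) at large r
S = 1.0 / (p * A)**2                           # |psi(0)|^2 relative to the free case
print(f"illustrative single-channel Sommerfeld factor: {S:.1f}")
```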
To NLL accuracy, with the Sommerfeld enhancement stripped at the first stage of the matching, the differential cross section with factorized dynamics in SCET is given in terms of the hard function \(H\), the jet functions \(J_{\bar{n}}^{{}^{\prime}}\), \(J_{\gamma}\), and the soft function \(S^{\prime}\)[62; 64]
\[\left(\frac{d\hat{\sigma}}{dz}\right)^{\rm NLL}=H_{ij}(M_{\chi})J_{\gamma}(m_ {{}^{W}})J_{\bar{n}}^{{}^{\prime}}(M_{\chi},1-z,m_{{}^{W}})\otimes S_{ij}^{{} ^{\prime}}(1-z,m_{{}^{W}}) \tag{4}\]
The cross section can then be refactorized into a combination of the following seven factors [62; 64]
\[\begin{split}\left(\frac{d\hat{\sigma}}{dz}\right)^{\rm NLL}=& \,H(M_{\chi},\mu)\,J_{\gamma}(m_{{}^{W}},\mu,\nu)\,J_{\bar{n}}(m_ {{}^{W}},\mu,\nu)\,S(m_{{}^{W}},\mu,\nu)\\ &\times H_{J_{n}}(M_{\chi},1-z,\mu)\otimes H_{S}(M_{\chi},1-z, \mu)\otimes C_{S}(M_{\chi},1-z,m_{{}^{W}},\mu,\nu),\end{split} \tag{5}\]
where we have suppressed the color indices on the hard scattering cross section in Eq. (1). Let us briefly provide some intuition for this expression. The DM annihilation will occur an astrophysical distance from our telescope, and therefore no matter how complex the final state is, we should only expect to see a single photon from the decay. This implies we are sensitive to the annihilation \(\chi^{0}\chi^{0}\to\gamma+X\), where we must be inclusive over the unobserved \(X\). Although unobserved, \(X\) cannot in fact be completely arbitrary. Our choice to search for photons with energy \(E_{\gamma}=zM_{\chi}\), with \((1-z)\ll 1\), implies that the invariant mass of the
recoiling states is constrained to be small: \(m_{X}=2M_{\chi}\sqrt{1-z}\ll M_{\chi}\). This implies that the spray of radiation the photon recoils against must be a jet. With this picture in mind, we can apply a conventional SCET factorization to our problem, breaking it into a function for the hard scattering (\(H\)), the collinear radiation in the direction of the photon (\(J_{\gamma}\)) and the recoiling jet (\(J_{\bar{n}}^{\prime}\)), and finally the soft wide angle radiation (\(S^{\prime}\)). As shown in Ref. [62], this factorization can be achieved, but it is insufficient to fully separate the scales that appear when accounting for the finite masses of the electroweak bosons. For this, one must further factorize \(J_{\bar{n}}^{\prime}\) into the two functions \(J_{\bar{n}}\) and \(H_{J_{\bar{n}}}\), and \(S^{\prime}\) into the three functions \(S\), \(H_{S}\), and \(C_{S}\). The full details of this argument, together with the field theoretic definition of each function, are provided in Ref. [62], with Ref. [64] demonstrating that the factorization remains valid even when computing to NLL accuracy. The central utility of Eq. (5) is that each of the functions can be computed separately - and independently of the DM representation - and then brought to a common scale using renormalization group evolution. This facilitates a full result which resums logarithms of \(m_{ W}/M_{\chi}\), but also of \((1-z)\), which can be large given that we are searching for photons near the endpoint with \(z\sim 1\).
The above factorization is perfectly sufficient for the wino. However, the fact that the DM matrix elements in Eq. (2) index the DM with an adjoint SU(2) label demonstrates that this form can only be appropriate for the wino, and further implies that \(d\hat{\sigma}/dz\) is not representation independent. We will now generalize this. To do so, we must revisit the matching of the full theory to our EFT, as this is where the DM representation enters the calculation. Doing so, it is straightforward to show that the tree-level matching can be achieved by way of a single hard scattering operator, given by
\[\mathcal{O}=\left(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{d},\,T_{\chi}^{c} \right\}\chi_{v}\right)\left(\mathcal{B}_{\perp n}^{ic}\mathcal{B}_{\perp\bar {n}}^{jd}\right)i\epsilon^{ijk}(n-\bar{n})^{k}, \tag{6}\]
with Wilson coefficient,
\[C(\mu)=-\pi\frac{\alpha_{ W}(\mu)}{2M_{\chi}}. \tag{7}\]
In the above result \(T_{\chi}\) are the generators of SU(2), and therefore \(c,d\) correspond to adjoint indices. However, \(T_{\chi}\) is written in whatever representation is appropriate for DM; a review of the relevant form for the generators in the triplet and quintuplet representations is given in App. A.
Equation (6) provides the hard operator before a BPS field redefinition [55]. This transformation must be performed in order to factorize the interactions of the heavy DM from the ultrasoft radiation. Accordingly, we now perform a field redefinition,
\[\chi_{v}\to S_{v}\chi_{v},\hskip 14.226378pt\mathcal{B}_{\perp n}^{a} \to Y_{n}^{aa^{\prime}}\mathcal{B}_{\perp n}^{a^{\prime}}, \tag{8}\]
where \(S_{v}\) and \(Y_{n}^{aa^{\prime}}\) are both ultrasoft Wilson lines, but the former is in the \(v\) direction and the same representation as DM, whereas the latter is in the \(n\) direction and adjoint representation.
The operator then transforms as
\[\begin{split}\mathcal{O}\to&\left(\chi_{v}^{T}i\sigma_{2} S_{v}^{\dagger}\left\{T_{\chi}^{d^{\prime}},\,T_{\chi}^{c^{\prime}}\right\}S_{v} \chi_{v}\right)\left(Y_{n}^{c^{\prime}c}\mathcal{B}_{\perp n}^{ic}Y_{\bar{n}}^ {d^{\prime}d}\mathcal{B}_{\perp\bar{n}}^{jd}\right)i\epsilon^{ijk}(n-\bar{n})^ {k}\\ =&\left(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{a}, \,T_{\chi}^{b}\right\}\chi_{v}\right)\left(Y^{abcd}\mathcal{B}_{\perp n}^{ic} \mathcal{B}_{\perp\bar{n}}^{jd}\right)i\epsilon^{ijk}(n-\bar{n})^{k},\end{split} \tag{9}\]
where in the final line we have defined,4
Footnote 4: We note that in order to reproduce the operator definitions in Ref. [58] and the works that followed it, Eq. (10) would read \(Y^{abcd}=(Y_{v}^{ae}Y_{n}^{ce})(Y_{v}^{b^{\prime}}Y_{\bar{n}}^{df})\), where all indices have been transposed. We believe the index ordering in that work simply had a typo, and note that to the order all wino calculations have been performed so far, flipping these indices would not impact the results.
\[Y^{abcd}=(Y_{v}^{ea}Y_{n}^{ec})(Y_{v}^{fb}Y_{\bar{n}}^{fd}). \tag{10}\]
To arrive at this result, we made use of the identity \(S_{v}^{\dagger}T_{\chi}^{a}S_{v}=Y_{v}^{aa^{\prime}}T_{\chi}^{a^{\prime}}\), which we demonstrate in App. D. To be explicit, the identity has allowed us to replace a pair of \(S_{v}\) Wilson lines, which are in the DM representation, with a single adjoint \(Y_{v}\) Wilson line.
For the wino, the anticommutator can be readily evaluated,
\[\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{b}\right\}\chi_{v}= \chi_{v}^{a^{\prime}T}i\sigma_{2}\left(2\delta^{a^{\prime}b^{\prime}}\delta^{ ab}-\delta^{a^{\prime}a}\delta^{b^{\prime}b}-\delta^{a^{\prime}b}\delta^{b^{ \prime}a}\right)\chi_{v}^{b^{\prime}}. \tag{11}\]
Substituting this into Eq. (9), and using \(Y_{v}^{ea}Y_{v}^{fa}=Y_{v}^{ea}(Y_{v}^{\dagger})^{af}=\delta^{ef}\), we obtain two separate operators,
\[\begin{split}\mathcal{O}_{1}&=\left(\chi_{v}^{aT}i \sigma_{2}\chi_{v}^{b}\right)\left(\left[\delta^{ab}Y_{n}^{ec}Y_{\bar{n}}^{ed} \right]\mathcal{B}_{\perp n}^{ic}\mathcal{B}_{\perp\bar{n}}^{jd}\right)i \epsilon^{ijk}(n-\bar{n})^{k},\\ \mathcal{O}_{2}&=\left(\chi_{v}^{aT}i\sigma_{2}\chi_ {v}^{b}\right)\left(Y^{abcd}\mathcal{B}_{\perp n}^{ic}\mathcal{B}_{\perp\bar{ n}}^{jd}\right)i\epsilon^{ijk}(n-\bar{n})^{k},\end{split} \tag{12}\]
with Wilson coefficients
\[C_{1}(\mu)=-C_{2}(\mu)=-\pi\frac{\alpha_{W}(\mu)}{M_{\chi}}, \tag{13}\]
exactly matching those determined in Ref. [58]. Nevertheless, in order to fully factorize the DM representation, we should keep the anticommutator unexpanded. In particular, by so doing only the combination \(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{b}\right\}\chi_{v}\) retains any knowledge of the DM representation. We can then introduce a modified definition of Eqs. (1) and (2) which achieves a complete separation of the DM representation from the factorized expressions in Eq. (5). In detail, we write
\[\begin{split}\frac{d\sigma}{dz}&=\sum_{a^{\prime}b^{ \prime}ab}F_{\chi}^{a^{\prime}b^{\prime}ab}\frac{d\hat{\sigma}^{a^{\prime}b^{ \prime}ab}}{dz},\\ F_{\chi}^{a^{\prime}b^{\prime}ab}&=\left\langle(\chi ^{0}\chi^{0})_{S}\right|\left(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{a^{ \prime}},\,T_{\chi}^{b^{\prime}}\right\}\chi_{v}\right)^{\dagger}\left|0 \right\rangle\!\left\langle 0\right|\left(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{b} \right\}\chi_{v}\right)\left|(\chi^{0}\chi^{0})_{S}\right\rangle\!.\end{split} \tag{14}\]
With this factorization we move the DM dependence into \(F_{\chi}\), and the remaining factors in Eq. (9) determine the hard matching onto the SCET calculation of \(d\hat{\sigma}/dz\). Importantly,
when the DM factors are stripped, what remains in \(\mathcal{O}\) is \(\big{(}Y^{abcd}\mathcal{B}^{ic}_{\perp n}\mathcal{B}^{jd}_{\perp\bar{n}}\big{)}i \epsilon^{ijk}(n-\bar{n})^{k}\), which exactly matches the DM stripped contribution in \(\mathcal{O}_{2}\). This implies that if we intend to compute the cross section for the wino using Eq. (14), we need to make two minor changes to the approach used to compute it with Eqs. (1) and (2). Firstly, we must determine the new DM matrix elements \(F^{a^{\prime}b^{\prime}ab}_{\chi}\) in terms of the Sommerfeld factors \(s_{00}\) and \(s_{0\pm}\), specified in Eq. (3), and then evaluate the contraction of \(F_{\chi}\) into \(d\tilde{\sigma}/dz\). Secondly, in the SCET calculation, we match onto the hard function with \(C_{1}(\mu)=0\) and \(C_{2}(\mu)=-\pi\alpha_{ W}(\mu)/2M_{\chi}\) as opposed to the values in Eq. (13). Of course, there was nothing fortuitous in the fact that the DM representation can be factored out so simply, this is simply a manifestation of the power of the EFT approach developed in Refs. [62; 64]. The factorization in Eq. (5) is determined by the relevant degrees of freedom in the theory at scales below \(M_{\chi}\), and this is not altered by changing the DM representation, so the factorization remains unaffected.
To make this point explicit, let us demonstrate that at LL these two approaches yield the same result for the wino; a straightforward generalization of the argument below confirms this conclusion persists at NLL. As outlined above, the calculation in Ref. [62] is modified in two ways: a new set of DM matrix elements is computed in \(F_{\chi}\), and then in the SCET calculation an alternative matching is provided onto the hard function, \(H\). After the BPS field redefinition, the DM representation contracts into the ultrasoft Wilson lines, and therefore in the SCET calculation is contracted into the soft function, \(\tilde{S}\). (Here, \(\tilde{S}=C_{S}S\) is the combination of the collinear-soft and soft functions from Eq. (5).) In full, what we will need to recompute is the combination \(HH_{S}\tilde{S}\), as the first and last of these factors is modified, and although \(H_{S}\) remains unchanged, it connects these two objects.
Let us begin by reviewing the relevant part of the calculation as it appeared in Ref. [62]. Accounting for the running of the hard function between \(2M_{\chi}\) and \(m_{ W}\), we have
\[H_{i}(m_{ W})=U_{H}H_{i}^{\rm tree},\hskip 14.226378ptU_{H}=\exp\bigg{(}-8C_{A} \tilde{\alpha}_{ W}\ln^{2}\bigg{(}\frac{m_{ W}}{2M_{\chi}}\bigg{)}\bigg{)}, \tag{15}\]
where the renormalization group evolution is encoded in \(U_{H}\), which is defined in terms of \(\tilde{\alpha}_{ W}=\alpha_{ W}/4\pi\) and \(C_{A}=2\) the SU(2) adjoint Casimir. The tree level matching coefficients \(H_{i}^{\rm tree}\), are determined by the operator Wilson coefficients \(C_{1}\) and \(C_{2}\), in particular \(H_{1}^{\rm tree}=C_{1}^{*}C_{1}\), \(H_{2}^{\rm tree}=C_{2}^{*}C_{2}\), and \(H_{3}^{\rm tree}=C_{1}^{*}C_{2}=C_{2}^{*}C_{1}\). Given the two different structures of the ultrasoft Wilson lines in Eq. (12), there are four soft functions at the amplitude square level that evaluate to,
\[\begin{split}\tilde{S}^{aba^{\prime}b^{\prime}}_{1}& =\delta^{ab}\delta^{a^{\prime}b^{\prime}},\hskip 85.358268pt\tilde{S}^{ aba^{\prime}b^{\prime}}_{2}&=\delta^{a3}\delta^{a^{\prime}3}\delta^{bb^{ \prime}},\\ \tilde{S}^{aba^{\prime}b^{\prime}}_{3}&=\delta^{a3} \delta^{b3}\delta^{a^{\prime}b^{\prime}}+\delta^{a^{\prime}3}\delta^{b^{\prime }3}\delta^{ab},\hskip 14.226378pt\tilde{S}^{aba^{\prime}b^{\prime}}_{4}& =\delta^{aa^{\prime}}\delta^{bb^{\prime}}.\end{split} \tag{16}\]
These must then be contracted into \(F^{a^{\prime}b^{\prime}ab}\) as defined in Eq. (2), which using Eq. (3) can be evaluated in terms of \(s_{00}\) and \(s_{0\pm}\), with explicit expressions provided in Ref. [62]. The renormalization group evolution of the soft function, and the contraction between it and the
hard function, are controlled by \(H_{S}\), which is given by,5
Footnote 5: We note that to obtain this result we used \(H_{S,22}^{\rm tree}=1\), as opposed to \(H_{S,22}^{\rm tree}=2\), which was stated in Ref. [62] that we believe was a typo.
\[\begin{split} H_{S,11}(m_{ W})=1,\hskip 14.226378ptH_{S,33}(m_{ W})=U_{H_{S}},\hskip 14.226378ptH_{S,31}(m_{ W})=\frac{2}{3}[1-U_{H_{S}}],\\ H_{S,22}(m_{ W})=U_{H_{S}},\hskip 14.226378ptH_{S,24}(m_{ W})=\frac{1}{3}[1-U_{H_{S}}],\end{split} \tag{17}\]
where \(U_{H_{S}}\) quantifies the evolution in a similar fashion to \(U_{H}\). Combining these results, we conclude
\[\begin{split} H_{i}H_{S,ij}\tilde{S}_{j}^{aba^{\prime}b^{\prime}}F ^{a^{\prime}b^{\prime}ab}=& 16\pi^{2}\alpha_{ W}^{2}U_{H}\left[\left(\frac{4}{3}|s_{00}|^{2}+2|s_{0\pm}|^{2}+ \frac{4\sqrt{2}}{3}{\rm Re}(s_{00}s_{0\pm}^{*})\right)\right.\\ &\left.+U_{H_{S}}\left(-\frac{4}{3}|s_{00}|^{2}+2|s_{0\pm}|^{2}- \frac{4\sqrt{2}}{3}{\rm Re}(s_{00}s_{0\pm}^{*})\right)\right]\\ \equiv& 16\pi^{2}\alpha_{ W}^{2}U_{H}\left[F_{0}+U_{H_{S}}F_{1}\right].\end{split} \tag{18}\]
The functions \(F_{0}\) and \(F_{1}\) encode the two combinations of Sommerfeld factors that appeared in parentheses, and appear in the wino LL result.
The above is a direct repetition of the calculation performed in Ref. [62]; we now demonstrate that the same result is achieved in our modified approach. Firstly, in this approach, the hard matching coefficients are modified, with only \(H_{2}^{\rm tree}\) non-zero now, because \(C_{1}=0\) (recall, we have a single operator here with the same structure as \(\mathcal{O}_{2}\)). The only other modifications are the contractions of \(\tilde{S}\) in Eq. (16) into \(F_{\chi}\) rather than \(F\). These can be determined straightforwardly, for instance,
\[\tilde{S}_{1}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{ \prime}ab}= \left\langle(\chi^{0}\chi^{0})_{S}\right|\left(\chi_{v}^{T}i \sigma_{2}\left\{T_{\chi}^{a^{\prime}},\,T_{\chi}^{a^{\prime}}\right\}\chi_{v }\right)^{\dagger}\left|0\right\rangle\!\left\langle 0\right|\left(\chi_{v}^{T}i \sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{a}\right\}\chi_{v}\right)\left|( \chi^{0}\chi^{0})_{S}\right\rangle\] \[= 16\left|\left\langle 0\right|\left(\chi_{v}^{Ta}i\sigma_{2}\chi_{v }^{a}\right)\left|(\chi^{0}\chi^{0})_{S}\right\rangle\right|^{2}\] \[= 16\left|\left\langle 0\right|\left(\chi_{v}^{T0}i\sigma_{2}\chi_{v }^{0}\right)\left|(\chi^{0}\chi^{0})_{S}\right\rangle+2\!\left\langle 0\right| \left(\chi_{v}^{T+}i\sigma_{2}\chi_{v}^{-}\right)\left|(\chi^{0}\chi^{0})_{S} \right\rangle\right|^{2}\] \[= 256M_{\chi}^{2}\left|\sqrt{2}s_{00}+2s_{0\pm}\right|^{2}. \tag{19}\]
The remaining combinations are given by,
\[\begin{split}\tilde{S}_{2}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{ \prime}b^{\prime}ab}&=256M_{\chi}^{2}|s_{0\pm}|^{2},\\ \tilde{S}_{3}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{\prime }ab}&=256M_{\chi}^{2}\left(4|s_{0\pm}|^{2}+2\sqrt{2}{\rm Re}(s_{00} s_{0\pm}^{*})\right),\\ \tilde{S}_{4}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{ \prime}ab}&=128M_{\chi}^{2}\left(2|s_{00}|^{2}+3|s_{0\pm}|^{2}+2 \sqrt{2}{\rm Re}(s_{00}s_{0\pm}^{*})\right).\end{split} \tag{20}\]
Using these modified results we find that \(H_{i}H_{S,ij}\tilde{S}_{j}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{ \prime}ab}\) in the new basis exactly matches
Eq. (18), as it must. The utility of this approach is that, having formulated the calculation in this way, if we change the DM representation the only part of the calculation that needs to be modified is the appropriate generalization of the contractions in Eqs. (19) and (20). We will evince this by showing that results for the quintuplet can be derived straightforwardly in the next subsection. Again, at NLL an almost identical modification to the approach in Ref. [64] is required; one must simply account for the more complicated forms the hard and soft functions take at that order.
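As a further cross-check of this equality, the sketch below (ours, not taken from Refs. [62; 64]) contracts the single-operator basis through the \(H_{S,22}\) and \(H_{S,24}\) entries of Eq. (17) and the matrix elements of Eqs. (19) and (20), and recovers the bracketed structure of Eq. (18); for simplicity \(s_{00}\) and \(s_{0\pm}\) are taken to be real, and the overall hard-evolution factor \(U_{H}\), common to both sides, is dropped.

```python
# Check that the single-operator (new) basis reproduces the two-operator (old)
# wino result of Eq. (18), up to the common overall factor U_H.
import sympy as sp

s00, s0pm, U, aW, M = sp.symbols('s00 s0pm U_HS alpha_W M_chi', positive=True)
rt2 = sp.sqrt(2)

# old basis: 16 pi^2 alpha_W^2 (F0 + U_HS * F1), with F0, F1 read off Eq. (18)
F0 = sp.Rational(4, 3) * s00**2 + 2 * s0pm**2 + sp.Rational(4, 3) * rt2 * s00 * s0pm
F1 = -sp.Rational(4, 3) * s00**2 + 2 * s0pm**2 - sp.Rational(4, 3) * rt2 * s00 * s0pm
old = 16 * sp.pi**2 * aW**2 * (F0 + U * F1)

# new basis: only H_2 = |C_2|^2 = (pi alpha_W / 2 M_chi)^2 contributes,
# contracted through H_S,22 = U_HS and H_S,24 = (1 - U_HS)/3
H2 = (sp.pi * aW / (2 * M))**2
S2F = 256 * M**2 * s0pm**2
S4F = 128 * M**2 * (2 * s00**2 + 3 * s0pm**2 + 2 * rt2 * s00 * s0pm)
new = H2 * (U * S2F + sp.Rational(1, 3) * (1 - U) * S4F)

print(sp.expand(old - new))   # 0
```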
### Extension to the quintuplet
In the previous subsection, we reorganized the formalism of Refs. [62; 64] in such a way that the dependence on the DM representation is fully encoded in \(F_{\chi}\) as defined in Eq. (14), and explicitly demonstrated this alternative procedure produces the same result at LL. This reorganization has the benefit that the quintuplet calculation (and that for any higher odd-\(n\) representation, see _e.g._ Ref. [3]) is almost identical to that of the wino; there are unique Sommerfeld expressions to compute, and a modification for how the new \(F_{\chi}\) contracts into the soft Wilson lines in \(Y^{abcd}\), but in essence the computation is the same.
For the Sommerfeld factors, in the broken phase, the five degrees of freedom of the quintuplet reorganize themselves into a neutral Majorana fermion \(\chi^{0}\), a heavier singly charged Dirac fermion \(\chi^{\pm}\), and a doubly-charged Dirac fermion \(\chi^{\pm\pm}\) that is even heavier. (Again, a more complete discussion is provided in App. A.) This implies that there are now three two-body states that can initiate the hard annihilation that are coupled to the initial state through the potential,6 and we parameterize the various matrix elements as7
Footnote 6: This ignores for the moment the possibility of annihilation through bound states with different quantum numbers, which we will discuss later.
Footnote 7: We emphasize that despite the repeated notation, the functions \(s_{00}\) and \(s_{0\pm}\) controlling the Sommerfeld enhancement have numerically different values for the wino (Eq. (3)) and quintuplet (Eq. (21)).
\[\begin{split}\Big{\langle}0\Big{|}(\chi_{v}^{0T}i\sigma_{2}\chi_{v}^{0})\Big{|}(\chi^{0}\chi^{0})_{S}\Big{\rangle}=&\,4\sqrt{2}M_{\chi}s_{00},\\ \Big{\langle}0\Big{|}(\chi_{v}^{+T}i\sigma_{2}\chi_{v}^{-})\Big{|}(\chi^{0}\chi^{0})_{S}\Big{\rangle}=&\,4M_{\chi}s_{0\pm},\\ \Big{\langle}0\Big{|}(\chi_{v}^{++T}i\sigma_{2}\chi_{v}^{--})\Big{|}(\chi^{0}\chi^{0})_{S}\Big{\rangle}=&\,4M_{\chi}s_{0\pm\pm}.\end{split} \tag{21}\]
Note that if we performed our entire calculation at tree level or neglected the Sommerfeld effect, we would take \(s_{00}=1\) and \(s_{0\pm}=s_{0\pm\pm}=0\). Using these, we can compute the full \(F_{\chi}\) for an arbitrary set of indices.
From these functions we can immediately derive the relevant spectra. At LL, all that is required is to derive the analogue of \(F_{0}\) and \(F_{1}\) as they appeared in Eq. (18). The main calculation is to compute the equivalent contractions for Eqs. (19) and (20), which are given by
\[\tilde{S}_{1}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{\prime}ab}=2304M_{\chi}^{2}\Big{|}2s_{0\pm\pm}+2s_{0\pm}+\sqrt{2}s_{00}\Big{|}^{2},\]
\[\tilde{S}_{2}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{\prime}ab} =256M_{\chi}^{2}\left|4s_{0\pm\pm}+s_{0\pm}\right|^{2},\] \[\tilde{S}_{3}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{\prime }ab} =1536M_{\chi}^{2}\left[8|s_{0\pm\pm}|^{2}+2|s_{0\pm}|^{2}+10\text{Re}(s_{0\pm} s_{0\pm\pm}^{*})+4\sqrt{2}\text{Re}(s_{00}s_{0\pm\pm}^{*})+\sqrt{2}\text{Re}(s_{00}s_ {0\pm}^{*})\right]\!,\] \[\tilde{S}_{4}^{aba^{\prime}b^{\prime}}F_{\chi}^{a^{\prime}b^{ \prime}ab} =128M_{\chi}^{2}\left|2s_{0\pm\pm}+5s_{0\pm}+3\sqrt{2}s_{00}\right| ^{2}+256M_{\chi}^{2}\left|4s_{0\pm\pm}+s_{0\pm}\right|^{2}. \tag{22}\]
Using these, we can evaluate,
\[H_{i}H_{S,ij}\tilde{S}_{j}^{aba^{\prime}b^{\prime}}F^{a^{\prime}b ^{\prime}ab}= 16\pi^{2}\alpha_{{}_{W}}^{2}U_{H}\left[\left(\frac{4}{3}|4s_{0 \pm\pm}+s_{0\pm}|^{2}+\frac{2}{3}|2s_{0\pm\pm}+5s_{0\pm}+3\sqrt{2}s_{00}|^{2} \right)\right.\] \[+U_{H_{S}}\left(\frac{8}{3}|4s_{0\pm\pm}+s_{0\pm}|^{2}-\frac{2}{3 }|2s_{0\pm\pm}+5s_{0\pm}+3\sqrt{2}s_{00}|^{2}\right)\!\right]\!, \tag{23}\]
from which we conclude
\[F_{0} =\frac{4}{3}|4s_{0\pm\pm}+s_{0\pm}|^{2}+\frac{2}{3}|2s_{0\pm\pm}+ 5s_{0\pm}+3\sqrt{2}s_{00}|^{2}, \tag{24}\] \[F_{1} =\frac{8}{3}|4s_{0\pm\pm}+s_{0\pm}|^{2}-\frac{2}{3}|2s_{0\pm\pm}+ 5s_{0\pm}+3\sqrt{2}s_{00}|^{2}.\]
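As a concrete illustration of Eq. (24), the following minimal Python sketch (not the code used for the results in this work; the inputs are illustrative) evaluates \(F_{0}\) and \(F_{1}\) for given complex Sommerfeld factors, using the tree-level limit \(s_{00}=1\), \(s_{0\pm}=s_{0\pm\pm}=0\) as a check:

```python
# Minimal sketch: the quintuplet combinations F_0 and F_1 of Eq. (24)
# for given (complex) Sommerfeld factors.  Inputs are illustrative.
import numpy as np

def F0_F1(s00, s0pm, s0pmpm):
    """Return (F_0, F_1) as defined in Eq. (24)."""
    a = abs(4 * s0pmpm + s0pm) ** 2
    b = abs(2 * s0pmpm + 5 * s0pm + 3 * np.sqrt(2) * s00) ** 2
    return 4 * a / 3 + 2 * b / 3, 8 * a / 3 - 2 * b / 3

# No-Sommerfeld limit s00 = 1, s0pm = s0pmpm = 0:
print(F0_F1(1.0 + 0j, 0j, 0j))  # -> (12.0, -12.0)
```

In this limit \(F_{0}+F_{1}\) vanishes, consistent with the line cross-section being proportional to \(|4s_{0\pm\pm}+s_{0\pm}|^{2}\), as discussed below.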
With these modified forms for \(F_{0}\) and \(F_{1}\), the LL result is then identical to that derived for the wino in Ref. [62]. For completeness, we restate it below.
\[\left(\frac{d\sigma}{dz}\right)^{\text{LL}}= \,(F_{0}+F_{1})\sigma^{\text{tree}}e^{-8C_{A}\tilde{\alpha}_{W}L_{\chi}^{2}}\delta(1-z)\] \[+ 4\sigma^{\text{tree}}e^{-8C_{A}\tilde{\alpha}_{W}L_{\chi}^{2}}\bigg{\{}C_{A}\tilde{\alpha}_{W}F_{1}\Big{(}3\mathcal{L}_{1}^{S}(z)-2\mathcal{L}_{1}^{J}(z)\Big{)}e^{8C_{A}\tilde{\alpha}_{W}\big{(}\Theta_{J}L_{J}^{2}(z)-\frac{3}{4}\Theta_{S}L_{S}^{2}(z)\big{)}}\] \[\qquad\qquad\qquad-2C_{A}\tilde{\alpha}_{W}F_{0}\mathcal{L}_{1}^{J}(z)e^{8C_{A}\tilde{\alpha}_{W}L_{J}^{2}(z)}\bigg{\}}, \tag{25}\]
where again \(\tilde{\alpha}_{W}=\alpha_{W}/4\pi\) and \(C_{A}=2\). The first line in this result describes the two photon final state, arising from \(\chi\chi\to\gamma\gamma\), and is given in terms of a cross section that parameterizes the tree-level rate and a massive Sudakov logarithm,
\[\sigma^{\text{tree}}=\frac{\pi\alpha_{{}_{W}}^{2}s_{{}_{W}}^{2}}{2M_{\chi}^{2} v},\qquad\quad L_{\chi}=\ln\bigg{(}\frac{m_{{}_{W}}}{2M_{\chi}}\bigg{)}, \tag{26}\]
where \(s_{W}=\sin\theta_{W}\), \(c_{W}=\cos\theta_{W}\), and \(v=|\mathbf{v}_{1}-\mathbf{v}_{2}|\) is the relative velocity between the incident DM particles; these notations will be used throughout. Substituting the forms of \(F_{0}\) and \(F_{1}\) from Eq. (24) into Eq. (25), we see that the line cross-section is proportional to \(F_{0}+F_{1}=4|4s_{0\pm\pm}+s_{0\pm}|^{2}\). This shows that the purely doubly-charged contribution to the line emission is a factor of sixteen larger than the purely singly charged one, and also that the two contributions interfere. The equivalent result for the wino is \(4|s_{0\pm}^{\text{wino}}|^{2}\) (again \(F_{0}+F_{1}\), using the wino equivalent values in Eq. (18)), which has the same form as the quintuplet result when the doubly-charged contribution is turned off, although we caution that in the full result taking \(s_{0\pm\pm}\to 0\) does not reproduce the wino cross-section.
The second and third lines of Eq. (25) correspond to the endpoint, arising from \(\chi\chi\to\gamma+X\), where the invariant mass of \(X\) is constrained to be near the lightcone. This contribution depends on an additional pair of logarithms and thresholds, associated with the jet (\(J\)) and soft (\(S\)) scales in the problem. In detail,
\[L_{J} =\ln\bigg{(}\frac{m_{W}}{2M_{\chi}\sqrt{1-z}}\bigg{)}, \qquad\Theta_{J} =\Theta\left(1-\frac{m_{W}^{2}}{4M_{\chi}^{2}}-z\right), \qquad\mathcal{L}_{1}^{J} =\frac{L_{J}}{1-z}\Theta_{J}, \tag{27}\] \[L_{S} =\ln\bigg{(}\frac{m_{W}}{2M_{\chi}(1-z)}\bigg{)}, \qquad\Theta_{S} =\Theta\left(1-\frac{m_{W}}{2M_{\chi}}-z\right), \qquad\mathcal{L}_{1}^{S} =\frac{L_{S}}{1-z}\Theta_{S}.\]
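To make the structure of Eqs. (25)-(27) explicit, the sketch below evaluates the endpoint (\(z<1\)) part of the LL spectrum in units of \(\sigma^{\text{tree}}\). It is a standalone illustration rather than the code used for our figures; the coupling, masses, and the values of \(F_{0,1}\) passed in are placeholders.

```python
# Sketch of the z < 1 (endpoint) part of the LL spectrum, Eq. (25), using the
# logarithms and thresholds of Eq. (27).  All numerical inputs are illustrative.
import numpy as np

alpha_W, mW, Mchi = 0.0335, 80.4, 13.6e3   # SU(2) coupling, W mass, DM mass [GeV]
CA = 2.0
at = alpha_W / (4 * np.pi)                  # tilde{alpha}_W
Lchi = np.log(mW / (2 * Mchi))

def endpoint_dsigma_dz(z, F0, F1, sigma_tree=1.0):
    LJ = np.log(mW / (2 * Mchi * np.sqrt(1 - z)))
    LS = np.log(mW / (2 * Mchi * (1 - z)))
    TJ = 1.0 if z < 1 - mW**2 / (4 * Mchi**2) else 0.0     # Theta_J
    TS = 1.0 if z < 1 - mW / (2 * Mchi) else 0.0           # Theta_S
    cLJ, cLS = LJ / (1 - z) * TJ, LS / (1 - z) * TS        # script-L_1^{J,S}
    pref = 4 * sigma_tree * np.exp(-8 * CA * at * Lchi**2)
    t1 = CA * at * F1 * (3 * cLS - 2 * cLJ) * np.exp(
        8 * CA * at * (TJ * LJ**2 - 0.75 * TS * LS**2))
    t2 = -2 * CA * at * F0 * cLJ * np.exp(8 * CA * at * LJ**2)
    return pref * (t1 + t2)

print(endpoint_dsigma_dz(0.99, F0=12.0, F1=-12.0))
```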
The extension to NLL proceeds identically. The expressions are more involved, but will be schematically identical to Eq. (25): all EFT functions will be identical between the wino and the quintuplet, with only the Sommerfeld contributions varying. To begin with, the differential NLL quintuplet cross-section can be written as8
Footnote 8: In this result we have set all EFT functions to their canonical scales. For instance, the weak coupling in the prefactor is evaluated at the hard matching scale \(\mu_{H}^{0}=2M_{\chi}\). A common technique for estimating the size of the theoretical uncertainty arising from neglecting higher order contributions is to vary these scales by a factor of \(\sim\)2. For this, the result with the scales unfixed is required, and can be obtained by extending the result in Eq. (28) to an arbitrary scale, exactly as done in Ref. [64].
\[\left(\frac{d\sigma}{dz}\right)^{\text{NLL}} =\frac{\pi\alpha_{W}^{2}(2M_{\chi})s_{W}^{2}(m_{W})}{9M_{\chi}^{2}v}U_{H}\left[(\mathcal{F}_{0}+\mathcal{F}_{1})\big{|}_{\Lambda\to 1}\right]\delta(1-z) \tag{28}\] \[+\frac{\pi\alpha_{W}^{2}(2M_{\chi})s_{W}^{2}(m_{W})}{9M_{\chi}^{2}v(1-z)}U_{H}\left((V_{J}-1)\Theta_{J}+1\right)\Bigg{[}\mathcal{F}_{0}\,\frac{e^{\gamma_{E}\omega_{J}}}{\Gamma(-\omega_{J})}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left((V_{S}-1)\Theta_{S}+1\right)\mathcal{F}_{1}\,\frac{e^{\gamma_{E}(\omega_{J}+2\omega_{S})}}{\Gamma(-\omega_{J}-2\omega_{S})}\Bigg{]}.\]
The result is written in a similar form to the LL result of Eq. (25), with the exclusive two-photon final state on the first line, and the endpoint contribution on the last two. We note that \(\Gamma(x)\) is the Euler gamma function, not to be confused with the cusp anomalous dimensions introduced below. All functions in the result are identical to the NLL wino expression given in Ref. [64], except for \(\mathcal{F}_{0}\) and \(\mathcal{F}_{1}\), which account for the various Sommerfeld channels. For completeness, let us first restate the elements common to the wino, again referring to Ref. [64] for additional details. Firstly, the evolution of the hard function is encapsulated in
\[U_{H}=r_{H}^{2}\exp\bigg{\{}-\frac{2\Gamma_{0}}{\beta_{0}^{2}}\left[\frac{1}{\tilde{\alpha}_{W}(2M_{\chi})}\left(\ln r_{H}+\frac{1}{r_{H}}-1\right)\right. \tag{29}\] \[\left.+\left(\frac{\Gamma_{1}}{\Gamma_{0}}-\frac{\beta_{1}}{\beta_{0}}\right)(r_{H}-1-\ln r_{H})-\frac{\beta_{1}}{2\beta_{0}}\ln^{2}r_{H}\right]\bigg{\}}.\]
This result is written in terms of the first two perturbative orders of the \(\beta\) function and cusp
anomalous dimension,
\[\beta_{0}=\frac{19}{6},\ \ \ \ \beta_{1}=-\frac{35}{6},\ \ \ \ \Gamma_{0}=4,\ \ \ \ \Gamma_{1}=\frac{8}{9}(35-\pi^{2}), \tag{30}\]
as well as the ratio of the coupling between scales \(r_{H}=\alpha_{ W}(m_{ W})/\alpha_{ W}(2M_{\chi})\). The evolution of the jet and soft functions is contained in
\[V_{J} =\exp\bigg{\{}\frac{2\Gamma_{0}}{\beta_{0}^{2}}\left[\frac{1}{ \tilde{\alpha}_{ W}(\mu_{J}^{0})}\left(\ln r_{J}+\frac{1}{r_{J}}-1\right)+\left(\frac{ \Gamma_{1}}{\Gamma_{0}}-\frac{\beta_{1}}{\beta_{0}}\right)(r_{J}-1-\ln r_{J} )-\frac{\beta_{1}}{2\beta_{0}}\ln^{2}r_{J}\right]-\ln r_{J}\bigg{\}},\] \[V_{S} =\exp\bigg{\{}-\frac{3\Gamma_{0}}{2\beta_{0}^{2}}\left[\frac{1}{ \tilde{\alpha}_{ W}(\mu_{S}^{0})}\left(\ln r_{S}+\frac{1}{r_{S}}-1\right)+\left(\frac{ \Gamma_{1}}{\Gamma_{0}}-\frac{\beta_{1}}{\beta_{0}}\right)(r_{S}-1-\ln r_{S} )-\frac{\beta_{1}}{2\beta_{0}}\ln^{2}r_{S}\right]\bigg{\}},\] \[\omega_{J} =-\frac{2\Gamma_{0}}{\beta_{0}}\left[\ln r_{J}+\tilde{\alpha}_{ W}(\mu_{J}^{0})\left(\frac{\Gamma_{1}}{\Gamma_{0}}-\frac{\beta_{1}}{\beta_{0}} \right)(r_{J}-1)\right]\Theta_{J}, \tag{31}\] \[\omega_{S} =\frac{3\Gamma_{0}}{2\beta_{0}}\left[\ln r_{S}+\tilde{\alpha}_{ W}(\mu_{S}^{0})\left(\frac{\Gamma_{1}}{\Gamma_{0}}-\frac{\beta_{1}}{\beta_{0}} \right)(r_{S}-1)\right]\Theta_{S}.\]
Here the ratio of scales are given by \(r_{J}=\alpha_{ W}(m_{ W})/\alpha_{ W}(\mu_{J}^{0})\) and \(r_{S}=\alpha_{ W}(m_{ W})/\alpha_{ W}(\mu_{S}^{0})\), written in terms of the canonical scales \(\mu_{J}^{0}=2M_{\chi}\sqrt{1-z}\) and \(\mu_{S}^{0}=2M_{\chi}(1-z)\). Further, \(\Theta_{J}\) and \(\Theta_{S}\) are as defined in Eq. (27).
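A compact numerical sketch of the hard evolution factor \(U_{H}\) of Eq. (29) and the exponent \(\omega_{J}\) of Eq. (31) is given below; it uses one-loop running for \(\alpha_{W}\) purely for illustration, the input value of \(\alpha_{W}(m_{W})\) is a placeholder, and \(V_{J}\), \(V_{S}\), and \(\omega_{S}\) follow from the analogous expressions in Eq. (31).

```python
# Sketch of the NLL evolution factors of Eqs. (29)-(31).  One-loop running is
# used for alpha_W for illustration only; numerical inputs are placeholders.
import numpy as np

b0, b1 = 19/6, -35/6                     # beta-function coefficients, Eq. (30)
G0, G1 = 4.0, 8/9 * (35 - np.pi**2)      # cusp anomalous dimension, Eq. (30)
mW, alphaW_mW = 80.4, 0.0335

def alphaW(mu):
    """One-loop SU(2) running from mu = m_W (illustration only)."""
    return alphaW_mW / (1 + alphaW_mW * b0 / (2 * np.pi) * np.log(mu / mW))

def U_H(Mchi):
    aH = alphaW(2 * Mchi)
    rH = alphaW(mW) / aH
    return rH**2 * np.exp(-2 * G0 / b0**2 * (
        (np.log(rH) + 1 / rH - 1) * 4 * np.pi / aH
        + (G1 / G0 - b1 / b0) * (rH - 1 - np.log(rH))
        - b1 / (2 * b0) * np.log(rH)**2))

def omega_J(Mchi, z):
    muJ = 2 * Mchi * np.sqrt(1 - z)
    aJ = alphaW(muJ)
    rJ = alphaW(mW) / aJ
    TJ = 1.0 if z < 1 - mW**2 / (4 * Mchi**2) else 0.0
    return -2 * G0 / b0 * (np.log(rJ)
                           + aJ / (4 * np.pi) * (G1 / G0 - b1 / b0) * (rJ - 1)) * TJ

print(U_H(13.6e3), omega_J(13.6e3, 0.99))
```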
The last terms to be defined are those unique for the quintuplet. For those, we have
\[\mathcal{F}_{0} =\left[36\Lambda^{d}+18r_{HS}^{12/\beta_{0}}\Lambda^{c}\right]|s_ {00}|^{2}+\left[72\Lambda^{d}+9r_{HS}^{12/\beta_{0}}\Lambda^{c}\right]|s_{0 \pm}|^{2} \tag{32}\] \[+\left[72\Lambda^{d}+36r_{HS}^{12/\beta_{0}}\Lambda^{c}\right]|s_ {0\pm\pm}|^{2}+\sqrt{2}\left[72\Lambda^{d}+18r_{HS}^{12/\beta_{0}}\Lambda^{c} \right]\text{Re}(s_{00}s_{0\pm}^{*})\] \[+\sqrt{2}\left[72\Lambda^{d}-36r_{HS}^{12/\beta_{0}}\Lambda^{c} \right]\text{Re}(s_{00}s_{0\pm\pm}^{*})+\left[144\Lambda^{d}-36r_{HS}^{12/ \beta_{0}}\Lambda^{c}\right]\text{Re}(s_{0\pm}s_{0\pm\pm}^{*}),\]
and
\[\mathcal{F}_{1} =r_{H}^{6/\beta_{0}}\left(\left[18r_{HS}^{6/\beta_{0}}\Lambda^{a} -72c_{H}\Lambda^{b}\right]|s_{00}|^{2}+\left[9r_{HS}^{6/\beta_{0}}\Lambda^{a} -72c_{H}\Lambda^{b}\right]|s_{0\pm}|^{2}\right. \tag{33}\] \[+\left[36r_{HS}^{6/\beta_{0}}\Lambda^{a}+144c_{H}\Lambda^{b} \right]|s_{0\pm\pm}|^{2}+\sqrt{2}\left[18r_{HS}^{6/\beta_{0}}\Lambda^{a}-108c _{H}\Lambda^{b}\right]\text{Re}(s_{00}s_{0\pm}^{*})\] \[+\sqrt{2}\left[-36r_{HS}^{6/\beta_{0}}\Lambda^{a}\right]\text{Re }(s_{00}s_{0\pm\pm}^{*})+\left[-36r_{HS}^{6/\beta_{0}}\Lambda^{a}+72c_{H} \Lambda^{b}\right]\text{Re}(s_{0\pm}s_{0\pm\pm}^{*})\] \[\left.+\sqrt{2}\left[-36s_{H}\Lambda^{b}\right]\text{Im}(s_{00}s_{0 \pm}^{*})+\sqrt{2}\left[-144s_{H}\Lambda^{b}\right]\text{Im}(s_{00}s_{0\pm \pm}^{*})+\left[-216s_{H}\Lambda^{b}\right]\text{Im}(s_{0\pm}s_{0\pm\pm}^{*}) \right)\!.\]
These expressions introduce \(r_{HS}=r_{H}/r_{S}\), as well as
\[c_{H}=\cos\bigg{(}\frac{6\pi}{\beta_{0}}\ln r_{H}\bigg{)},\ \ \ \ \ \ \ \ s_{H}=\sin\bigg{(}\frac{6\pi}{\beta_{0}}\ln r_{H}\bigg{)}, \tag{34}\]
and further four functions \(\Lambda^{a-d}\). These functions are as follows, and take the same form as for the wino,
\[\begin{split}\Lambda^{a}&=1+\tilde{\alpha}_{W}(\mu_{J}^{0})\left[\Gamma_{0}\Delta^{(2)}_{JSJ}+\beta_{0}\Delta^{(1)}_{JS}\right]\Theta_{J}-12\tilde{\alpha}_{W}(\mu_{S}^{0})\left[\Delta^{(2)}_{JSS}-\Delta^{(1)}_{JS}\right]\Theta_{S},\\ \Lambda^{b}&=1+\tilde{\alpha}_{W}(\mu_{J}^{0})\left[\Gamma_{0}\Delta^{(2)}_{JSJ}+\beta_{0}\Delta^{(1)}_{JS}\right]\Theta_{J}-12\tilde{\alpha}_{W}(\mu_{S}^{0})\Delta^{(2)}_{JSS}\Theta_{S},\\ \Lambda^{c}&=1+\tilde{\alpha}_{W}(\mu_{J}^{0})\left[\Gamma_{0}\Delta^{(2)}_{J}+\beta_{0}\Delta^{(1)}_{J}\right]\Theta_{J}+24\tilde{\alpha}_{W}(\mu_{S}^{0})\Delta^{(1)}_{J}\Theta_{S},\\ \Lambda^{d}&=1+\tilde{\alpha}_{W}(\mu_{J}^{0})\left[\Gamma_{0}\Delta^{(2)}_{J}+\beta_{0}\Delta^{(1)}_{J}\right]\Theta_{J},\end{split} \tag{35}\]
with the functions \(\Delta\) written in terms of the polygamma function of order \(m\), \(\psi^{(m)}\), as follows,
\[\begin{split}\Delta^{(1)}_{J}&=\gamma_{E}+\psi^{(0 )}(-\omega_{J}),\\ \Delta^{(2)}_{J}&=\left(\gamma_{E}+\psi^{(0)}(- \omega_{J})\right)^{2}-\psi^{(1)}(-\omega_{J}),\\ \Delta^{(1)}_{JS}&=\gamma_{E}+\psi^{(0)}(-\omega_{J }-2\omega_{S}),\\ \Delta^{(2)}_{JSJ}&=\Delta^{(2)}_{JSS}=\left( \gamma_{E}+\psi^{(0)}(-\omega_{J}-2\omega_{S})\right)^{2}-\psi^{(1)}(-\omega_ {J}-2\omega_{S}).\end{split} \tag{36}\]
Finally, note that in the exclusive contribution to Eq. (28), we use the notation \(\Lambda\to 1\) to imply all of the functions in Eq. (35) are set to unity.
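For completeness, the \(\Delta\) functions of Eq. (36) and the resulting \(\Lambda^{a-d}\) of Eq. (35) can be evaluated numerically from the polygamma function. A minimal sketch follows, with placeholder inputs for \(\omega_{J}\), \(\omega_{S}\), and the couplings, which in practice would be supplied by the evolution factors above:

```python
# Sketch of the Delta functions of Eq. (36) and the Lambda^{a-d} of Eq. (35).
# omega_J, omega_S and atJ/atS = alpha_W(mu_{J,S}^0)/(4 pi) are inputs here;
# the values in the example call are illustrative placeholders.
import numpy as np
from scipy.special import polygamma

gE, G0, b0 = np.euler_gamma, 4.0, 19/6

def deltas(wJ, wS):
    d1J  = gE + polygamma(0, -wJ)
    d2J  = d1J**2 - polygamma(1, -wJ)
    d1JS = gE + polygamma(0, -wJ - 2*wS)
    d2JS = d1JS**2 - polygamma(1, -wJ - 2*wS)   # Delta^(2)_{JSJ} = Delta^(2)_{JSS}
    return d1J, d2J, d1JS, d2JS

def Lambdas(wJ, wS, atJ, atS, TJ=1.0, TS=1.0):
    d1J, d2J, d1JS, d2JS = deltas(wJ, wS)
    La = 1 + atJ*(G0*d2JS + b0*d1JS)*TJ - 12*atS*(d2JS - d1JS)*TS
    Lb = 1 + atJ*(G0*d2JS + b0*d1JS)*TJ - 12*atS*d2JS*TS
    Lc = 1 + atJ*(G0*d2J  + b0*d1J )*TJ + 24*atS*d1J*TS
    Ld = 1 + atJ*(G0*d2J  + b0*d1J )*TJ
    return La, Lb, Lc, Ld

print(Lambdas(-0.2, 0.05, 2.5e-3, 2.7e-3))
```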
Whilst the NLL expression is more involved than the LL result, the associated theoretical uncertainties are significantly reduced. This is demonstrated in the right of Fig. 1, where we show the cumulative, or integrated \(d\sigma/dz\), taken from a given \(z_{\rm cut}\) to 1. The fact that H.E.S.S. (or indeed any real imaging air Cherenkov telescope) does not have perfect energy resolution is reflected in the physically appropriate \(z_{\rm cut}\) lying away from unity.
We will further explore these results in Sec. 5. Before doing so, however, we turn to the additional contribution to the spectrum that can result from bound states.
## 3 Bound State Formation
In this section we work out the rate of formation of relevant bound states, before considering the application of the SCET formalism to their annihilation in Sec. 4. The general formalism we employ is based on the methods of Ref. [70] for general non-Abelian gauge groups (see also Refs. [68; 2]). Throughout this work, we consider only single-vector-boson emission in the dipole approximation. We first review the key equations and define our notation, then work out the form of the generators and potential with our basis conventions (Sec. 3.1), and use these results to evaluate the cross sections for bound-state formation via emission of a photon (Sec. 3.2) and of weak gauge bosons (Sec. 3.3). We note already that when SU(2) is broken, the velocity dependence of bound-state formation differs from that of Sommerfeld-enhanced direct annihilation; we will present a discussion of this issue and the uncertainties associated with the velocity distribution of the DM halo when we turn to our numerical results in Sec. 5.2.
If we label the different states in the multiplet containing DM as \(\chi_{i}\) where \(i\) runs from 1 to 5, then for capture of a \(\chi_{i}\chi_{j}\) initial state into a \(\chi_{i^{\prime}}\chi_{j^{\prime}}\) bound state with quantum numbers (\(nlm\)), where all particles have equal masses \(M_{\chi}\) and the emitted particle has color index \(a\) and can be approximated as massless, the amplitude for radiative capture into a bound state (stripping off the polarization vector for the outgoing gauge boson) is given by Ref. [70]:
\[\mathcal{M}^{a}_{ii^{\prime},jj^{\prime}}=-\sqrt{2^{8}\pi\alpha_{\rm rad}M_{ \chi}}\,\biggl{\{}-if^{abc}(T_{1}^{b})_{i^{\prime}i}(T_{2}^{c})_{j^{\prime}j} \mathcal{Y}+\frac{1}{2}\left[(T_{1}^{a})_{i^{\prime}i}\delta_{j^{\prime}j}-(T _{2}^{a})_{j^{\prime}j}\delta_{i^{\prime}i}\right]\mathcal{J}\biggr{\}}. \tag{3.1}\]
Let us define the various notations introduced in this amplitude. Firstly, \(\alpha_{\rm rad}\) specifies the coupling associated with the radiated gauge boson: in our case, \(\alpha_{\rm rad}=\alpha\) for a radiated photon,9\(\alpha_{\rm rad}=c_{W}^{2}\alpha_{W}\) for a radiated \(Z\) boson, and \(\alpha_{\rm rad}=\alpha_{W}\) for a radiated \(W\) boson. \(T_{1}\) and \(T_{2}\) denote the generators of the representation associated with the \(\chi_{i}\) and \(\chi_{j}\) particles respectively, whilst \(f^{abc}\) are the structure constants. (We emphasize that for the moment we are discussing the more general result where in principle \(\chi_{i}\) and \(\chi_{j}\) could be in different representations. Shortly, we will specialize to the case where \(T_{1}=T_{2}=T_{\bf 5}\) appropriate for the quintuplet.) Finally, we have
Footnote 9: We denote the electromagnetic fine structure constant by \(\alpha=s_{ W}^{2}\alpha_{ W}\).
\[\begin{split}\mathcal{Y}&=4\pi M_{\chi}\,\alpha_{ \rm NA}\int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac {{\bf q}-{\bf p}}{({\bf q}-{\bf p})^{4}}\tilde{\psi}^{*}_{nlm}({\bf p})\tilde {\phi}({\bf q}),\\ \mathcal{J}&\simeq\int\frac{d^{3}{\bf p}}{(2\pi)^{3} }\,{\bf p}\,\tilde{\psi}^{*}_{nlm}({\bf p})\tilde{\phi}({\bf p}).\end{split} \tag{3.2}\]
In the expression for \(\mathcal{J}\) we have made the approximation of neglecting the momentum of the outgoing gauge boson. \(\alpha_{\rm NA}\) is the coupling between the fermions and the \(t\)-channel gauge boson exchanged between them to support the potential, for the diagram where the bound state formation occurs through the emission of a gauge boson from the potential line. The two possible emission channels are depicted in Fig. 2. For example, when the bound-state formation occurs through emission of a photon or \(Z\) boson, the exchanged gauge boson will be a \(W\) boson, and so we will have \(\alpha_{\rm NA}=\alpha_{W}\). The \(\tilde{\psi}_{nlm}\) and \(\tilde{\phi}\) wavefunctions are the momentum-space wavefunctions of the final and initial states respectively (the corresponding real-space wavefunctions are labeled \(\psi_{nlm}\) and \(\phi\)).
Figure 2: Due to the non-Abelian nature of the gauge bosons generating the potential, the emission necessary for bound state (BS) formation can occur either from the external fermion line (left), or from the potential itself (right).
The \(\mathcal{Y}\) and \(\mathcal{J}\) coefficients can be rewritten in position space as,10
Footnote 10: We follow the Fourier transformation conventions of Ref. [70].
\[\begin{split}\mathcal{Y}&=-\frac{i}{2}M_{\chi}\alpha_ {\text{\tiny{NA}}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,\psi^{*}_{nlm}( \mathbf{r})\phi(\mathbf{r}),\\ \mathcal{J}&\simeq-i\int d^{3}\mathbf{r}\,\psi^{*}_ {nlm}(\mathbf{r})\nabla\phi(\mathbf{r}).\end{split} \tag{3.3}\]
In subsequent equations we will suppress the \(\mathbf{r}\) dependence of the position-space wavefunctions for notational convenience. We emphasize that these expressions tacitly assume the two particles are distinguishable; we will follow the conventions of Ref. [68] for the normalization of two-particle states, which can introduce a factor of \(\sqrt{2}\) into terms involving transitions between states of identical and non-identical particles.11 Breaking SU(2) can also introduce an additional multiplicative factor inside the integral for \(\mathcal{Y}\), arising from the propagators in the potential line from which the particle is emitted; for the case of photon or \(Z\) emission, these are \(W\) propagators, and the additional factor takes the form \(e^{-m_{W}r}\)[68]. We will work out the correct replacement in the case of \(W\) emission later in this section.
Footnote 11: We emphasize that there is a subtlety here associated with the ordering of particles in the two-particle states, which can induce a sign flip that must be treated carefully. We discuss this in App. E.
Throughout this section, when solving for the wavefunctions for the initial and final states given a specific potential, we will adopt the normalization conventions and numerical approach of Ref. [68]. (We note that there was a minus sign error in the equation for bound-state formation in the original published version of Ref. [68], which has since been corrected in an erratum.)
### Generators and the potential
In general there is a degree of freedom to choose the basis for our generators, but since we are interested in transitions between two-body states whose constituents are mass eigenstates distinguished by their charges, it is convenient to use the basis discussed in App. A, where the \(\chi_{1}\), \(\chi_{2}\), \(\chi_{3}\), \(\chi_{4}\), \(\chi_{5}\) states correspond to states with electric charges \(+2\), \(+1\), \(0\), \(-1\) and \(-2\), as given in Eq. (A.9). It is important that the basis used to compute the bound-state formation rate and the basis used to compute the potential are identical; we will require the potential when solving for the initial- and final-state wavefunctions, which are relevant both for bound-state formation and for the Sommerfeld enhancement to direct annihilation. In this basis, we obtain for the generators:
\[\begin{split} T^{1}_{\mathbf{5}}&=\frac{1}{\sqrt{2 }}\begin{pmatrix}0&\sqrt{2}&0&0&0\\ \sqrt{2}&0&\sqrt{3}&0&0\\ 0&\sqrt{3}&0&\sqrt{3}&0\\ 0&0&\sqrt{3}&0&\sqrt{2}\\ 0&0&0&\sqrt{2}&0\\ \end{pmatrix},\quad T^{2}_{\mathbf{5}}&=\frac{i}{\sqrt{2}}\begin{pmatrix}0&- \sqrt{2}&0&0&0\\ \sqrt{2}&0&-\sqrt{3}&0&0\\ 0&\sqrt{3}&0&-\sqrt{3}&0\\ 0&0&\sqrt{3}&0&-\sqrt{2}\\ 0&0&0&\sqrt{2}&0\\ \end{pmatrix},\\ T^{3}_{\mathbf{5}}&=\text{diag}(2,1,0,-1,-2).\end{split} \tag{3.4}\]
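These generators can be checked numerically; the short sketch below is a standalone cross-check rather than part of the main calculation, verifying that they satisfy the su(2) algebra and the expected quintuplet Casimir.

```python
# Cross-check that the spin-2 generators of Eq. (3.4) obey [T^a, T^b] = i eps^{abc} T^c
# and T^a T^a = j(j+1) 1 with j = 2.
import numpy as np

s2, s3 = np.sqrt(2), np.sqrt(3)
T1 = np.array([[0, s2, 0, 0, 0],
               [s2, 0, s3, 0, 0],
               [0, s3, 0, s3, 0],
               [0, 0, s3, 0, s2],
               [0, 0, 0, s2, 0]], dtype=complex) / np.sqrt(2)
T2 = 1j / np.sqrt(2) * np.array([[0, -s2, 0, 0, 0],
                                 [s2, 0, -s3, 0, 0],
                                 [0, s3, 0, -s3, 0],
                                 [0, 0, s3, 0, -s2],
                                 [0, 0, 0, s2, 0]])
T3 = np.diag([2, 1, 0, -1, -2]).astype(complex)

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(T1, T2), 1j * T3),
      np.allclose(comm(T2, T3), 1j * T1),
      np.allclose(comm(T3, T1), 1j * T2))                        # True True True
print(np.allclose(T1 @ T1 + T2 @ T2 + T3 @ T3, 6 * np.eye(5)))   # True (j(j+1) = 6)
```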
The potential, up to terms corresponding to mass splittings between the two-body states (whose contribution is spelled out in App. A), can be written in the following form [72, 4]
\[V_{ij;i^{\prime}j^{\prime}}=N_{ij}N_{i^{\prime}j^{\prime}}\sum_{AB}K_{AB}\left(T^{A}_{ii^{\prime}}T^{B}_{jj^{\prime}}+(-1)^{L+S}T^{A}_{ij^{\prime}}T^{B}_{ji^{\prime}}\right)\frac{e^{-m_{A}r}}{4\pi r},\quad K=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}. \tag{3.5}\]
Here \(N_{ij}=1\) if \(i\neq j\) and \(1/\sqrt{2}\) if \(i=j\) (this corresponds to the aforementioned change in normalization for two-body states composed of identical vs distinguishable particles), and the indices \(A\), \(B\) run over \(\{\gamma,Z,W^{+},W^{-}\}\). The gauge couplings are included in the generators in this notation, following the conventions of Ref. [4]: explicitly, \(T^{\gamma}=g_{W}\sin\theta_{W}T^{3}\), \(T^{Z}=g_{W}\cos\theta_{W}T^{3}\), \(T^{W^{+}}=g_{W}T^{+}\), \(T^{W^{-}}=g_{W}T^{-}\), with \(T^{\pm}=(T^{1}\pm iT^{2})/\sqrt{2}\). Note that the \((-1)^{L+S}\) factor arises from treating the \(ij\) and \(ji\) states as representatives of a single two-body state, using the conventions employed in method-2 of Ref. [72] and also discussed in App. E. (Here \(L\) denotes orbital angular momentum and \(S\) denotes spin; see Ref. [72] for a detailed discussion.) This sign was also discussed in the context of the potential for two-body states with net charge \(Q=\pm 1\) in Ref. [2].
Thus, with the basis above, we obtain the following potential for the case with \(L+S\) even, where the 1st row/column corresponds to the \(\chi^{++}\chi^{--}\) two-body state (\(\chi_{1}\chi_{5}\)), the 2nd row/column corresponds to the \(\chi^{+}\chi^{-}\) state (\(\chi_{2}\chi_{4}\)), and the 3rd row/column corresponds to the \(\chi^{0}\chi^{0}\) state (\(\chi_{3}\chi_{3}\)):
\[V(r)=\alpha_{W}\begin{pmatrix}-4\left(c_{W}^{2}\frac{e^{-m_{Z}r}}{r}+\frac{s_{W}^{2}}{r}\right)&2\frac{e^{-m_{W}r}}{r}&0\\ 2\frac{e^{-m_{W}r}}{r}&-\left(c_{W}^{2}\frac{e^{-m_{Z}r}}{r}+\frac{s_{W}^{2}}{r}\right)&3\sqrt{2}\frac{e^{-m_{W}r}}{r}\\ 0&3\sqrt{2}\frac{e^{-m_{W}r}}{r}&0\end{pmatrix}. \tag{3.6}\]
Note that the signs of the off-diagonal terms are opposite to the potential matrix given in Ref. [4] (and the analogue for the wino employed in Ref. [68]); this is a basis-dependent choice and either option is correct provided it is used self-consistently throughout the calculation. The effect of changing the basis in a way that modifies the signs in the off-diagonal terms of the potential is to flip the sign of one or more components of the resulting solution for the wavefunction; this compensates the changes in sign in generator elements in the new basis, when computing the bound-state wavefunctions.
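As a concrete reference point, the tree-level form of Eq. (3.6) can be coded directly; the sketch below is illustrative only (the numerics in this work use the NLO potential described next, and the parameter values here are placeholders).

```python
# Sketch of the tree-level Q = 0 potential matrix of Eq. (3.6), for L + S even,
# in the basis (chi^++ chi^--, chi^+ chi^-, chi^0 chi^0).  r is in GeV^-1.
import numpy as np

alphaW, sw2 = 0.0335, 0.231          # illustrative values
cw2, mW, mZ = 1 - sw2, 80.4, 91.2    # GeV

def V_Q0(r):
    yukZ = cw2 * np.exp(-mZ * r) / r + sw2 / r    # gamma + Z exchange
    yukW = np.exp(-mW * r) / r                    # W exchange
    s2 = np.sqrt(2)
    return alphaW * np.array([[-4 * yukZ,      2 * yukW,           0.0],
                              [ 2 * yukW,     -yukZ,      3 * s2 * yukW],
                              [ 0.0,      3 * s2 * yukW,           0.0]])

print(V_Q0(1e-3))   # potential matrix at r = 10^-3 GeV^-1
```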
It is also possible to go beyond the tree-level potential of Eq. (3.6) and include NLO corrections. Especially in proximity to resonances, the resulting modifications to the Sommerfeld enhancement and bound-state formation rate can be substantial [5]. We employ the analytic fitting functions for the NLO potential calculated by Refs. [5, 6]. Specifically, we make the
following replacements in Eq. (3.6), where \(L\equiv\ln(m_{W}r)\):
\[e^{-m_{ W}r} \to e^{-m_{ W}r}+\frac{2595}{\pi}\alpha_{ W}\begin{cases}(-1)\text{ exp}\left[-\frac{79\left(L-\frac{787}{12}\right)\left(L-\frac{736}{375}\right)\left(L- \frac{116}{65}\right)\left(L^{2}-\frac{286L}{59}+\frac{533}{77}\right)}{34 \left(L-\frac{512}{376}\right)\left(L-\frac{501}{281}\right)\left(L^{2}-\frac{ 286L}{61}+\frac{498}{7}\right)}\right],&m_{ W}r<555/94\\ \text{ exp}\left[-\frac{13267(L-\frac{76}{43})\left(L-\frac{28}{37}\right) \left(L-\frac{37}{30}\right)\left(L-\frac{389L}{88}+\frac{179}{129}\right)}{5 \left(L-\frac{191}{108}\right)\left(L-\frac{256}{153}\right)\left(L+\frac{8412 }{13}\right)\left(L^{2}-\frac{257L}{103}+\frac{179}{146}\right)}\right]&m_{ W}r>555/94\end{cases}\] \[c_{ W}^{2}e^{-m_{ Z}r} \to c_{ W}^{2}e^{-m_{ Z}r}+\alpha_{ W}\left[-\frac{80}{9}\,\frac{s_{ W}^{4}\left(\ln(m_{ Z}r)+\gamma_{E}\right)}{2\pi(1+(32/11)(m_{ W}r)^{-22/9})}+\frac{\left(\frac{19}{6}\ln(m_{ Z}r)-\frac{1960}{433}\right)}{2\pi(1+(7/59)(m_{ W}r)^{61/29})}\right. \tag{3.7}\] \[\left.-\frac{s_{ W}^{2}\left(-\frac{1}{30}+\frac{4}{135}\ln(m_{ W}r)\right)}{1+(58/79)(m_{ W}r)^{-17/15}+(1/30)(m_{ W}r)^{119/120}+(8/177)(m_{ W}r)^{17/8}}\right].\]
We will use the NLO potential for all calculations performed in this work. In particular, in addition to using it to compute the capture cross-sections in this section, we will also use it to compute the Sommerfeld enhancement appropriate for the direct annihilation discussed in Sec. 2. We note, however, that the relic abundance calculations in Refs. [2; 3], which we rely on for our value of the thermal mass \(M_{\chi}=13.6\pm 0.8\) TeV, did not use the NLO potential. As we will see in Sec. 5, our findings for the quintuplet vary considerably across the thermal mass range, and so updating that calculation to NLO is an important improvement left for future work.
### Bound state formation through emission of a photon
For the quintuplet it will often be possible to form a bound state through emission of \(W\) or \(Z\) gauge bosons, but let us first consider the case where the radiated particle is a photon, as this channel is available for all DM masses and provides the closest analogy to previous studies of the wino (_e.g._ Ref. [68]). In this case we have \(\alpha_{\text{rad}}=\alpha\), \(\alpha_{\text{NA}}=\alpha_{ W}\), and \(a=3\) (since the photon obtains its SU(2) couplings through the \(W^{3}\) component). Let us also assume both incoming particles are in the same representation (as appropriate for Majorana fermions), and \(T\) denotes the generators of that representation, so we can write:
\[\mathcal{M}^{3}_{ii^{\prime},jj^{\prime}} = i\sqrt{2^{6}\pi\alpha M_{\chi}}\,\biggl{\{}-if^{123}\left[(T^{1 })_{i^{\prime}i}(T^{2})_{j^{\prime}j}-(T^{2})_{i^{\prime}i}(T^{1})_{j^{\prime} j}\right]M_{\chi}\alpha_{ W}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,e^{-m_{ W}r}\psi^{*}_{nlm}\phi \tag{3.8}\] \[+ \left[(T^{3})_{i^{\prime}i}\delta_{j^{\prime}j}-(T^{3})_{j^{ \prime}j}\delta_{i^{\prime}i}\right]\int d^{3}\mathbf{r}\,\psi^{*}_{nlm}\nabla \phi\biggr{\}}.\]
From here, substituting in the generators above, we find the following non-zero matrix elements for bound-state formation
\[\begin{split}\mathcal{M}^{3}_{32,34}&=i\sqrt{2^{6}\pi\alpha M_{\chi}}\,\biggl{\{}3M_{\chi}\alpha_{W}\int d^{3}\mathbf{r}\,e^{-m_{W}r}\,\hat{\mathbf{r}}\,\psi^{*}_{nlm}\phi\biggr{\}},\\ \mathcal{M}^{3}_{22,44}&=i\sqrt{2^{6}\pi\alpha M_{\chi}}\,\biggl{\{}2\int d^{3}\mathbf{r}\,\psi^{*}_{nlm}\nabla\phi\biggr{\}},\\ \mathcal{M}^{3}_{12,54}&=i\sqrt{2^{6}\pi\alpha M_{\chi}}\,\biggl{\{}-2M_{\chi}\alpha_{W}\int d^{3}\mathbf{r}\,e^{-m_{W}r}\,\hat{\mathbf{r}}\,\psi^{*}_{nlm}\phi\biggr{\}},\end{split} \tag{3.9}\]
which correspond to capture into the \(\chi^{+}\chi^{-}\) bound state component from the \(\chi^{0}\chi^{0}\), \(\chi^{+}\chi^{-}\), and \(\chi^{++}\chi^{--}\) initial state components respectively. The equivalent matrix elements for capture into the \(\chi^{++}\chi^{--}\) bound-state component are given by
\[\begin{split}&{\cal M}^{3}_{11,55}=i\sqrt{2^{6}\pi\alpha M_{\chi}} \,\biggl{\{}4\int d^{3}{\bf r}\,\psi^{*}_{nlm}\nabla\phi\biggr{\}},\\ &{\cal M}^{3}_{21,45}=i\sqrt{2^{6}\pi\alpha M_{\chi}}\,\biggl{\{} 2M_{\chi}\alpha_{ W}\int d^{3}{\bf r}\,e^{-m_{ W}r}\,\hat{\bf r}\,\psi^{*}_{nlm} \phi\biggr{\}}.\end{split} \tag{3.10}\]
Combining these matrix elements, and including a factor of \(\sqrt{2}\) for the capture from the \(\chi^{0}\chi^{0}\) to \(\chi^{+}\chi^{-}\) state [68] to account for the differing normalization of states built from identical and distinguishable particles, we can write the cross section for bound-state formation as [70]:
\[\begin{split}\sigma v&=\int d\Omega_{k}\frac{k}{2^{ 7}\pi^{2}M_{\chi}^{3}}|\mathbf{\epsilon}(\hat{\bf k})\cdot{\cal M}|^{2}\\ &=\frac{2\alpha k}{\pi M_{\chi}^{2}}\int d\Omega_{k}\,\biggl{|} \int d^{3}{\bf r}\,\mathbf{\epsilon}(\hat{\bf k})\cdot[(2\psi^{*}_{CC}\nabla\phi_ {CC}+\psi^{*}_{C}\nabla\phi_{C})\\ &+\frac{1}{2}M_{\chi}\alpha_{ W}\hat{\bf r}\,e^{-m_{ W}r}\,\biggl{(}2\psi^{*}_{CC}\phi_{C}-2\psi^{*}_{C}\phi_{CC}+3\sqrt{2}\psi^{*}_{C} \phi_{N}\biggr{)}\biggr{]}\biggr{|}^{2},\end{split} \tag{3.11}\]
where \(k\) and \(\mathbf{\epsilon}(\hat{\bf k})\) respectively denote the momentum and polarization of the outgoing photon; a \(CC\) subscript indicates the \(\chi^{++}\chi^{--}\) component, \(C\) indicates \(\chi^{+}\chi^{-}\), and \(N\) indicates \(\chi^{0}\chi^{0}\). As previously, \(\psi\) and \(\phi\) indicate the final bound-state and initial wavefunctions, respectively.
Note that the potential of Eq. (3.6) is only accurate as written for two-body states with angular momentum quantum numbers \(L+S\) summing to an even value. If \(L+S\) is odd, the state is symmetric under particle exchange and cannot support a pair of identical fermions, and consequently the rows and columns corresponding to the \(\chi^{0}\chi^{0}\) state must be zeroed out. We are primarily interested in the behavior of quintuplet DM in the Milky Way halo, where on-shell charginos are not likely to be kinematically allowed (exciting the \(\chi^{+}\chi^{-}\) state requires 164 MeV of kinetic energy per particle, which for a Milky Way escape velocity of \(\sim\)500 km/s requires \(M_{\chi}\gtrsim 120\) TeV). Consequently we will always assume the initial two-body state is \(\chi^{0}\chi^{0}\) (at large separation) and so has \(L+S\) even; this means the bound state formed by the leading-order vector-boson emission will have \(L+S\) odd (the dipole selection rule is \(\Delta L=\pm 1\), \(\Delta S=0\)). The appropriate potentials are used to compute the wavefunctions for the scattering state (Eq. (3.6) as written) and the bound state (Eq. (3.6) with the third row/column removed).
### Bound state formation through emission of \(W\) and \(Z\) bosons
Since the SU(2) couplings of the \(Z\) boson are controlled by its \(W^{3}\) component, we can re-use the expression for capture via photon emission in the case of the \(Z\) boson, with the replacement \(\alpha\to c_{ W}^{2}\alpha_{ W}\) in the prefactor, and with the momentum \(k\) now depending on the mass of the \(Z\) boson,
\[k=\sqrt{\left(-E_{n}+\frac{M_{\chi}v^{2}}{4}\right)^{2}-m_{ Z}^{2}}. \tag{3.12}\]
Here the energy of the outgoing bound state is \(2M_{\chi}+E_{n}\), where \(E_{n}\) denotes the (negative) binding energy.12 In particular, this process is forbidden when \(m_{ Z}\) exceeds the available energy (_i.e._ the kinetic energy of the incoming particles and the (absolute value of the) binding energy of the final state).
Footnote 12: The convention here sets \(E_{n}=0\) as the rest mass energy of a pair of \(\chi^{0}\)s. For bound states that do not have a \(\chi^{0}\chi^{0}\) component, putting their constituents at infinite separation still leaves finite positive energy because of the charged/neutral mass-splitting, \(\delta_{0}\). Thus, in a capture process, the bound state carries off energy \(A\delta_{0}-(|E_{n}|+A\delta_{0})=-|E_{n}|\), where \(A\) is an integer that depends on the number of charged particles in the lightest component of the bound state. The second term, \(-(|E_{n}|+A\delta_{0})\) corresponds to the “ionization energy” necessary to separate the bound state into its constituents. We have neglected the subleading bound-state recoil kinetic energy. In this way we see that the boson emitted in the capture process has an energy independent of \(\delta_{0}\).
The emission of \(W^{\pm}\) bosons is more complicated as it involves a different set of matrix elements and Feynman diagrams, corresponding to formation of bound states with unit charge. In particular, when a \(W\) boson is emitted from the potential, the \(t\)-channel propagator must be a mixed \(W-Z\) or \(W-\gamma\) propagator, which modifies the structure of the matrix element. Performing the Fourier transform of the mixed propagator, we find that the appropriate replacement (compared to the photon-emission case where the propagator involves only \(W\) bosons) is:
\[e^{-m_{ W}r}\rightarrow\frac{2}{r^{2}(m_{ W}^{2}-m_{0}^{2})}\,\big{[}e^{-m_{0}r}(1+m_{0}r)-e^{-m_{ W}r}(1+m_{ W}r)\big{]}, \tag{3.13}\]
where \(m_{0}=m_{ Z}\) or \(0\) for the mixed \(W-Z\) and mixed \(W-\gamma\) propagator, respectively. Since the diagrams with these two propagator structures are identical except for the propagators and the coupling of the \(\gamma\) or \(Z\) to the fermion line, the sum of their contributions can be captured by inserting the propagator factor:
\[\begin{split}\zeta(r)\equiv\frac{2}{r^{2}}\,\bigg{[}& \frac{c_{ W}^{2}}{m_{ Z}^{2}-m_{ W}^{2}}\,\big{(}e^{-m_{ W}r}(1+m_{ W}r)-e^{-m_{ Z}r}(1+m_{ Z}r)\big{)}\\ +&\frac{s_{ W}^{2}}{m_{ W}^{2}}\,\big{(}1-e^{-m_{ W}r}(1+m_{ W}r)\big{)}\bigg{]}.\end{split} \tag{3.14}\]
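A small sketch of \(\zeta(r)\) is given below; as a sanity check it also verifies that, in the limit \(s_{W}\to 0\), \(m_{Z}\to m_{W}\), the mixed-propagator factor reduces to the pure-\(W\) factor \(e^{-m_{W}r}\) appearing in the photon-emission case. Parameter values are illustrative.

```python
# Sketch of the mixed-propagator factor zeta(r) of Eq. (3.14).  r in GeV^-1.
import numpy as np

def zeta(r, mW=80.4, mZ=91.2, sw2=0.231):
    cw2 = 1 - sw2
    tZ = cw2 / (mZ**2 - mW**2) * (np.exp(-mW*r) * (1 + mW*r) - np.exp(-mZ*r) * (1 + mZ*r))
    tg = sw2 / mW**2 * (1 - np.exp(-mW*r) * (1 + mW*r))
    return 2 / r**2 * (tZ + tg)

r = 1e-2
print(zeta(r))
# Degenerate, unmixed limit: zeta(r) -> e^{-m_W r}
print(zeta(r, mZ=80.4 * (1 + 1e-6), sw2=0.0), np.exp(-80.4 * r))
```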
Now repeating the calculation from the photon case for the case where the emitted gauge boson is \(W^{1}\) or \(W^{2}\) instead of \(W^{3}\), and inserting the \(\zeta(r)\) factors in the terms corresponding to \(W\) emission from the potential, we obtain the cross section for the \(Q=1\) case (the \(Q=-1\)
case is identical):
\[\begin{split}\sigma v&=\frac{2\alpha_{W}k}{\pi M_{\chi}^{2}}\int d\Omega_{k}\left|\mathbf{\epsilon}(\hat{\mathbf{k}})\cdot\int d^{3}\mathbf{r}\left[\sqrt{\frac{3}{2}}\psi_{+0}^{*}\nabla\phi_{N}-\frac{\sqrt{3}}{2}\psi_{+0}^{*}\nabla\phi_{C}\right.\right.\\ &\left.+\frac{1}{\sqrt{2}}\psi_{++-}^{*}\nabla\phi_{C}-\frac{1}{\sqrt{2}}\psi_{++-}^{*}\nabla\phi_{CC}\right.\\ &\left.\left.+\frac{1}{2}\hat{\mathbf{r}}\,\zeta(r)M_{\chi}\alpha_{W}\left(\sqrt{3}\psi_{+0}^{*}\phi_{C}+\sqrt{2}\psi_{++-}^{*}\phi_{C}+2\sqrt{2}\psi_{++-}^{*}\phi_{CC}\right)\right]\right|^{2}.\end{split} \tag{3.15}\]
Here the \(++-\) subscript denotes the component in the \(\chi^{++}\chi^{-}\) state, and the \(+0\) subscript denotes the component in the \(\chi^{+}\chi^{0}\) state, for the final \(Q=1\) bound state.
Note that the phase-space factor \(k\) for the outgoing \(W\) boson must be modified to:
\[k=\sqrt{\left(-E_{n}+\frac{M_{\chi}v^{2}}{4}\right)^{2}-m_{{}_{W}}^{2}}. \tag{3.16}\]
As for \(Z\) emission, this process is forbidden when \(m_{W}\) exceeds the kinetic energy of the incoming particles plus the (absolute value of the) binding energy of the final state.
The potential for the \(Q=\pm 1\) sector, needed to derive the wavefunctions for the bound states, is similarly given by:
\[V(r)=\alpha_{W}\begin{pmatrix}-2\left(\frac{s_{W}^{2}}{r}+\frac{c_{W}^{2}e^{-m_{Z}r}}{r}\right)&\sqrt{6}\,\frac{e^{-m_{W}r}}{r}\\ \sqrt{6}\,\frac{e^{-m_{W}r}}{r}&(-1)^{L+S}\,3\,\frac{e^{-m_{W}r}}{r}\end{pmatrix}, \tag{3.17}\]
where the first row/column corresponds to the \(\chi^{++}\chi^{-}\) state (\(Q=+1\)) or \(\chi^{--}\chi^{+}\) state (\(Q=-1\)), and the second row/column corresponds to the \(\chi^{+}\chi^{0}\) state (\(Q=+1\)) or \(\chi^{-}\chi^{0}\) state (\(Q=-1\)). Again note that the off-diagonal terms disagree with Ref. [4] by a sign; this is due to our choice of basis.
In principle there may also be \(Q=\pm 2,3,4\) bound states in the spectrum, which can be accessed by a series of transitions involving emission of \(W\) bosons. However, for the \(Q=4\) case, the only available state is \(\chi^{++}\chi^{++}\) (or \(\chi^{--}\chi^{--}\) in the \(Q=-4\) case), and the potential is a repulsive Coulomb potential mediated by \(\gamma\) and \(Z\) exchange, which does not support bound states. For \(Q=\pm 3\), the only available two-particle states are \(\chi^{++}\chi^{+}\) (\(\chi^{--}\chi^{-}\)), so again the potential is a scalar, and its value can be computed as
\[V(r)=\alpha_{{}_{W}}\left[2\,(-1)^{L+S}\,\frac{e^{-m_{{}_{W}}r}}{r}+2\left( \frac{c_{{}_{W}}^{2}e^{-m_{{}_{Z}}r}}{r}+\frac{s_{{}_{W}}^{2}}{r}\right)\right]. \tag{3.18}\]
We observe that for \(L+S\) even, this potential is always repulsive; for \(L+S\) odd, the potential vanishes in the unbroken limit and in the broken regime a residual repulsive potential remains. In either case, we do not expect bound states (this analysis also accords with the discussion in Ref. [2]).
The \(Q=\pm 2\) case is more interesting. There are two relevant states: for \(Q=2\) they are \(\chi^{+}\chi^{+}\) and \(\chi^{++}\chi^{0}\), with the former only being allowed for even \(L+S\). The potential then reads as follows,
\[V(r)=\alpha_{W}\begin{pmatrix}\frac{s_{W}^{2}}{r}+\frac{c_{W}^{2}e^{-m_{Z}r}}{r}&2\sqrt{3}\,\frac{e^{-m_{W}r}}{r}\\ 2\sqrt{3}\,\frac{e^{-m_{W}r}}{r}&0\end{pmatrix}, \tag{3.19}\]
for even \(L+S\), where the 1st row/column corresponds to the \(\chi^{+}\chi^{+}\) state and the 2nd row/column corresponds to the \(\chi^{++}\chi^{0}\) state. This potential has an attractive eigenvalue that can support bound states, asymptoting to \(-3\alpha_{W}/r\) in the unbroken limit. For odd \(L+S\) only the \(\chi^{++}\chi^{0}\) state exists, which experiences no potential.
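The attractive eigenvalue quoted above can be checked directly from the unbroken-limit coefficient matrix of Eq. (3.19); the following is a standalone numerical cross-check.

```python
# Unbroken-limit eigenvalues of the Q = 2 potential of Eq. (3.19):
# the coefficient matrix of alpha_W / r.
import numpy as np

M = np.array([[1.0, 2 * np.sqrt(3)],
              [2 * np.sqrt(3), 0.0]])
print(np.linalg.eigvalsh(M))   # [-3.  4.]: the -3 alpha_W/r eigenvalue is attractive
```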
Thus in addition to the \(Q=0\leftrightarrow Q=\pm 1\) transitions through \(W\) emission already considered, the only bound-bound transitions we need to compute involving higher-charge states are \(Q=\pm 1\leftrightarrow Q=\pm 2\) (proceeding via \(W\) emission) and \(Q=\pm 1\leftrightarrow Q=\pm 1\), \(Q=\pm 2\leftrightarrow Q=\pm 2\) transitions via photon or \(Z\) emission.
### Capture rate results
In Fig. 3, we show examples of the formation cross-section for bound states with different quantum numbers, corresponding to capture from various initial partial waves. Formation rates for \(Q=1\) bound states become non-zero at masses high enough that the binding energies (plus the kinetic energy of the collision) exceed the \(W\) boson mass. We observe that at most mass points, the dominant capture rate is to \(s\)-wave bound states, corresponding to the \(p\)-wave (\(L=1\)) component of the initial state.
The overall size of the formation rate and its scaling with mass can be estimated analytically, as discussed in detail in App. F. To summarize, in the limit of high DM mass we expect
Figure 3: The capture cross section into bound states with \(Q=0\) (left) and \(Q=1\) (right), for three different transitions: \(p\to s\), \(s\to p\), and \(d\to p\). These cross sections describe the total capture rate into all available states with these quantum numbers, not simply the most tightly bound.
the leading rates for bound-state formation and direct annihilation to take the form:
\[(\sigma v)_{\rm bsf}^{n=1,L=0}\simeq\frac{700\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v},\ \ \ \ (\sigma v)_{\rm ann}\simeq\frac{720\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}. \tag{3.20}\]
For \(M_{\chi}v\lesssim m_{ W}\), we expect the \(p\to s\) capture cross section to experience a velocity suppression due to the \(p\)-wave initial state, which is parametrically of order \((M_{\chi}v/m_{ W})^{2}\).
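A rough numerical version of these estimates is sketched below; the unit conversion and the simple \((M_{\chi}v/m_{W})^{2}\) suppression factor are approximations for orientation only, and all inputs are illustrative placeholders.

```python
# Order-of-magnitude evaluation of the parametric estimates in Eq. (3.20),
# with an approximate p-wave suppression for M_chi v < m_W.
import numpy as np

alphaW, mW = 0.0335, 80.4           # GeV
GEV2_TO_CM3_S = 1.17e-17            # approximate conversion of GeV^-2 to cm^3/s

def estimates(Mchi_TeV, v=1e-3):
    Mchi = Mchi_TeV * 1e3
    sv_ann = 720 * np.pi**2 * alphaW**3 / (Mchi**2 * v)
    sv_bsf = 700 * np.pi**2 * alphaW**3 / (Mchi**2 * v)
    sv_bsf *= min(1.0, (Mchi * v / mW)**2)      # p-wave suppression factor
    return sv_bsf * GEV2_TO_CM3_S, sv_ann * GEV2_TO_CM3_S

print(estimates(13.6))   # (bound-state formation, direct annihilation) in cm^3/s
```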
In Fig. 4, we compare the dominant \(p\to s\) bound-state formation rate for capture into the ground state with the inclusive direct annihilation rate; the latter is computed including the Sommerfeld enhancement but without any SCET corrections. We overplot the analytic estimates given in Eq. (3.20), with a \(p\)-wave correction factor of \((M_{\chi}v/m_{ W})^{2}\) for the estimate corresponding to the bound-state formation rate. We observe that at the thermal quintuplet mass (13.6 TeV), we expect the direct annihilation to dominate due to the \(p\)-wave suppression of the leading bound-state formation channel, but this suppression is lifted at high masses; furthermore, even at lower masses the bound-state capture rate may exceed the rate for direct annihilation at specific mass values (such as at the peak near 13.5 TeV). However, recall that these are inclusive rates; to understand the relative contributions to the line and endpoint spectrum, we must now understand how the bound states eventually annihilate to SM particles.
## 4 Bound State Annihilation
Having computed all the relevant bound-state formation rates, the second part of our calculation involves determining the differential branching ratios for bound states to decay, producing hard photons. To compute these we need the differential decay rate of the bound state to a final state including a photon, as well as its total decay rate to all SM particles. For the
Figure 4: Bound-state formation rate for the spin-triplet ground state (blue) compared to the inclusive annihilation cross section (orange). Dashed lines indicate analytic estimates for the corresponding rates; see text for details.
differential decay rate, we can recycle our EFT developed for direct annihilation as described in Sec. 2. The factorized form of the differential cross section remains identical to Eq. (14) which we reproduce here for convenience,
\[\frac{d\sigma}{dz}=\sum_{a^{\prime}b^{\prime}ab}F_{\chi}^{a^{\prime}b^{\prime}ab} \frac{d\hat{\sigma}^{a^{\prime}b^{\prime}ab}}{dz}. \tag{17}\]
To apply this expression to bound states, we will need to update the initial-state wavefunctions encoded in \(F_{\chi}\). For direct annihilation, \(F_{\chi}\), as given in the second line of Eq. (14), described an initial state of two free DM particles in the \(s\)-wave spin-singlet configuration. Here, however, our initial state is described by the two-body bound state wavefunctions computed in Sec. 3. The bound states can be classified according to their value of total orbital angular momentum \(L\), total spin \(S\), and charge \(Q\), and we will need to track all bound states at a given mass. Beyond this, however, the differential cross section, \(d\hat{\sigma}/dz\), is again given by Eq. (5), and each of the associated objects such as the jet and soft functions is identical to those used in the direct annihilation computation of Sec. 2; this is the advantage of the EFT approach: the infrared (IR) physics is identical for direct and bound-state annihilation. To compute the decay rate, one simply needs to alter the details of the initial state, such as overall kinematic factors and the form of \(F_{\chi}\). Nevertheless, as our interest is in the branching ratios of bound states into their various decay channels, we will be computing ratios and will find that the kinematic differences cancel (see Sec. 4.4), further increasing the similarity to the direct annihilation computation. We note, however, that to fully describe the possible end states of all bound states in the quintuplet spectrum, we would need to include additional operators in the hard matching beyond the single operator we used for the direct annihilation given in Eq. (6). That operator described the annihilation of an \(L=S=Q=0\) initial state, so for annihilation from states with \(L>0\), \(Q>0\), or \(S=1\), a new set of operators is required. Nevertheless, we will show that the contributions of the \(L>0\) and \(S=1\) states to the endpoint spectrum are suppressed, and the contributions from \(Q>0\) states can be captured within our existing framework, so that in fact the form of \(d\hat{\sigma}/dz\) we have already computed is sufficient. Formalizing the logic above, the decay cross section into the hard photon can be written as
\[\frac{d\sigma}{dz}\Big{|}_{\text{bound}}=\sum_{B}\sigma(\chi^{0}\chi^{0}\to B +X_{\text{us}})\frac{1}{\Gamma_{B}}\frac{d\Gamma_{B\to\gamma+X}}{dz}, \tag{18}\]
where \(\sigma(\chi^{0}\chi^{0}\to B+X_{\text{us}})\) is the total production cross section for the \(L=0\), \(S=0\) state including any decays from shallower bound states and \(\Gamma_{B}\) is the decay rate into all possible SM particles.
We sketch the structure of the contributions to the endpoint spectrum from bound-state formation in Fig. 5, and work out the required ingredients in the rest of this section. We begin in Sec. 4.1 by understanding the general structure of the decay cascade that follows
capture into an excited state, arguing that excited states will typically decay (possibly via multiple steps) to an \(L=0\) state before annihilating to SM particles. In Sec. 4.2 we study the operators through which \(L=0\) bound states can annihilate to SM particles, and show that the contribution to the endpoint spectrum from \(S=1\) bound states is power-suppressed; thus our endpoint calculation focuses on annihilation from \(L=S=0\) states. In Sec. 4.3 we discuss how to compute the wavefunction factors needed to obtain the photon endpoint spectrum from decay (to SM particles) of a given \(L=S=0\) bound state. In Sec. 4.4 we compute the inclusive rate for decay via annihilation into SM particles for \(L=S=0\) states, while in Sec. 4.5 we describe how to calculate the rates for decay into lower-lying bound states, for bound states of arbitrary \(L\), \(S\). Key points of the calculation and several results are presented in Sec. 4.6. Ultimately, we employ these rates to compute the overall endpoint annihilation signal from decay of all \(L=S=0\) states to SM particles, taking into account the possibility to populate these states by decay from all shallower \(L=0,1,2\) states, as encapsulated in Eq. (4.2).
Figure 5: Schematic diagram that shows how two initial quintuplet particles evolve into various final states, as we work out in detail in this section. Endpoint photons are violet, whereas the soft photons that arise in the dipole transitions involving bound states are red. Capturing to quintuplet bound-states can give an additional source of endpoint photons. In this work, we include those that result from the “\(S\!=\!0\) Tower.” In general, their contribution is suppressed compared to those from direct annihilation. As we discuss in Sec. 4.2.2, there is an enhanced capture rate to the “\(S=1\) Tower,” including direct capture to the lowest-lying \(L\!=\!0\), \(S\!=\!1\) state, indicated by the thicker arrows. However, the decay to endpoint photons from this tower is power suppressed compared to \(S\!=\!0\) (hence the smaller endpoint photon on the right). This combination turns out to balance such that both towers give parametrically similar contributions. However, since this is a subleading overall component, only “\(S\!=\!0\) Tower” endpoint photons are included in our analysis.
### The decay cascade
Starting with an initial DM pair with a specific mass, we can expect capture into a number of metastable bound states characterized by their total orbital angular momentum \(L\), spin \(S\), and charge \(Q\). These bound states have the option of either decaying into more tightly bound states with the same total spin but with \(|\Delta L|=1\) (in a single decay step), or annihilating directly into SM particles.13 To compute the final annihilation spectrum into photons, we therefore need to know the branching ratios for various annihilation and decay channels. The decay of a shallow state into a low-lying "stable" state (by which we mean stable against decay to other bound states) may happen via several intermediate decay steps with their own branching ratios. We are therefore required to implement this cascade of decays to obtain the effective production cross section for a specific "stable" bound state, so we can then compute the signal from its subsequent annihilation into SM particles producing a hard photon.
Footnote 13: Transitions where there is a change of spin or \(L\) by more than one unit are allowed, but suppressed, and we ignore them in this work; see _e.g._ Ref. [73].
Determination of the full decay cascade requires three ingredients for the spectrum from bound states at a given DM mass: 1. the direct capture cross-section into all bound states; 2. the decay rate from each initial state to all more deeply-bound states; and 3. the rate for direct annihilation into SM particles. The first of these ingredients proceeds as discussed in Sec. 3. What remains to be computed is then the competition between the decay of one bound state to a deeper one, versus direct annihilation into the SM.14 We will determine that in this section.
Footnote 14: Our discussion of this competition follows similar arguments in the literature, _e.g._ Refs. [68; 74].
Before doing so, we can already provide an analytic estimate. The bare cross section for free electroweak DM particles to annihilate to the SM scales parametrically as \(\sigma v\propto v^{2L}\alpha_{ W}^{2}/M_{\chi}^{2}\), where \(L\) is the orbital angular momentum of the two-body initial state. The equivalent decay rate of a bound state to SM particles is related to this expression by replacing an incoming plane-wave wavefunction with the bound-state wavefunction. We can parametrically estimate this with two steps. Firstly, we replace \(v\to\alpha_{ W}\) as the characteristic momentum associated with the potential is \(p\sim\alpha_{ W}M_{\chi}\) and \(v=p/M_{\chi}\) in the non-relativistic limit. Secondly, we must account for a multiplicative factor of \((\alpha_{ W}M_{\chi})^{3}\), which arises from the square of the bound-state wavefunction. (In more detail, as the wavefunctions are normalized by \(\int d^{3}\mathbf{r}\,|\psi|^{2}=1\) and have support over the Bohr radius, \(a_{0}\), their characteristic value is \(|\psi|^{2}\sim 1/a_{0}^{3}=(\alpha_{ W}M_{\chi})^{3}\).) Thus we expect the decay rate of a bound state with orbital angular momentum \(L\) to SM particles to scale approximately as \(\Gamma\propto(\alpha_{ W}^{2}/M_{\chi}^{2})\alpha_{ W}^{2L}(\alpha_{ W}M_{\chi})^{3}=\alpha_{ W}^{5+2L}M_{\chi}\). In contrast, a dipole-mediated decay to a lower-lying bound state scales as \(\alpha\,\alpha_{ W}^{4}M_{\chi}\) independent of \(L\); consequently, if such decays are allowed, they will generally dominate over annihilation to SM particles for \(L>0\). This argument suggests that unless dipole transitions to lower-lying states are forbidden, states with \(L>0\) will preferentially decay to \(L=0\) states before annihilating to the SM.
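The parametric argument can be made numerical with a short estimate (illustrative values only; the exact prefactors are not tracked here):

```python
# Parametric comparison: annihilation of an L > 0 bound state to SM particles,
# Gamma_ann ~ alpha_W^(5+2L) M_chi, versus a dipole decay to a lower bound state,
# Gamma_dip ~ alpha alpha_W^4 M_chi.  The overall M_chi factor cancels in the ratio.
alphaW, alpha = 0.0335, 1 / 128.0   # illustrative couplings

for L in (1, 2):
    ratio = (alpha * alphaW**4) / (alphaW**(5 + 2 * L))
    print(L, ratio)   # >> 1: dipole decays to lower-L states dominate
```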
One might then ask whether the spectrum contains \(L>0\) states that have no allowed
dipole transitions to more deeply bound states. For such states, the branching ratio for decay to SM particles via annihilation might indeed be important. However, we argue in App. F that this can only occur for very high-\(L\) states (beyond the range we consider in this work) for which the formation rate is likely to be negligible. We will thus assume that these states can be neglected, and restrict to \(L=0\) states when computing the endpoint photon spectrum from bound state decay. Similarly, in principle there can be stable \(Q=\pm 1,2\), \(L=S=0\) states in the spectrum. However, we find that the branching ratio to these states is very small (sub-percent) compared to the \(L=S=Q=0\) states, and so we neglect them in computing the endpoint photon spectrum. (In fact, at lower masses the charged bound state contribution will be exactly zero when either there is no charged bound state in the spectrum, or when those available cannot be accessed due to insufficient energy to produce an on-shell \(W\).)
### Operators for bound state decay
#### 4.2.1 Leading power operators
In the direct-annihilation case, we expect \(s\)-wave annihilation (\(L=0\)) to dominate. However, in order to support the annihilation of bound states with higher angular momentum we need operators that are suppressed by powers of the DM velocity. To see which will contribute, we consider the various structures that arise from a tree-level matching calculation. The tree-level amplitude for annihilation to a final state \(\gamma+X\) has contributions from \(s\)-, \(t\)- and \(u\)-channel diagrams, which give the following leading-order operator when expanded to \(\mathcal{O}(v)\)
\[\mathcal{O}=\left(\chi_{v}^{T}i\sigma_{2}\left\{T_{\chi}^{d},\,T_{\chi}^{c} \right\}\chi_{v}\right)\left(\mathcal{B}_{\perp n}^{ic}\mathcal{B}_{\perp\bar {n}}^{jd}\right)i\epsilon^{ijk}(n-\bar{n})^{k}. \tag{110}\]
As already mentioned, this operator is identical to that used for direct annihilation in Eq. (6) and supports a bound state with \(L=S=0\) (and both are given before a BPS field redefinition).
As we will see, this is the primary operator required for computing the dominant contribution to the endpoint spectrum from bound state annihilation. There is no operator at this order that supports \(S=1\), \(L=0\) bound state annihilation to gauge bosons at tree level. Such a state can annihilate to fermions and Higgs final states at tree level, but its contribution to endpoint photons via bremsstrahlung is power suppressed in our EFT. These bound states, however, can contribute substantially to the soft photon spectrum, as we will consider in Sec. 5.
For higher-\(L\) bound states with \(S=0\), there is a competition between decay to a state with lower \(L\) and direct annihilation to SM particles. However, as discussed in Sec. 4.1, the decay to lower-\(L\) bound states always wins out for \(L>0\), so that only the decays of \(L=S=0\) states to SM particles remain relevant, and the only operator we need is given in Eq. (110). Nevertheless, for completeness we provide the subleading operators in App. B. At the same time, there is no interference between the direct and bound state channels, so that we may treat these cross sections separately. This is discussed in detail in App. C from an EFT perspective. If the widths of the bound states are parametrically much
smaller than the separation in their energy, then we may also safely neglect any interference between the various bound state channels. This is essentially the narrow-width approximation. Accordingly, to determine the total spectrum it will suffice to sum over the cross sections for the direct channels and the allowed bound state channels individually.
#### 4.2.2 Sub-leading power operators
As we saw in the previous section, there is no operator at leading power which supports an \(S=1\), \(L=0\) bound state annihilation into a hard photon. The only operators that support an \(S=1\) bound state are those which describe annihilation to fermions or scalars via an \(s\)-channel process. We can conceive of a hard photon emission from the final state SM Higgs or fermions; however, this is power suppressed by our SCET power counting parameter \(\lambda^{1/2}\) (where \(\lambda=1-z\)) at the amplitude level. We can see this explicitly by looking at the emission amplitude of a hard (collinear) photon off a collinear fermion in the final state as in the diagram below
The matrix element is then,
\[\begin{split}\mathcal{M}&=\bar{u}(\tilde{p})+ie\bar{u}(p)\gamma^{\mu}\epsilon_{\mu}(k)\frac{i(\not{p}+\not{k})}{(p+k)^{2}+i\epsilon}\\ &\simeq\bar{u}(\tilde{p})-e\bar{u}(p)\gamma^{\mu}_{\perp}\epsilon_{\mu}(k)\frac{\frac{\not{n}}{2}}{n\cdot p}=\langle\tilde{p}|f_{n}|0\rangle-e\langle k,p|A_{n,\mu\perp}\frac{1}{n\cdot\mathcal{P}}\bar{f}_{s}\gamma^{\mu}_{\perp}\frac{\not{n}}{2}|0\rangle,\end{split} \tag{100}\]
where the first term is the tree level diagram and the second term comes from a single photon emission. By the power counting of SCET, we see that the collinear fields scale as \(A_{n,\mu\perp},f_{n}\sim\lambda\), where \(\lambda\) is the expansion parameter of our EFT. The soft fermion field scales as \(f_{s}\sim\lambda^{3/2}\) while the soft momentum \(p\) scales as \(n\cdot p\sim\lambda\). This means that compared to the tree level, the hard photon emission is suppressed by a power \(\lambda^{1/2}\). If the fermion is ultra-soft, the suppression is enhanced to \(\lambda\). Further discussion of this operator is provided in App. B.
Including this operator in our analysis would be justified only if the production cross section for this channel compensates for the \(\lambda\) suppression (at the amplitude squared level), in order to be comparable to the \(S=0\) channel. Indeed this turns out to be true based on numerical calculations (see also App. F for an analytic estimate and discussion) and therefore, in principle we also need to include this sub-leading operator, in order to accurately compute the bound state contribution to the endpoint spectrum. However, a numerical analysis tells us that the leading \(S=0\) bound state channel, which will be the focus of the next subsection, is only a few percent of the direct annihilation cross section in terms of the contribution to the endpoint. The \(S=1\) bound-state contribution _is_ power-suppressed relative to direct annihilation; it is only appreciable compared to the leading-power term from \(S=0\) bound
states (whose formation rate is suppressed by a different mechanism). So given the relative overall unimportance of the bound state contribution to the endpoint, we will not include the contribution from the \(S=1\) bound states here; a more precise calculation would need to account for this channel.
### Wavefunction factors for bound state annihilation
In this section we compute the wavefunction factors \(F^{aba^{\prime}b^{\prime}}\) relevant for the \(L=S=0\) bound states, which are needed to obtain the endpoint spectrum from the bound states' annihilation to SM particles. The definition of these factors remains identical to the case of direct annihilation given in Eq. (14), but now the operator is sandwiched between the \(L=S=0\) bound states.
\[F_{\chi}^{a^{\prime}b^{\prime}ab}=\Big{\langle}B\Big{|}\left(\chi_{v}^{T}i \sigma_{2}\left\{T_{\chi}^{a^{\prime}},\,T_{\chi}^{b^{\prime}}\right\}\chi_{v }\right)^{\dagger}\Big{|}0\Big{\rangle}\Big{\langle}0\Big{|}\left(\chi_{v}^{T }i\sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{b}\right\}\chi_{v}\right)\Big{|}B \Big{\rangle}. \tag{110}\]
As for the case of direct annihilation, the wavefunction factors will be evaluated at the IR scale of our EFT, which is the electroweak scale. At this scale, electroweak gauge symmetry is broken and the bound state wavefunctions are computed in terms of the broken eigenstates as in Eq. (21), but now for the bound state.
Once we have the bound state analogues of Eq. (21) in the broken basis, the next step is to relate these to the bound state wavefunction determined in Sec. 3. We start with the momentum-space representation of the bound state. In general for a two-particle bound state, we can express the bound state (which is an eigenfunction of the Hamiltonian) as
\[|\mathbf{P}\rangle=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}\sqrt{M_{\chi}}}\phi_{ P}(\mathbf{k})c^{\dagger}\left(\frac{\mathbf{P}}{2}+\mathbf{k},\lambda\right)d^{ \dagger}\left(\frac{\mathbf{P}}{2}-\mathbf{k},\lambda^{\prime}\right)|0\rangle, \tag{111}\]
where \(\mathbf{P}\) is the momentum of the bound state, while \(2\mathbf{k}\) is the relative 3-momentum of the two particles making up the bound state. Further, \(\phi_{P}(\mathbf{k})\) is the momentum space bound state wavefunction, whereas \(c^{\dagger}\), \(d^{\dagger}\) are creation operators for the two constituents of the bound state.
The operators that we have in SCET are bilinear local operators with non-trivial Dirac structures. Based on the definition above we can evaluate the overlap of the bilinear operators with the bound state for the \(s\)-wave states. After SU(2) breaking, a general matrix element of a bound state will take the form (working in four-component notation for the moment and suppressing the color structure),
\[\langle 0|\bar{\chi}\gamma^{0}\gamma^{5}\chi|B\rangle=\int\frac{d^{3} \mathbf{k}}{(2\pi)^{3}\sqrt{M_{\chi}}}\phi_{P}(\mathbf{k})\bar{v}_{s}(\mathbf{ P}/2+\mathbf{k})\gamma^{0}\gamma^{5}u_{r}(\mathbf{P}/2-\mathbf{k}). \tag{112}\]
Working in the bound state rest frame, we can expand this result to leading order in velocity,
\[\langle 0|\bar{\chi}\gamma^{0}\gamma^{5}\chi|B\rangle=\frac{\bar{v}_{s}(M_{ \chi})\gamma^{0}\gamma^{5}u_{r}(M_{\chi})}{\sqrt{M_{\chi}}}\psi^{B}(0)=-2\sqrt {M_{\chi}}\eta_{s}^{\dagger}\xi_{r}\psi^{B}(0), \tag{113}\]
where \(\psi^{B}({\bf x})\) is the position space analogue of \(\phi_{P}({\bf k})\), and we have introduced basis spinors \(\eta_{s}\) and \(\xi_{r}\) according to,
\[\bar{v}_{s}(M_{\chi})\gamma^{0}\gamma^{5}u_{r}(M_{\chi})=-2M_{\chi}\eta_{s}^{ \dagger}\xi_{r}. \tag{111}\]
For a bound state in the spin singlet configuration, we can evaluate the basis spinors explicitly, and we are left with,
\[\langle 0|\bar{\chi}\gamma^{0}\gamma^{5}\chi|B\rangle=2\sqrt{2M_{\chi}}\psi^{B}( 0). \tag{112}\]
For the computation at hand, we need to restore the color structure, which amounts to defining the bound state analogues of Eq. (21) which we used for direct annihilation. As our bound state is neutral, again there are only three objects to define
\[\begin{split}\Big{\langle}0\Big{|}(\chi_{v}^{0T}i\sigma_{2}\chi_ {v}^{0})\Big{|}B\Big{\rangle}=&\,4\sqrt{M_{\chi}}\psi_{0}^{B},\\ \Big{\langle}0\Big{|}(\chi_{v}^{+T}i\sigma_{2}\chi_{v}^{-}) \Big{|}B\Big{\rangle}=&\,2\sqrt{2M_{\chi}}\psi_{\pm}^{B},\\ \Big{\langle}0\Big{|}(\chi_{v}^{++T}i\sigma_{2}\chi_{v}^{--}) \Big{|}B\Big{\rangle}=&\,2\sqrt{2M_{\chi}}\psi_{\pm\pm}^{B}, \end{split} \tag{113}\]
where each of the bound state wavefunctions is the appropriate expression for that transition evaluated at the origin. We can then evaluate the wavefunction factors for the \(s\)-wave spin-0 bound state as follows,
\[\begin{split} F_{\chi}^{aabb}&=144\,(8M_{\chi}) \,\Big{|}2\psi_{\pm\pm}^{B}+2\psi_{\pm}^{B}+\sqrt{2}\psi_{0}^{B}\Big{|}^{2},\\ F_{\chi}^{a3a3}&=16\,(8M_{\chi})\,\big{|}4\psi_{ \pm\pm}^{B}+\psi_{\pm}^{B}\big{|}^{2},\\ F_{\chi}^{aa33}&=48\,(8M_{\chi})\,\big{(}4\psi_{ \pm\pm}^{B}+\psi_{\pm}^{B}\big{)}\,\Big{(}2\psi_{\pm\pm}^{B}+2\psi_{\pm}^{B}+ \sqrt{2}\psi_{0}^{B}\Big{)}^{*},\\ F_{\chi}^{abab}&=16\,(8M_{\chi})\,\big{|}4\psi_{ \pm\pm}^{B}+\psi_{\pm}^{B}\big{|}^{2}+8\,(8M_{\chi})\,\Big{|}2\psi_{\pm\pm}^{B }+5\psi_{\pm}^{B}+3\sqrt{2}\psi_{0}^{B}\Big{|}^{2}.\end{split} \tag{114}\]
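As an illustration of how these expressions enter a numerical evaluation, the following sketch (function and variable names are ours, not taken from any specific code) evaluates the four factors given the bound state wavefunctions at the origin:

```python
import numpy as np

def wavefunction_factors(psi_pp, psi_p, psi_0, M_chi):
    """Evaluate the s-wave, spin-0 bound-state wavefunction factors listed above.

    psi_pp, psi_p, psi_0 : complex
        Bound-state wavefunctions at the origin for the doubly-charged,
        singly-charged and neutral components (illustrative names).
    M_chi : float
        DM mass; the overall 8*M_chi matches the normalization above.
    """
    pref = 8.0 * M_chi
    combo1 = 2 * psi_pp + 2 * psi_p + np.sqrt(2) * psi_0
    combo2 = 4 * psi_pp + psi_p
    combo3 = 2 * psi_pp + 5 * psi_p + 3 * np.sqrt(2) * psi_0
    F_aabb = 144 * pref * abs(combo1) ** 2
    F_a3a3 = 16 * pref * abs(combo2) ** 2
    F_aa33 = 48 * pref * combo2 * np.conj(combo1)
    F_abab = 16 * pref * abs(combo2) ** 2 + 8 * pref * abs(combo3) ** 2
    return F_aabb, F_a3a3, F_aa33, F_abab
```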
Note that, apart from the different definition of the bound state wavefunctions compared to the Sommerfeld factors, Eq. (113) versus Eq. (21), these results are identical to those for the direct annihilation used to determine Eqs. (19) and (20).
### Bound state decay rate into SM particles
In this subsection we compute rates for \(L=S=0\) bound states to decay into the SM. Because these states decay to gauge bosons, we can recast our previous results for photon emission through the \(L=S=0\) operator to obtain both the inclusive cross section and the differential branching ratio to photons. Taken together, these results will allow us to compute the hard photon spectrum from bound states, as dictated by Eq. (110). For the total (inclusive) decay rate for a given state, it suffices to look at the tree level cross section, since (as we explain next) there are no large logarithms induced due to loop corrections.
According to the KLN theorem [75; 76], for non-abelian gauge theories, the cross section
is IR finite only if we perform a sufficiently inclusive sum over both initial and final states. Since our initial states have a specific SU(2) color, one might expect to find IR divergences in the cross section when computing electroweak corrections. Since the electroweak symmetry is broken, these IR divergences should manifest themselves in the form of large logarithms \(\ln(M_{\chi}/m_{ W})\). Most of the examples which demonstrate this violation of the KLN theorem in the literature consider light-like initial state particles as opposed to heavy time-like momenta considered in this paper. As we shall see, this is the key difference which removes the presence of large logarithms in inclusive cross sections for the case of heavy particle annihilation. Here, by heavy, we mean that the mass of the initial particle is of the same order as the hard scale in the EFT.
To demonstrate this explicitly, consider again the case of the wino but now imagine the wino to be a much lighter particle (mass \(\sim\) electroweak scale) with a TeV scale (hard scale) energy. When we match the full theory onto an effective operator, the operator basis obtained after a tree level matching is fairly simple and reduces to
\[O_{r}=\left(\chi_{n}^{T}i\sigma_{2}\left\{T_{\chi}^{a},\,T_{\chi}^{b}\right\} \chi_{\bar{n}}\right)\left(Y_{r}^{abcd}\mathcal{B}_{\perp n^{\prime}}^{ic} \mathcal{B}_{\perp\bar{n}^{{}^{\prime}}}^{jd}\right)i\epsilon^{ijk}(n^{{}^{ \prime}}-\bar{n}^{{}^{\prime}})^{k}, \tag{111}\]
with
\[Y_{1}^{abcd}=Y_{n}^{ea}Y_{\bar{n}}^{eb}Y_{n^{\prime}}^{fc}Y_{\bar{n^{\prime}} }^{fd},\hskip 14.226378ptY_{2}^{abcd}=Y_{n}^{ea}Y_{\bar{n}}^{fb}Y_{n^{ \prime}}^{ec}Y_{\bar{n^{\prime}}}^{fd},\hskip 14.226378ptY_{3}^{abcd}=Y_{n}^{ea}Y_{ \bar{n}}^{fb}Y_{n^{\prime}}^{fc}Y_{\bar{n^{\prime}}}^{ed}. \tag{112}\]
Note that in this case we have three soft functions in place of two since we can distinguish between the directions of the initial states. We are considering the inclusive case so that we sum over the colors of the final state gauge bosons. We can now look at the soft operators that we get at the amplitude squared level. As a single example, we can consider the interference term between \(Y_{2}\) and \(Y_{3}\), which will contain \(Y_{n}^{ea}Y_{\bar{n}}^{fb}Y_{n}^{fa}Y_{\bar{n}}^{eb}\). It is then clear that the soft Wilson lines do not cancel out due to the distinction between the n and \(\bar{n}\) directions, if the initial state colors, \(a,b\), are not summed over. On the other hand, if instead \(n=\bar{n}=v\), then the Wilson lines would cancel. This will render the soft function trivial and hence no Sudakov logs from KLN violation exist in this case. This implies that at NLL accuracy, for computing the inclusive cross section, we only need to consider the inclusive tree level cross section.
For the \(L=S=0\) operator, we need the differential branching ratio to the photon, in addition to the inclusive cross section. The differential decay rate is given by the following factorized formula which takes the same form as that of the differential cross section in Eq. (5)
\[\frac{d\Gamma}{dz}=A_{0}\int\frac{d\Omega_{\gamma}}{4\pi}\,F_{\chi}^{a^{\prime }b^{\prime}ab}HJ_{\gamma}J_{\bar{n}}SH_{\bar{n}}\otimes H_{S}\otimes C_{S}, \tag{113}\]
where \(A_{0}\) is an overall kinematic factor. We do not need to know its explicit form since it will cancel out in the branching ratio.
To compute the inclusive cross section, we can make use of the stage 1 EFT given in
Eq. (4) with all the functions evaluated to tree level
\[\begin{split}\left[\frac{d\Gamma}{dz}\right]_{\text{Stage 1}}=& \,A_{0}\int\frac{d\Omega_{\gamma}}{4\pi}\,F_{\chi}^{a^{\prime}b^{\prime}ab}J_{ \gamma}\int\frac{dk^{+}}{2\pi}\,J_{\bar{n}}(k^{+})\\ &\times\int\frac{dq^{+}}{2\pi}\,\left(\sum_{i=1}^{4}H_{ij}S_{ij}^ {{}^{\prime}\,a^{\prime}b^{\prime}ab}(q^{+})\right)\delta(2M_{\chi}(1-z)-k^{+} -q^{+}),\end{split} \tag{111}\]
where we have explicitly written out the convolution, and \(d\Omega_{\gamma}\) is an integral over the outgoing direction of the photon. This form is sufficient for computing the inclusive cross section since, as explained earlier in this section, we do not have to resum any logs.
An identical approach can be used to determine the inclusive decay rate of the \(L=S=0\) state, which at tree level decays purely to gauge bosons. Only a few small alterations are required to go from the semi-inclusive (\(\gamma+X\) final state) to the inclusive rate. We adjust the wavefunction factors contracted into the soft function, replace \(J_{\gamma}\) by \(J_{n}\) to allow for any final state gauge boson (we are no longer requiring a photon in the final state), and we remove the restriction on the phase space (as there are no observed endpoint photons) and therefore integrate over the full phase space. Further, we only need the various functions at tree level, and so we use
\[J_{\gamma}=1,\hskip 14.226378ptJ_{\bar{n}}(k^{+})=\delta(2M_{\chi}k^{+}), \hskip 14.226378ptS^{{}^{\prime}\,a^{\prime}b^{\prime}ab}(q^{+})=\delta(q^{+}) \delta^{aa^{\prime}}\delta^{bb^{\prime}}. \tag{112}\]
For the inclusive case, we pick up a factor of 3 from the color sum over the final state, since we now allow for all gauge bosons instead of just a photon; the function itself remains the same, which is why we have retained the \(J_{\gamma}\) notation. We are then left with
\[\left.\frac{d\Gamma}{dz}\right|_{\text{inc.}}=\frac{A_{0}}{(2\pi)^{2}}\int \frac{d\Omega_{\gamma}}{4\pi}\,F_{\chi}^{abab}\,\delta(2M_{\chi}(1-z)). \tag{113}\]
so now when we integrate over \(z\) and \(\Omega_{\gamma}\) we have
\[\Gamma=3A_{0}\frac{F_{\chi}^{abab}}{(2\pi)^{2}2M_{\chi}}. \tag{114}\]
The branching ratio can therefore be obtained by combining Eqs. (110) and (114), where the factor \(A_{0}\) cancels out.
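Explicitly, taking the ratio of the differential rate to the inclusive rate, the kinematic factor drops out and one is left with (schematically, with the convolutions left implicit as above)

\[\frac{1}{\Gamma}\frac{d\Gamma}{dz}=\frac{(2\pi)^{2}\,2M_{\chi}}{3\,F_{\chi}^{abab}}\int\frac{d\Omega_{\gamma}}{4\pi}\,F_{\chi}^{a^{\prime}b^{\prime}ab}\,H\,J_{\gamma}\,J_{\bar{n}}\,S\,H_{\bar{n}}\otimes H_{S}\otimes C_{S},\]

which depends only on the factorized functions and the wavefunction factors.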
### Bound state transitions
The previous subsections have established how to compute the photon spectrum from the decay of \(L=S=0\) bound states via annihilation to SM particles. The remaining necessary ingredient is to determine how these states are populated through radiative capture and decays. The initial formation rate for bound states has already been discussed in Sec. 3; this subsection details the computation of the rates for shallowly-bound states to decay to lower-lying bound states.
Transitions between bound states, mediated by emission of a vector boson, can be computed using very similar expressions to those discussed in Sec. 3 for the initial bound-state formation. There are three salient differences: 1. the scattering-state wavefunction is now replaced with the bound-state wavefunction; 2. we must account for cases where the initial state has odd \(L+S\) and the final state has even \(L+S\); and 3. we need to address cases where the initial state has net total charge \(Q\neq 0\). As discussed in Sec. 3, the second issue can be taken into account by modifying the potential used to compute the initial- and final-state wavefunctions.
The expression for the decay rate between states with total charge \(Q=0\) due to photon emission is given by a straightforward modification of Eq. (3.11),
\[\begin{split}\Gamma&=\frac{2\alpha k}{\pi M_{\chi} ^{2}}\int d\Omega_{k}\left|\int d^{3}\mathbf{r}\,\mathbf{\epsilon}(\hat{\mathbf{k }})\cdot\left[(2\psi_{CC}^{*}\nabla\phi_{CC}+\psi_{C}^{*}\nabla\phi_{C})\right. \right.\\ &\left.\left.+\frac{1}{2}M_{\chi}\alpha_{{}_{W}}\hat{\mathbf{r}} \,e^{-m_{W}r}\left(2\psi_{CC}^{*}\phi_{C}-2\psi_{C}^{*}\phi_{CC}+ 3\sqrt{2}\psi_{C}^{*}\phi_{N}\right)\right]\right|^{2},\end{split} \tag{4.20}\]
where as previously \(k\) is the momentum of the emitted photon and \(\mathbf{\epsilon}(\hat{\mathbf{k}})\) is its polarization vector. Now \(\psi(\mathbf{r})\) is the wavefunction for whichever of the initial and final states has \(L+S\) odd, while \(\phi(\mathbf{r})\) is the wavefunction for whichever state has \(L+S\) even (the dipole selection rule ensures that states connected by a dipole transition have opposite signs for \(L+S\)). We can make this simplification because the absolute value of the matrix element does not depend on the direction of the transition (although only one direction will have a positive \(k\) and hence non-zero available phase space). Consequently, once we have computed the matrix element for an \(L+S\)-even \(\to L+S\)-odd transition (as in Eq. (3.11)), we can reuse the same matrix element for a transition in the reverse direction, only modifying the phase-space factors. The contribution from \(Z\) emission can be obtained by replacing \(\alpha\to\alpha_{{}_{W}}c_{{}_{W}}^{2}\) in the prefactor and replacing the momentum factor \(k\) as described in Eq. (3.12).
To see explicitly that the matrix element is invariant under time-reversal (up to conjugation), consider relabeling \(i\leftrightarrow i^{\prime}\), \(j\leftrightarrow j^{\prime}\) in Eq. (3.1), and likewise swapping the notation for the initial- and final-state wavefunctions \(\psi_{nlm}\leftrightarrow\phi\) (that is, \(\phi\) remains the wavefunction for the \(ij\) two-particle state, whether it is the initial or final state). The index relabeling has the effect of transposing the generator matrices; as the generators are Hermitian, this is equivalent to taking their complex conjugates. The relabeling of the wavefunctions applies complex conjugation to both \(\mathcal{J}\) and \(\mathcal{Y}\), and additionally flips the sign of \(\mathcal{Y}\), as can be seen from Eq. (3.2). Consequently, we obtain:
\[\mathcal{M}^{a}_{i^{\prime}i,j^{\prime}j} =-\sqrt{2^{8}\pi\alpha_{\text{rad}}M_{\chi}}\left\{-if^{abc}(T_{1} ^{b})^{*}_{i^{\prime}i}(T_{2}^{c})^{*}_{j^{\prime}j}(-\mathcal{Y}^{*})+\frac{1 }{2}\mathcal{J}^{*}\left[(T_{1}^{a})^{*}_{i^{\prime}i}\delta_{j^{\prime}j}-(T_{ 2}^{a})^{*}_{j^{\prime}j}\delta_{i^{\prime}i}\right]\right\}\] \[=\left[-\sqrt{2^{8}\pi\alpha_{\text{rad}}M_{\chi}}\left\{-if^{abc }(T_{1}^{b})_{i^{\prime}i}(T_{2}^{c})_{j^{\prime}j}\mathcal{Y}+\frac{1}{2} \mathcal{J}\left[(T_{1}^{a})_{i^{\prime}i}\delta_{j^{\prime}j}-(T_{2}^{a})_{j^ {\prime}j}\delta_{i^{\prime}i}\right]\right\}\right]^{*}\] \[=\mathcal{M}^{a*}_{ii^{\prime},jj^{\prime}}, \tag{4.21}\]
where \({\cal Y}\) and \({\cal J}\) are the functions computed from Eq. (3.2) for the \(ij\to i^{\prime}j^{\prime}\) matrix element.
Transitions between \(Q=0\) and \(Q=\pm 1\) bound states are similarly related to the cross section for capture into \(Q=\pm 1\) states, Eq. (3.15), with a rate given by:
\[\begin{split}\Gamma&=\frac{2\alpha k}{\pi M_{\chi}^ {2}}\int d\Omega_{k}\left|\mathbf{\epsilon}(\hat{\mathbf{k}})\cdot\int d^{3} \mathbf{r}\left[\sqrt{\frac{3}{2}}\psi_{+0}^{*}\nabla\phi_{N}-\frac{\sqrt{3}} {2}\psi_{+0}^{*}\nabla\phi_{C}\right.\right.\\ &\left.+\frac{1}{\sqrt{2}}\psi_{++-}^{*}\nabla\phi_{C}-\frac{1}{ \sqrt{2}}\psi_{++-}^{*}\nabla\phi_{CC}\right.\\ &\left.\left.+\frac{1}{2}\hat{\mathbf{r}}\zeta(r)M_{\chi}\alpha_{ W}\left(\sqrt{3}\psi_{+0}^{*}\phi_{C}+\sqrt{2}\psi_{++-}^{*}\phi_{C}+2\sqrt{2} \psi_{++-}^{*}\phi_{CC}\right)\right]\right|^{2}\!,\end{split} \tag{4.22}\]
with \(\zeta(r)\) as defined in Eq. (3.14), and phase space factor \(k\) as defined in Eq. (3.16). Here \(\phi\) denotes the \(Q=0\) wavefunction and \(\psi\) denotes the \(Q=1\) wavefunction; in the case where the \(Q=0\) wavefunction has \(L+S\) odd, the terms containing \(\phi_{N}\) should be set to zero. For the \(Q=1\) sector there are no states containing two identical particles, so the set of allowed states is the same for \(L+S\) even or odd, although the wavefunctions for the two cases will differ due to the \(L+S\)-dependent potential.
Next, let us consider transitions mediated by photon or \(Z\) emission between two \(Q=\pm 1\) states. In this case we obtain (suppressing the position dependence of the wavefunctions)
\[\Gamma =\frac{2\alpha k}{\pi M_{\chi}^{2}}\int\!d\Omega_{k}\left|\mathbf{ \epsilon}(\hat{\mathbf{k}})\cdot\int d^{3}\mathbf{r}\left\{\frac{3}{2}(\psi_{ ++,-}^{*})_{f}\nabla(\psi_{++,-})_{i}+\frac{1}{2}(\psi_{+,0}^{*})_{f}\nabla( \psi_{+,0})_{i}\right.\right. \tag{4.23}\] \[\left.\left.+\frac{1}{2}\hat{\mathbf{r}}e^{-m_{W}r}M_{\chi} \alpha_{W}\Big{[}3(-1)^{(L+S)_{i}}(\psi_{+,0}^{*})_{f}(\psi_{+,0})_{i}\!-\! \sqrt{6}(\psi_{+,0}^{*})_{f}(\psi_{++,-})_{i}\!+\!\sqrt{6}(\psi_{++,-}^{*})_{ f}(\psi_{+,0})_{i}\Big{]}\right\}\right|^{2}\!\!,\]
for photon emission, and as above we obtain the \(Z\)-emission contribution by replacing \(\alpha\to\alpha_{W}c_{W}^{2}\) and modifying the phase-space factor \(k\) as defined in Eq. (3.12). In the expression above, initial and final states are distinguished by \(i\) and \(f\) subscripts, and this expression can be used for transitions where the initial state is either \(L+S\)-odd or \(L+S\)-even. Note that there is an explicit \((L+S)_{i}\)-dependent factor in the second line, which is necessary to ensure the consistency of the matrix element when the initial and final states are swapped. We discuss the details of its origin in Appendix E.
Finally, for transitions mediated by \(W\) emission between \(L+S\)-odd \(Q=\pm 1\) states and \(L+S\)-even \(Q=\pm 2\) states (as discussed above, we do not expect any \(L+S\)-odd \(Q=\pm 2\) bound states), the decay rate is given by:
\[\begin{split}\Gamma&=\frac{k\alpha_{W}}{\pi M_{\chi}^ {2}}\int d\Omega_{k}\left|\mathbf{\epsilon}(\hat{\mathbf{k}})\cdot\int d^{3} \mathbf{r}\left\{\left[\psi_{+,0}^{*}-\sqrt{3/2}\psi_{++,-}^{*}\right]\nabla \phi_{++,0}-\sqrt{3}\psi_{+,0}^{*}\nabla\phi_{+,+}\right.\right.\\ &\left.\left.-\hat{\mathbf{r}}M_{\chi}\alpha_{W}\zeta(r)\left[ \sqrt{6}\psi_{++,-}^{*}\phi_{++,0}+\sqrt{3}\psi_{+,0}^{*}\phi_{+,+}\right] \right\}\right|^{2}\!\!,\end{split} \tag{4.24}\]
where here \(\psi\) denotes the wavefunction for the \(Q=1\) state and \(\phi\) denotes the wavefunction for
the \(Q=2\) state, with subscripts labeling the components as above. The rate is identical for transitions between \(Q=-1\) and \(Q=-2\) states, with the appropriate modifications to the wavefunction labels.
### Key points from the bound state decay calculation
We now have all the ingredients for computing the differential decay rate of a bound state to a hard photon, so let us finally summarize our prescription (see also Fig. 5). Firstly, the number of bound states at a given mass is shown on the left of Fig. 6. We have argued that production of a bound state with a given \(S\) and \(L>0\) will generically lead to production (via one or more decays) of an \(L=0\) bound state with the same \(S\), so it is unnecessary to compute the rate to produce SM particles directly from those \(L>0\) states. We confirm this numerically on the right of Fig. 6: however measured, the branching fraction is always greater than 90%. For instance, at 100 TeV, the deepest \(Q=0\), \(L+S\) odd \(p\)-wave bound state decays to the deepest \(s\)-wave state with 99.6% probability and to the second deepest \(s\)-wave state with a likelihood of 0.4%. (The next most dominant transition is to the third deepest \(s\)-wave bound state, and only occurs with a probability of 0.006%.)
We also discussed decays of the \(S=1,\,L=0\) bound states, noting that they generate only a power suppressed contribution to the photon endpoint spectrum. Nevertheless, the contribution from the \(S=1\) states can be comparable to that from the \(S=0\) states, after accounting for both their (enhanced) production rate and (suppressed) endpoint spectrum from annihilation. However, in practice this means that the contributions to the endpoint
Figure 6: (Left) The number of bound states of different types as a function of mass. For \(Q=0\) we show the breakdown of the bound states into various types, whereas for the charged bound states we simply show the total. (Right) The branching fraction for \(Q=0\)\(L+S\) odd \(p\)-wave bound states to decay to the deepest \(s\)-wave state. We show the branching fraction for the deepest bound state, for an average over all of the \(Q=0\)\(L+S\) odd \(p\)-wave bound states weighted by their capture cross section, and then a simple mean over all states. Finally, we also show the branching fraction for the deepest \(Q=1\)\(L+S\) odd \(p\)-wave bound states to decay to the deepest \(Q=0\)\(s\)-wave state. As claimed, in all cases decay to the deepest \(s\)-wave state dominates.
spectrum from both the \(S=0\) and \(S=1\) bound states are suppressed compared to direct annihilation (either due to power suppression or because of the small formation rate).
For the aforementioned reasons, we focus on the \(L=S=0\) bound states to estimate the size of the (generally subdominant) bound-state contribution. Doing so implies we can reuse our SCET results from the direct annihilation case, with appropriate modification of the wavefunction factors. This choice does mean that the total bound-state contribution to the endpoint photon spectrum could increase by an \(\mathcal{O}(1)\) factor in an improved calculation (once the \(S=1\) states are included). Because the bound-state contribution to the endpoint spectrum is suppressed, this typically corresponds to a percent-level theoretical uncertainty in the overall endpoint spectrum, with larger uncertainties at specific mass points where the bound state formation cross section is enhanced relative to direct annihilation. The contribution to continuum photons (not near the endpoint) from bound state formation can be markedly larger, compared to the effect on the endpoint spectrum, as there is no power suppression in the continuum contribution from annihilation of the \(S=1\) bound states. We will discuss each of these contributions to the total spectrum in the next section.
## 5 The Combined Photon Spectrum and Numerical Results
At this stage we have all ingredients required to determine the quintuplet annihilation spectrum, including both direct annihilation and the contribution of bound states. In this section we collect our results to determine the full energy distribution of photons the quintuplet generates at the thermal mass of 13.6 TeV, but also for a wider range of masses. We will estimate the impact of several uncertainties on our results, such as the residual theoretical uncertainty of the NLL computations, as well as astrophysical uncertainties such as the distribution of \(v\) values and the DM density in the inner Galaxy. Finally, we will put these results together to estimate the sensitivity of existing and upcoming IACTs to quintuplet DM.
### Predictions for the spectrum and rate of photon production
A central goal of this work is to accurately determine the distribution of photons that emerge when two SU(2) quintuplets annihilate. This spectrum forms the signal template for telescopes searching for high energy photons, and therefore is a central theoretical input. To achieve this, throughout we have computed differential cross sections \(d\sigma/dz\), both for the direct annihilation in Eq. (28), and also for the bound state contribution by combining the results of the previous sections with Eq. (1). For indirect detection, observables are sensitive to \(d\langle\sigma v\rangle/dz\). To begin with we will assume the DM states are incident with a fixed \(v=10^{-3}\), revisiting the validity of this approximation in the next subsection. In order to extract the shape of the photon distribution from the differential cross section, it is common in indirect detection to introduce
a photon spectrum \(dN/dE\), and our convention for doing so is the following,15
Footnote 15: Further discussion of the connection between spectra used in indirect detection and the corresponding field theoretic quantities can be found in, for instance, Ref. [77].
\[\frac{d\langle\sigma v\rangle}{dE}=\langle\sigma v\rangle_{\rm line}\times\frac{ dN}{dE}. \tag{128}\]
This choice follows Ref. [62], and implies that our spectrum is normalized with respect to the _line cross section_, \(\langle\sigma v\rangle_{\rm line}\equiv\langle\sigma v\rangle_{\gamma\gamma}+ \frac{1}{2}\langle\sigma v\rangle_{\gamma Z}\), which is defined as the rate to produce two photons at exactly \(E=M_{\chi}\). By construction, \(dN/dE\) will contain a contribution of exactly \(2\delta(E-M_{\chi})\) for the line, but it will also contain contributions from endpoint photons, bound state decays, and continuum photons arising primarily from the unstable particles the direct annihilations or bound state decays can produce. For each of these latter components, however, their additions to \(dN/dE\) will be weighted by a branching fraction \(\langle\sigma v\rangle_{i}/\langle\sigma v\rangle_{\rm line}\), with \(\langle\sigma v\rangle_{i}\) the cross section for that particular contribution. The rationale for anchoring our calculations to the line cross section is that \(\chi\chi\to\gamma\gamma\), which has a spectrum of exactly \(dN/dE=2\delta(E-M_{\chi})\), is a common experimental target, and therefore there are a wide number of existing constraints on \(\langle\sigma v\rangle_{\rm line}\) which we can then directly compare with. Further discussion of this point can be found in Ref. [62].
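In other words, the spectrum is assembled schematically as

\[\frac{dN}{dE}=2\,\delta(E-M_{\chi})+\sum_{i}\frac{\langle\sigma v\rangle_{i}}{\langle\sigma v\rangle_{\rm line}}\left(\frac{dN}{dE}\right)_{i},\]

with the sum running over the endpoint, bound state, and continuum contributions discussed below.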
The spectra of line and endpoint photons produced by decay of the bound states are computed using the methods of Sec. 4.16 For our bound state formation and transition calculations, we include only states with \(L=0,1,2\). Capture into \(L=3\) and higher states requires at least \(L=2\) for the initial state, and we expect the contributions from components of the initial state with high \(L\) to be suppressed at low velocities, by a factor that is parametrically
Figure 7: The cross section for line photons, breaking down the contributions from the direct annihilation and bound states. While generically the direct annihilation dominates, for isolated masses near Sommerfeld peaks the bound state contribution can be the leading one. For all masses lower than those shown bound states are strictly subdominant.
\((M_{\chi}v/m_{ W})^{2L}\) (although for sufficiently high masses, \(M_{\chi}\gtrsim 100\) TeV, this suppression is lifted). It is also worth noting that for essentially all the parameter space most relevant for experimental searches with H.E.S.S, at \(M_{\chi}<20\) TeV, we find that no \(L=3+\) bound states exist in the spectrum. We independently expect capture into states with high principal quantum number \(n\) (which is required for high \(L\)) to be suppressed, as (1) the finite range of the potential means only a limited number of states are bound at all, so unlike in unbroken gauge theories there is no infinite tower of high-\(n\) states, (2) capture into weakly-bound states is suppressed by a phase space factor, and (3) analytic approximations (App. F) suggest that we can expect the leading contribution to the capture cross section to be exponentially suppressed for large \(n\). In practice, our numerical calculation expresses the bound states as a linear combination of 30 basis states for each combination of \(L\), \(Q\) and \((-1)^{L+S}\), allowing us to access up to 30 distinct bound states indexed by different values of \(n\) (although as we approach this upper bound we expect the spectrum to become less accurate), and we include all these states in our calculation. We have checked at sample mass points that our binding energies and cross sections for capture into lower-\(n\) states are not significantly affected by doubling the number of basis states. For the reasons given above, we generally expect the error due to the omission of higher-\(n\) states to be small.
Before showing the full distributions of line and endpoint photons, we can already consider one measure of the importance of bound states to the resulting photon signal: their contribution to \(\langle\sigma v\rangle_{\rm line}\). This is shown in Fig. 7, where we separate the contribution to the line into that from direct annihilation and that from processes involving an intermediate bound state. At this stage we only include bound-state contributions that produce line photons, with energy essentially at \(M_{\chi}\). The figure makes clear a point already estimated earlier: direct annihilation generally dominates the production of line photons at \(E\simeq M_{\chi}\) by \(1-2\) orders of magnitude. However, the bound-state contribution can be significant and even dominate at isolated mass points, for instance at \(M_{\chi}=68.1\) TeV, and therefore a reliable prediction at arbitrary masses must include this contribution.
Moving beyond the line, in Fig. 8 we show the full spectrum, broken down by various contributions, for two masses: the thermal mass of \(M_{\chi}=13.6\) TeV, and a mass where bound state contributions are significant, \(68.1\) TeV. For each mass we show two versions of the spectrum. In the lower panels, we show the unsmeared spectra, which is the distribution of photons that emerge from the annihilations. (Note in this case the line contribution is simply \(dN/dE=2\delta(E-M_{\chi})\) and so represented by a vertical line.) In the upper panels, we have convolved the raw spectra with a finite experimental energy resolution in order to model what would actually be seen at a realistic instrument. For this we take the energy resolution of the H.E.S.S. telescope, determined from Ref. [18]. In detail, we fix the relative width \(\Delta E/E\) to 0.17 and 0.11 for \(E=500\) GeV and \(E=10\) TeV, respectively, and then vary logarithmically between these endpoints, freezing the ratio either side. From this we compute \((dN/dE)_{\rm smeared}\) as \(dN/dE\) convolved with a Gaussian of width equal to the energy resolution.
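A minimal sketch of this smearing procedure is as follows; the resolution endpoints are those quoted above, while the interpolation details and function names are our own illustrative choices:

```python
import numpy as np

def relative_resolution(E_TeV):
    """H.E.S.S.-like relative energy resolution: 0.17 at 0.5 TeV and 0.11 at 10 TeV,
    interpolated logarithmically in energy and frozen outside these endpoints."""
    logE = np.log(np.clip(E_TeV, 0.5, 10.0))
    frac = (logE - np.log(0.5)) / (np.log(10.0) - np.log(0.5))
    return 0.17 + frac * (0.11 - 0.17)

def smear_spectrum(E_grid, dNdE):
    """Convolve a tabulated dN/dE with a Gaussian of width sigma(E_true).
    The delta-function line at E = M_chi is best added analytically afterwards,
    as 2 * Gaussian(E; M_chi, sigma(M_chi)), rather than being tabulated here."""
    sigma = relative_resolution(E_grid) * E_grid  # absolute width at each true energy
    smeared = np.zeros_like(dNdE)
    for i, E_obs in enumerate(E_grid):
        kernel = np.exp(-0.5 * ((E_obs - E_grid) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        smeared[i] = np.trapz(kernel * dNdE, E_grid)
    return smeared
```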
In terms of these two notions of the spectra, Fig. 8 shows five contributions to the photon distributions for the two masses. The first three of these are: 1. the direct annihilation line;
2. direct annihilation endpoint; and 3. the bound state contribution to the line and endpoint. Again we see clearly a point noted for the wino in Refs. [62; 64]: the endpoint contribution makes a considerable modification to the observed number of photons with \(E\sim M_{\chi}\), with the peak of the smeared spectrum enhanced by factors of 1.9 and 3.1 for \(M_{\chi}=13.6\) and 68.1 TeV, respectively. The bound state contribution is more modest: it is effectively negligible at the thermal mass, and gives a factor of 1.7 enhancement at 68.1 TeV, which again is a mass with an anomalously large bound state contribution to the hard photon spectrum.
Beyond these three, we also show the contribution of two continuum sources, both of which can generate lower energy photons. The first of these is the continuum emission arising from direct annihilation. This results from tree level annihilation of the quintuplets into \(W\) or \(Z\) bosons. The latter of these arises from \(\gamma Z\) and \(ZZ\) final states, and is a simple reweighting
Figure 8: The quintuplet annihilation spectrum for two masses, the thermal mass of 13.6 TeV (left), and a mass where bound state contributions are appreciable, 68.1 TeV (right). For each mass we show results convolved with the H.E.S.S. energy resolution (top) and unsmeared (below). The full spectrum is broken into five individual components: the line, endpoint, bound state line and endpoint, the direct annihilation continuum, and bound state contribution to the continuum. Details of each are provided in the text.
of the line cross section as
\[\frac{\langle\sigma v\rangle_{ZZ}+\frac{1}{2}\langle\sigma v\rangle_{Z\gamma}}{ \langle\sigma v\rangle_{\gamma\gamma}+\frac{1}{2}\langle\sigma v\rangle_{\gamma Z }}=\frac{c_{W}^{2}}{s_{W}^{2}}. \tag{110}\]
For the \(W^{+}W^{-}\) final state, the tree level annihilation rate, with the Sommerfeld effect included, can be computed as,
\[\begin{split}\langle\sigma v\rangle_{{}_{WW}}=\frac{\pi\alpha_{W} ^{2}}{M_{\chi}^{2}}&\Big{[}18|s_{00}|^{2}+25|s_{0\pm}|^{2}+4|s_{0 \pm\pm}|^{2}\\ &+30\sqrt{2}\text{Re}(s_{00}s_{0\pm}^{*})+12\sqrt{2}\text{Re}(s_ {00}s_{0\pm\pm}^{*})+20\text{Re}(s_{0\pm}s_{0\pm\pm}^{*})\Big{]}.\end{split} \tag{111}\]
As discussed for the case of the wino in Ref. [78], higher order corrections to this should not be appreciable, and so we do not include them. We can then add the \(W^{+}W^{-}\) final state to \(dN/dE\) with weighting \(\langle\sigma v\rangle_{{}_{WW}}/\langle\sigma v\rangle_{\text{line}}\) along with the \(Z\) contribution. In each case, to determine the spectrum of photons that result from these electroweak bosons we use PPPC4DMID [79] with the electroweak corrections turned off, to avoid any double counting of the endpoint contributions we computed. As seen in Fig. 8, these contributions are important for \(E_{\gamma}\ll M_{\chi}\).
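For reference, the expression above is straightforward to transcribe into a numerical pipeline; a sketch (with illustrative variable names for the Sommerfeld factors) reads:

```python
import numpy as np

def sigma_v_WW(s00, s0p, s0pp, M_chi, alpha_W):
    """Tree-level, Sommerfeld-enhanced <sigma v> for annihilation to W+ W-,
    transcribing the expression above. s00, s0p, s0pp are the Sommerfeld
    factors for the neutral, singly- and doubly-charged channels."""
    pref = np.pi * alpha_W**2 / M_chi**2
    return pref * (
        18 * abs(s00) ** 2 + 25 * abs(s0p) ** 2 + 4 * abs(s0pp) ** 2
        + 30 * np.sqrt(2) * np.real(s00 * np.conj(s0p))
        + 12 * np.sqrt(2) * np.real(s00 * np.conj(s0pp))
        + 20 * np.real(s0p * np.conj(s0pp))
    )
```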
The final contribution to the spectrum we consider is continuum photons arising from the bound state decays. This contribution is not the main focus of this work, but to get an estimate of its size and spectrum, we assume the most common SM decay products are light quarks (equally weighted between flavors), and employ the corresponding gamma-ray spectrum from PPPC4DMID. The motivation for this choice is that the bound states will decay through their couplings to (on-shell or off-shell) \(W\) and \(Z\) bosons, with the exact channel depending on their \(L\) and \(S\) quantum numbers, and the gauge bosons in turn decay dominantly to quarks due to their large associated degrees of freedom. We weight the continuum spectrum by the ratio between the bound state capture cross section and \(\langle\sigma v\rangle_{\text{line}}\), similar to how we weight the \(Z\) and \(W\) continuum components above. At the thermal mass, this ratio is roughly 31, and the visible contribution can be seen in Fig. 8. The contribution is dominated by the \(Q=0\)\(p\to s\) capture cross section which sits near a Sommerfeld peak in this capture rate at 13.6 TeV (_cf._ Fig. 4). To highlight this, at the edges of the uncertainty band on the thermal mass, 12.8 and 14.4 TeV, the equivalent ratio is reduced significantly, to 0.15 and 0.70, respectively. At 68.1 TeV the ratio is larger still - just over 471 - and is dominated by the \(Q=0\) and \(Q=1\)\(p\to s\) capture rates.
Figure 8 highlights the various contributions to the spectrum, but does not capture the variation of the spectrum as a function of mass. The variation can be considerable, as shown in Fig. 9. From the definition, the line contribution to this spectrum is fixed at \(2\delta(E-M_{\chi})\) independent of mass. What is not fixed, however, are the endpoint and continuum contributions, which can vary significantly even for small changes in mass. (The bound state contributions are not significant for the masses shown.) As shown in Ref. [39], such rapid variations can lead to sharp features in the instrumental sensitivity to \(\langle\sigma v\rangle_{\text{line}}\), as the shape of the DM
signal being searched for varies rapidly with mass. These effects do not occur for the wino or higgsino, where the spectra vary relatively smoothly with mass (see Ref. [39]).
The origin of this behavior seems to be interference between the different Sommerfeld factors, associated with the distinct mass eigenstates for the final annihilation: \(\chi^{0}\chi^{0}\), \(\chi^{+}\chi^{-}\), and \(\chi^{++}\chi^{--}\). These states have different branching ratios into the various SM final states, and the positions of the resonance peaks can differ between the interfering Sommerfeld factors. As we vary the mass, we can move rapidly from the sharp turn off of one Sommerfeld peak to the sharp turn on of another, and in doing so transition rapidly in the strength of the associated endpoint and continuum contributions, as seen in Fig. 9.
One might ask why this behavior was not seen for the wino, which also has multiple Sommerfeld factors that can interfere with each other. For the line cross section, one might suspect that the issue is that only the \(\chi^{+}\chi^{-}\) state can annihilate to photons at tree-level; however, we also do not see sharp quintuplet-like features in annihilation of wino DM to W bosons, which is allowed at tree-level from both the \(\chi^{0}\chi^{0}\) and \(\chi^{+}\chi^{-}\) states. More insight can be gained by working in the basis of eigenstates of the potential at small \(r\), rather than mass eigenstates; this corresponds to the basis of potential eigenstates in the limit of unbroken SU(2), as discussed in App. F. In the limit of unbroken SU(2), the relevant potential for the quintuplet (coupling states with total charge zero and even \(L+S\)) has two eigenstates that experience attractive interactions and one eigenstate that experiences repulsive interactions. We expect that the linear combination of Sommerfeld factors corresponding to the repulsed eigenstate at small \(r\) should be suppressed, as the SU(2) symmetry is restored in the small-\(r\) regime. We have confirmed numerically that this suppression is quite pronounced, typically several orders of magnitude at the velocities we consider. We would also expect a difference in the linear combination of Sommerfeld factors corresponding to the two attracted eigenstates, with the larger-magnitude eigenvalue yielding a larger Sommerfeld enhancement, but this difference
Figure 9: The full quintuplet spectrum for three different masses, \(M_{\chi}=14\), \(16\), and \(18\ \mathrm{TeV}\). As shown, the spectrum can change significantly as a function of mass, a feature which does not arise for the wino or higgsino.
is much less dramatic, and so the two attracted eigenstates still experience meaningful interference.
We thus attribute the sharp features observed in the quintuplet case to this interference between the different attracted eigenstates (in the small-\(r\) potential-dominated regime); its absence in the wino case is presumably related to the fact that the wino has only one attracted eigenstate (_e.g._ Ref. [68]). We therefore expect this behavior (sharp variations in the spectrum with mass) to be ubiquitous for larger SU(2) representations.
### Uncertainty associated with the velocity distribution of dark matter
The complete initial-state wavefunctions naturally depend on the relative velocity of the incoming DM particles, which in the discussion so far we have simply set as \(v=10^{-3}\). In this subsection we explore the systematic uncertainties associated with our modeling of the velocity. We first discuss the detailed dependence of the cross sections on the relative DM velocity, and then explore the effects on our spectra of averaging over different plausible velocity distributions. The effects of the long-range potential on the wavefunction saturate when \(v\lesssim m_{\text{\tiny{W}}}/M_{\chi}\), which is true for halo velocities (\(v\sim 10^{-3}\)) for \(M_{\chi}\lesssim 80\) TeV; consequently, except near resonances or for very heavy DM, we do not expect the Sommerfeld enhancement from the weak interactions to depend sensitively on the velocity distribution. However, the bound-state formation rate from \(L>0\) partial-wave components of the initial state _will_ have a non-trivial velocity dependence even below this threshold. Furthermore, for the quintuplet the thermal mass is only a factor of few below the saturation threshold, and in systems with higher velocities than the Sun's neighborhood - such as galaxy clusters - both the direct annihilation cross section and the bound-state formation rates are expected to depend sensitively on the velocity.
Figure 10: The dependence of the cross sections on \(v\) for four different fixed masses. We show the case of direct endpoint annihilation (left, the analogue of Fig. 11), and bound state capture (right, as in Fig. 12).
In Fig. 10 we show how annihilation and capture vary as a function of velocity at four different masses. We observe that for a mass of \(M_{\chi}=14\) TeV, noticeable velocity dependence is present at \(v\gtrsim 4\times 10^{-3}\). As we discuss in depth in App. F, the oscillatory behavior observed at high velocities can be understood in the limit of unbroken SU(2). This behavior originates from interference between the different eigenvalues of the potential. At low velocities, by contrast, SU(2) breaking effects are expected to suppress the oscillations. For higher DM masses, where the velocity dependence is relevant even for \(v\lesssim 10^{-3}\), our previous annihilation cross section plots should be taken as an illustrative estimate. A full calculation at high mass would involve integrating the formulae given in this paper over the true velocity distribution in the region of interest. The oscillatory behavior of the cross section at high velocities means that assuming a single velocity could in principle lead to large errors in this case.
We now estimate the effect of averaging over the velocity distribution. The characteristic scale of the DM velocity dispersion should be comparable to the circular velocity of the visible matter, which in the vicinity of the Sun has been measured to be \(v_{\rm circ}\simeq 240\) km/s [80]. Since the Milky Way's rotation curve is roughly flat at the Sun's location, we expect the velocity dispersion to be of a similar order over much of the Galaxy. However, close to the Galactic Center the DM velocity is not well-known. In DM-only simulations the velocity dispersion falls as one approaches the Galactic Center (_e.g._ Ref. [81]) but simulations including baryons have demonstrated the opposite behavior (_e.g._ Refs. [82; 83; 84]). Even at the Sun's location, the full DM velocity distribution is not well-understood: the distribution is often treated as Maxwellian up to some escape velocity, although this is only a crude approximation (_e.g._ Ref. [85]). The escape velocity is determined to be \(\sim\)500 km/s at the location of the solar system [86; 87; 88].17 Within the Maxwellian approximation, the distribution is specified by that escape velocity and the velocity dispersion, with the latter having a greater effect on the annihilation rate.
Footnote 17: Ref. [88] finds that the escape velocity increases slightly toward the Galactic Center. However, they only present results down to a radius of around 5 kpc, where it is closer to 650 km/s. The precise value of this cutoff is numerically unimportant for this work though, due to the exponential suppression in the distribution (_cf._ Eq. (100)). In practice, we truncate the particle velocity at 500 km/s, but the numerical difference between this and 2400 km/s is at most a part per million in the annihilation rate.
For the Milky Way, we use the velocity dispersion values obtained in Ref. [89] for a variety of NFW-profiles. In particular, we take the slowest and fastest velocities for locations interior to the solar system. This gives a range \(v_{\rm disp}\in[130,330]\) km/s. As a function of \(v_{\rm disp}\), the magnitude of the relative WIMP velocity is drawn from the following 1D probability distribution,
\[f(v)=\sqrt{\frac{27}{4\pi}}\,\frac{v^{2}}{v_{\rm disp}^{3}}e^{-3v^{2}/4v_{\rm disp }^{2}}. \tag{100}\]
Here \(v_{\rm disp}\) is the RMS velocity for a single DM particle, which is equal to the three-dimensional
velocity dispersion \(\sigma_{v,3d}\) defined in Ref. [89] by
\[v_{\rm disp}^{2}=\sigma_{v,3d}^{2}=\frac{\int dv\,v^{4}f(r,v)}{\int dv\,v^{2}f(r,v )}. \tag{100}\]
Here \(f(r,v)\) is the speed distribution for a single DM particle at a distance \(r\) from the Galactic Center.
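A simple sketch of the velocity averaging used below is given here; the truncation at roughly the escape velocity follows the discussion above, while the function names and implementation choices are illustrative:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def f_rel(v, v_disp):
    """1D distribution of the relative speed, as in the equation above (v, v_disp in units of c)."""
    return np.sqrt(27.0 / (4.0 * np.pi)) * v**2 / v_disp**3 * np.exp(-3.0 * v**2 / (4.0 * v_disp**2))

def velocity_average(sigma_v, v_disp, v_cut=500.0 / C_KM_S):
    """Average a velocity-dependent <sigma v>(v) over the Maxwellian distribution,
    truncating at roughly the escape velocity (a tiny effect, as noted in the footnote above)."""
    num, _ = quad(lambda v: f_rel(v, v_disp) * sigma_v(v), 0.0, v_cut)
    norm, _ = quad(lambda v: f_rel(v, v_disp), 0.0, v_cut)
    return num / norm

# In the saturated regime sigma_v is flat over the support of f_rel, and the
# average reduces to the fixed-velocity value, e.g.
# velocity_average(lambda v: 1.0, 160.0 / C_KM_S) == 1.0 up to quadrature accuracy.
```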
In Fig. 11, we plot the leading contribution to endpoint photon production, direct annihilation from an \(s\)-wave initial state, for two different velocity distributions, normalized to the simple assumption of all quintuplets having a fixed \(v=10^{-3}\). Except on resonances, the uncertainty is typically negligible. We also see that off resonance, and particularly at lower masses, the simple fixed-velocity assumption is a good approximation of either more realistic model. The reason for this is simply that we are in the saturation regime, as seen in Fig. 10. Therefore, we conclude that in general the fixed velocity assumption is a good one at low masses, although at higher masses one is generically underestimating the actual cross section, sometimes by more than an order of magnitude. Accordingly, for an actual experimental analysis, completeness would require an appropriate weighting of the cross section according to the specific region of interest studied.
For bound state capture, the off-resonance uncertainties are generally larger than for direct annihilation, as anticipated. This is demonstrated in Fig. 12, where we show \(p\)-to-\(s\) capture, the leading single-photon dipole transition. As we see by comparing the rates with those in Fig. 7 though, capture is generally far subdominant to direct annihilation. In this channel, however, the simple assumption of all DM having \(v=10^{-3}\) generally provides a result in the middle of the band given by the remaining two options. Even where it does not, we see that
Figure 11: An estimate of the range of uncertainty in our results associated with the DM velocity distribution for the dominant direct annihilation. We use two Maxwell distributions with \(v_{\rm disp}\) – _cf._ Eq. (101) – at the extremal values found by Ref. [89]. We divide their resulting \(\langle\sigma v\rangle\) by that of the simplified case of all quintuplets annihilating with \(v=10^{-3}\).
it still provides a good approximation to the more realistic velocity profiles. Accordingly, just as with direct annihilation, we will take this value as a representative approximation of this subleading contribution to endpoint photons when forecasting experimental sensitivity.
### Estimating the experimental sensitivity to quintuplet DM
Finally we turn to an estimate of the experimental sensitivity to the quintuplet DM hypothesis using the spectra we have computed. Using our definition of the spectrum in Eq. (109), the average DM-generated flux an instrument observes from a region of interest (ROI) of solid angle \(\Omega_{\rm ROI}\) is,
\[\frac{d\Phi}{dE}=\frac{\langle\sigma v\rangle_{\rm line}}{8\pi M_{\chi}^{2}} \left(\frac{dN}{dE}\right)_{\rm smeared}\bigg{(}\frac{1}{\Omega_{\rm ROI}} \int ds\,d\Omega\,\rho_{\chi}^{2}\bigg{)}. \tag{110}\]
As defined, the flux has units of [counts/cm\({}^{2}\)/s/TeV/sr] (for a detailed discussion of the units, see Ref. [90]). The final term in parentheses here is often referred to as the \(J\)-factor, and is an integral over the DM density squared in the region being observed. If the DM density \(\rho_{\chi}\) was known exactly, then for a model like the thermal quintuplet the flux is fully determined, as we have computed both the cross section and spectrum (up to residual uncertainties from higher order terms in the theory prediction, and velocity distribution as discussed in the last subsection).
To test the quintuplet hypothesis, we need to compare the flux in Eq. (110) to experimental measurements. For this, we will estimate the sensitivity of H.E.S.S. to the endpoint photon signal using the "mock analysis" method described in Ref. [62]. The approach in that work was to make use of the publicly available H.E.S.S. data in Ref. [18], where a search for \(\chi\chi\to\gamma\gamma\) was performed using 112 hours of Galactic Center observations taken by the instrument between \(2004-2008\). This is a small fraction of the observations H.E.S.S. has taken towards the galactic
Figure 12: Capture rate by emission of a dipole photon from a \(p\)-wave initial state to an \(s\)-wave quintuplet bound state for \(Q=0\) states. The three choices of velocity mirror those in Fig. 11: results for two different values of \(v_{\rm disp}\) are shown, divided by the capture rate for a fixed \(v=10^{-3}\).
center. As emphasized in Ref. [39], the collaboration has already collected roughly 800 hours of data in the region, and continues to collect roughly 150 hours each year. In that sense, the dataset we consider represents a small fraction of what is available. A further limitation of our approach is that the analysis in Ref. [18] was a search purely for line photons, and therefore adopted a flexible background model that absorbed all smooth components. The analysis is therefore unsuitable for considering continuum contributions, which can play an important role in these sorts of analyses, as emphasized in, for instance, Ref. [44]. Our rationale for adopting this mock analysis, however, is that Ref. [18] provided enough information that the full analysis they undertook can be reconstructed, as was shown in Ref. [62]. Later on we will provide a rough estimate of how sensitive more recent and upcoming analyses could be.
For the mock analysis, we fit the data provided in Ref. [18] to a combination of the flux in Eq. (5.6) and a parametric background model adopted by the experiment--full details are provided in Ref. [62]. If we have a prediction for \(\rho_{\chi}\), this approach combined with our prediction for the spectrum can be used to obtain an estimated sensitivity for \(\langle\sigma v\rangle_{\rm line}\). For this we adopt the Einasto profile [91] used by the H.E.S.S. analysis (based on Ref. [92]),
\[\rho_{\rm Einasto}(r)\propto\exp{\left[-\frac{2}{\alpha}\left(\left(\frac{r} {r_{s}}\right)^{\alpha}-1\right)\right]}, \tag{5.7}\]
with \(\alpha=0.17\), \(r_{s}=20\) kpc, and the normalization fixed to 0.39 GeV/cm\({}^{3}\) at the solar
Figure 13: (Left) The estimated sensitivity to the quintuplet of H.E.S.S. I using 112 hours of galactic center (GC) data. Assuming an Einasto profile, we show sensitivity to \(\langle\sigma v\rangle_{\rm line}\) as defined in Eq. (5.1), which can then be compared to the equivalent theoretical prediction. Across the entire mass range considered here, H.E.S.S. would be able to exclude the quintuplet assuming an Einasto profile. (Right) If the DM profile is cored as in Eq. (5.8), the core sizes that would be required to make a non-observation consistent with our quintuplet predictions. At the thermal mass, 13.6 TeV, a core of 1 kpc is required, whereas at the upper edge of the thermal band, 14.4 TeV, the results would be consistent with a 0.5 kpc core.
radius, \(r=8.5\) kpc. The resulting estimated constraint is shown in the left of Fig. 13.18 We emphasize that even though this is a constraint on \(\langle\sigma v\rangle\) it is _not_ based solely on the line prediction--the results are based on the estimated detectability of the entire spectrum resulting from line, endpoint, and bound state photons. For the mass range considered, the bound state contribution is negligible (_cf._ Fig. 7), however the endpoint is not: at 13.6 TeV, it enhances the sensitivity by a factor of 1.9. As mentioned above, by default, the results do not include either continuum contribution considered in Fig. 8. The motivation for this is that the particular background model adopted in Ref. [18] was designed solely to search for a narrow line feature, and there can be considerable degeneracy with the continuum emission (see the discussion in Ref. [62]). Nevertheless, we have tested adding the continuum emission from direct annihilation to \(W\) and \(Z\) final states, and found for most masses it has no impact on the estimated limits, although there is a slight fluctuation around the thermal mass, which increases the sensitivity by \(\simeq\)20%. We also note that there are several locations, such as just below \(M_{\chi}=3\) TeV and just above the thermal mass, where there is a larger theoretical error. This results from the sharp variations in the spectra observed in Fig. 9. In fact, the sensitivity to these features is reduced by the insensitivity of the background model used in this work to smooth features. These results can also be compared to Ref. [39], where an alternate H.E.S.S. analysis is performed using the spectra from this work, and much sharper variations are observed in their sensitivity. (We note that the results of that work made use of the LO Sommerfeld potential calculations, not the NLO results we have adopted here.)
Footnote 18: One can find an early attempt to make projected \(\gamma\)-ray constraints from the H.E.S.S. galactic center data in Ref. [93]. Their Fig. 7 is analogous to our Fig. 13. The earlier paper did not include LL (or NLL) resummation, nor the NLO corrections to the electroweak potential. For similar Einasto parameters, their bounds on the line cross section are about an order of magnitude weaker than ours (however, note that our predicted line cross section is also smaller).
The results of the mock analysis suggest that for the central value of the thermal mass, even 112 hours of H.E.S.S. data can exclude the thermal prediction by a factor of 10. Nevertheless, this varies considerably across the uncertainty band on the thermal mass: at 12.8 TeV, the exclusion factor is 55, whereas it is only 4 at 14.4 TeV. The sharp variation is a result of the thermal window sitting near a Sommerfeld resonance, as shown in Fig. 13. We emphasize once more that although we use the NLO potential in our computations, the thermal mass range was computed with the LO potential, and given the sensitivity of our findings to the exact mass, computing the thermal mass at NLO will be important for determining the fate of the thermal quintuplet. If we relax the thermal cosmology assumption and consider a broader range of masses, we see the quintuplet is excluded across the full \(0.5-20\) TeV mass range. Yet this statement is contingent on the form of \(\rho_{\chi}\) adopted, about which there is considerable uncertainty. In particular, the density profile may flatten toward the inner Galaxy. As the annihilation signal is sensitive to \(\rho_{\chi}^{2}\), flattening the profile has a marked impact on the flux, making this one of the dominant uncertainties in \(\rho_{\chi}\) for our purposes. We parameterize a possible flattening of the profile by replacing the Einasto density profile with a constant value
for Galactocentric distances \(r<r_{c}\), where we will refer to \(r_{c}\) as the "core size":19
Footnote 19: To be explicit, we fix the normalization of the Einasto profile in Eq. (108) to the solar radius _before_ we impose the core restriction of Eq. (109). This implies that for a core size larger than the solar radius, the profile will predict less than 0.39 GeV/cm\({}^{3}\) at our location. Of course, the more significant concern is that such a large core is not consistent with observations, as discussed in the text, and should solely be viewed as a proxy for how much the \(J\)-factor needs to be reduced.
\[\rho(r)=\begin{cases}\rho_{\rm Einasto}(r)&r>r_{c},\\ \rho_{\rm Einasto}(r_{c})&r<r_{c}.\end{cases} \tag{110}\]
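For concreteness, a sketch of the cored profile and of the line-of-sight integral entering the \(J\)-factor is given below; the parameter values are those quoted above, while unit conversions and the average over the region of interest are omitted:

```python
import numpy as np
from scipy.integrate import quad

ALPHA, R_S, R_SUN, RHO_SUN = 0.17, 20.0, 8.5, 0.39  # Einasto alpha, r_s [kpc], solar radius [kpc], rho_sun [GeV/cm^3]

def rho_einasto(r):
    """Einasto profile, normalized to RHO_SUN at the solar radius (before coring)."""
    shape = lambda x: np.exp(-2.0 / ALPHA * ((x / R_S) ** ALPHA - 1.0))
    return RHO_SUN * shape(r) / shape(R_SUN)

def rho_cored(r, r_c):
    """Cored profile: Einasto outside r_c, frozen at rho_Einasto(r_c) inside."""
    return np.where(r > r_c, rho_einasto(r), rho_einasto(r_c))

def los_integral(psi, r_c=0.0, s_max=50.0):
    """Line-of-sight integral of rho^2 at angle psi [rad] from the Galactic Center.
    Result is in GeV^2 cm^-6 kpc; multiply by one kpc in cm (~3.086e21) for the usual units."""
    def integrand(s):
        r = np.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * np.cos(psi))
        return rho_cored(r, r_c) ** 2
    val, _ = quad(integrand, 0.0, s_max)
    return val
```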
We can then ask what choice of \(r_{c}\) would raise the estimated constraint on \(\langle\sigma v\rangle_{\rm line}\) above the theoretical prediction. This is plotted in the right-hand panel of Fig. 13, both employing our full endpoint spectrum and in the case where we (incorrectly) use only the line cross section in setting the bounds. To provide an estimate for what core sizes are consistent with data, we note that simulations of Milky Way like galaxies can generate \(\mathcal{O}(1\ {\rm kpc})\) cores [94]; however, measurements of stars in the Bulge seem to disfavor \(r_{c}\gtrsim 2\ {\rm kpc}\) [95; 96]. At the lower end of the thermal mass range, 12.8 TeV, the thermal quintuplet would already be in tension with this, requiring a 2.8 kpc core. At the central (13.6 TeV) and upper (14.4 TeV) end, however, the required core size is 1.0 and 0.5 kpc, respectively, and therefore not obviously in tension with our mock analysis. A more recent study claims evidence for a few kpc core that could potentially saturate the earlier limits [97]. If confirmed, this could suppress the \(J\)-factor by nearly an order of magnitude. That would make it challenging for the indirect-detection community to set aggressive limits, but as we estimate, even cores of this size are within reach of CTA and possibly H.E.S.S. (_cf._ Fig. 14).
As already mentioned, the mock analysis we consider makes use of only a very small amount of existing data, and with that we forecast a sensitivity to \(\langle\sigma v\rangle_{\rm line}\simeq 8.5\times 10^{-27}\ {\rm cm}^{3}/{\rm s}\) at the central thermal mass. (The error on this value due to uncertainty on the spectrum from our NLL calculation is less than 1%, far smaller than the variation across the thermal mass range, which is closer to 10%.) Using 500 hours of H.E.S.S. data and an identical Einasto profile, Ref. [39] forecast a sensitivity at the thermal mass of \(\simeq\)\(9.3\times 10^{-28}\ {\rm cm}^{3}/{\rm s}\), almost a factor of ten better than the value used here. This is significantly more than the naive \(\sqrt{5}\) improvement the additional data would suggest, which can be primarily attributed to the fact that that work used H.E.S.S. II observations, whereas we adopt the sensitivity from H.E.S.S. I, combined with the different analysis used in that work. With such a sensitivity, we would require a core size slightly larger than 3.5 kpc to save the thermal quintuplet, which would be in tension with observations. Nevertheless, repeating the process at 14.4 TeV, the required core size would only be 1.6 kpc, and therefore not yet clearly excluded. We can also give a crude estimate for the sensitivity the upcoming CTA could have for the quintuplet. Although no dedicated forecast for the quintuplet has been performed using the spectra in our work, we can estimate the improved sensitivity as follows. Reference [64] performed an identical mock analysis to ours for the NLL wino spectrum, estimating sensitivity at \(M_{\chi}=13.6\) TeV of \(\langle\sigma v\rangle_{\rm line}\simeq 8\times 10^{-27}\ {\rm cm}^{3}/{\rm s}\),
slightly stronger than the sensitivity to the quintuplet. Using the identical NLL spectrum, Ref. [44] then estimated that with 500 hours of data, CTA could reach \(\simeq 1\times 10^{-28}\) cm\({}^{3}\)/s, a factor of eighty improvement. Assuming the same improvement for the quintuplet, CTA would be sensitive to \(\langle\sigma v\rangle_{\rm line}\simeq 1.1\times 10^{-28}\) cm\({}^{3}\)/s, excluding the thermal value by a factor of eight hundred. To not have seen the thermal quintuplet, we would need to core \(\rho_{\chi}\) out to almost 8.6 kpc, beyond the solar radius, which is simply inconsistent with observations. Even at the upper end of the mass range, a core of 6.4 kpc would be required. In this sense, CTA would provide the definitive word on whether the thermal quintuplet is the DM of our Universe.
These results are summarized in Fig. 14, where the point shows the core size required for the central thermal mass, 13.6 TeV, whereas the upper and lower error bars correspond to the lower and upper ends of the thermal mass range. We also show the core size disfavored by the analysis in Ref. [96]. The figure summarizes the conclusion reached above: CTA will seemingly have the final word on the thermal quintuplet; however, if a full analysis of H.E.S.S. data sees no sign of the signal, the model would already begin to be disfavored. There are two important caveats to this conclusion. The first is that our findings are based on an extrapolation from a mock analysis of H.E.S.S. I data, and are no substitute for a full analysis or projection using the present and forecast H.E.S.S. and CTA instrumental responses. To give one example of what could change, an analysis that accounts for the continuum emission could test the quintuplet even more strongly. Secondly, the range of masses we have considered is the thermal mass window of \(13.6\pm 0.8\) TeV that was determined using the LO potential, whereas the remainder of our calculations use the NLO results, as emphasized several times
Figure 14: An estimate for the required core size of the Einasto profile, given that no preference for a signal is seen in our H.E.S.S. mock analysis, and if no signal emerges in an analysis of the full H.E.S.S. dataset, or at CTA. The central values correspond to 13.6 TeV, the central thermal mass, whereas the upper and lower error bars correspond to 12.8 and 14.4 TeV, the edges of the thermal mass window. The dashed line corresponds to the rough core-size constraint from Ref. [96]. Our results suggest that H.E.S.S. can already considerably test the quintuplet, with the final word likely being left to CTA.
already. Updating the thermal mass using the NLO potentials will be important. To give a sense for the impact this could have, repeating the analysis in Fig. 14 for 13.6 TeV using the LO potential in our calculations, the core size would change from 1.0, 3.5, and 8.5 kpc, to 0.7, 2.5, and 7.1 kpc.
## 6 Conclusions
For all the vastness of the DM parameter space, a thermal WIMP has remained a constant focus for decades. Minimal DM is an exemplar of thermal DM, and through indirect detection, many of the associated models are on the verge of being detected or firmly excluded, as we have shown for the quintuplet in the present work. Either way, these are important times in the search for DM.
With this in mind, the present work has computed the quintuplet annihilation spectrum to NLL accuracy, and established the formalism to straightforwardly extend this to higher odd SU(2) representations. We plot the spectrum along with projected limits from a simple extension of the H.E.S.S. I analysis in Fig. 13. In doing so, we have demonstrated the power of the EFT of Heavy DM, and also extended this formalism to include the contribution from the rich set of bound states the model contains. While the bound states can make a significant contribution to the continuum photon emission, their impact on the number of photons with \(E_{\gamma}\sim M_{\chi}\) is minimal, except at isolated masses. As seen in earlier studies of the wino, the same cannot be said for endpoint photons from direct annihilation, which again provide an \(\mathcal{O}(1)\) correction to the line signal seen in IACTs.
Taken together, we estimated that with our spectra, H.E.S.S. should almost be able to probe the entire allowed range for the quintuplet once uncertainties on the DM density in the inner galaxy are accounted for. Performing this analysis using the existing data, and the soon-to-be-collected data with CTA will be critical (_cf._ Fig. 14). The use of background models which enhance sensitivity to smooth features such as the continuum as well as the full contribution from the endpoint-photon spectrum can provide an additional piece of experimental leverage beyond conventional line searches.
On the theory side, the thermal abundance should be recomputed using NLO potentials, as the sensitivity to the quintuplet depends strongly on where in the predicted thermal mass range it sits. Finally, it will be interesting to extend the techniques in this work to additional representations, such as a \(\mathbf{7}\) of SU(2), where we expect key features of the quintuplet, such as the strong variation of the spectrum as a function of mass, to appear and be even more pronounced.
The work we have presented benefited from useful discussions with Tobias Binder, Marco Cirelli, Tongyan Lin, Alessandro Montanari, Emmanuel Moulin, Nikhil Raghuram, and Diego Redigolo. MB is supported by the DOE (HEP) Award DE-SC0019470. VV is supported
by startup funds from the University of South Dakota. TRS' work is supported by the Simons Foundation (Grant Number 929255, T.R.S), by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions), and by the U.S. Department of Energy, Office of Science, Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567. TRS thanks the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452, for hospitality during the completion of this work.
## Appendix A Quintuplet Dark Matter: A Brief Review
Here we provide a brief review of quintuplet DM, also referred to as 5-plet electroweak DM. In particular, we first outline the relevant group theory necessary to specify the interactions used for the calculations in the main text. We will then review phenomenological aspects of the model beyond indirect detection.
### Interactions
Quintuplet DM consists of adding to the SM five Majorana fermions that transform together in the \(\mathbf{5}\) representation of SU(2), and as a singlet under the remaining SM forces. Above the electroweak symmetry breaking scale, we collect the five fields into a multiplet \(\chi=(\chi^{1},\,\dots,\,\chi^{5})^{T}\), in terms of which the DM Lagrangian takes the following form
\[\mathcal{L}_{\text{\tiny DM}}=\frac{1}{2}\bar{\chi}(i\not{D}-M_{\chi})\chi= \frac{1}{2}\bar{\chi}([i\not{\partial}-M_{\chi}]\mathbb{1}+g_{W}T^{a}_{ \mathbf{5}}\not{W}^{a})\chi. \tag{110}\]
In the final expression, the first two contributions represent the kinetic terms for each of the fields, which are diagonal amongst the multiplets as indicated by \(\mathbb{1}\). The final term describes the interaction between the additional fields and the SM electroweak bosons. Importantly, we emphasize that the interaction strength is specified by the SM SU(2) gauge coupling, \(g_{W}\), and is not a free parameter. Instead, \(M_{\chi}\) is the unique free parameter in the theory. If we assume a conventional thermal origin for the quintuplet, then even the mass can be fixed by the observed relic density to \(M_{\chi}=13.6\pm 0.8\) TeV [2; 3].20 The 13.6 TeV quintuplet is therefore a zero parameter DM model.
Footnote 20: As emphasized above, this value was computed with the LO electroweak potential. Redoing the analysis with the NLO potential would reduce an important theoretical uncertainty.
After electroweak symmetry breaking, the five Majorana fermions rearrange themselves into three mass eigenstates: a Majorana neutral fermion \(\chi^{0}\), and two charged Dirac fermions \(\chi^{+}\) and \(\chi^{++}\). At leading order, each of these states maintains a mass of \(M_{\chi}\). However, radiative corrections to the charged states break this degeneracy, raising the masses of the charged fermions, singling out \(\chi^{0}\) as the lightest state and the DM candidate. These corrections have been computed, and in detail \(\delta_{0}=M_{\chi^{+}}-M_{\chi^{0}}\simeq 164\) MeV, and \(\delta_{+}=M_{\chi^{++}}-M_{\chi^{+}}\simeq 3\delta_{0}\) [1; 98]. For most aspects of our calculations these mass splittings will be irrelevant and we will take \(\delta_{0}\simeq\delta_{+}\simeq 0\). However, we do include them in the electroweak potential used to compute Sommerfeld factors, and scattering- and bound-state wavefunctions. This is done by adding \(2\delta_{0}\) to the diagonal term in the potential matrix corresponding to the \(\chi^{+}\chi^{-}\) component of any state, \(8\delta_{0}\) to the diagonal term corresponding to the \(\chi^{++}\chi^{--}\) component, and similar appropriate shifts for diagonal elements in the potential matrix corresponding to components of \(Q\neq 0\) states (the shift is given by the difference between the rest mass of the state constituents and \(\chi^{0}\chi^{0}\)).
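For concreteness, the following snippet (illustrative bookkeeping only; the dictionary and helper function are ours and not part of our numerical code) evaluates these diagonal shifts from \(\delta_{0}\) and \(\delta_{+}=3\delta_{0}\) for a few two-particle components.

```python
# Diagonal potential-matrix shifts: rest mass of a two-particle component
# relative to chi^0 chi^0, built from delta_0 = 164 MeV and delta_+ = 3 delta_0.
delta0 = 0.164  # GeV
mass_above_neutral = {"0": 0.0, "+": delta0, "-": delta0,
                      "++": 4.0 * delta0, "--": 4.0 * delta0}

def diagonal_shift(c1, c2):
    """Shift (in GeV) for the chi^{c1} chi^{c2} component of a state."""
    return mass_above_neutral[c1] + mass_above_neutral[c2]

# The Q = 0 components reproduce the 2*delta_0 and 8*delta_0 shifts quoted above;
# the last two entries give the analogous Q = 1 shifts.
for pair in [("0", "0"), ("+", "-"), ("++", "--"), ("+", "0"), ("++", "-")]:
    print(f"chi^{pair[0]} chi^{pair[1]}: {diagonal_shift(*pair)/delta0:.0f} * delta_0")
```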
We now turn to the interaction term in Eq. (115), \(\frac{1}{2}g_{ W}\bar{\chi}T^{a}_{\bf 5}\not{W}^{a}\chi\). Here \(a=1,2,3\) indexes the electroweak gauge bosons, which transform together in an adjoint of SU(2). In the broken theory, these are mapped to the charge and mass eigenstates in the usual way,
\[W^{1}_{\mu}=\frac{1}{\sqrt{2}}(W^{+}_{\mu}+W^{-}_{\mu}),\ \ W^{2}_{\mu}=\frac{i}{ \sqrt{2}}(W^{+}_{\mu}-W^{-}_{\mu}),\ \ W^{3}_{\mu}=s_{ W}A_{\mu}+c_{ W}Z_{\mu}. \tag{116}\]
The only part of Eq. (115) that remains undetermined is \(T^{a}_{\bf 5}\), the three generators of SU(2) in the quintuplet representation. A convenient basis in which to specify \(T^{a}_{\bf 5}\) is the basis of charged states discussed above, where the DM can be cleanly identified. We can determine the charged states through their couplings to the bosons in Eq. (116). In particular, as \(A_{\mu}\) couples to charge, we can read off the charges of the states as soon as we know \(T^{3}\). It will also be convenient to introduce
\[T^{\pm}=\frac{1}{\sqrt{2}}(T^{1}\pm iT^{2}), \tag{117}\]
in terms of which \(T^{a}W^{a}=T^{+}W^{+}+W^{-}T^{-}+W^{3}T^{3}\), independent of representation.
Before evaluating the generators for the quintuplet, let us review how the argument proceeds for the simpler case of the wino--a triplet or \({\bf 3}\) of SU(2). In this case, it is conventional to exploit the fact that the generators are given by the structure constants of SU(2), so that
\[\bar{\chi}T^{a}_{\bf 3}\gamma^{\mu}\chi=\bar{\chi}_{b}(T^{a}_{\bf 3})_{bc} \gamma^{\mu}\chi_{c}=-i\epsilon_{abc}\bar{\chi}_{b}\gamma^{\mu}\chi_{c}. \tag{118}\]
This approach relies on the fact that we already know a representation of the generators for the adjoint, and so does not simply generalize to larger representations. However, we can derive a representation more systematically as follows. Recall that one representation of the \({\bf n}\) of SU(2) is an \((n-1)\)-index symmetric tensor, with each index transforming in the fundamental. In this representation we denote the adjoint as \(\chi^{ij}\), and the quintuplet as \(\chi^{ijkl}\), where the indices take values 1 and 2. Beginning with the wino, \(\chi^{ij}\) has three unique components, which
we can embed into a vector as21
Footnote 21: The \(\sqrt{2}\) ensures that \(\chi\) as it appears in \(\bar{\chi}\not{\partial}\chi/2=\bar{\chi}^{ij}\not{\partial}\chi^{ij}/2\) is canonically normalized. An identical argument explains the coefficients in Eq. (A.9).
\[\chi=\begin{pmatrix}\chi^{1}\\ \chi^{2}\\ \chi^{3}\end{pmatrix}=\begin{pmatrix}\chi^{11}\\ \sqrt{2}\chi^{12}\\ \chi^{22}\end{pmatrix}.\] (A.5)
We can use this representation to explicitly construct \(T^{a}_{\bf 3}\) as follows. The key is that we know exactly how the generators act on \(\chi^{ij}\) as each index transforms in the fundamental. Accordingly,
\[T^{a}(\chi^{ij})=[(T^{a}_{F})^{i}_{k}\delta^{j}_{l}+\delta^{i}_{k}(T^{a}_{F})^{j}_{l}]\chi^{kl},\] (A.6)
where \(T^{a}_{F}=\sigma^{a}/2\), with \(\sigma^{a}\) the Pauli matrices. Consider a generic infinitesimal transformation, \(U=1+iu\), with \(u=u_{a}T^{a}\). If we take \(u^{a}=(0,0,\kappa)\) with \(\kappa\ll 1\), then the components of \(\chi\) transform as
\[\delta\chi^{1}=i\kappa\chi^{1},\ \ \delta\chi^{2}=0,\ \ \delta\chi^{3}=-i\kappa\chi^{3}.\] (A.7)
From this, we can read off the action of an infinitesimal \(U\) on \(\chi\), and hence infer that we must have
\[T^{3}_{\bf 3}=\text{diag}(+1,\,0,\,-1).\] (A.8)
We can now identify \(\chi^{1}\), \(\chi^{2}\), and \(\chi^{3}\) as having charges \(+1\), \(0\), and \(-1\), yielding the expected spectrum in the broken phase.22 The remaining components of \(T^{a}_{\bf 3}\) can be derived identically.
Footnote 22: The representation in the charge basis is not unique. For instance, the transformation \(\chi^{1,2}\to e^{\pm i\phi}\chi^{1,2}\) leaves the charge assignments unchanged, but will introduce a phase into the off-diagonal \(W^{\pm}\) couplings. The same will be true for the off-diagonal quintuplet couplings.
This approach readily generalizes to the quintuplet. The five unique components of \(\chi^{ijkl}\) can be embedded into a vector as follows,
\[\chi=\begin{pmatrix}\chi^{1}\\ \chi^{2}\\ \chi^{3}\\ \chi^{4}\\ \chi^{5}\end{pmatrix}=\begin{pmatrix}\chi^{1111}\\ 2\chi^{1112}\\ \sqrt{6}\chi^{1122}\\ 2\chi^{1222}\\ \chi^{2222}\end{pmatrix}=\begin{pmatrix}\chi^{++}\\ \chi^{+}\\ \chi^{0}\\ \chi^{-}\\ \chi^{--}\end{pmatrix}.\] (A.9)
To justify the charge assignments, we repeat the above analysis to find
\[T^{3}_{\bf 5}=\text{diag}(+2,\,+1,\,0,\,-1,\,-2).\] (A.10)
Further, we can compute
\[T_{\bf 5}^{+}=\begin{pmatrix}0&\sqrt{2}&0&0&0\\ 0&0&\sqrt{3}&0&0\\ 0&0&0&\sqrt{3}&0\\ 0&0&0&0&\sqrt{2}\\ 0&0&0&0&0\end{pmatrix}, \tag{111}\]
and \(T_{\bf 5}^{-}=(T_{\bf 5}^{+})^{T}\). In the main body we will exclusively work in the charged basis of Eq. (111), using the form of \(T_{\bf 5}^{a}\) as above whenever necessary.
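The explicit matrices above are easy to cross-check numerically. The sketch below (an independent check, not code used elsewhere in this work) builds the generators of an arbitrary odd representation in the charge basis from the standard angular momentum ladder operators and reproduces \(T^{3}_{\bf 3}\), \(T^{3}_{\bf 5}\), and \(T^{+}_{\bf 5}\) quoted above.

```python
import numpy as np

def su2_generators(n):
    """Generators T^1, T^2, T^3 of the n-dimensional (spin j = (n-1)/2)
    representation of SU(2) in the charge basis, with [T^a, T^b] = i eps_abc T^c."""
    j = (n - 1) / 2
    m = np.arange(j, -j - 1, -1)                       # m = j, j-1, ..., -j
    Jp = np.zeros((n, n))
    for k in range(1, n):                              # <m+1| J^+ |m>
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    T1 = (Jp + Jp.T) / 2.0
    T2 = (Jp - Jp.T) / (2.0 * 1j)
    T3 = np.diag(m)
    return T1, T2, T3

# Triplet (wino): T^3 = diag(+1, 0, -1).
print(np.diag(su2_generators(3)[2]))

# Quintuplet: T^3 = diag(+2, +1, 0, -1, -2), and T^+ = (T^1 + i T^2)/sqrt(2)
# with superdiagonal entries sqrt(2), sqrt(3), sqrt(3), sqrt(2).
T1, T2, T3 = su2_generators(5)
Tp = (T1 + 1j * T2) / np.sqrt(2.0)
print(np.diag(T3))
print(np.round(np.diag(Tp, k=1).real ** 2, 6))         # -> [2. 3. 3. 2.]

# Sanity check of the algebra.
assert np.allclose(T1 @ T2 - T2 @ T1, 1j * T3)
```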
### Phenomenology
We end this section with a brief description of quintuplet phenomenology beyond indirect detection. As already noted, the case where \(M_{\chi}\simeq 13.6\) TeV is particularly appealing, as this is the mass singled out from a conventional cosmology via a WIMP-miracle-style argument. The interactions of the quintuplets, reviewed in the previous subsection, are sufficient to keep it in thermal equilibrium in the early Universe. As the Universe cools, eventually the quintuplet undergoes a conventional freeze-out. When this occurs dictates the final abundance, and matching to the observed DM density fixes the single parameter of the theory, \(M_{\chi}\). For larger (smaller) values of \(M_{\chi}\) than \(13.6\) TeV, one naively over (under) produces the observed DM density. Nevertheless, it is entirely possible that there are effects in the early Universe that modify this simple picture. The presence of additional beyond-the-SM states that either decay to the SM or directly to the quintuplet can dilute or increase its abundance, respectively, making a wider mass range viable. Further, for \(M_{\chi}\leq 13.6\) TeV, even with no additional states, the quintuplet would represent a well motivated (and predictable) fraction of DM. For these reasons, in the main body we considered a wide range of quintuplet masses, although we emphasize once more that the scenario where \(M_{\chi}\simeq 13.6\) TeV is compelling.
As with any DM candidate, one can also consider searching for the quintuplet with either direct detection or at a collider. For direct detection, as the quintuplet carries no U(1) hypercharge, it does not couple to the \(Z\) boson at tree level. Nevertheless, couplings to SM nucleons can arise at loop level. The spin-independent cross-section is \(\sigma\simeq(1.0\pm 0.3)\times 10^{-46}\) cm\({}^{2}\)[3] (see also Ref. [99, 100]), beyond the reach of current searches at \(13.6\) TeV, and even next generation instruments such as LZ [101]. Nevertheless, the cross-section is a factor of \(\sim 4\) above the neutrino floor, and is in reach of generation-3 instruments such as DARWIN [102].
Detection at colliders is similarly challenging, but also potentially within reach of future instruments. Existing LHC searches reach masses of \(\sim 270\) GeV; even the high-luminosity dataset will only reach to \(\sim 520\) GeV [103]. Even future hadron colliders are unlikely to reach the thermal mass. A future \(100\) TeV hadron collider will just be able to reach the thermal masses for canonical neutralinos such as the higgsino and wino [104, 105]. Given these two candidates have significantly lower thermal masses of 1 and 2.9 TeV respectively [106, 107, 4, 108],
the prospects of probing the 13.6 TeV quintuplet appear discouraging. Future muon colliders operating at lower center-of-mass energies could reach the quintuplet, although they would still need to obtain \(\sqrt{s}\simeq 35\) TeV [3] (see also Refs. [109; 110]). Taken together, in the short term indirect detection remains the most likely avenue for probing quintuplet DM.
## Appendix B Operators for Higher-\(L\) Bound-state Annihilation
As argued in Sec. 4.1, the higher-\(L\) bound states preferentially decay to the deeper bound states with lower \(L\) instead of directly annihilating to SM particles. Nevertheless, for completeness here we provide the complete set of relevant operators up to \(\mathcal{O}(v)\).
We consider the structure of the operators which support up to \(p\)-wave bound states. Thus we need to keep operators (at the amplitude level) suppressed by at most one power of the DM 3-momentum; the \(\mathcal{O}(v^{0})\) operators will support both the direct annihilation as well as \(s\)-wave bound state annihilation. By matching to the full tree level amplitude, we can obtain the structure that supports \(S=0\), \(L=1\) states,23
Footnote 23: Throughout this appendix we will work with 4-component DM fields and not reduce it to 2 components as was done for the \(L=S=0\) operator in the main text.
\[\mathcal{O}_{1}=\mathbf{v}_{\chi}\cdot\mathbf{n}\left(\bar{\chi}\Big{[}T^{a},T^{b}\Big{]}\gamma^{0}\gamma^{5}\chi\right)i\epsilon^{ijk}(n-\bar{n})^{k} \mathcal{B}^{i,a}_{\perp n}\mathcal{B}^{j,b}_{\perp\bar{n}}, \tag{110}\]
where it is understood that the subscript \(\chi\) on \(\mathbf{v}_{\chi}\) indicates that the velocity vector is the velocity of the state created/annihilated by the \(\chi\) field (as opposed to the \(\bar{\chi}\) field). As is evident, this operator supports a bound state with \(L=1\) and \(S=0\). Likewise we can write down the next set of operators,
\[\mathcal{O}_{2} =\mathbf{v}_{\chi}\cdot\mathbf{n}\left(\bar{\chi}\Big{\{}T^{a},T^ {b}\Big{\}}\gamma^{i}\chi\right)(n-\bar{n})^{i}\mathcal{B}^{\mu,a}_{\perp n} \mathcal{B}^{b}_{\perp\bar{n}\mu}, \tag{111}\] \[\mathcal{O}_{3} =\mathbf{v}_{\chi}\cdot\mathcal{B}^{a}_{\perp n}\mathcal{B}^{b}_ {\perp\bar{n}\mu}\left(\bar{\chi}T^{b}T^{a}\gamma^{\mu\perp}\chi\right)\!,\] \[\mathcal{O}_{4} =\mathbf{v}_{\chi}\cdot\mathcal{B}^{b}_{\perp\bar{n}}\mathcal{B} ^{a}_{\perp n\mu}\left(\bar{\chi}T^{a}T^{b}\gamma^{\mu\perp}\chi\right)\!.\]
From their Dirac structure, it is clear that these operators support \(L=1,S=1\) bound states. There is also another operator which supports an \(L+S\) odd bound state, specifically the \(L=1\), \(S=0\) bound state which arises out of a correction to an ultra-soft gauge boson emission off the heavy DM particles. The details of this operator are involved and hence are separately given further below.24
Footnote 24: A complete analysis and resummation of this channel at NLL would require us to do a full two-loop computation in order to recover the anomalous dimension as well as matching to stage II of the EFT. We leave this work for the future.
We note that decays of SU(2)-singlet bound states with \(L+S\) odd into two (transverse) gauge bosons are constrained by charge conjugation invariance and parity. This is because the underlying Lagrangian in Eq. (110) which couples the electroweak gauge sector and a Majorana quintuplet fermion is \(C\) and \(P\) invariant. A fermion-antifermion bound state has
\(C\) eigenvalue \((-1)^{L+S}\) and \(P\) eigenvalue \((-1)^{L+1}\). A \(\gamma\gamma\), \(\gamma Z\), or \(ZZ\) final state has \(C\) eigenvalue \(+1\), thereby forbidding an \(L+S\) odd bound state decay into them by \(C\) alone. Decays to the \(W^{+}W^{-}\) final state are allowed regardless of the \(L\), \(S\) quantum numbers of the bound state.25 In all cases, decays to longitudinal gauge bosons are potentially allowed, as well.
Footnote 25: One might think the Landau-Yang theorem also forbids decay from \(L+S\)-odd initial states into two bosons at all orders, but the application of the theorem to non-Abelian theories is subtle; there is a generalized Landau-Yang theorem that holds so long as the decay products are in a color-singlet state [111], but this is not the case for the decay of \(L+S\)-odd states. In Ref. [68], one can see explicitly that for wino-onium, decays to \(W^{+}W^{-}\) are allowed for all combinations of initial-state \(L\) and \(S\).
Finally, let us return to the additional operator which is sub-leading in velocity that we can write down by looking at the emission of another gauge boson which usually contributes to the \(Y_{v}\) Wilson line. We do not get any such correction from the \(Y_{n}\) or the \(Y_{\bar{n}}\) Wilson line since we do not wish to consider sub-leading terms in the SCET power counting parameter.
We start again with our \(\mathcal{O}(v^{0})\) operator at the amplitude level before we dress it with soft Wilson lines
\[\bar{\chi}\{T^{a}_{\chi},T^{b}_{\chi}\}\Gamma\chi\mathcal{B}^{a}_{\perp n} \mathcal{B}^{b}_{\perp\bar{n}}, \tag{114}\]
where \(\Gamma\) is an arbitrary Lorentz structure. Consider the emission of an SCET ultra-soft gauge boson of momentum k off the initial \(\chi\) or \(\bar{\chi}\) particle. First looking at the \(\chi\) particle,
\[u_{e}(\tilde{p})+ig\frac{i(\not{p}-\not{k}+M_{\chi})}{(p-k)^{2}-M_{\chi}^{2}+ i\epsilon}\gamma^{\mu}\epsilon^{a}_{\mu}(T^{a}_{\chi})_{ce}u_{e}(p) \tag{115}\]
Here \(\tilde{p}\) and \(p\) denote momenta of the \(\chi\) particle. Let us now expand this result to \(\mathcal{O}(v)\) (except inside the spinor). We see that apart from the usual Wilson line contribution, we also get two other relevant terms,
\[u_{e}(\tilde{p})-g_{ W}(T^{a}_{\chi})_{ce}\left(-\frac{\epsilon^{a}_{0}}{k_{0}}+ \frac{\mathbf{v}\cdot\epsilon^{a}(k)}{k_{0}}-\frac{\epsilon^{a}_{0}(k) \mathbf{v}\cdot\mathbf{k}}{k_{0}^{2}}\right)u_{e}(M_{\chi}). \tag{116}\]
If we sum the infinite series of gauge bosons with one of the propagators expanded out to \(\mathcal{O}(v)\), we then have a structure
\[\chi\to Y_{v}\mathbf{v}\cdot\mathbf{B_{s}}\chi \tag{117}\]
where we have defined the following Hermitian operator,
\[\mathbf{B}_{s}=\frac{Y_{v}^{\dagger}\left(\mathcal{P}-g_{ W}\mathbf{A}_{s}\right)Y_{v}}{v\cdot\mathcal{P}}, \tag{118}\]
where \(\mathcal{P}\) is the momentum label operator. If this term is included in a larger expression, the \(v\cdot\mathcal{P}\) factor in the denominator only acts on the terms in the numerator, while the one in the numerator acts only on the \(Y_{v}\) Wilson line to the right. We can now combine this with the
emissions off \(\bar{\chi}\) to give us an effective operator, now dressed with soft Wilson lines
\[\bar{\chi}\Big{\{}Y^{\dagger}_{v}\{T^{a},T^{b}\}\Gamma Y_{v},{\bf v }\cdot{\bf B}_{s}\Big{\}}\chi{\cal B}^{a^{\prime}}_{\perp n}{\cal B}^{b^{\prime} }_{\perp\bar{n}}Y^{aa^{\prime}}_{n}Y^{bb^{\prime}}_{\bar{n}}. \tag{111}\]
We can once again use our Wilson line identity to write this in terms of our usual soft function
\[\bar{\chi}\Big{\{}\{T^{a},T^{b}\}\Gamma,{\bf v}\cdot{\bf B}_{s} \Big{\}}\chi{\cal B}^{c}_{\perp n}{\cal B}^{d}_{\perp\bar{n}}Y^{abcd}. \tag{112}\]
We can then formally define a new object \({\cal B}_{s}\) to explicitly separate out all the soft fields from the heavy DM fields
\[{\cal B}^{a}_{s}T^{a}={\bf B}_{s}. \tag{113}\]
Then we see that
\[{\cal B}^{a}_{s}={\rm Tr}[{\bf B}_{s}T^{a}], \tag{114}\]
where we have used \({\rm Tr}[T^{a}T^{b}]=\delta^{ab}\). Our operator now becomes
\[\bar{\chi}\Big{\{}\{T^{a},T^{b}\}\Gamma,T^{e}\Big{\}}\chi\,{\cal B }^{c}_{\perp n}{\cal B}^{d}_{\perp\bar{n}}Y^{abcd}{\bf v}\cdot{\cal B}^{e}_{s}. \tag{115}\]
In summary, we have an additional soft function which is explicitly suppressed in velocity. The Dirac structure for this operator is \(\gamma^{0}\gamma^{5}\) so that this is the only operator that supports an L+S odd bound state. However, it is clear that the soft operator only begins at one loop and hence we are always forced into at least a 3 gauge boson final state.
At the amplitude squared level, we simply have a square of this operator and there will be no interference terms with other bound state operators since all other operators support an L+S even bound state. Let us consider the soft operator,
\[S^{abea^{\prime}b^{\prime}e^{\prime}}=\langle 0|(Y^{a^{\prime}b^{\prime}3d}{\bf v}\cdot{\cal B}^{e^{\prime}}_{s})^{\dagger}{\cal M}|X_{s}\rangle\langle X_{s}|Y^{ab3d}{\bf v}\cdot{\cal B}^{e}_{s}|0\rangle, \tag{116}\]
where \({\cal M}\) is the measurement performed on the soft operator and \(X_{s}\) are the soft modes. This soft operator now has 6 free indices which must be contracted into the DM wavefunction factor. Since the soft final state is completely inclusive, we can simplify the operator as
\[S^{abea^{\prime}b^{\prime}e^{\prime}}=|{\bf v}|^{2}\langle 0|(Y^{a^{\prime}b^{ \prime}3d}{\cal B}^{e^{\prime}i}_{s})^{\dagger}{\cal M}|X_{s}\rangle\langle X _{s}|Y^{ab3d}{\cal B}^{ei}_{s}|0\rangle. \tag{117}\]
We can now move the \(|{\bf v}|^{2}\) factor to the DM wavefunction and instead redefine a soft operator
\[S^{abea^{\prime}b^{\prime}e^{\prime}}=\langle 0|(Y^{a^{\prime}b^{\prime}3d}{ \cal B}^{e^{\prime}i}_{s})^{\dagger}{\cal M}|X_{s}\rangle\langle X_{s}|Y^{ab3d }{\cal B}^{ei}_{s}|0\rangle. \tag{118}\]
The only term that contributes at one loop is the \({\cal B}\) since it is 0 at \({\cal O}(\alpha^{0}_{ W})\). So we can set
all other Wilson lines to their tree level values. Explicitly,
\[Y^{ab3d}\big{|}_{\text{tree}}=(Y_{v}^{fa}Y_{n}^{f3})(Y_{v}^{gb}Y_{\bar{n}}^{gd}) \big{|}_{\text{tree}}=\delta^{a3}\delta^{bd}.\] (B.16)
Thus, our operator at one loop becomes
\[S_{\text{1-loop}}^{abea^{\prime}b^{\prime}e^{\prime}}=\delta^{a3}\delta^{a^{ \prime}3}\delta^{bb^{\prime}}\langle 0|(\mathbf{\mathcal{B}}_{s}^{e^{\prime},i})^{ \dagger}\mathcal{M}|X_{s}\rangle\langle X_{s}|\mathbf{\mathcal{B}}_{s}^{e,i}|0\rangle.\] (B.17)
We will only calculate this operator to one loop to elucidate its properties. The one-loop integrand, up to an overall factor, takes the form
\[\begin{split} I=& 2\delta^{ee^{\prime}}g_{ W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-M_{\chi}^{2}) \delta(q^{+}-k^{+})}{k_{0}^{2}}\\ -&\delta^{ee^{\prime}}g_{ W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-M_{\chi}^{2}) \delta(q^{+}-k^{+})k^{2}}{k_{0}^{4}}.\end{split}\] (B.18)
where \(q^{+}\) is the contribution to the final photon momentum from the soft function. The second term is proportional to \(M_{\chi}^{2}\) and gives a power correction in \(M_{\chi}^{2}/(q^{+})^{2}\) and hence can be ignored. The first term does not give a UV divergence, but will contribute a log which will be relevant for stage 2 of the EFT. In detail,
\[2\delta^{ee^{\prime}}g_{ W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-M_{\chi}^{2}) \delta(q^{+}-k^{+})}{k_{0}^{2}}=2\delta^{ee^{\prime}}\frac{\alpha_{ W}}{\pi}\frac{q^{+}}{(q^{+})^{2}+M_{\chi}^{2}}.\] (B.19)
Going to Laplace space and expanding out in the limit \(M_{\chi}\to 0\), we have
\[I=-2\delta^{ee^{\prime}}\frac{\alpha_{ W}}{\pi}\ln(M_{\chi}se^{ \gamma_{E}}).\] (B.20)
This result has an IR divergence. At first glance, this may not be that surprising since all our soft operators in the direct-channel annihilation also were IR divergent. In those cases, we could trace back the IR divergence to the violation of the KLN theorem due to the semi-inclusive nature of the final state. In this case however, the interesting point is that, even when the final state is completely inclusive, _i.e._, we do not constrain the final state to be just a photon, our soft function is still IR divergent. Here we can trace this to the exclusive nature of the initial state where we demand that our operator support an \(L=1\), \(S=0\) state. This forces the emission of the 3rd gauge boson which has no virtual counterpart, leading to an IR divergence. This is very similar to the IR divergence that appears in the computation of PDFs in QCD.
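As a simple numerical cross-check of the Laplace-space limit quoted in Eq. (B.20), one can verify that \(\int_{0}^{\infty}dq\,e^{-sq}\,q/(q^{2}+m^{2})\to-\ln(m\,s\,e^{\gamma_{E}})\) as \(ms\to 0\). The short sketch below does so with arbitrary illustrative values of \(s\) and the mass \(m\); it is a check of the mathematical limit only and is not part of our calculation.

```python
import numpy as np
from scipy.integrate import quad

# Check:  \int_0^inf dq  e^{-s q}  q / (q^2 + m^2)  ->  -ln(m s e^{gamma_E})
# as m*s -> 0.  The values of s and m are arbitrary and purely illustrative.
gamma_E = 0.57721566
s = 1.0
for m in [1e-1, 1e-2, 1e-3]:
    f = lambda q: np.exp(-s * q) * q / (q**2 + m**2)
    # Split the range so the narrow feature near q ~ m is well resolved.
    val = quad(f, 0.0, 1.0, points=[m])[0] + quad(f, 1.0, np.inf)[0]
    print(f"m*s = {m*s:.0e}:  integral = {val:.4f}   "
          f"-ln(m s e^gamma_E) = {-np.log(m * s) - gamma_E:.4f}")
```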
## Appendix C Unstable Particle Effective Theory
In this section, we justify our use of Eq. (4.2) for computing the decay rate of bound states. Let us now look at the effective theory for resonances systematically. In the literature, this is
referred to as unstable particle effective theory (for a review see Ref. [112]).
To begin with, if we have an intermediate resonance state we expect a propagator in our amplitude of the form
\[D=\frac{i}{p^{2}-M_{*}^{2}}, \tag{112}\]
where \(M_{*}\) is a complex pole. If we write \(M_{*}=M+i\Gamma/2\), then we have the result
\[D=\frac{i}{p^{2}-M^{2}-i\Gamma M+\Gamma^{2}/4}. \tag{113}\]
We wish to work in a regime of narrow width, _i.e._\(p^{2}-M^{2}\sim\Gamma M\ll M^{2}\), so that the propagator is well approximated by
\[D\simeq\frac{i}{p^{2}-M^{2}-i\Gamma M}, \tag{114}\]
and we want to develop an effective theory with an expansion in the small parameter \(\lambda=\Gamma/M\). For the case of inclusive decays of our DM bound state, \(\Gamma M_{\chi}\sim\alpha_{ W}^{5}M_{\chi}^{2}\); we are interested in the inclusive decay rate, so there are no large logarithms and the perturbative cross section begins at \(\alpha_{ W}^{2}\), and due to the nontrivial wavefunction, we have an additional factor of at least \(\alpha_{ W}^{3}\). The effective coupling thus scales as \(\lambda\sim\alpha_{ W}^{5}\ll 1\).
The hard scale in this process is just the resonance mass, which in our case is simply \(\sim\)\(M_{\chi}\). We can now treat this as an HQET theory writing \(p=M_{\chi}v+k\), where \(v\) is the four-velocity of the resonance with \(v^{2}=1\) and \(k\) is the residual momentum. Given the scaling above of \(p^{2}-M^{2}\sim\Gamma M\), we can immediately see that \(k\sim\Gamma\), so that we have a soft mode \(k^{\mu}\sim(\Gamma,\Gamma,\Gamma)\equiv M_{\chi}(\alpha_{ W}^{5},\alpha_{ W}^{5},\alpha_{ W}^{5})\). Obviously, the question is how does this scale relate to the mass scale \(m_{ W}\sim vM_{\chi}\) that we already have. Now the \(\alpha_{ W}\) in the decay rate is evaluated at the scale \(M_{\chi}\) while the \(\alpha_{ W}\) in the HQET scaling is at the scale \(m_{ W}\); however, we can see by the equations relating \(\alpha_{ W}\) at the two scales that they are parametrically of the same order. If that is the case, then our mode has \(k^{\mu}\sim M_{\chi}(\alpha_{ W}^{5},\alpha_{ W}^{5},\alpha_{ W}^{5})\) with \(k^{2}\ll m_{ W}^{2}\) and hence can only be populated by a massless mode such as the photon.
Given the fact that in our case \(\lambda\sim\alpha_{W}^{5}\), any corrections of the order \(\lambda^{1}\) are minute and will be sub-dominant in any error band at the accuracy we are aiming for. So we can safely work at leading order in \(\lambda\). Following [112], it is clear that at this order the only term that exists is the propagator with the 1PI self-energy corrections. Any communication between the production and decay states via radiative emissions of our mode \(k^{\mu}\) only occurs at \(\mathcal{O}(\lambda)\) and hence is severely suppressed. Additionally, since \(\Gamma\sim\alpha_{W}^{5}M_{\chi}\) while the splitting between bound states is \(\Delta E_{n}\sim\alpha_{W}^{2}M_{\chi}\), any interference _between_ bound states is also subleading. Therefore, we can safely ignore any radiative corrections by this mode. This suppression is a manifestation of the space-time separation between the production and decay processes.
Now let us look at a cross-section for production of a resonance and its subsequent decay.
We assume that we are in a regime \(p^{2}-M^{2}\sim\Gamma M\), where \(p\) is the momentum of the intermediate resonance state and \(M\) is the real part of the pole. Let \(N\) be our initial scattering state that will create the resonance, and we focus on the cross section to produce an observed final state \(f\) and an ultrasoft photon \(\gamma_{\text{us}}\) indicating that a bound state was formed. In detail, the differential cross section is
\[\frac{d\sigma}{dz}=\frac{1}{\mathcal{N}}\int d\Pi_{\gamma}d\Pi_{f}|\mathcal{M} (N\to f+\gamma_{\text{us}})|^{2}\delta^{(4)}(p_{\gamma}+p_{f}-p_{N}) \mathcal{M}_{z}(f), \tag{100}\]
where \(\mathcal{M}_{z}\) is the measurement (in this case the photon energy) function on the final state particles, and \(\mathcal{N}\) is a normalizing kinematic factor. Since the photon emitted during the bound state formation is an ultrasoft photon, there is no measurement on it (via a multipole expansion of the measurement function) and its phase space is integrated over fully. The amplitude squared will contain the squared propagator for the resonance,
\[J=\frac{1}{(p^{2}-M^{2})^{2}+\Gamma^{2}M^{2}}. \tag{101}\]
Now, the key point is that if we are not interested in the details of the variation of the cross section near resonance, and we are sufficiently inclusive over \(p^{2}\) around the resonance (by a value \(\gg\Gamma M\)), then we can make the following substitution
\[\lim_{\Gamma/M\to 0}J\to\frac{\pi}{\Gamma M}\delta(p^{2}-M^{2}). \tag{102}\]
This substitution is true only in the distribution sense, _i.e._ under the integral which at least encompasses the region of the size of the width about the resonance. This is the narrow width approximation. This substitution then puts the intermediate resonance on-shell. The cross section can then be written as
\[\begin{split}\frac{d\sigma}{dz}&=\frac{1}{\mathcal{ N}}\frac{\pi}{\Gamma M}\int d\Pi_{\gamma}d\Pi_{f}|\mathcal{M}(N\to B(p)+ \gamma_{\text{us}})|^{2}|\mathcal{M}(B(p)\to f)|^{2}\\ &\times\delta^{(4)}(p_{\gamma}+p_{f}-p_{N})\delta(p^{2}-M^{2}) \mathcal{M}_{z}(f).\end{split} \tag{103}\]
Here \(B(p)\) represents a bound state with momentum \(p\), and again this result is true at leading order in \(\Gamma/M\), which forbids any communication between the production and decay states. If we then insert a factor of unity, \(1=\int d^{4}p\,\delta^{(4)}(p-p_{f})\), the result can be rearranged to yield,
\[\begin{split}\frac{d\sigma}{dz}=&\frac{1}{\mathcal{ N}}\frac{\pi}{M}\Big{[}\int d\Pi_{\gamma}d\Pi_{R}|\mathcal{M}(N\to B(p)+ \gamma_{\text{us}})|^{2}\delta^{(4)}(p_{\gamma}+p-p_{N})\Big{]}\\ \times&\frac{1}{\Gamma}\Big{[}\int d\Pi_{f}| \mathcal{M}(B(p)\to f)|^{2}\delta^{(4)}(p-p_{f})\mathcal{M}_{z}(f)\Big{]}\\ =&\sigma(N\to B+\gamma_{\text{us}})\frac{1}{\Gamma} \frac{d\Gamma_{B\to f}}{dz},\end{split} \tag{104}\]
which is simply the product of the production cross section and the differential branching ratio.
The above separation holds when the process proceeds solely through a long-lived bound state, but in practice the result should be summed over all bound states in the spectrum, as well as the direct annihilation case where no bound states are formed. Applied to our specific case, we arrive at Eq. (4.2).
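The narrow-width substitution used above can also be checked numerically: as \(\Gamma/M\to 0\), the integral of the squared propagator over \(p^{2}\) (over any window much wider than \(\Gamma M\)) approaches \(\pi/(\Gamma M)\). The values of \(M\) and \(\Gamma\) in the short check below are arbitrary and purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Verify  \int d(p^2) / [ (p^2 - M^2)^2 + Gamma^2 M^2 ]  ->  pi / (Gamma M)
# as Gamma/M -> 0, integrating over a window much wider than Gamma*M.
M = 100.0
for Gamma in [10.0, 1.0, 0.1, 0.01]:
    integrand = lambda s: 1.0 / ((s - M**2) ** 2 + (Gamma * M) ** 2)
    val = quad(integrand, 0.0, 4.0 * M**2, points=[M**2])[0]
    print(f"Gamma/M = {Gamma/M:6.4f} :  (Gamma M / pi) * integral = "
          f"{val * Gamma * M / np.pi:.4f}")
```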
## Appendix D Proof of the Wilson Line Identity
In this section, we prove the identity involving soft Wilson lines used in Eq. (101) that eventually leads to a universal factorization of the IR physics in terms of soft and jet functions independent of the representation. The property that we wish to show is
\[S_{v}T^{a}S_{v}^{\dagger}=T^{a^{\prime}}S_{v}^{a^{\prime}a}, \tag{101}\]
where \(T\) is a generator in an arbitrary representation (we used \(\mathbf{5}\) for the quintuplet), and on the left-hand side we have two Wilson lines in the same representation, whereas on the right-hand side it is in the adjoint. (In the main text we used \(Y_{v}\) for the latter; we keep all Wilson lines as \(S\) here for notational convenience.) In the main text we actually used \(S_{v}^{\dagger}T^{a}S_{v}=S_{v}^{aa^{\prime}}T^{a^{\prime}}\), although this follows from the above by applying various inverses. In position space, the Wilson lines are defined as
\[\begin{split} S_{v}(x)&=Pe^{ig\int_{-\infty}^{x}ds \,v\cdot A_{s}(vs)}=Pe^{ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v \cdot A_{s}(\bar{v}\cdot y)},\\ S_{v}^{\dagger}(x)&=\bar{P}e^{-ig\int_{-\infty}^{x} ds\,v\cdot A_{s}(vs)}.\end{split} \tag{102}\]
where \(v\cdot A_{s}=v\cdot A_{s}^{a}T^{a}\), with \(T^{a}\) in the appropriate representation for the Wilson line, whilst \(P\) is path ordering and \(\bar{P}\) indicates anti-path ordering. The variable \(s\) parametrizes the path along the Wilson-line direction from \(x\) to \(-\infty\). The statement in Eq. (101) is a generalization of the identity used for QCD [113] to other group representations. The soft Wilson line obeys the equation
\[\begin{split}\frac{d}{d\bar{v}\cdot x}S_{v}(x)&=ig \,v\cdot A_{s}(\bar{v}\cdot x)S_{v}(x),\\ \frac{d}{d\bar{v}\cdot x}S_{v}^{\dagger}(x)&=-S_{v} ^{\dagger}(x)ig\,v\cdot A_{s}(\bar{v}\cdot x).\end{split} \tag{103}\]
Coming back to our question, let us define \(U^{a}(x)=S_{v}(x)T^{a}S_{v}^{\dagger}(x)\). We can then immediately see
\[\frac{d}{d\bar{v}\cdot x}U^{a}(x)=[igv\cdot A_{s}(\bar{v}\cdot x),U^{a}(x)]. \tag{104}\]
We will solve this equation by recursion, order by order in the coupling \(g\) to build up the full
solution. The tree level result is simply \(U^{a(0)}(x)=T^{a}=T^{a^{\prime}}\delta^{a^{\prime}a}\). At the next order,
\[\begin{split} U^{a(1)}(x)&=\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\left[igv\cdot A_{s}(\bar{v}\cdot y),T^{a}\right]\\ &=ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v\cdot A_{s}^{b}(\bar{v}\cdot y)if^{baa^{\prime}}T^{a^{\prime}}\\ &=T^{a^{\prime}}ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v\cdot(A_{s}(\bar{v}\cdot y))^{a^{\prime}a}.\end{split} \tag{102}\]
At \(\mathcal{O}(g^{2})\),
\[\begin{split} U^{a(2)}(x)&=\int_{-\infty}^{\bar{v} \cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot y_{1}}d\bar{v}\cdot y_ {2}\left[igv\cdot A_{s}(\bar{v}\cdot y_{1}),[igv\cdot A_{s}(\bar{v}\cdot y_{ 2}),T^{a}]\right]\\ &=\frac{1}{2}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1} \int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}P\Big{\{}\left[igv\cdot A_{ s}(\bar{v}\cdot y_{1}),[igv\cdot A_{s}(\bar{v}\cdot y_{2}),T^{a}]\right]\Big{\}}\\ &=T^{a^{\prime}}\frac{(ig)^{2}}{2}\int_{-\infty}^{\bar{v}\cdot x} d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}P\Big{\{} \left(v\cdot A_{s}(\bar{v}\cdot y_{1})v\cdot A_{s}(\bar{v}\cdot y_{2})\right) ^{a^{\prime}a}\Big{\}}.\end{split} \tag{103}\]
From here, we can see that the \(n^{th}\) term will be
\[\begin{split} U^{a(n)}(x)&=\frac{1}{n!}\int_{- \infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d \bar{v}\cdot y_{2}\ldots\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{n}\\ &\times P\Big{\{}\left[igv\cdot A_{s}(\bar{v}\cdot y_{1}),[igv \cdot A_{s}(\bar{v}\cdot y_{2}),[\ldots[igv\cdot A_{s}(\bar{v}\cdot y_{n}),T^ {a}]]]\right]\Big{\}}\\ &=T^{a^{\prime}}\frac{(ig)^{n}}{n!}\int_{-\infty}^{\bar{v}\cdot x }d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}\ldots \int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{n}\\ &\times P\Big{\{}\left(v\cdot A_{s}(\bar{v}\cdot y_{1})v\cdot A_{ s}(\bar{v}\cdot y_{2})\ldots v\cdot A_{s}(\bar{v}\cdot y_{n})\right)^{a^{\prime}a} \Big{\}}.\end{split} \tag{104}\]
Summing to all orders then proves our result.
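Although the proof above is formal, the identity is easy to verify numerically for a single group element, which is the building block of the path-ordered exponential. The sketch below (an illustrative check only, with an arbitrarily chosen set of group parameters) constructs the quintuplet and adjoint representations and confirms \(U\,T^{a}\,U^{\dagger}=T^{a^{\prime}}\,U_{\rm adj}^{a^{\prime}a}\).

```python
import numpy as np
from scipy.linalg import expm

def spin_generators(n):
    """Spin-j (j = (n-1)/2) generators obeying [T^a, T^b] = i eps_abc T^c."""
    j = (n - 1) / 2
    m = np.arange(j, -j - 1, -1)
    Jp = np.zeros((n, n), dtype=complex)
    for k in range(1, n):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return [(Jp + Jp.T) / 2, (Jp - Jp.T) / (2 * 1j), np.diag(m).astype(complex)]

T5 = spin_generators(5)                       # quintuplet representation
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0
Tadj = [-1j * eps[a] for a in range(3)]       # adjoint: (T^a)_{bc} = -i eps_{abc}

theta = np.array([0.3, -1.1, 0.7])            # arbitrary group parameters
U5 = expm(1j * sum(t * T for t, T in zip(theta, T5)))
Uadj = expm(1j * sum(t * T for t, T in zip(theta, Tadj)))

for a in range(3):                            # U T^a U^dag = T^{a'} (U_adj)_{a'a}
    lhs = U5 @ T5[a] @ U5.conj().T
    rhs = sum(Uadj[ap, a] * T5[ap] for ap in range(3))
    assert np.allclose(lhs, rhs)
print("Identity verified for a single group element in the 5 of SU(2).")
```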
## Appendix E Subtle Signs in the Bound-state Formation and Decay Calculations
In the bulk of the paper, we have freely used results from Ref. [70] that are written in terms of two-body states of the form \(|ij\rangle\). However, in general our 2-body states for non-identical particles will in fact be combinations of the form \(\frac{1}{\sqrt{2}}(|ij\rangle+(-1)^{L+S}|ji\rangle)\). (This convention choice is also discussed in the context of Sommerfeld enhancement in Ref. [72], where these two approaches are labeled "method-1" and "method-2"; we largely adopt "method-2", where we treat \(|ij\rangle\) and \(|ji\rangle\) as components of a single state, rather than tracking them separately.) The factor of \((-1)^{L+S}\) arises from a factor of \((-1)^{S+1}\) from the behavior of the spin configuration under particle exchange, a factor of \((-1)^{L}\) from the parity of the spatial wavefunction, and a factor of \((-1)\) from the exchange of two fermions.
This means that when considering a transition of the form \(|ij\rangle\to|i^{\prime}j^{\prime}\rangle\), we also need
to include transitions between the \(|ji\rangle\) and \(|j^{\prime}i^{\prime}\rangle\) states. In many cases this does not make a difference and it is adequate to represent states purely by one component \(|ij\rangle\). For example, if the \(|ij\rangle\to|i^{\prime}j^{\prime}\rangle\) and \(|ji\rangle\to|j^{\prime}i^{\prime}\rangle\) processes have equal rates, but \(|ij\rangle\to|j^{\prime}i^{\prime}\rangle\) and \(|ji\rangle\to|i^{\prime}j^{\prime}\rangle\) are forbidden (for example, this occurs if \(|ij\rangle=|0,++\rangle\)), then the combined rate is the same as what one would obtain from purely considering the \(|ij\rangle\to|i^{\prime}j^{\prime}\rangle\) process. However, this behavior is not universal.
As an example of a case where this matters, consider the transition between \(Q=1\) bound state components \(|+0\rangle\to|+0\rangle\). Writing out the individual components of these states (labeled as 23 and 32, following the notation of Sec. 3),26 the full matrix element should be:
Footnote 26: One might be tempted to write the 23 state as \(|+0\rangle\) and 32 as \(|0+\rangle\). However, we reserve \(|+0\rangle\) for the full quantum state such that \(|+0\rangle=(|2\,3\rangle+(-1)^{L+S}|3\,2\rangle)/\sqrt{2}\).
\[\begin{split}\mathcal{M}&=\frac{1}{2}\,\Big{(} \mathcal{M}_{22,33}+(-1)^{(L+S)_{i}}\mathcal{M}_{32,23}+(-1)^{(L+S)_{f}} \mathcal{M}_{23,32}\\ &\qquad+(-1)^{(L+S)_{i}+(L+S)_{f}}\mathcal{M}_{33,22}\Big{)}. \end{split} \tag{124}\]
Now we can write \((L+S)_{f}=(L+S)_{i}+1\,(\text{mod }2)\) for dipole transitions, and consequently:
\[\mathcal{M}=\frac{1}{2}\,\Big{(}\mathcal{M}_{22,33}-\mathcal{M}_{33,22}+(-1)^{ (L+S)_{i}}\,(\mathcal{M}_{32,23}-\mathcal{M}_{23,32})\Big{)}. \tag{125}\]
Now as calculated in Sec. 3, if \(\psi_{i}\) and \(\psi_{f}\) denote the initial- and final-state wavefunctions, we have:
\[\begin{split}\mathcal{M}_{22,33}^{3}&=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\big[(T^{3})_{22}-(T^{3})_{33}\big]\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i}\\ &=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i},\\ \mathcal{M}_{33,22}^{3}&=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\big[(T^{3})_{33}-(T^{3})_{22}\big]\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i}\\ &=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\int d^{3}\mathbf{r}\,\big(-\psi_{f}^{*}\nabla\psi_{i}\big),\\ \mathcal{M}_{23,32}^{3}&=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\Big\{\big[-i\big((T^{1})_{32}(T^{2})_{23}-(T^{2})_{32}(T^{1})_{23}\big)\big]\,M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,\psi_{f}^{*}\psi_{i}\Big\}\\ &=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\Big\{-3M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,\psi_{f}^{*}\psi_{i}\Big\},\\ \mathcal{M}_{32,23}^{3}&=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\Big\{\big[-i\big((T^{1})_{23}(T^{2})_{32}-(T^{2})_{23}(T^{1})_{32}\big)\big]\,M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,\psi_{f}^{*}\psi_{i}\Big\}\\ &=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\,\Big\{3M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\,\psi_{f}^{*}\psi_{i}\Big\},\end{split} \tag{126}\]
recalling that the "3" superscript is for \(\gamma,\,Z\) emission, depending on the value of \(\alpha_{\text{rad}}\), the coupling of the emitted boson to the charged particle that radiated it. We also recall that the
terms with \(\alpha_{\rm NA}\) correspond to emission off the virtual particles sourcing the potential. The \(\alpha_{\rm NA}\) factor is thus the coupling of the virtual line to the WIMPs. For the case of capture by a radiated \(\gamma\), \(\alpha_{\rm rad}=\alpha_{\rm em}\) and \(\alpha_{\rm NA}=\alpha_{W}\).
Thus, overall we have \(\mathcal{M}_{32,23}=-\mathcal{M}_{23,32}\) and \(\mathcal{M}_{33,22}=-\mathcal{M}_{22,33}\), and consequently:
\[\mathcal{M} =\mathcal{M}_{22,33}+(-1)^{(L+S)_{i}}\mathcal{M}_{32,23}\] \[=i\sqrt{2^{6}\pi\alpha_{\rm rad}M_{\chi}}\left[\int d^{3}{\bf r} \,\psi_{f}^{*}\nabla\psi_{i}+(-1)^{(L+S)_{i}}3M_{\chi}\alpha_{\rm NA}\int d^{3} {\bf r}\,\hat{\bf r}\psi_{f}^{*}\psi_{i}\right]. \tag{100}\]
The \((-1)^{(L+S)_{i}}\) factor is obtained by treating the different components correctly, and is required to ensure the correct behavior of the matrix element under time reversal.
## Appendix F Analytic Approximate Results for Annihilation and Bound-state Formation
In this final appendix we provide analytic estimates for the annihilation and bound state formation rate of DM. We consider the quintuplet case of interest first, followed by providing equivalent results for a general representation.
### Results for the quintuplet
In the limit of unbroken SU(2), the wavefunctions and their integrals, and hence the bound-state capture rate, can be computed analytically in the low-velocity limit. In this regime the Sommerfeld enhancement can also be computed analytically. These results can also be applied (approximately, and with caveats we will discuss below) to the case where SU(2) is broken but the DM mass is very heavy relative to the symmetry breaking scale. These calculations can be useful both as a cross-check on our numerical results, and to develop intuition for which channels are likely to dominate the overall annihilation signal. As such, we present the details of these analytic calculations below, beginning with the DM in the quintuplet representation.
#### F.1.1 Capture and annihilation rates
As an opening example, let us estimate the spin-averaged capture rate into the spin-triplet ground state via photon emission. The total cross section for this process is given by (see App. C of Ref. [68]),27
Footnote 27: This expression as written includes capture only from the components of the incoming state that experience an attractive potential; we expect the contribution from repulsed incoming states to be suppressed, due to their small overlap with the bound states.
\[\sigma v =\frac{3}{2}\times\frac{2^{8}\pi\alpha\,k}{3}\left|\sum_{i}({\bf I }\cdot\eta_{i})\alpha_{W}(\alpha_{W}\lambda_{f}M_{\chi}/2)^{-3/2}e^{-2\lambda _{i}/\lambda_{f}}e^{\pi\alpha_{W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{W}\lambda _{i}/v)\right.\] \[\left.\times\eta_{f}^{\dagger}\left[\lambda_{i}\hat{C}_{1}+\hat{C }_{2}\lambda_{i}/\lambda_{f}\right]\eta_{i}\right|^{2}, \tag{101}\]
The notation we employ follows Ref. [68]; \(\eta_{i}\) and \(\eta_{f}\) are potential eigenvectors which for the quintuplet are given by (in our basis) \(\eta_{f}=\{-2,1,0\}/\sqrt{5}\), \(i=1,2\) with \(\eta_{1}=\{\sqrt{2},-\sqrt{2},1\}/\sqrt{5}\), \(\eta_{2}=\{-2,-1,\sqrt{2}\}/\sqrt{7}\), with corresponding attractive eigenvalues \(\lambda_{f}=5\), \(\lambda_{1}=6\), \(\lambda_{2}=3\). The \(I\) vector describes the fraction of the incoming plane wave in each state; we will choose \(I=\{0,0,1\}\), as the state asymptotes to two noninteracting, neutral DM particles. The energy of the outgoing photon is \(k\), which at low velocities can be approximated as the binding energy of the ground state, \(\lambda_{f}^{2}\alpha_{w}^{2}M_{\chi}/4\). Lastly, the \(\hat{C}_{1}\) and \(\hat{C}_{2}\) matrices describe the couplings between the different components of the initial and final states; for capture via photon (or \(Z\)) emission, they take the form:
\[\hat{C}_{1}=\begin{pmatrix}2&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix},\quad\hat{C}_{2}=\begin{pmatrix}0&2&0\\ -2&0&3\sqrt{2}\\ 0&-3\sqrt{2}&0\end{pmatrix}. \tag{110}\]
In the wino and positronium cases studied in Ref. [68], there was only one eigenstate that experienced an attractive initial-state potential, and so the sum in Eq. (110) was trivial. There is a simple expression for \(|e^{\pi\alpha_{w}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{w}\lambda_{i}/v)|^{2} \simeq 2\pi\alpha_{w}\lambda_{i}/v\) in the limit of small \(v\) (for positive \(\lambda_{i}\)), which scales purely as \(1/v\), and consequently in those cases (wino and positronium) \(\sigma v\) had a simple \(1/v\) scaling at low relative velocities. However, in the quintuplet case, we see there are multiple terms in the sum that can interfere with each other, and so even in the limit of unbroken SU(2), we expect there to be a non-trivial velocity dependence in the capture cross section.
Similarly, the Sommerfeld factors can be read off from the components of the scattering-state wavefunction at the origin, and in the unbroken limit this wavefunction has the form \(\sum_{i}(I\cdot\eta_{i})\eta_{i}\,\phi(\lambda_{i}\alpha_{w},r)\), where \(\phi(\alpha,r)\) is the solution to the scalar Schrodinger equation with an attractive Coulomb potential with coupling \(\alpha\). In principle this sum runs over both positive and negative eigenvalues of the potential (corresponding to both attracted and repulsed eigenstates), but for low velocities we expect the contribution of the eigenstates experiencing a repulsive interaction to be very small. Nonetheless, where (as in the quintuplet case) there are two eigenstates experiencing an attractive potential, the value of the wavefunction at the origin (and hence the Sommerfeld factors) will experience a non-trivial interference between the two contributions. This can give rise to a velocity dependence differing from the simpler case where there is only one attractive eigenstate. Similar interference effects can be seen in the form of rapid changes in spectrum with respect to \(M_{\chi}\) in the case of broken SU(2) symmetry, where the interference occurs between the various Sommerfeld factors with resonances at different positions (as discussed in the context of Fig. 9). In contrast, the manifestation of the eigenstate interference identified here persists in the SU(2)-symmetric limit and does not require any resonance structure, only differing (velocity-dependent) phases between the interfering contributions.
However, as noted in Ref. [68; 114], at low velocities the system is often in an "adiabatic" regime where the incoming particle wavefunction evolves such that at short distances it has
complete overlap with the eigenvector with the largest-magnitude attractive eigenvalue. The criterion for this behavior is roughly \(v\lesssim\delta/m_{{ W}}\), where \(\delta\) is the mass splitting between the states; for \(\delta=164\) MeV, we expect this behavior to hold roughly for \(v\lesssim 2\times 10^{-3}\), _i.e._ for Milky-Way-scale velocities and lower. Note that this criterion is independent of the DM mass, so even when the DM is very heavy and the ratio \(m_{{ W}}/M_{\chi}\) is small, the effect of SU(2) breaking can still be seen in the presence of this adiabatic regime. In this case, the interference will be suppressed for both bound-state formation and Sommerfeld enhancement, with only the \(i=1\), \(\lambda_{i}=6\) term contributing significantly, and with the coefficient \(I\cdot\eta_{i}\) replaced with \(\delta_{i1}\).
This is an important simplifying approximation within its regime of validity. Note that the presence of this regime relies on SU(2) being broken, and also on low velocity (\(v\lesssim\delta/m_{{ W}}\)); it will not appear if an unbroken symmetry ensures the degeneracy of the mass eigenstates, and it will also not generally be relevant in the early universe (_e.g._ for relic density calculations) where velocities are much higher. However, it is well-suited to the case of indirect detection in the Milky Way halo.
For example, within this approximation, we obtain the spin-averaged capture rate to the ground state as:
\[\sigma v\simeq\frac{2^{8}}{5}\frac{\pi\alpha\alpha_{{ W}}}{M_{\chi}^{2}}\times\frac{3^{3}\cdot 2^{9}}{5^{2}}e^{-24/5}\frac{\pi \alpha_{{ W}}}{v}=\frac{2^{17}\cdot 3^{3}}{5^{3}}e^{-24/5}\frac{\pi^{2}\alpha\alpha_{{ W}}^{2}}{M_{\chi}^{2}v}\simeq\frac{233\pi^{2}}{v}\frac{\alpha\alpha_{{ W}}^{2}}{M_{\chi}^{2}}. \tag{111}\]
Here we have employed the low-velocity approximation \(|e^{\pi\alpha_{{ W}}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{{ W}}\lambda_{i}/v)|^{2}\simeq 2\pi\alpha_{{ W}}\lambda_{i}/v\).
In the same regime, where the initial state rotates into the most-attracted eigenstate, the \(s\)-wave direct annihilation cross section to gauge bosons can be computed as,
\[\sigma v\simeq\frac{720\pi^{2}\alpha_{{ W}}^{3}}{M_{\chi}^{2}v}. \tag{112}\]
In this unbroken limit, the effective branching ratio to the line (_i.e._ to \(\gamma\gamma\) + half the branching ratio to \(\gamma Z\)) should be given by \((s_{{ W}}^{4}+s_{{ W}}^{2}c_{{ W}}^{2})/3=s_{{ W}}^{2}/3\), so the line cross section should be:
\[(\sigma v)_{\rm line}\simeq\frac{240\pi^{2}\alpha_{{ W}}^{2}\alpha}{M_{\chi}^{2}v}. \tag{113}\]
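To get a feel for the numbers, the estimates above can be evaluated directly. The snippet below does so with illustrative inputs (\(\alpha_{W}\simeq 0.0335\), \(s_{W}^{2}\simeq 0.23\), \(v=10^{-3}\)) that are assumptions for this example rather than the values entering our full numerical analysis; it also checks the quoted prefactor of \(\simeq 233\). These are unbroken-SU(2), adiabatic-limit formulas, so the absolute numbers are only indicative.

```python
import numpy as np

# Illustrative inputs (assumed): weak coupling, weak mixing, DM mass and velocity.
alpha_W, s2W = 0.0335, 0.23
alpha_em = s2W * alpha_W                  # alpha = s_W^2 alpha_W
M, v = 13.6e3, 1e-3                       # GeV, units of c
GEV2_TO_CM3_S = 1.17e-17                  # 1 GeV^-2 = (hbar c)^2 c = 1.17e-17 cm^3/s
# Adiabatic criterion from the text: v <~ delta_0 / m_W ~ 0.164/80.4 ~ 2e-3 (satisfied).

prefactor = 2**17 * 3**3 / 5**3 * np.exp(-24 / 5)
print(f"capture prefactor 2^17 3^3 / 5^3 e^(-24/5) = {prefactor:.0f}")   # ~233

sigv_capture = prefactor * np.pi**2 * alpha_em * alpha_W**2 / (M**2 * v)
sigv_line = 240.0 * np.pi**2 * alpha_em * alpha_W**2 / (M**2 * v)
print(f"capture to ground state  : {sigv_capture * GEV2_TO_CM3_S:.2e} cm^3/s")
print(f"line, direct annihilation: {sigv_line * GEV2_TO_CM3_S:.2e} cm^3/s")
print(f"ratio capture/line       : {sigv_capture / sigv_line:.2f}")
```

As anticipated in the text, with these formulas the ground-state capture and line cross sections come out very close to one another.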
As for the bound-state formation, at higher velocities (\(v\gtrsim 2\times 10^{-3}\)), we expect to see the onset of interference between the contributions from the two attracted eigenstates, resulting in a non-monotonic dependence of the cross section on velocity even in the \(s\)-wave case. This behavior, and its onset at roughly Milky Way-scale velocities, can be observed in Fig. 10. (Note that the velocity dependence can also be suppressed if the DM mass is small enough that the Sommerfeld enhancement is fully saturated, such that the velocity dependence of the individual Sommerfeld factors is very different from the case of unbroken SU(2) symmetry.)
This would suggest the cross-sections for capture (to the spin-triplet ground state) and for annihilation producing a line should be very similar, at least when this adiabatic approxi
mation holds (numerical calculations indicate the non-adiabatic cross section for bound-state capture, as estimated in Eq. 11, can range between larger than the adiabatic result by a factor of \(\sim\)2 and smaller by a factor of \(\sim\)5, as \(v\) is varied). However, for \(v\ll m_{ W}/M_{\chi}\), we have the SU(2)-breaking effect that \(p\)-wave processes should be parametrically suppressed by a factor of order \((vM_{\chi}/m_{ W})^{2}\), which at a 13.6 TeV mass can suppress the \(p\)-wave capture cross section by \(\sim\)2 orders of magnitude.
We can also study the cross-section for spin-singlet \(s\to p\) capture to an \(n=2\), \(l=1\) state. In this case we have \(k=(25/16)\alpha_{ W}^{2}M_{\chi}\), and
\[\begin{split}\sigma v&=\frac{2^{13}\pi\alpha k}{3^ {3}}\frac{1}{\alpha_{ W}}M_{\chi}^{-3}\frac{1}{\lambda_{f}^{3}}\left|\sum_{i}( \mathbf{I}\cdot\eta_{i})e^{-4\lambda_{i}/\lambda_{f}}e^{\pi\alpha_{ W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{ W}\lambda_{i}/v)\eta_{f}^{\dagger}\right.\\ &\times\left[\lambda_{i}\left(\frac{4\lambda_{i}}{\lambda_{f}}-3 \right)\hat{C}_{1}+\hat{C}_{2}\left(3-12\frac{\lambda_{i}}{\lambda_{f}}+8\frac {\lambda_{i}^{2}}{\lambda_{f}^{2}}\right)\right]\eta_{i}\right|^{2}\\ &=\frac{2^{9}\pi\alpha\alpha_{ W}}{5\cdot 3^{3}M_{\chi}^{2}}\Bigg{|}\frac{3}{25}\sqrt{\frac{2}{5}}e^{-24/5} e^{3\pi\alpha_{ W}/(2v)}\left(37e^{12/5}\Gamma(1-3i\alpha_{ W}/v)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\left.+89e^{3\pi\alpha_{ W}/(2v)}\Gamma(1-6i\alpha_{ W}/v)\right)\Bigg{|}^{2},\end{split} \tag{122}\]
or in the adiabatic regime,
\[\begin{split}\sigma v&\to\frac{2^{9}\pi\alpha\alpha _{ W}}{5\cdot 3^{3}M_{\chi}^{2}}\left|\frac{267}{25}\sqrt{2}e^{-24/5}e^{3\pi\alpha_{ W}/(2v)}\Gamma(1-6i\alpha_{ W}/v)\right|^{2}\\ &=\frac{2^{12}\cdot 89^{2}\pi^{2}}{5^{5}\,v}e^{-48/5}\frac{ \alpha\alpha_{ W}^{2}}{M_{\chi}^{2}}\\ &\simeq\frac{0.70\pi^{2}}{v}\frac{\alpha\alpha_{ W}^{2}}{M_{\chi}^{2}}.\end{split} \tag{123}\]
We see that the scale for this cross-section is naturally two orders of magnitude smaller than the previous ones, which arises from the various numerical prefactors, primarily the factor of \(e^{-48/5}\) compared to \(e^{-24/5}\) for the capture to the \(n=1\) state, corresponding to a suppression factor of \(8\times 10^{-3}\). If these exponential terms were removed, the other prefactors would differ by less than a factor of 3. (Numerical calculations indicate the non-adiabatic cross section is larger than the adiabatic one in this case, by factors between 1 and 3.6.) This is suggestive that only the capture to the ground-state is likely to be comparable to direct annihilation, and the capture from the \(p\)-wave initial-state component suffers from a \(v^{2}\) suppression once \(v\) drops below \(m_{ W}/M_{\chi}\), which renders it subdominant at the \(\mathcal{O}(1\%)\) level for our 13.6 TeV benchmark point.
This suppression for higher-\(n\) capture also suggests that the contribution to the _endpoint_ hard photon spectrum from bound state formation and decay will be suppressed, as these
contributions are dominated by capture into states with odd \(L\) (and thus \(n>1\)) and \(S=0\) that decay to \(L=S=0\) states before annihilating (see Sec. 4 for a more in-depth discussion). Capture to the ground-state via emission of a dipole gauge boson changes \(L\) by 1, thus requiring an initial \(L=1\) state (which must then have \(S=1\) if it contains identical DM particles), and \(S=1\) states do not produce a leading-power contribution to the endpoint spectrum when they decay.
For the \(Q=1\) sector, let us again consider capture from the spin-triplet \(p\)-wave incoming wave to the spin-triplet \(s\)-wave state. In the unbroken limit the potential matrix for the final state takes the form:
\[V=\begin{pmatrix}-2&\sqrt{6}\\ \sqrt{6}&-3\end{pmatrix}. \tag{111}\]
Here the first row refers to the \(++-\) state and the second to the \(+\,0\) state. The transition matrices are now,
\[\hat{C}_{1}=\begin{pmatrix}-\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ 0&-\frac{\sqrt{3}}{2}&\sqrt{\frac{3}{2}}\end{pmatrix},\quad\hat{C}_{2}= \begin{pmatrix}2\sqrt{2}&\sqrt{2}&0\\ 0&\sqrt{3}&0\end{pmatrix}. \tag{112}\]
In this case we must also replace the \(\alpha\) prefactor in the cross section with \(\alpha_{ W}\). The attractive eigenvalue for the final state has \(\lambda_{f}=5\), \(\eta_{f}=\{-\sqrt{2},\sqrt{3}\}/\sqrt{5}\). Again, the binding energy of the ground state is \(k=\alpha_{ W}^{2}\lambda_{f}^{2}M_{\chi}/4=(25/4)\alpha_{ W}^{2}M_{\chi}\) (and since in the unbroken limit \(m_{ W}=0\), we do not need to include a kinematic suppression for the \(W\) mass). Then we obtain for the unbroken limit:
\[\begin{split}\sigma v=\frac{2^{7}3^{2}\pi\alpha_{ W}^{2}}{5^{4}M_{\chi}^{2}}\Big{|}16e^{-12/5}e^{6\pi\alpha_{ W}/(2v)}\Gamma(1-6i\alpha_{ W}/v)\\ +7e^{-6/5}e^{3\pi\alpha_{ W}/(2v)}\Gamma(1-3i\alpha_{ W}/v) \Big{|}^{2},\end{split} \tag{113}\]
When we assume the adiabatic regime, the result reduces to,
\[\sigma v=\frac{2^{17}\cdot 3^{3}\pi^{2}}{5^{3}}e^{-24/5}\frac{\alpha_{ W}^{3}}{vM_{\chi}^{2}}\simeq 233\frac{\pi^{2}\alpha_{ W}^{3}}{vM_{\chi}^{2}}. \tag{114}\]
This is for capture to the \(Q=+1\) state; there is an equal rate for capture to the \(Q=-1\) state. Note the \(\alpha_{ W}\) prefactor (rather than \(\alpha\)); including formation of the \(Q=0\) state through \(Z\) emission (as well as photon emission) would similarly promote that capture rate to have a prefactor of \(\alpha_{ W}\) rather than \(\alpha\), for an overall capture rate (summing across all three channels) of \(\sim 700\pi^{2}\alpha_{ W}^{3}/(M_{\chi}^{2}v)\), similar to the full \(s\)-wave direct annihilation rate. The primary difference between this \(p\to s\) capture rate and the inclusive direct annihilation rate will arise from velocity suppression of the \(p\to s\) capture cross section in the broken-SU(2) case (with this suppression being lifted in the truly unbroken limit).
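The numerical coefficients quoted here can be checked directly; the following minimal sketch (coefficients of \(\pi^{2}\alpha_{ W}^{3}/(vM_{\chi}^{2})\) only, nothing else assumed) reproduces the \(\simeq 233\) single-channel value and the \(\sim 700\) three-channel total:

```python
import math

# Adiabatic p->s capture to one charged (Q=+1) bound state:
# coefficient = 2^17 * 3^3 / 5^3 * exp(-24/5).
single_channel = (2**17 * 3**3 / 5**3) * math.exp(-24 / 5)
print(f"Q=+1 capture coefficient: {single_channel:.0f}")    # ~233

# Summing the Q=+1, Q=-1 and Q=0 channels gives roughly three times this,
# comparable to the quintuplet s-wave direct-annihilation coefficient (720).
print(f"all three channels: ~{3 * single_channel:.0f}")     # ~700
```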
#### f.1.2 Presence of metastable bound states
The unbroken-SU(2) limit is also helpful for studying the question of whether there could be \(L>0\) states in the spectrum whose decays to more deeply bound states are highly suppressed, leading them to decay through annihilation to SM particles with a substantial branching ratio. States which are degenerate in the unbroken limit are likely to remain close in energy as we reduce \(M_{\chi}\), and therefore decays between them will be suppressed (although if this is decisive in determining whether a state is metastable, a more careful analysis will be required). The \(L+S\)-even potential for the quintuplet has two attractive eigenvalues, \(Z=6\) and \(Z=3\), whereas the \(L+S\)-odd potential has a single attractive eigenvalue, \(Z=5\). Thus for spin-singlet states (\(S=0\)) we expect \(L\)-even bound states with energies \(E_{n}/\alpha_{ W}^{2}M_{\chi}=-9/n^{2},-2.25/n^{2}\), and \(L\)-odd bound states with energies \(E_{n}/\alpha_{ W}^{2}M_{\chi}=-6.25/n^{2}\). For spin-triplet states (\(S=1\)) we expect \(L\)-odd states with energies \(E_{n}/\alpha_{ W}^{2}M_{\chi}=-9/n^{2},-2.25/n^{2}\), and \(L\)-even states with energies \(E_{n}/\alpha_{ W}^{2}M_{\chi}=-6.25/n^{2}\).
We first consider the case of \(L\)-odd states. Given a spin-singlet \(L\)-odd state with \(n>L>0\) (binding energy \(6.25/n^{2}\)), there should always be a more deeply bound state with \(n^{\prime}=n\), \(L^{\prime}=L-1\) (binding energy \(9/n^{2}\)), which is accessible through a dipole transition. So in the spin-singlet case and Coulomb limit there should be no metastable states with \(L>0\). For the spin-triplet, the case is slightly more complicated, as given an \(L\)-odd state with \(n>L>0\) (binding energy \(9/n^{2}\) or \(2.25/n^{2}\)), the accompanying state with \(n^{\prime}=n\), \(L^{\prime}=L-1\) has binding energy \(6.25/n^{2}\). We see that the \(L\)-odd states with binding energies \(2.25/n^{2}\) will always have an accompanying more-deeply-bound \(L\)-even spin-triplet state, to which they can decay, but this is not necessarily true for the states with binding energies \(9/n^{2}\). A state with \(L^{\prime}=L-1\) will be available if \(9/n^{2}<6.25/m^{2}\) for some \(m\) with \(L\leq m<n\), _i.e._ if \(m<n\sqrt{6.25/9}=n/1.2\) is consistent with \(m\geq L\). This will be true for \(n>1.2L\), so the dangerous range is states with \(L<n\leq 1.2L\). In order for this range to include an integer, we must have \(L>5\). For example, consider the spin-triplet state with \(L=7\) and \(n=8\), with dimensionless binding energy \(9/8^{2}\simeq 0.14\) in the Coulombic limit. The lowest-lying \(L=6\) spin-triplet state that is accessible via \(\Delta L=1\), \(\Delta S=0\) transitions has \(n=7\), and consequently binding energy \(6.25/7^{2}=0.13\) in the Coulombic limit; thus the \(L=7\) state cannot decay through such a transition. For the \(L=5\) case, in the Coulombic limit the states are degenerate, and so we would need to perform a more careful calculation.
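The counting argument above can be illustrated by a small enumeration in the Coulombic limit. The sketch below (illustrative only) scans spin-triplet, \(L\)-odd states with binding energy \(9/n^{2}\) and flags those for which no strictly more deeply bound \(L-1\) partner (binding \(6.25/m^{2}\), \(L\leq m<n\)) exists; it recovers the degenerate \(L=5\), \(n=6\) case and the \(L=7\), \(n=8\) example discussed in the text:

```python
# Coulombic-limit check: for a spin-triplet, L-odd state with binding 9/n^2,
# is there an L' = L-1 state (binding 6.25/m^2, L <= m < n) that is strictly
# more deeply bound?  If not, the dipole decay channel is closed.
for L in range(1, 10, 2):            # odd L
    for n in range(L + 1, L + 4):    # a few values of n > L
        open_channel = any(6.25 / m**2 > 9 / n**2 for m in range(L, n))
        if not open_channel:
            print(f"L={L}, n={n}: no more deeply bound L-1 state "
                  f"(9/n^2={9/n**2:.3f} vs best 6.25/L^2={6.25/L**2:.3f})")
```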
If we now consider the case of even-\(L\) states, the situation is reversed between the spin-singlet and the spin-triplet; in the spin-triplet case we expect the even-\(L\) states will always be able to decay to their accompanying, more deeply bound state with \(L^{\prime}=L-1\), with the exception of the \(L=0\) case where no such state exists (we will consider the \(L=0\) case below). In the spin-singlet case, the same argument as previously tells us that for \(L>0\), a state with \(L^{\prime}=L-1\) will be available except in the case where \(L<n\leq 1.2L\), which in the case of even \(L\) is potentially relevant for \(L\geq 6\).
Therefore, based on the Coulombic limit, we would predict that the only possible (meta)stable states with \(L>0\) are \(L+S\)-even states (spin-triplet with \(L\) odd or spin-singlet with \(L\) even)
with \(L\geq 5\), \(L<n\leq 1.2L\), and the eigenstate structure corresponding to the \(Z=6\) eigenvalue. These high-\(L\) states may not even be bound for masses of interest to us, and in any case the capture rate into them is likely to be very small.
### Results for general representations
Let us now consider the more general situation where the DM is the lightest component of an SU(2) multiplet in a real representation of odd dimension \(N\). Larger representations require higher DM masses to obtain the correct relic density (_e.g._ Ref. [3]), and hence the unbroken-SU(2) approximation is likely to be better at their thermal masses. Recall, however, that the condition to be in the adiabatic regime is mass-independent, \(v\lesssim\delta/m_{ W}\), so while this is a feature of the broken SU(2) symmetry, we expect it to be retained at sufficiently low velocities even for very heavy DM.
In the unbroken-SU(2) limit we can use the results of Ref. [2] for general representations. They proceed by decomposing the two-particle state into eigenstates of isospin \(I\) (\(I=1,3,\cdots,2N-1\)); this corresponds to identifying the eigenstates of the potential in our language. They find the eigenvalue associated with the state with isospin \(I\) is \(\lambda=(2N^{2}-1-I^{2})/8\) (where positive eigenvalues correspond to attracted states, as per our previous convention) and so the most-attracted channel is the singlet, where \(\lambda=(N^{2}-1)/4\). The isospin singlet corresponds to an \(L+S\)-even state with total charge \(Q=0\), and for the quintuplet \(\lambda=6\), as discussed above. In general, states with \(I<\sqrt{2N^{2}-1}\) can support bound states; for the quintuplet this means we have \(I=1,3,5\) bound states.
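For concreteness, this decomposition can be tabulated numerically; the sketch below (quintuplet example, \(N=5\)) lists the eigenvalues \(\lambda=(2N^{2}-1-I^{2})/8\) and which isospin channels can support bound states:

```python
# Isospin decomposition of the two-particle potential for an N-plet:
# eigenvalue lambda = (2 N^2 - 1 - I^2)/8 for I = 1, 3, ..., 2N-1,
# with positive lambda corresponding to an attracted channel.
N = 5
for I in range(1, 2 * N, 2):
    lam = (2 * N**2 - 1 - I**2) / 8
    supports_bound_state = I < (2 * N**2 - 1) ** 0.5
    print(f"I={I}: lambda={lam:+.2f}, supports bound states: {supports_bound_state}")
# For N=5 this reproduces lambda = 6, 5, 3 for I = 1, 3, 5.
```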
In the adiabatic regime, SU(2)-breaking effects cause the lowest-energy state at large distances (_i.e._ the \(L+S\)-even, \(Q=0\) state of two identical DM particles) to smoothly transition into the isospin-singlet state at short distances. Thus in this regime, we expect both the Sommerfeld-enhanced direct annihilation and the bound state capture rates to be consistent with an initial isospin-singlet state. Since the dominant bound-state capture process (dipole emission of a gauge boson) changes isospin by 2, the final state must then have \(I=3\), _i.e._ it is a SU(2) adjoint (this state is \(L+S\)-odd and the three components have \(Q=0,\pm 1\)).
Consequently, within the adiabatic approximation, we only need to concern ourselves with (Sommerfeld enhanced) direct annihilation from the isospin-singlet state, and capture from an isospin-singlet initial state to an isospin-triplet final state, with the relevant eigenvalues being \(\lambda_{i}=(N^{2}-1)/4\) and \(\lambda_{f}=(N^{2}-5)/4\). (Transitions amongst bound states can involve higher-\(I\) states; in particular \(I=5\) for the quintuplet, with \(\lambda=3\).)
We will thus focus in this appendix on singlet-to-adjoint transitions. We reiterate that this approximation is _not_ appropriate if the gauge symmetry is actually unbroken or at the high velocities associated with thermal freezeout in the early universe (as studied in e.g. Ref. [115]), where other transitions can also contribute significantly and may dominate. The quality of this approximation--_i.e._ the degree to which the incoming state retains non-singlet components at small \(r\), which could contribute significantly to the capture rate--is an interesting question, but we ignore it here, as our main purpose is simply to develop some simple intuition for the importance of bound-state effects for the gamma-ray endpoint signal. The corresponding
approximation for the quintuplet appears to do a reasonable job of estimating the relative size of bound-state capture and annihilation, as we see in Fig. 4.
For these singlet-to-adjoint transitions, we can write the group theory coefficients for bound state formation from Ref. [2] in the simplified form:
\[\begin{split}& C^{a1b}_{\mathcal{J}}=\frac{1}{\sqrt{T_{R}d_{R}}} \text{Tr}(T^{b}T^{a}),\\ & C^{a1b}_{\tau}=i\frac{1}{\sqrt{T_{R}d_{R}}}\text{Tr}(T^{b}T^{c} T^{d})f^{acd}=-\frac{1}{\sqrt{T_{R}d_{R}}}\text{Tr}(T^{b}T^{c}T^{d})(T^{a}_{ \text{adj}})^{cd}.\end{split} \tag{112}\]
We can now use \(\text{tr}(T^{a}T^{b})=T_{R}\delta^{ab}\), and also
\[\text{Tr}(T^{b}T^{c}T^{d})(T^{a}_{\text{adj}})^{cd}=\frac{1}{2}T_{R}T_{\text{ adj}}\delta^{ab}. \tag{113}\]
Thus, finally we obtain:
\[C^{a1b}_{\mathcal{J}}=\sqrt{\frac{T_{R}}{d_{R}}}\delta^{ab},\quad C^{a1b}_{ \tau}=-\frac{1}{2}\sqrt{\frac{T_{R}}{d_{R}}}T_{\text{adj}}\delta^{ab}, \tag{114}\]
where we will show how the \(C^{a1b}_{\mathcal{J}}\) and \(C^{a1b}_{\tau}\) coefficients enter the bound state capture rate in Secs. F.2.2 and F.2.3.
Now if \(R\) is the representation of size \(N\), for SU(2) we have \(T_{R}=N(N^{2}-1)/12\), and so \(T_{\text{adj}}=2\), while \(d_{R}=N\) (and in particular \(d_{\text{adj}}=3\)). Thus for SU(2) we obtain the coefficients:
\[C^{a1b}_{\mathcal{J}}=\sqrt{\frac{N^{2}-1}{12}}\delta^{ab},\quad C^{a1b}_{ \tau}=-C^{a1b}_{\mathcal{J}}. \tag{115}\]
Let us also note that we can now extend the argument given in App. F.1.2 to general representations. A bound state of isospin \(I\) will generally have open decay channels to states with lower isospin (by 2 units) which are hence more deeply bound due to the larger eigenvalue \(\lambda\). The exception is \(I=1\) states, which must decay to \(I=3\) states which are more shallowly bound for the same principal quantum number. For this reason, excited \(I=1\) states can be metastable if they have sufficiently large \(L\) that all the \(I=3\) states differing by only one unit in \(L\) are more shallowly bound. This can occur for a general representation if \(L<n\leq\frac{N^{2}-1}{N^{2}-5}L=\left(1+\frac{4}{N^{2}-5}\right)L\); this range will contain an integer if \(L>(N^{2}-5)/4\). Thus the threshold \(L\) at which this effect can occur increases as the representation size goes up.
#### f.2.1 Direct annihilation
In this case, if we can evaluate the tree-level cross section for annihilation from an isospin-singlet initial state to any desired SM final state, we can account for the Sommerfeld enhancement by simply multiplying the tree-level cross section by \(S=2\pi\alpha_{W}\lambda_{i}/v\), in the low-velocity
limit. This cross section is given for Majorana fermion DM by Ref. [2] as:
\[(\sigma v)_{\rm tree,I=1} = \frac{\pi\alpha_{ W}^{2}}{M_{\chi}^{2}}\frac{T_{R}^{2}d_{\rm adj}}{d_{R}} \tag{111}\] \[= \frac{\pi\alpha_{ W}^{2}}{2^{4}\times 3\times M_{\chi}^{2}}N(N^{2}-1)^{2}.\]
Multiplying by the Sommerfeld factor gives:
\[(\sigma v)_{\rm I=1}=\frac{\pi^{2}\alpha_{ W}^{3}}{2^{5}\times 3\times M_{\chi}^{2}v}N(N^{2}-1)^{3}\to \frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{N^{7}}{96}, \tag{112}\]
where in the final step we have assumed \(N\gg 1\).
Checking, for the wino and quintuplet this yields:
\[(\sigma v)_{\rm I=1}=\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\begin{cases}16,&N=3,\\ 720,&N=5.\end{cases} \tag{113}\]
For the quintuplet this agrees with our calculation above. This also agrees with the wino result from Ref. [68], accounting for our assumption that the adiabatic condition holds.
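These values follow directly from the formula above; a minimal numerical check (coefficients of \(\pi^{2}\alpha_{ W}^{3}/(M_{\chi}^{2}v)\) only):

```python
# Sommerfeld-enhanced, isospin-singlet direct annihilation:
# coefficient = N (N^2 - 1)^3 / (2^5 * 3).
for N in (3, 5):
    coeff = N * (N**2 - 1) ** 3 / (2**5 * 3)
    print(f"N={N}: coefficient = {coeff:.0f}")   # 16 (wino), 720 (quintuplet)

# The N >> 1 scaling N^7/96 slightly overshoots already at N = 5:
print(f"N=5, N^7/96 = {5**7 / 96:.0f}")          # ~814
```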
#### f.2.2 Capture to the ground state
At small velocities, from Ref. [2] we can read off the low-velocity bound state capture cross section to the \(n=1\) state as:
\[(\sigma v)_{\rm bsf}^{n=1,I=0}=\frac{\pi\alpha_{ W}^{2}}{M_{\chi}^{2}}\frac{2S+1}{g_{\chi}^{2}}\frac{2^{11}\pi}{3}\sum_{ab}|C_{ \mathcal{J}}^{a1b}+(1/\lambda_{f})C_{\tau}^{a1b}|^{2}\frac{\lambda_{i}^{3} \alpha_{ W}}{\lambda_{f}v}e^{-4\lambda_{i}/\lambda_{f}}. \tag{114}\]
This expression involves an average over initial states, with degrees of freedom denoted by \(g_{\chi}\); since we are interested in the case where 100% of the DM captures from the singlet state, we only need to average over spin degrees of freedom, so \(g_{\chi}=2\). Our initial state must be \(L+S\)-even and thus for capture to an \(L=0\) final state, it must have \(S=1\). Thus we obtain:
\[(\sigma v)_{\rm bsf}^{n=1,I=0} = \frac{\pi\alpha_{ W}^{2}}{M_{\chi}^{2}}\frac{3}{4}\frac{2^{11}\pi}{3}d_{\rm adj} \left(\frac{N^{2}-1}{12}\right)|1-(1/\lambda_{f})|^{2}\frac{\lambda_{i}^{3} \alpha_{ W}}{\lambda_{f}v}e^{-4\lambda_{i}/\lambda_{f}} \tag{115}\] \[= \frac{8\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{(N^{2}-9)^{2 }(N^{2}-1)^{4}}{(N^{2}-5)^{3}}e^{-4(N^{2}-1)/(N^{2}-5)}.\]
where \(d_{\rm adj}=3\) counts the number of generators and arises from \(\sum_{ab}|\delta^{ab}|^{2}=d_{\rm adj}\).
In the limit of \(N\gg 3\), which is helpful for comparison against direct annihilation, we obtain the simplified result:
\[(\sigma v)_{\rm bsf}^{n=1,I=0}\to\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}8N^{6}e^{-4}\simeq\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{N^{6}}{6.8},\quad N\gg 3. \tag{116}\]
(Note in the quintuplet case this approximate value is larger than the truth by about a factor of 3; it will be a better approximation for larger \(N\).)
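The comparison in the parenthetical remark can be made explicit; the sketch below evaluates the exact \(n=1\) capture coefficient at \(N=5\), the large-\(N\) approximation, and the direct-annihilation coefficient (all in units of \(\pi^{2}\alpha_{ W}^{3}/(M_{\chi}^{2}v)\)):

```python
import math

def bsf_n1_coeff(N):
    """Coefficient for capture to the n=1 bound state (exact expression above)."""
    return (8 * (N**2 - 9) ** 2 * (N**2 - 1) ** 4 / (N**2 - 5) ** 3
            * math.exp(-4 * (N**2 - 1) / (N**2 - 5)))

def annih_coeff(N):
    """Coefficient for Sommerfeld-enhanced isospin-singlet direct annihilation."""
    return N * (N**2 - 1) ** 3 / 96

N = 5
exact = bsf_n1_coeff(N)
approx = 8 * N**6 * math.exp(-4)                   # large-N limit
print(f"exact (N=5): {exact:.0f}, large-N approx: {approx:.0f}")   # ~699 vs ~2289
print(f"capture / direct annihilation at N=5: {exact / annih_coeff(N):.2f}")
```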
#### f.2.3 Capture to the \(n=2\) states
Now let us consider capture to the \(n=2\), \(L=1\) states, as their subsequent decays and annihilations can give rise to endpoint photons, unlike capture directly to the \(n=1\) state. In this case the final state must have \(S=0\) (so the initial state has \(L+S\) even). From Ref. [2] in the low-velocity limit the \(s\)-wave (1st line) and \(d\)-wave (2nd line) contributions are:
\[\begin{split}(\sigma v)^{n=2,l=1}_{\rm bsf}&=\frac{\pi\alpha_{ W}^{2}}{M_{\chi}^{2}}\left(\frac{2S+1}{g_{\chi}^{2}}\right)\frac{2^{12}\pi\lambda_{i}}{9\lambda_{f}^{5}}\frac{\alpha_{ W}}{v}\left[\sum_{ab}\left|C^{a1b}_{\cal J}\lambda_{f}\lambda_{i}(3\lambda_{f}-4\lambda_{i})+C^{a1b}_{\tau}(-3\lambda_{f}^{2}+12\lambda_{f}\lambda_{i}-8\lambda_{i}^{2})\right|^{2}\right.\\ &\qquad\left.+\sum_{ab}2^{5}\lambda_{i}^{4}\left|C^{a1b}_{\cal J}\lambda_{f}+2C^{a1b}_{\tau}\right|^{2}\right]e^{-8\lambda_{i}/\lambda_{f}}\\ &=\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{2^{8}\lambda_{i}}{9\lambda_{f}^{5}}(N^{2}-1)\left[\left|\lambda_{f}\lambda_{i}(3\lambda_{f}-4\lambda_{i})-(-3\lambda_{f}^{2}+12\lambda_{f}\lambda_{i}-8\lambda_{i}^{2})\right|^{2}\right.\\ &\qquad\left.+2^{5}\lambda_{i}^{4}|\lambda_{f}-2|^{2}\right]e^{-8\lambda_{i}/\lambda_{f}}\\ &=\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{2^{4}(N^{2}-1)^{2}}{9(N^{2}-5)^{5}}\left[(N^{6}+9N^{4}-165N^{2}-37)^{2}\right.\\ &\qquad\left.+2^{5}(N^{2}-1)^{4}(N^{2}-13)^{2}\right]e^{-8(N^{2}-1)/(N^{2}-5)}.\end{split} \tag{112}\]
The first line here agrees with the \(s\to p\) quintuplet result computed in Eq. (101), once we multiply that result (which was for photon-mediated capture into a specific \(n=2\), \(L=1\) bound state) by a factor of 3 to account for \(W\)- and \(Z\)-mediated capture and a second factor of 3 to account for the \(m=0,\pm 1\) states.
In the limit of large \(N\), this expression has the scaling:
\[(\sigma v)^{n=2,l=1}_{\rm bsf}\rightarrow\frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{2^{4}N^{6}}{9}\left[1+2^{5}\right]e^{-8}\simeq \frac{\pi^{2}\alpha_{ W}^{3}}{M_{\chi}^{2}v}\frac{N^{6}}{1700}\left[1+32\right], \quad N\gg 5 \tag{113}\]
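As a numerical illustration (again in units of \(\pi^{2}\alpha_{ W}^{3}/(M_{\chi}^{2}v)\)), the exact expression can be compared against the naive large-\(N\) formula, which is not expected to be accurate at \(N=5\); the sketch below performs this comparison:

```python
import math

def bsf_n2_coeff(N):
    """Coefficient for capture to the n=2, L=1 states (exact expression above)."""
    poly = (N**6 + 9 * N**4 - 165 * N**2 - 37) ** 2
    dwave = 2**5 * (N**2 - 1) ** 4 * (N**2 - 13) ** 2
    return (2**4 * (N**2 - 1) ** 2 / (9 * (N**2 - 5) ** 5)
            * (poly + dwave) * math.exp(-8 * (N**2 - 1) / (N**2 - 5)))

N = 5
exact = bsf_n2_coeff(N)
limit = 2**4 * N**6 / 9 * (1 + 2**5) * math.exp(-8)   # valid only for N >> 5
print(f"exact (N=5): {exact:.1f}, naive large-N formula: {limit:.1f}")
```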
So we see that compared to direct annihilation, in the large-\(N\) limit we expect the various contributions to scale as:
* \(p\to s\) capture to the \(n=1,L=0,S=1\) state (contributions to endpoint photons are power-suppressed): direct annihilation rate \(\times 14/N\), in addition to (when SU(2) is broken) any kinematic suppression of \(W/Z\) emission (by a factor as small as \(\alpha/(3\alpha_{ W})\)) or velocity suppression due to the \(p\)-wave initial state (parametrically \({\cal O}(M_{\chi}^{2}v^{2}/m_{ W}^{2})\) for \(v\lesssim m_{ W}/M_{\chi}\)),28
Footnote 28: As mentioned above, the large-\(N\) approximation overestimates this ratio for the quintuplet by about a factor of 3, and so the rates are actually very comparable.
* \(s\to p\) capture to the \(n=2,L=1\) states collectively: direct annihilation rate \(\times 0.06/N\), in addition to any kinematic suppression of \(W/Z\) emission (up to a factor of \(3\alpha_{{}_{W}}/\alpha\)),
* \(d\to p\) capture to the \(n=2,L=1\) states collectively: direct annihilation rate \(\times 1.8/N\), in addition to any kinematic suppression of \(W/Z\) emission (up to a factor of \(3\alpha_{{}_{W}}/\alpha\)) or velocity suppression due to the \(d\)-wave initial state (parametrically \(\mathcal{O}(M_{\chi}^{4}v^{4}/m_{{}_{W}}^{4})\) for \(v\lesssim m_{{}_{W}}/M_{\chi}\)).
We see that the only contribution that is not suppressed at low velocities and that gives rise to leading-power contributions to the endpoint photon spectra (via its decay and subsequent annihilation) is generically expected to have a cross section 2 or more orders of magnitude below direct annihilation. Consequently, it is quite plausible for bound state formation to be a large or even dominant contribution to the inclusive annihilation rate when the velocity suppression for higher partial waves is mild or absent and the gauge bosons are massless (as in the case of freezeout), while simultaneously having a generically small effect on the endpoint spectrum for indirect detection, especially at low velocities (\(v\ll m_{{}_{W}}/M_{\chi}\)) or where the \(n=2\) states are too loosely bound to allow \(W\)- or \(Z\)-mediated capture.
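The ratios quoted in the list above follow from dividing the large-\(N\) capture coefficients by the direct-annihilation coefficient \(N^{7}/96\); a one-line check of each (illustrative only):

```python
import math

# Large-N ratios of capture coefficients to direct annihilation (N^7/96):
n1_over_ann = 8 * math.exp(-4) * 96                 # -> ~14/N
n2_s_over_ann = (2**4 / 9) * math.exp(-8) * 96      # s-wave piece -> ~0.06/N
n2_d_over_ann = n2_s_over_ann * 2**5                # d-wave piece -> ~1.8/N
print(f"n=1 capture / annihilation  ~ {n1_over_ann:.1f}/N")
print(f"n=2 (s-wave) / annihilation ~ {n2_s_over_ann:.2f}/N")
print(f"n=2 (d-wave) / annihilation ~ {n2_d_over_ann:.1f}/N")
```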
One might ask about the contribution from capture into states with \(n>2\). While in the unbroken limit bound states with large \(L\) and \(n\) may play a large role in the capture rate (_e.g._[115]), for the parameter space we have considered in this paper, the number of bound states is always truncated by the non-zero SU(2) breaking scale, preventing large enhancements from the proliferation of high-\(n\) states. Furthermore, we expect velocity suppressions (of order \((M_{\chi}v/m_{{}_{W}})^{2L}\)) for all capture rates with initial \(L>0\). Finally, within our adiabatic approximation both initial and final states always experience attractive interactions, with couplings that obey \(\lambda_{i}/\lambda_{f}>1\), leading to increasingly strong exponential suppression for large-\(n\) states and avoiding the potentially unitarity-violating region of parameter space identified in Ref. [115].
|
2302.14768 | Easy Maximum Empirical Likelihood Estimation of Linear Functionals Of A
Probability Measure With Infinitely Many Constraints | In this article, we construct semiparametrically efficient estimators of
linear functionals of a probability measure in the presence of side information
using an easy empirical likelihood approach. We use estimated constraint
functions and allow the number of constraints to grow with the sample size.
Considered are three cases of information which can be characterized by
infinitely many constraints: (1) the marginal distributions are known, (2) the
marginals are unknown but identical, and (3) distributional symmetry. An
improved spatial depth function is defined and its asymptotic properties are
studied. Simulation results on efficiency gain are reported. | Shan Wang, Hanxiang Peng | 2023-02-28T17:09:53Z | http://arxiv.org/abs/2302.14768v1 | Easy Maximum Empirical Likelihood Estimation of Linear Functionals Of A Probability Measure With Infinitely Many Constraints
###### Abstract
In this article, we construct semiparametrically efficient estimators of linear functionals of a probability measure in the presence of side information using an easy empirical likelihood approach. We use estimated constraint functions and allow the number of constraints to grow with the sample size. Considered are three cases of information which can be characterized by infinitely many constraints: (1) the marginal distributions are known, (2) the marginals are unknown but identical, and (3) distributional symmetry. An improved spatial depth function is defined and its asymptotic properties are studied. Simulation results on efficiency gain are reported.
Empirical likelihood; Infinitely many constraints; Maximum empirical likelihood estimator; Semiparametric efficiency; Spatial median.
Primary 62G05; secondary 62G20, 62H11.
## 1 Introduction
Suppose that \(Z_{1},\ldots,Z_{n}\) are independent and identically distributed (i.i.d.) random variables with a common distribution \(Q\) taking values in a measurable space \(\mathcal{Z}.\) In this article, we are interested in efficient estimation of the linear functional \(\boldsymbol{\theta}=\int\boldsymbol{\psi}\,dQ\) of \(Q\) for some square-integrable function \(\boldsymbol{\psi}\) from \(\mathcal{Z}\) to \(\mathcal{R}^{r}\) when side information is available through a vector function (constraint) \(\mathbf{u}\) which satisfies
* \(\mathbf{u}\) is measurable from \(\mathcal{Z}\) to \(\mathcal{R}^{m}\) such that \(\int\mathbf{u}\,dQ=0\) and the variance-covariance matrix \(\int\mathbf{u}\mathbf{u}^{\top}\,dQ\) is nonsingular.
The commonly used sample mean \(\bar{\boldsymbol{\psi}}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(Z_{j})\) of \(\boldsymbol{\theta}=E(\boldsymbol{\psi}(Z))\) does not use the information, and is not efficient in the sense of least dispersed regular estimators.
Related work has considered the case when the marginal distributions are unknown but identical, and constructed an efficient estimator based on a least squares criterion. Peng and Schick (2018) constructed empirical likelihood tests of stochastic independence and distributional symmetry. Each of independence, symmetry, and known or equal marginal distributions is equivalent to infinitely many equations (constraints), and can be used to improve estimation efficiency. Here we construct the EL-weighted estimators and demonstrate their semiparametric efficiency. Our estimators have a simple analytic form and easily incorporate side information to improve efficiency.
The efficiency criteria used are that of a least dispersed regular estimator or that of a locally asymptotic minimax estimator, and are based on the convolution theorems and on the lower bounds of the local asymptotic risk in LAN and LAMN families, see the monograph by Bickel, et al. (1993) among others.
In what follows, we summarize some results from Wang and Peng (2022) for convenience and provide a proof of the semiparametric efficiency. In many semiparametric models, the constraint vector function \(\mathbf{u}=(u_{1},...,u_{m})^{\top}\) is usually unknown and must be estimated by some measurable function \(\hat{\mathbf{u}}=(\hat{u}_{1},...,\hat{u}_{m})^{\top}\). Using it, we work with the EL-weights,
\[\hat{\pi}_{j}=\frac{1}{n}\frac{1}{1+\hat{\mathbf{u}}(Z_{j})^{\top}\hat{ \boldsymbol{\zeta}}},\quad j=1,\ldots,n, \tag{1.3}\]
where \(\hat{\boldsymbol{\zeta}}\) solves Eq. (1.2) with \(\mathbf{u}=\hat{\mathbf{u}}\). A natural estimate \(\hat{\boldsymbol{\theta}}\) of \(\boldsymbol{\theta}\) now is
\[\hat{\boldsymbol{\theta}}=\sum_{j=1}^{n}\hat{\pi}_{j}\boldsymbol{\psi}(Z_{j}) =\frac{1}{n}\sum_{j=1}^{n}\frac{\boldsymbol{\psi}(Z_{j})}{1+\hat{\mathbf{u}}( Z_{j})^{\top}\hat{\boldsymbol{\zeta}}}. \tag{1.4}\]
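As an illustration of how the weights (1.3) and the estimator (1.4) might be computed in practice, the following Python sketch solves Eq. (1.2) with a plain Newton iteration. The function names and the (unsafeguarded) iteration are illustrative assumptions, not part of the paper; in particular, no care is taken to keep the denominators \(1+\mathbf{u}(Z_{j})^{\top}\boldsymbol{\zeta}\) positive.

```python
import numpy as np

def el_weights(U, n_iter=50, tol=1e-10):
    """EL weights pi_j = 1 / (n * (1 + u_j' zeta)), with zeta solving Eq. (1.2).

    U is the n x m matrix whose j-th row is u(Z_j).  A plain Newton iteration
    is used purely for illustration (no step-size control or safeguards).
    """
    n, m = U.shape
    zeta = np.zeros(m)
    for _ in range(n_iter):
        denom = 1.0 + U @ zeta                      # 1 + u_j' zeta
        grad = (U / denom[:, None]).mean(axis=0)    # left side of Eq. (1.2) / n
        if np.linalg.norm(grad) < tol:
            break
        hess = -(U.T * (1.0 / denom**2)) @ U / n    # Jacobian of grad w.r.t. zeta
        zeta -= np.linalg.solve(hess, grad)
    return 1.0 / (n * (1.0 + U @ zeta))

def el_weighted_mean(Psi, U):
    """EL-weighted estimate sum_j pi_j * psi(Z_j) of theta = E[psi(Z)], Eq. (1.4)."""
    return el_weights(U) @ Psi
```

Here `Psi` would be the \(n\times r\) matrix with rows \(\boldsymbol{\psi}(Z_{j})\) and `U` the corresponding matrix of constraint values.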
We now allow the number of constraints to depend on the sample size \(n\), \(m=m_{n}\), and tend to infinity slowly with \(n\). To stress the dependence, write
\[\mathbf{u}_{n}=(u_{1},\ldots,u_{m_{n}})^{\top},\quad\hat{\mathbf{u}}_{n}=(\hat {u}_{1},\ldots,\hat{u}_{m_{n}})^{\top},\]
and \(\boldsymbol{\tilde{\theta}}_{n}=\boldsymbol{\tilde{\theta}}\), \(\boldsymbol{\hat{\theta}}_{n}=\boldsymbol{\hat{\theta}}\) for the corresponding estimators of \(\boldsymbol{\theta}\), that is,
\[\boldsymbol{\tilde{\theta}}_{n}=\frac{1}{n}\sum_{j=1}^{n}\frac{\boldsymbol{ \psi}(Z_{j})}{1+\mathbf{u}_{n}(Z_{j})^{\top}\tilde{\boldsymbol{\zeta}}_{n}} \quad\text{and}\quad\boldsymbol{\hat{\theta}}_{n}=\frac{1}{n}\sum_{j=1}^{n} \frac{\boldsymbol{\psi}(Z_{j})}{1+\hat{\mathbf{u}}_{n}(Z_{j})^{\top}\hat{ \boldsymbol{\zeta}}_{n}}, \tag{1.5}\]
where \(\tilde{\boldsymbol{\zeta}}_{n}\) and \(\hat{\boldsymbol{\zeta}}_{n}\) solve Eq. (1.2) with \(\mathbf{u}=\mathbf{u}_{n}\) and \(\mathbf{u}=\hat{\mathbf{u}}_{n}\), respectively.
The asymptotic normality (ASN) of \(\boldsymbol{\tilde{\theta}}_{n}\) and \(\boldsymbol{\hat{\theta}}_{n}\) is given in Theorems 3 and 4 of Wang and Peng (2022), respectively; we now prove the semiparametric efficiency of \(\boldsymbol{\tilde{\theta}}_{n}\) and quote Theorem 4 in the Appendix for convenience. For \(\mathbf{a}\in\mathcal{R}^{m}\), write \(\|\mathbf{a}\|\) for the Euclidean norm. For \(\mathbf{a},\mathbf{b}\in\mathcal{R}^{m}\), write \(\mathbf{a}\otimes\mathbf{b}\) for the Kronecker product. Let \(L_{2}^{m}(Q)=\big{\{}\mathbf{f}=(f_{1},\ldots,f_{m})^{\top}:\int\|\mathbf{f}\| ^{2}\,dQ<\infty\big{\}}\), and let \(L_{2,0}^{m}(Q)=\big{\{}\mathbf{f}\in L_{2}^{m}(Q):\int\mathbf{f}\,dQ=0\big{\}}\). For \(\mathbf{f}\in L_{2}^{m}(Q)\), write \(\bar{\mathbf{f}}=n^{-1}\sum_{j=1}^{n}\mathbf{f}(Z_{j})\) for the sample average of \(\mathbf{f}(Z_{1}),\ldots,\mathbf{f}(Z_{n})\), and \([\mathbf{f}]\) for the closed linear span of the components \(f_{1},\ldots,f_{m}\) in \(L_{2}(Q)\). Let \(Z\) be an i.i.d. copy of \(Z_{1}\). Denote by \([\mathbf{u}_{\infty}]\) the
closed linear span of \(\mathbf{u}_{\infty}=(u_{1},u_{2},\dots)\) in \(L_{2,0}(Q)\). Set
\[\mathbf{W}_{n}=\operatorname{Var}(\mathbf{u}_{n}(Z)),\quad\tilde{\mathbf{W}}_{n} =\frac{1}{n}\sum_{j=1}^{n}(\mathbf{u}_{n}\mathbf{u}_{n}^{\top})(Z_{j}),\quad \hat{\mathbf{W}}_{n}=\frac{1}{n}\sum_{j=1}^{n}(\hat{\mathbf{u}}_{n}\hat{ \mathbf{u}}_{n}^{\top})(Z_{j}).\]
Following Peng and Schick (2013), a sequence \(\mathbf{W}_{n}\) of \(m_{n}\times m_{n}\) dispersion matrices is said to be _regular_ if
\[0<\inf_{n}\inf_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbf{W}_{n}\mathbf{u}\leq \sup_{n}\sup_{\|\mathbf{u}\|=1}\mathbf{u}^{\top}\mathbf{W}_{n}\mathbf{u}<\infty.\]
**Theorem 1.1**.: _Suppose that \(\mathbf{u}_{n}\) satisfies (C) for each \(m=m_{n}\) such that_
\[\max_{1\leq j\leq n}\|\mathbf{u}_{n}(Z_{j})\|=o_{p}(m_{n}^{-3/2}n^{1/2}), \tag{1.6}\]
_the sequence of \(m_{n}\times m_{n}\) dispersion matrices \(\mathbf{W}_{n}\) is regular and satisfies_
\[|\tilde{\mathbf{W}}_{n}-\mathbf{W}_{n}|_{o}=o_{p}(m_{n}^{-1}), \tag{1.7}\]
\[\frac{1}{n}\sum_{j=1}^{n}\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{u}_{n} (Z_{j})-E\big{(}\boldsymbol{\psi}(Z_{j})\otimes\mathbf{u}_{n}(Z_{j})\big{)} \big{)}=o_{p}(m_{n}^{-1/2}). \tag{1.8}\]
_Then \(\tilde{\boldsymbol{\theta}}_{n}\) is semiparametrically efficient as \(m_{n}\to\infty\). Moreover,_
\[\sqrt{n}(\tilde{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}){\Longrightarrow} \mathcal{N}(0,\Sigma_{0}),\]
_where \(\Sigma_{0}=\operatorname{Var}(\boldsymbol{\psi}(Z))-\operatorname{Var}( \boldsymbol{\varphi}_{0}(Z))\) with \(\boldsymbol{\varphi}_{0}=\Pi(\boldsymbol{\psi}|[\mathbf{u}_{\infty}])\)._
Proof.: We only need to show the efficiency. It suffices to prove that the orthogonal complement \(\mathcal{T}=[\mathbf{u}_{\infty}]^{\perp}\) in \(L_{2,0}(Q)\) is the tangent space. To this end, let \(Q_{t}:|t|\leq t_{0}\) with \(Q_{0}=Q\) be a regular parametric submodel with score function \(a\). By (C),
\[\int u\,dQ_{t}=0,\quad u\in[\mathbf{u}_{\infty}].\]
Differentiating both sides of the equality with respect to \(t\) at \(t=0\) yields
\[\int ua\,dQ=0,\quad u\in[\mathbf{u}_{\infty}].\]
This shows \(a\in\mathcal{T}\). For any bounded \(a\in\mathcal{T}\), consider \(q_{t}=dQ_{t}/dQ=1+at,|t|\leq t_{0}\) for sufficiently small \(t_{0}\). It is clear that \(q_{t}\) is a density, and the submodel with this density has score function \(a\), which satisfies \(\int ua\,dQ=0\). Since the bounded functions are dense in \(\mathcal{T}\), the above conclusion holds for any \(a\in\mathcal{T}\). This shows \(\mathcal{T}\) is the tangent space.
The article is organized as follows. In Section 2, the EL-weighted spatial depth function is constructed, and its ASN and efficiency are established in the presence of distributional symmetry. The ASN and efficiency of the EL-weighted estimators of linear functionals are proved when the marginal distribution functions are known in Section 3, and when the marginal distributions are unknown but equal in Section 4. The simulation results are reported in Section 5. Section 6 contains Theorem 4 of Wang and Peng (2022).
## 2 The EL-weighted spatial median
In this section, we introduce the EL-weighted spatial depth function and establish its efficiency and asymptotic normality.
The statistical depth functions provide a center-outward ordering of a point in \(\mathcal{R}^{p}\) with respect to a distribution. High depth values correspond to centrality while low values to "outlyingness". Depth functions possess a robustness property and can be used to define multivariate medians, which are robust location estimators. Common depth functions include the Tukey depth (halfspace depth), the simplicial depth, the projection depth, and the spatial depth. Here we shall use the easy EL-approach to construct improved depths, and illustrate it with the spatial depth. The (population) spatial depth function \(D(\mathbf{x})\) with respect to a distribution \(F\) is defined as
\[D(\mathbf{x})=1-\|E\big{(}\mathbb{S}(\mathbf{x}-\mathbf{X})\big{)}\|,\quad \mathbf{x}\in\mathcal{R}^{p},\]
where \(\mathbb{S}(\mathbf{x})=\mathbf{x}/\|\mathbf{x}\|\) if \(\mathbf{x}\neq 0\) (\(\mathbb{S}(0)=0\)) is the spatial sign function and \(\mathbf{X}\) has the distribution function (DF) \(F(\mathbf{x})\), denoted by \(\mathbf{X}\sim F(\mathbf{x})\). The depth function \(D(\mathbf{x})\) can be estimated by the sample depth function given by
\[D_{n}(\mathbf{x})=1-\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\mathbb{S}_{\mathbf{x}}( \mathbf{X}_{i})\Big{\|},\]
where \(\mathbb{S}_{\mathbf{x}}(\mathbf{t})=\mathbb{S}(\mathbf{t}-\mathbf{x})\). The sample spatial median \(\mathbf{m}_{n}\) is defined as the value which maximizes the depth function, that is,
\[\mathbf{m}_{n}=\arg\max_{\mathbf{x}\in\mathcal{R}^{p}}D_{n}(\mathbf{x})=\arg \min_{\mathbf{x}\in\mathcal{R}^{p}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\mathbb{S }_{\mathbf{x}}(\mathbf{X}_{i})\Big{\|}.\]
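A minimal numerical sketch of the sample spatial depth and spatial median follows; the helper names and the choice of optimizer are purely illustrative assumptions (the sign convention \(\mathbb{S}(\mathbf{X}_{i}-\mathbf{x})\) versus \(\mathbb{S}(\mathbf{x}-\mathbf{X}_{i})\) does not affect the norm).

```python
import numpy as np
from scipy.optimize import minimize

def spatial_signs(x, X):
    """Rows S(X_i - x), with the convention S(0) = 0."""
    diffs = np.asarray(X, dtype=float) - np.asarray(x, dtype=float)
    norms = np.linalg.norm(diffs, axis=1)
    out = np.zeros_like(diffs)
    nonzero = norms > 0
    out[nonzero] = diffs[nonzero] / norms[nonzero, None]
    return out

def sample_spatial_depth(x, X):
    """D_n(x) = 1 - || (1/n) sum_i S_x(X_i) ||."""
    return 1.0 - np.linalg.norm(spatial_signs(x, X).mean(axis=0))

def sample_spatial_median(X):
    """Maximize D_n, i.e. minimize the norm of the averaged spatial signs."""
    objective = lambda x: np.linalg.norm(spatial_signs(x, X).mean(axis=0))
    return minimize(objective, X.mean(axis=0), method="Nelder-Mead").x
```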
Suppose that there is available additional information that can be expressed by a constraint function \(\mathbf{u}\). While the sample depth \(D_{n}(\mathbf{x})\) does not utilize the information, the _EL-weighted depth function_\(\widetilde{D}_{n}(\mathbf{x})\) makes use of it and is defined by
\[\widetilde{D}_{n}(\mathbf{x})=1-\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\frac{ \mathbb{S}_{\mathbf{x}}(\mathbf{X}_{i})}{1+\mathbf{u}(\mathbf{X}_{i})^{\top} \boldsymbol{\tilde{\zeta}}}\Big{\|},\quad\mathbf{x}\in\mathcal{R}^{p}, \tag{2.1}\]
where \(\boldsymbol{\tilde{\zeta}}\) is the solution to the equation
\[\sum_{j=1}^{n}\frac{\mathbf{u}(\mathbf{X}_{j})}{1+\mathbf{u}(\mathbf{X}_{j}) ^{\top}\boldsymbol{\zeta}}=0. \tag{2.2}\]
The EL-weighted spatial median \(\widetilde{\mathbf{m}}\) is defined as the value which maximizes the EL-weighted depth function, that is,
\[\widetilde{\mathbf{m}}=\arg\max_{\mathbf{x}\in\mathcal{R}^{p}}\widetilde{D}_{ n}(\mathbf{x})=\arg\min_{\mathbf{x}\in\mathcal{R}^{p}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n} \frac{\mathbb{S}(\mathbf{x}-\mathbf{X}_{i})}{1+\mathbf{u}(\mathbf{X}_{i})^{ \top}\boldsymbol{\tilde{\zeta}}}\Big{\|}. \tag{2.3}\]
The EL-weighted estimator of \(\mathbf{\theta}(\mathbf{x})=E(\mathbb{S}_{\mathbf{x}}(\mathbf{X}))\) is given by
\[\mathbf{\tilde{\theta}}(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbb{S}_{ \mathbf{x}}(\mathbf{X}_{i})}{1+\mathbf{u}(\mathbf{X}_{i})^{\top}\mathbf{\tilde{ \zeta}}},\quad\mathbf{x}\in\mathbf{R}^{p}. \tag{2.4}\]
**Remark 2.1**.: The sample spatial depth \(D_{n}(\mathbf{x})\) is robust, with breakdown point \(1/2\). The EL-weighted \(\widetilde{D}_{n}(\mathbf{x})\) improves efficiency but reduces robustness, which results from the EL-weights possibly taking values near zero. One can robustify \(\widetilde{D}_{n}(\mathbf{x})\) by truncating the EL-weights from below by a fixed constant. Truncation is commonly used in the inverse probability weighting method. Obviously, truncation leads to some loss of efficiency.
**Known marginal medians**. In our simulation study, we looked at the side information that the bivariate random vector \(\mathbf{X}=(X_{1},X_{2})^{\top}\) has _known_ marginal medians \(m_{10}\) and \(m_{20}\). That is, the componentwise median \((m_{10},m_{20})^{\top}\) is known. In this case, \(\mathbf{u}(x_{1},x_{2})=(\mathbf{1}[x_{1}\leq m_{10}]-1/2,\mathbf{1}[x_{2} \leq m_{20}]-1/2)^{\top}\). We are motivated as follows. It is well known that the spatial median is a better location estimator than the componentwise median because the former takes into account the correlation of the components while the latter ignores it, see Chen, Dang, Peng and Bart (2009). We are interested in how much information is lost when the componentwise median is used, and we assess this by examining how much efficiency the EL-weighted spatial median \(\tilde{\mathbf{m}}\) (which uses the known marginal medians) gains over the sample spatial median (which does not use them).
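A sketch of this constraint and of the resulting EL-weighted spatial median is given below; it reuses the hypothetical `el_weights` and `spatial_signs` helpers from the earlier sketches, and the example values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def median_constraints(X, m10, m20):
    """u(x1, x2) = (1[x1 <= m10] - 1/2, 1[x2 <= m20] - 1/2), known marginal medians."""
    return np.column_stack([(X[:, 0] <= m10) - 0.5, (X[:, 1] <= m20) - 0.5])

def el_weighted_spatial_median(X, U):
    """Minimize || sum_i pi_i S_x(X_i) ||, with EL weights pi_i built from U."""
    w = el_weights(U)  # from the EL-weights sketch in Section 1
    objective = lambda x: np.linalg.norm(w @ spatial_signs(x, X))
    return minimize(objective, X.mean(axis=0), method="Nelder-Mead").x

# Example usage when both marginal medians are known to equal zero:
# m_tilde = el_weighted_spatial_median(X, median_constraints(X, 0.0, 0.0))
```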
**Growing number of constraints**. Suppose that there exists some constant vector \(\mathbf{c}\) such that \(T=\mathbf{c}^{\top}\mathbf{X}\) is _symmetric_ about some known value \(\tau_{0}\). Let \(\varepsilon_{j}=\mathbf{c}^{\top}\mathbf{X}_{j}-\tau_{0},j=1,\ldots,n\). Then \(\varepsilon_{j}\)'s are i.i.d. random variables which are symmetric about zero. Let \(\varepsilon\) be an i.i.d. copy of \(\varepsilon_{j}\)'s, and let \(F\) be the distribution function of \(\varepsilon\). Let \(L_{2,0}(F,\text{odd})\) be the subspace of \(L_{2,0}(F)\) consisting of the odd functions. Symmetry of \(\varepsilon\) about \(0\) implies
\[E(a(\varepsilon))=0,\quad a\in L_{2,0}(F,\text{odd}).\]
Let \(s_{k}(t)=\sin(k\pi t),t\in[-1,1],k=1,2,...\) be the orthonormal trigonometric basis. Define \(G(t)=2F(t)-1,t\in\mathcal{R}\). Then \(G(t)\) is an odd function in \(L_{2,0}(F,\text{odd})\), and \(s_{k}(G(t)),k=1,2,...\) form a basis of the space.
In this case, the constraints are \(\mathbf{u}_{n}(\mathbf{X}_{j})=(s_{1}(G(\varepsilon_{j})),...,s_{m_{n}}(G( \varepsilon_{j})))^{\top}\), where we allow \(m_{n}\) to grow to infinity slowly with \(n\). The EL-weighted depth function is calculated by (2.1) with \(\mathbf{u}=\mathbf{u}_{n}\) and \(\mathbf{\tilde{\zeta}}=\mathbf{\tilde{\zeta}}_{n}\) which solves Eq (2.2) with \(\mathbf{u}=\mathbf{u}_{n}\). The EL-weighted estimator of \(\mathbf{\theta}(\mathbf{x})=E(\mathbb{S}_{\mathbf{x}}(\mathbf{X}))\) then is
\[\mathbf{\tilde{\theta}}_{n}(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbb{S} _{\mathbf{x}}(\mathbf{X}_{i})}{1+\mathbf{u}(\mathbf{X}_{i})^{\top}\mathbf{\tilde {\zeta}}_{n}},\quad\mathbf{x}\in\mathbf{R}^{p}. \tag{2.5}\]
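To make the construction of these constraints concrete, the following sketch (our own helper names; the residual law and \(m_{n}\) are arbitrary choices for illustration) builds the matrix of constraint vectors \(\mathbf{u}_{n}(\mathbf{X}_{j})\) from the trigonometric basis when \(F\), and hence \(G\), is known.

```python
import numpy as np
from scipy.stats import t as student_t

def symmetry_constraints(eps, F, m_n):
    """Rows u_n(eps_j) = (sin(k*pi*G(eps_j)))_{k=1..m_n} with G(t) = 2*F(t) - 1,
    where eps_j = c' X_j - tau_0 are residuals symmetric about zero."""
    G = 2.0 * F(eps) - 1.0
    k = np.arange(1, m_n + 1)
    return np.sin(np.pi * np.outer(G, k))          # shape (n, m_n)

rng = np.random.default_rng(2)
eps = rng.standard_t(df=3, size=500)               # symmetric residuals
U = symmetry_constraints(eps, F=lambda x: student_t.cdf(x, df=3), m_n=3)
print(U.shape, np.round(U.mean(axis=0), 3))        # column means are close to zero
```

The resulting matrix can then be passed to the same empirical-likelihood tilting step sketched above to obtain \(\boldsymbol{\tilde{\zeta}}_{n}\) and the estimator (2.5).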
**Theorem 2.1**.: _Suppose that \(F\) is continuous. Then for arbitrary but fixed \(\mathbf{x}\in\mathbf{R}^{p}\), as \(m_{n}\rightarrow\infty\) such that \(m_{n}^{4}/n\to 0\), \(\mathbf{\tilde{\theta}}_{n}(\mathbf{x})\) in (2.5) satisfies_
\[\mathbf{\tilde{\theta}}_{n}(\mathbf{x})=\bar{\mathbb{S}}_{\mathbf{x}}-\bar{\mathbf{ \varphi}}_{\mathbf{x}0}+o_{p}(n^{-1/2}),\]
_where \(\boldsymbol{\varphi}_{\mathbf{x}0}=\Pi(\mathbb{S}_{\mathbf{x}}(\mathbf{X})|L_{2,0}( F,\mathrm{odd}))\) is the projection of \(\mathbb{S}_{\mathbf{x}}(\mathbf{X})\) onto \(L_{2,0}(F,\mathrm{odd})\). As a consequence, if \(\Sigma_{0}(\mathbf{x})=\mathrm{Var}(\mathbb{S}_{\mathbf{x}}(\mathbf{X}))- \mathrm{Var}(\boldsymbol{\varphi}_{\mathbf{x}0}(\mathbf{X}))\) is nonsingular,_
\[\sqrt{n}(\boldsymbol{\tilde{\theta}}_{n}(\mathbf{x})-\boldsymbol{\theta}( \mathbf{x})){\Longrightarrow}\mathscr{N}(0,\Sigma_{0}(\mathbf{x})).\]
Proof of Theorem 2.1.: We shall apply Theorem 1.1 to prove the result. Since \(\mathbf{W}_{n}=E((\mathbf{u}\mathbf{u}^{\top})(\mathbf{X}))=\mathbf{I}_{m_{n}}\) is the identity matrix, it follows that (C) holds and \(\mathbf{W}_{n}\) is regular. As \(\|\mathbf{u}_{n}(\mathbf{X}_{j})\|\leq m_{n}^{1/2}\) for each \(j\) and \(m_{n}^{4}/n=o(1)\), (1.6) is satisfied, while (1.7) holds in view of the inequalities
\[nE(|\mathbf{\bar{W}}_{n}-\mathbf{W}_{n}|_{o}^{2})\leq E(\|\mathbf{u}_{n}( \mathbf{X})\|^{4})\leq m_{n}^{2}.\]
Let \(\mathbf{K}_{n}\) be the left hand side of (1.8). Then (1.8) follows from
\[nE(\|\mathbf{K}_{n}\|^{2})\leq E(\|\mathbb{S}_{\mathbf{x}}(\mathbf{X})\otimes \mathbf{u}_{n}(\mathbf{X})\|^{2})\leq m_{n}E(\|\mathbb{S}_{\mathbf{x}}(\mathbf{ X})\|^{2})=m_{n}.\]
We now apply Theorem 1.1 to complete the proof.
**Efficiency gain and ASN for \(\tilde{\mathbf{m}}\)**. By the properties of empirical likelihood, one concludes that \(\widetilde{D}_{n}(\mathbf{x})\) is a valid depth function at least for large \(n\) as all \(1+\mathbf{u}(\mathbf{X}_{i})^{\top}\tilde{\boldsymbol{\zeta}}>0\). Fix \(\mathbf{x}\in\mathcal{R}^{p}\), let \(\mathbf{P}_{\mathbf{x}}\) be the projection of \(\mathbb{S}_{\mathbf{x}}(\mathbf{X})\) onto the closed linear span \([\mathbf{u}_{\infty}]=L_{2,0}(F,odd)\). Then \(\Sigma_{0}(\mathbf{x})=\mathrm{Var}(\mathbb{S}_{\mathbf{x}}(\mathbf{X}))- \mathrm{Var}(\mathbf{P}_{\mathbf{x}}(\mathbf{X}))\). Clearly,
\[\mathrm{Var}(\mathbf{P}_{\mathbf{x}}(\mathbf{X}))=E\big{(}\mathbb{S}_{\mathbf{x}}(\mathbf{X})\otimes\mathbf{u}(\mathbf{X})^{\top}\big{)}\mathbf{W}^{-1}E\big{(}\mathbb{S}_{\mathbf{x}}(\mathbf{X})\otimes\mathbf{u}(\mathbf{X})\big{)}. \tag{2.6}\]
Let \(\mathbb{S}_{2}(\mathbf{x})=\mathbb{S}\big{(}E(\mathbb{S}_{\mathbf{x}}(\mathbf{X}))\big{)}\). If \(\mathbf{W}_{0}(\mathbf{x}):=\mathbb{S}_{2}(\mathbf{x})\mathbf{V}_{0}(\mathbf{x})\mathbb{S}_{2}(\mathbf{x})^{\top}\) is nonsingular, then by Theorem 2.1, for fixed \(\mathbf{x}\in\mathcal{R}^{p}\),
\[\sqrt{n}(\widetilde{D}_{n}(\mathbf{x})-D(\mathbf{x})){\Longrightarrow} \mathscr{N}(0,\mathbf{W}_{0}(\mathbf{x})).\]
Note that the sample depth \(D_{n}(\mathbf{x})\) satisfies
\[\sqrt{n}(D_{n}(\mathbf{x})-D(\mathbf{x})){\Longrightarrow}\mathscr{N}(0, \mathbf{W}(\mathbf{x})),\]
where \(\mathbf{W}(\mathbf{x})=\mathbb{S}_{2}(\mathbf{x})\,\mathrm{Var}(\mathbb{S}_{ \mathbf{x}}(\mathbf{X}))\mathbb{S}_{2}(\mathbf{x})^{\top}\). Thus the reduction of the asymptotic variance-covariance of the EL-weighted depth \(\widetilde{D}_{n}(\mathbf{x})\) is
\[\mathbb{S}_{2}(\mathbf{x})\,\mathrm{Var}(\mathbf{P}_{\mathbf{x}}(\mathbf{X})) \mathbb{S}_{2}(\mathbf{x})^{\top}.\]
We now use the Delta method to derive the ASN of the EL-weighted spatial median \(\tilde{\mathbf{m}}\). To this end, we need some results from Chaudhuri (1992) in the case of \(m=1\), for which the spatial median corresponds to his multivariate Hodges-Lehmann type location estimate. The following is his Assumption 3.1.
* \(\mathbf{X}_{1}\), \(\ldots\), \(\mathbf{X}_{n}\) are i.i.d random vectors in \(\mathcal{R}^{d}\) with an absolutely continuous (with respect to the Lebesgue measure) distribution having a density \(f\) that is bounded on every bounded subset of \(\mathcal{R}^{d}\).
Assume (PC) and \(d\geq 2\). Let \(\mathbf{H}(\mathbf{x})=\|\mathbf{x}\|^{-1}(\mathbf{I}_{d}-\mathbf{x}\mathbf{x}^{\top}/\|\mathbf{x}\|^{2})\) if \(\mathbf{x}\neq 0\) and \(\mathbf{H}(0)=0\). Note that \(\mathbb{S}(\mathbf{x})\) and \(\mathbf{H}(\mathbf{x})\) are the first and second order partial derivatives of \(\|\mathbf{x}\|\). Under (PC), the underlying distribution is absolutely continuous with respect to the Lebesgue measure on \(\mathcal{R}^{d}\) (\(d\geq 2\)), hence the (population) spatial median \(\mathbf{m}_{0}\) uniquely exists and satisfies the equation \(E(\mathbb{S}(\mathbf{m}_{0}-\mathbf{X}))=0\). The spatial median \(\mathbf{m}_{n}\) satisfies
\[\sum_{i=1}^{n}\mathbb{S}(\mathbf{m}_{n}-\mathbf{X}_{i})=0.\]
Let \(\mathbf{J}=E\big{(}(\mathbb{S}\mathbb{S}^{\top})(\mathbf{m}_{0}-\mathbf{X}) \big{)}\) and \(\mathbf{K}=E\big{(}\mathbf{H}(\mathbf{m}_{0}-\mathbf{X})\big{)}\). Chaudhuri (1992) showed in his Theorem 3.3 and its corollary that if (PC) holds then the matrices \(\mathbf{J}\) and \(\mathbf{K}\) are positive definite and \(\mathbf{m}_{n}\) satisfies
\[\sqrt{n}(\mathbf{m}_{n}-\mathbf{m}_{0})\Longrightarrow\mathscr{N}(0,\, \mathbf{K}^{-1}\mathbf{J}\mathbf{K}^{-\top}).\]
Note that the EL-weighted spatial median \(\widetilde{\mathbf{m}}_{n}\) satisfies the equation,
\[\sum_{i=1}^{n}\frac{\mathbb{S}(\mathbf{m}-\mathbf{X}_{i})}{1+\mathbf{u}( \mathbf{X}_{i})^{\top}\widehat{\boldsymbol{\zeta}}}=0.\]
Using the Delta method, we derive, with \(\mathbf{V}_{0}(\mathbf{m}_{0})=\mathbf{J}-\mathrm{Var}(\mathbf{P}_{\mathbf{m} _{0}}(\mathbf{X}))\),
\[\sqrt{n}(\widetilde{\mathbf{m}}_{n}-\mathbf{m}_{0})\Longrightarrow\mathscr{N} (0,\,\mathbf{K}^{-1}\mathbf{V}_{0}(\mathbf{m}_{0})\mathbf{K}^{-\top}),\]
where \(\mathrm{Var}(\mathbf{P}_{\mathbf{m}_{0}}(\mathbf{X}))\) is calculated by (2.6).
**Growing number of estimated constraints**. For unknown \(F(x)\), we estimate it by the symmetrized empirical distribution function,
\[\mathbb{F}(x)=\frac{1}{n}\sum_{j=1}^{n}\frac{\mathbf{1}[\varepsilon_{j}\leq x ]+\mathbf{1}[-\varepsilon_{j}\leq x]}{2},\quad x\in\mathcal{R}.\]
Let \(\mathbb{G}(x)=2\mathbb{F}(x)-1\). We thus obtain computable functions \(s_{j}(\mathbb{G}(x))\). Write \(\mathbf{u}_{n}\) for \(\mathbf{u}\), and estimate it by \(\hat{\mathbf{u}}_{n}(x)=(s_{1}(\mathbb{G}(x)),...,s_{m_{n}}(\mathbb{G}(x)))^{\top}\). The EL-weighted estimator of \(\boldsymbol{\theta}(\mathbf{x})=E(\mathbb{S}_{\mathbf{x}}(X))\) is now given by
\[\hat{\boldsymbol{\theta}}_{n}(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}\frac{ \mathbb{S}_{\mathbf{x}}(\mathbf{X}_{i})}{1+\hat{\mathbf{u}}_{n}(\mathbf{X}_{i })^{\top}\widehat{\boldsymbol{\zeta}}_{n}},\quad\mathbf{x}\in\mathcal{R}^{p}, \tag{2.7}\]
where \(\widehat{\boldsymbol{\zeta}}_{n}\) solves Eq. (2.2) with \(\mathbf{u}=\hat{\mathbf{u}}_{n}\). We have
**Theorem 2.2**.: _Suppose that \(F\) is continuous. Then \(\hat{\boldsymbol{\theta}}_{n}\) defined in (2.7) satisfies the conclusions of Theorem 2.1 as \(m_{n}\to\infty\) such that \(m_{n}^{6}/n\to 0\)._
Proof of Theorem 2.2.: We shall use Theorem 6.1 of Wang and Peng (2022) for the proof. First, (C) is satisfied with \(\mathbf{W}_{n}\) regular as \(\mathbf{W}_{n}=I_{m_{n}}\). Next, (6.1) follows from \(\|\hat{\mathbf{u}}_{n}(Z_{j})\|^{2}\leq m_{n}\) and \(m_{n}^{4}/n=o(1)\). Let
\[\hat{\mathbf{W}}_{n}=\frac{1}{n}\sum_{j=1}^{n}(\hat{\mathbf{u}}_{n}\hat{\mathbf{u}}_{n}^{\top})(\mathbf{X}_{j}),\quad\bar{\mathbf{W}}_{n}=\frac{1}{n}\sum_{j=1}^{n}(\mathbf{u}_{n}\mathbf{u}_{n}^{\top})(\mathbf{X}_{j}).\]
Then \(\bar{\mathbf{W}}_{n}-\mathbf{W}_{n}=o_{p}(m_{n}^{-1})\) follows from \(m_{n}^{4}/n=o(1)\) and
\[nE(|\bar{\mathbf{W}}_{n}-\mathbf{W}_{n}|_{o}^{2})\leq E(\|\mathbf{u}_{n}( \mathbf{Z}_{1})\|^{4})\leq m_{n}^{2}.\]
Let \(D_{n}=n^{-1}\sum_{j=1}^{n}\|\hat{\mathbf{u}}_{n}(\mathbf{X}_{j})-\mathbf{u}_{n }(\mathbf{X}_{j})\|^{2}\). It is easy to see
\[|\hat{\mathbf{W}}_{n}-\bar{\mathbf{W}}_{n}|_{o}\leq D_{n}+2|\bar{\mathbf{W}}_{n}|_{o}^{1/2}D_{n}^{1/2}.\]
Thus (6.2) follows from \(D_{n}=o_{p}(m_{n}^{-2})\) to be shown next. To this end, let \(\mathbf{s}_{n}=(s_{1},...,s_{m_{n}})^{\top}\). Then \(\|\mathbf{s}_{n}(t)\|\leq m_{n}^{1/2}\). One verifies \(\|\boldsymbol{\psi}_{n}^{\prime}(t)\|\leq am_{n}^{3/2}\) for some constant \(a\). Therefore, \(D_{n}=o_{p}(m_{n}^{-2})\) follows from \(D_{n}=O_{p}(m_{n}^{3}/n)\) and \(m_{n}^{5}/n=o(1)\), in view of
\[\frac{1}{n}\sum_{j=1}^{n}\|\mathbf{s}_{n}(\mathbb{G}(\mathbf{X}_{j}))- \mathbf{s}_{n}(G(\mathbf{X}_{j}))\|^{2}\leq am_{n}^{3}\sup_{t\in\mathcal{R}}| \mathbb{G}(t)-G(t)|^{2}=O_{p}(m_{n}^{3}/n).\]
Denoting \(\boldsymbol{\psi}(\mathbf{y})=\mathbb{S}_{\mathbf{x}}(\mathbf{y})\), we break
\[\frac{1}{n}\sum_{j=1}^{n}\Big{(}\boldsymbol{\psi}(\mathbf{X}_{j})\otimes\hat{ \mathbf{u}}_{n}(\mathbf{X}_{j})-E\big{(}\boldsymbol{\psi}(\mathbf{X}_{j}) \otimes\mathbf{u}_{n}(\mathbf{X}_{j}))\Big{)}=\mathbf{J}_{n}+\mathbf{K}_{n}, \quad\text{where}\]
\[\mathbf{J}_{n}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}(\mathbf{X}_{j}) \otimes\big{(}\hat{\mathbf{u}}_{n}(\mathbf{X}_{j})-\mathbf{u}_{n}(\mathbf{X}_ {j})\big{)},\]
\[\mathbf{K}_{n}=\frac{1}{n}\sum_{j=1}^{n}\Big{(}\boldsymbol{\psi}(\mathbf{X}_{j })\otimes\mathbf{u}_{n}(\mathbf{X}_{j})-E\big{(}\boldsymbol{\psi}(\mathbf{X}_ {j})\otimes\mathbf{u}_{n}(\mathbf{X}_{j})\big{)}\Big{)}.\]
By Cauchy inequality,
\[E(\|\mathbf{J}_{n}\|^{2}) \leq E(\|\boldsymbol{\psi}(\mathbf{X}_{1})\|^{2})\frac{1}{n}\sum_ {j=1}^{n}E(\|\hat{\mathbf{u}}_{n}(\mathbf{X}_{j})-\mathbf{u}_{n}(\mathbf{X}_ {j})\|^{2})\] \[=E(D_{n})=O(m_{n}^{3}/n)=o(m_{n}^{-1}),\]
as \(m_{n}^{4}/n=o(1)\). We now bound the variance by the second moment to get
\[E(\|\mathbf{K}_{n}\|^{2})\leq\frac{1}{n}E(\|\boldsymbol{\psi}(\mathbf{X}_{1}) \otimes\mathbf{u}_{n}(\mathbf{X}_{1})\|^{2})\leq 4\frac{m_{n}}{n}=o(m_{n}^{-1})\]
as \(m_{n}^{2}/n=o(1)\). Taken together we prove (6.3) - (6.4).
We now show that (6.5) holds with \(\mathbf{v}_{n}=\mathbf{u}_{n}\). To this end, using Taylor expansion we write
\[\frac{1}{n}\sum_{j=1}^{n}(\hat{\mathbf{u}}_{n}(\mathbf{X}_{j})-\mathbf{u}_{n}( \mathbf{X}_{j}))=\mathbf{L}_{n}+\mathbf{m}_{n},\quad\text{where}\]
\[\mathbf{L}_{n}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}_{n}^{\prime}(G( \varepsilon_{j}))\big{(}\mathbb{G}(\varepsilon_{j})-G(\varepsilon_{j})\big{)},\,\mathbf{m}_{n}=\frac{1}{n}\sum_{j=1}^{n}\boldsymbol{\psi}_{n}^{\prime \prime}(G_{nj}^{*})\big{(}\mathbb{G}(\varepsilon_{j})-G(\varepsilon_{j})\big{)} ^{2},\]
where \(G_{nj}^{*}\) lies in between \(\mathbb{G}(\varepsilon_{j})\) and \(G(\varepsilon_{j})\). It thus follows
\[E\big{(}\|\mathbf{L}_{n}\|^{2}\big{)} \leq\frac{1}{n}E\Big{(}\|\boldsymbol{\psi}_{n}^{\prime}(G( \varepsilon_{1}))\|^{2}\big{(}\mathbb{G}(\varepsilon_{j})-G(\varepsilon_{j}) \big{)}^{2}\Big{)}\] \[\leq a\frac{m_{n}^{3}}{n}\sup_{t\in\mathcal{R}}\big{(}\mathbb{G}( t)-G(t)\big{)}^{2}\] \[=O_{p}(m_{n}^{3}/n^{2})=o_{p}((m_{n}n)^{-1})\]
as \(m_{n}^{4}/n=o(1)\). This shows \(\mathbf{L}_{n}=o_{p}((m_{n}n)^{-1/2})\). One has \(\|\boldsymbol{\psi}^{\prime\prime}(t)\|=O_{p}(m_{n}^{5/2})\). Using this, we get
\[\|\mathbf{m}_{n}\|\leq O(m_{n}^{5/2})\sup_{t\in\mathcal{R}}|\mathbb{G}(t)-G(t )|^{2}=O_{p}(m_{n}^{5/2}/n)=o_{p}((m_{n}n)^{-1/2})\]
as \(m_{n}^{6}/n=o(1)\). This yields \(\mathbf{m}_{n}=o_{p}((m_{n}n)^{-1/2})\). Taken together the desired (6.5) follows. We now apply Theorem 6.1 to finish the proof.
## 3 Efficient estimation of linear functionals with known marginals
Suppose that there is available the information that the marginal distributions \(F\) and \(G\) of \(Q\) are _known_. This can be characterized by
\[\int c(x)\,dQ(x,y) =\int c(x)\,dF(x)=0,\quad c\in L_{2,0}(F),\] \[\int d(y)\,dQ(x,y) =\int d(y)\,dG(y)=0,\quad d\in L_{2,0}(G).\]
Bickel, et al. (1991) and Peng and Schick (2002) constructed efficient estimators of the linear functional \(\theta=\int\psi\,dQ\), and proved the ASN under the assumption,
* There exists \(\rho>0\) such that for arbitrary measurable sets \(A\) and \(B\), \[P(X\in A,Y\in B)\geq\rho F(A)G(B).\]
Bickel, et al. (1991) showed that the projection of \(\psi\in L_{2}(Q)\) onto the sum space \(L_{2,0}(F)+L_{2,0}(G)\) uniquely exists. They demonstrated that the asymptotic variance of the efficient estimator \(\tilde{\theta}\) of \(\theta\) can be substantially less than that of the empirical estimator \(n^{-1}\sum_{j=1}^{n}\psi(X_{j},Y_{j})\). For example, they showed that the empirical estimator \(n^{-1}\sum_{j=1}^{n}\mathbf{1}[X_{j}\leq 1/2,Y_{j}\leq 1/2]\) of \(\theta=P(X\leq 1/2,Y\leq 1/2)\) (taking \(\psi_{s,t}(x,y)=\mathbf{1}[x\leq s,y\leq t]\)) has three times the asymptotic variance of the efficient estimator \(\tilde{\theta}\) of \(\theta\) in the case that \(F\) and \(G\) are uniform distributions over \([0,1]\) and \(X,Y\) are independent.
Here we propose an efficient estimator based on maximum empirical likelihood. Employing a basis \(\{c_{k}\}\) of \(L_{2,0}(F)\) and \(\{d_{k}\}\) of \(L_{2,0}(G)\), we can reduce the uncountably many characterizing equations to countably many ones,
\[\int c_{k}(x)\,dF(x)=0,\quad\int d_{k}(y)\,dG(y)=0,\quad k=1,2,\ldots. \tag{3.1}\]
Suppose that \(F\) and \(G\) are continuous. This allows us to take \(c_{k}=b_{k}(F)\) and \(d_{k}=b_{k}(G)\), where \(b_{k}(t)\) are the trigonometric basis,
\[b_{k}(t)=\sqrt{2}\cos(k\pi t),\quad t\in[0,1],k=1,2,\ldots. \tag{3.2}\]
That is, \(\{c_{k}\}\) and \(\{d_{k}\}\) are bases of \(L_{2,0}(F)\) and \(L_{2,0}(G)\), respectively. Using the first \(2m_{n}\) terms as constraints, the EL-weighted estimator of \(\theta\) is
\[\hat{\theta}_{n}=\frac{1}{n}\sum_{j=1}^{n}\frac{\psi(\mathbf{Z}_{j})}{1+ \boldsymbol{\zeta}_{n}^{\top}\mathbf{u}_{n}(\mathbf{Z}_{j})}, \tag{3.3}\]
where \(\mathbf{u}_{n}(x,y)=(\mathbf{b}_{n}(F(x))^{\top},\mathbf{b}_{n}(G(y))^{\top} )^{\top}\) with \(\mathbf{b}_{n}=(b_{1},...,b_{m_{n}})^{\top}\). Using Theorem 1.1, we prove
**Theorem 3.1**.: _Suppose that \(F\) and \(G\) are continuous. Assume (K). Then, as \(m_{n}\to\infty\) such that \(m_{n}^{4}/n\to 0\),_
\[\hat{\theta}_{n}=\bar{\psi}-\bar{\varphi}_{0}+o_{p}(n^{-1/2}),\]
_where \(\varphi_{0}\) is the projection of \(\psi\) onto the sum space \(L_{2,0}(F)+L_{2,0}(G)\). Hence,_
\[\sqrt{n}(\hat{\theta}_{n}-\theta){\Longrightarrow}{\mathcal{N}}(0,\Sigma),\]
_where \(\Sigma=\operatorname{Var}(\psi(\mathbf{Z}))-\operatorname{Var}(\varphi_{0}( \mathbf{Z}))\)._
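As a purely numerical complement to the construction in (3.1)-(3.3) (a sketch under our own naming, taking independent uniform marginals and \(\psi(x,y)=\mathbf{1}[x\leq 1/2,y\leq 1/2]\) as in the example of Bickel, et al. discussed above), the constraint vectors \(\mathbf{u}_{n}\) and the EL-weighted estimate \(\hat{\theta}_{n}\) can be assembled as follows.

```python
import numpy as np
from scipy.optimize import root

def cosine_basis(t, m_n):
    """b_k(t) = sqrt(2) * cos(k * pi * t), k = 1, ..., m_n, for t in [0, 1], cf. (3.2)."""
    k = np.arange(1, m_n + 1)
    return np.sqrt(2.0) * np.cos(np.pi * np.outer(t, k))

rng = np.random.default_rng(3)
x, y = rng.uniform(size=(2, 1000))                    # known uniform marginals: F(t) = G(t) = t
m_n = 3
U = np.hstack([cosine_basis(x, m_n), cosine_basis(y, m_n)])   # rows u_n(x_j, y_j) of length 2*m_n
psi = ((x <= 0.5) & (y <= 0.5)).astype(float)

g = lambda z: (U / (1.0 + U @ z)[:, None]).mean(axis=0)       # empirical-likelihood equation
zeta = root(g, np.zeros(2 * m_n)).x
theta_hat = np.mean(psi / (1.0 + U @ zeta))                   # EL-weighted estimate, cf. (3.3)
print(theta_hat)                                              # close to P(X<=1/2, Y<=1/2) = 1/4
```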
**Remark 3.1**.: By Bickel, et al. (1991) (pp. 1328-29), the estimator \(\hat{\theta}_{n}\) in (3.3) of \(\theta=\int\psi(x,y)\,dQ(x,y)\) is semiparametrically efficient.
Proof of Theorem 3.1.: We shall rely on Theorem 1.1. Since \(\|\mathbf{u}_{n}\|\leq 2\sqrt{m_{n}}\) and \(m_{n}^{4}/n=o(1)\), it follows that (1.6) holds. Thus
\[nE(|\mathbf{\tilde{W}}_{n}-\mathbf{W}_{n}|_{o}^{2})\leq E(\|\mathbf{u}_{n}(\mathbf{Z})\|^{4})\leq 16m_{n}^{2},\quad\text{so that}\quad|\mathbf{\tilde{W}}_{n}-\mathbf{W}_{n}|_{o}^{2}=O_{p}(m_{n}^{2}/n)=o_{p}(m_{n}^{-2})\]
as \(m_{n}^{4}/n=o(1)\). This shows (1.7). Let
\[\mathbf{K}_{n}=\frac{1}{n}\sum_{j=1}^{n}\Big{(}\psi(\mathbf{Z}_{j})\otimes \mathbf{u}_{n}(\mathbf{Z}_{j})-E\big{(}\psi(\mathbf{Z}_{j})\otimes\mathbf{u}_ {n}(\mathbf{Z}_{j})\big{)}\Big{)}. \tag{3.4}\]
It follows from \(m_{n}^{2}/n=o(1)\) that (1.8) holds in view of
\[E(\|\mathbf{K}_{n}\|^{2})\leq\frac{1}{n}E(\|\psi(\mathbf{Z}_{1})\otimes \mathbf{u}_{n}(\mathbf{Z}_{1})\|^{2})\leq 4\frac{m_{n}}{n}E(|\psi(\mathbf{Z}_{1}) |^{2})=o(m_{n}^{-1}).\]
We are now left to prove the regularity of \(\mathbf{W}_{n}\). Since \(\mathbf{b}_{n}\) are the first \(m_{n}\) terms of the orthonormal basis \(\{b_{k}\}\), it follows that \(E(\mathbf{b}_{n}(F(X))\mathbf{b}_{n}(F(X))^{\top})=\mathbf{I}_{m_{n}}\). The same holds for \(G\). Let \(\mathbf{C}_{n}=E(\mathbf{b}_{n}(F(X))\mathbf{b}_{n}(G(Y))^{\top})\). Then \(\mathbf{W}_{n}\) is the \(2m_{n}\times 2m_{n}\) dispersion matrix whose (1,1)- and (2,2)-blocks are equal to \(\mathbf{I}_{m_{n}}\) and whose (1,2)-block is equal to \(\mathbf{C}_{n}\). For \(\mathbf{s},\mathbf{t}\in\mathcal{R}^{m_{n}}\) with \(\|\mathbf{s}\|^{2}+\|\mathbf{t}\|^{2}=1\), set \(\mathbf{r}=(\mathbf{s}^{\top},\mathbf{t}^{\top})^{\top}\). We have
\[\mathbf{r}^{\top}\mathbf{W}_{n}\mathbf{r}=\|\mathbf{s}\|^{2}+\|\mathbf{t}\|^{2 }+2\mathbf{s}^{\top}\mathbf{C}_{n}\mathbf{t}. \tag{3.5}\]
By Cauchy inequality,
\[(\mathbf{s}^{\top}\mathbf{C}_{n}\mathbf{t})^{2} \leq\mathbf{s}^{\top}E(\mathbf{b}_{n}(F(X))\mathbf{b}_{n}(F(X))^ {\top})\mathbf{s}\,\mathbf{t}^{\top}E(\mathbf{b}_{n}(G(Y))\mathbf{b}_{n}(G(Y) )^{\top})\mathbf{t}\] \[=\|\mathbf{s}\|^{2}\|\mathbf{t}\|^{2}\leq 1.\]
It thus follows from (3.5) that \(\mathbf{r}^{\top}\mathbf{W}_{n}\mathbf{r}\leq 4\) uniformly in \(n\) and the above \(\mathbf{r}\). For \(a\in L_{2,0}(F)\) and \(b\in L_{2,0}(G)\), (K) implies
\[\int(a(x)-b(y))^{2}\,dQ(x,y) \geq\rho\int(a(x)-b(y))^{2}dF(x)dG(y)\] \[=\rho\big{(}\int a^{2}\,dF+\int b^{2}\,dG\big{)}.\]
Thus
\[2\int ab\,dQ\leq(1-\rho)\big{(}\int a^{2}\,dF+\int b^{2}\,dG\big{)}.\]
Replacing \(a\) with \(-a\) yields
\[2\int ab\,dQ\geq-(1-\rho)\big{(}\int a^{2}\,dF+\int b^{2}\,dG\big{)}\]
Taking \(a=\mathbf{s}^{\top}\mathbf{b}_{n}(F)\) and \(b=\mathbf{b}_{n}(G)^{\top}\mathbf{t}\) and noticing
\[\int a^{2}\,dF=\|\mathbf{s}\|^{2},\quad\int b^{2}\,dG=\|\mathbf{t}\|^{2},\]
we derive
\[2\mathbf{s}^{\top}\mathbf{C}_{n}\mathbf{t}=2\int\mathbf{s}^{\top}\mathbf{b}_{ n}(F(x))\mathbf{b}_{n}(G(y))^{\top}\mathbf{t}\,dQ(x,y)\geq-(1-\rho)(\| \mathbf{s}\|^{2}+\|\mathbf{t}\|^{2}).\]
By (3.5), we thus arrive at
\[\mathbf{r}^{\top}\mathbf{W}_{n}\mathbf{r}\geq\|\mathbf{s}\|^{2}+\|\mathbf{t} \|^{2}-(1-\rho)(\|\mathbf{s}\|^{2}+\|\mathbf{t}\|^{2})=\rho>0.\]
Taken together we prove the regularity of \(\mathbf{W}_{n}\), and apply Theorem 1.1 to complete the proof.
## 4 Efficient estimation of linear functionals with equal marginals
Suppose that the marginal distributions \(F\) and \(G\) of \(X\) and \(Y\) are _equal but unknown_. This is equivalent to the assertion that
\[E(a_{k}(X)-a_{k}(Y))=0,\quad k=1,2,\ldots, \tag{4.1}\]
where \(\{a_{k}\}\) is an orthonormal basis of \(L_{2,0}(H)\) with \(H=(F+G)/2\). Assume that \(F\) and \(G\) are continuous. This allows us to take \(a_{k}(x)=b_{k}(H(x))\) under the assumption \(F=G=H\), where \(\{b_{k}\}\) is the trigonometric basis in (3.2). As \(H\) is unknown, we estimate it by the pooled empirical distribution function,
\[\mathbb{H}(x)=\frac{1}{n}\sum_{j=1}^{n}\frac{1}{2}(\mathbf{1}[X_{j}\leq x]+ \mathbf{1}[Y_{j}\leq x]),\quad x\in\mathcal{R}.\]
This gives us computable functions \(b_{k}(\mathbb{H}(x))\). Let \(\mathbf{u}_{n}(x,y)=\mathbf{b}_{n}(H(x))-\mathbf{b}_{n}(H(y)),x,y\in\mathcal{R}\). This is unknown and can be estimated by \(\hat{\mathbf{u}}_{n}(x,y)=\mathbf{b}_{n}(\mathbb{H}(x))-\mathbf{b}_{n}( \mathbb{H}(y))\). Using the first \(m_{n}\) terms as constraints, the EL-weighted estimator of \(\theta=E(\psi(X,Y))\) is given by
\[\hat{\theta}_{n}=\frac{1}{n}\sum_{j=1}^{n}\frac{\psi(X_{j},Y_{j})}{1+\boldsymbol {\hat{\zeta}}_{n}^{\top}\hat{\mathbf{u}}_{n}(X_{j},Y_{j})}, \tag{4.2}\]
where \(\boldsymbol{\hat{\zeta}}_{n}\) is the solution to Eq. (1.2) with \(\mathbf{u}=\hat{\mathbf{u}}_{n}\).
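For completeness, a small numerical sketch of this estimated-constraint construction (our own naming; the bivariate law and \(\psi\) are arbitrary illustrative choices) is given below: the pooled EDF \(\mathbb{H}\) replaces the unknown \(H\), and \(\hat{\mathbf{u}}_{n}\) is formed from the cosine basis.

```python
import numpy as np
from scipy.optimize import root

def cosine_basis(t, m_n):
    k = np.arange(1, m_n + 1)
    return np.sqrt(2.0) * np.cos(np.pi * np.outer(t, k))

def pooled_edf(x, y):
    """Pooled empirical distribution H(t) = (2n)^{-1} * sum_j (1[X_j<=t] + 1[Y_j<=t])."""
    pooled = np.sort(np.concatenate([x, y]))
    return lambda t: np.searchsorted(pooled, t, side="right") / pooled.size

rng = np.random.default_rng(4)
x, y = rng.exponential(size=(2, 1000))                 # equal but unknown marginals
H = pooled_edf(x, y)
m_n = 3
U = cosine_basis(H(x), m_n) - cosine_basis(H(y), m_n)  # estimated constraints u_hat_n(x_j, y_j)
psi = np.abs(x - y)                                    # example functional psi(X, Y) = |X - Y|

g = lambda z: (U / (1.0 + U @ z)[:, None]).mean(axis=0)
zeta = root(g, np.zeros(m_n)).x
theta_hat = np.mean(psi / (1.0 + U @ zeta))            # EL-weighted estimate, cf. (4.2)
print(theta_hat)
```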
Peng and Schick (2005) constructed efficient estimators of linear functionals of a bivariate distribution with equal marginals under the condition,
\[\inf_{a\in A}E[(a(X)-a(Y))^{2}]>0, \tag{4.3}\]
where \(\mathbb{A}=\{a\in L_{2,0}(H):\int a^{2}\,dH=1\}\) is the unit sphere in \(L_{2,0}(H)\). They exhibited that the asymptotic variance of an efficient estimator of \(\theta\) is about \(1/3\) of that of the empirical estimator or smaller.
Applying Theorem 6.1, we show that \(\hat{\theta}_{n}\) is efficient.
**Theorem 4.1**.: _Suppose that the distribution functions \(F\) and \(G\) are equal and continuous. Assume (4.3). Then, as \(m_{n}\to\infty\) such that \(m_{n}^{6}/n\to 0\), \(\hat{\theta}_{n}\) given in (4.2) satisfies_
\[\hat{\theta}_{n}=\bar{\psi}-\bar{\varphi}+o_{p}(n^{-1/2}),\]
_where \(\varphi\) is the projection of \(\psi\) onto \(\mathbb{A}\). Thus_
\[\sqrt{n}(\hat{\theta}_{n}-\theta)\Longrightarrow\mathscr{N}(0,\Sigma),\]
_where \(\Sigma=\operatorname{Var}(\psi)-\operatorname{Var}(\varphi)\)._
**Remark 4.1**.: By Theorem 3 of Peng and Schick (2005), the estimator \(\hat{\theta}_{n}\) given in (4.2) of \(\theta=\int\psi(x,y)\,dQ(x,y)\) is semiparametrically efficient.
Proof of Theorem 4.1.: We shall apply Theorem 6.1. Recalling the trigonometric basis \(\{b_{k}\}\) in (3.2), one readily verifies that \(\mathbf{b}_{n}=(b_{1},\ldots,b_{m_{n}})^{\top}\) has the properties,
\[\|\mathbf{b}_{n}\|\leq(2m_{n})^{1/2},\quad\|\mathbf{b}_{n}^{\prime}\|\leq \sqrt{2}\pi m_{n}^{3/2},\quad\|\mathbf{b}_{n}^{\prime\prime}\|\leq\sqrt{2}\pi ^{2}m_{n}^{5/2}, \tag{4.4}\]
where \({\bf b}_{n}^{\prime}\) and \({\bf b}_{n}^{\prime\prime}\) denote the first and second order derivatives of \({\bf b}\).
Recalling \({\bf u}_{n}(x,y)={\bf b}_{n}(H(x))-{\bf b}_{n}(H(y))\) and \(\hat{\bf u}_{n}(x,y)={\bf b}_{n}(\mathbb{H}(x))-{\bf b}_{n}(\mathbb{H}(y))\), one gets by the first inequality in (4.4) that
\[\|{\bf u}_{n}\|\leq 2\sqrt{2}\sqrt{m_{n}},\quad\|\hat{\bf u}_{n}\|\leq 2\sqrt{2} \sqrt{m_{n}}. \tag{4.5}\]
Hence (6.1) holds as \(m_{n}^{4}/n=o(1)\). Noting \({\bf W}_{n}=E({\bf u}_{n}({\bf Z}){\bf u}_{n}({\bf Z})^{\top})\), one has by (4.3) that

\[\boldsymbol{\lambda}^{\top}{\bf W}_{n}\boldsymbol{\lambda}=E\big{(}(\boldsymbol{\lambda}^{\top}{\bf b}_{n}(H(X))-\boldsymbol{\lambda}^{\top}{\bf b}_{n}(H(Y)))^{2}\big{)}\geq\inf_{a\in\mathbb{A}}E[(a(X)-a(Y))^{2}]>0\]

uniformly in \(n\) and \(\|\boldsymbol{\lambda}\|=1\), as both \(\boldsymbol{\lambda}^{\top}{\bf b}_{n}(H(X))\) and \(\boldsymbol{\lambda}^{\top}{\bf b}_{n}(H(Y))\) live in \(\mathbb{A}\). Moreover,
\[\boldsymbol{\lambda}^{\top}{\bf W}_{n}\lambda\leq 4E\big{(}(\boldsymbol{ \lambda}^{\top}{\bf b}_{n}(H(X)))^{2}\big{)}=4.\]
Thus \({\bf W}_{n}\) is regular. Let
\[\hat{\bf W}_{n}=\frac{1}{n}\sum_{j=1}^{n}\hat{\bf u}_{n}({\bf Z}_{j})\hat{\bf u }_{n}({\bf Z}_{j})^{\top},\quad\bar{\bf W}_{n}=\frac{1}{n}\sum_{j=1}^{n}{\bf u }_{n}({\bf Z}_{j}){\bf u}_{n}({\bf Z}_{j})^{\top}.\]
Then by the first inequality in (4.5),
\[nE(|\bar{\bf W}_{n}-{\bf W}_{n}|_{o}^{2})\leq E(|{\bf u}_{n}({\bf Z}_{1})|^{4} )\leq 64m_{n}^{2}.\]
Hence \(\bar{\bf W}_{n}-{\bf W}_{n}=o_{p}(m_{n}^{-1})\) as \(m_{n}^{4}/n=o(1)\). It can be seen
\[|\hat{\bf W}_{n}-\bar{\bf W}_{n}|_{o}\leq D_{n}+2|\bar{\bf W}_{n}|_{o}^{1/2}D_{n}^{1/2},\]
where \(D_{n}=n^{-1}\sum_{j=1}^{n}\|\hat{\bf u}_{n}({\bf Z}_{j})-{\bf u}_{n}({\bf Z}_{ j})\|^{2}\). Thus (6.2) is implied by
\[D_{n}=o_{p}(m_{n}^{-2}). \tag{4.6}\]
Using the second inequality in (4.4), we derive
\[\frac{1}{n}\sum_{j=1}^{n}\big{\|}{\bf b}_{n}(\mathbb{H}({\bf Z}_{j}))-{\bf b}_{n}(H({\bf Z}_{j}))\big{\|}^{2}\leq 2\pi^{2}m_{n}^{3}\sup_{t\in\mathcal{R}}|\mathbb{H}(t)-H(t)|^{2}=O_{p}(m_{n}^{3}/n).\]
Hence \(D_{n}=O_{p}(m_{n}^{3}/n)\) and (4.6) holds as \(m_{n}^{5}/n=o(1)\). We break
\[\frac{1}{n}\sum_{j=1}^{n}\Big{(}\psi({\bf Z}_{j})\otimes\hat{\bf u}_{n}({\bf Z }_{j})-E\big{(}\psi({\bf Z}_{j})\otimes{\bf u}_{n}({\bf Z}_{j}))\Big{)}=J_{n}+ K_{n},\]
where
\[{\bf J}_{n}=\frac{1}{n}\sum_{j=1}^{n}\psi({\bf Z}_{j})\otimes\big{(}\hat{\bf u }_{n}({\bf Z}_{j})-{\bf u}_{n}({\bf Z}_{j})\big{)},\]
\[{\bf K}_{n}=\frac{1}{n}\sum_{j=1}^{n}\Big{(}\psi({\bf Z}_{j})\otimes{\bf u}_{n} ({\bf Z}_{j})-E\big{(}\psi({\bf Z}_{j})\otimes{\bf u}_{n}({\bf Z}_{j})\big{)} \Big{)}.\]
By Cauchy inequality,
\[E(\|\mathbf{J}_{n}\|^{2}) \leq E(|\psi(\mathbf{Z}_{1})|^{2})\frac{1}{n}\sum_{j=1}^{n}E(\|\hat{\mathbf{u}}_{n}(\mathbf{Z}_{j})-\mathbf{u}_{n}(\mathbf{Z}_{j})\|^{2})=E(|\psi(\mathbf{Z}_{1})|^{2})E(D_{n})\] \[=O(m_{n}^{3}/n)=o(m_{n}^{-1})\]
where the last equality holds as \(m_{n}^{4}/n=o(1)\). We now bound the variance by the second moment and use the first inequality in (4.5) to get
\[E(\|\mathbf{K}_{n}\|^{2})\leq\frac{1}{n}E(|\psi(Z_{1})\otimes\mathbf{u}_{n}(Z_ {1})|^{2})\leq 8\frac{m_{n}}{n}E(|\psi(Z_{1})|^{2})=o(m_{n}^{-1})\]
as \(m_{n}^{2}/n=o(1)\). Taken together (6.3) follows. We now show (6.5) holds with \(\mathbf{v}_{n}=\mathbf{u}_{n}\). Using Taylor's expansion, we write
\[\frac{1}{n}\sum_{j=1}^{n}\big{(}\mathbf{b}_{n}(\mathbb{H}(X_{j}))-\mathbf{b}_ {n}(H(X_{j}))\big{)}=\mathbf{L}_{n}+\mathbf{M}_{n},\]
where
\[\mathbf{L}_{n} =\frac{1}{n}\sum_{j=1}^{n}\mathbf{b}_{n}^{\prime}(H(X_{j}))\big{(} \mathbb{H}(X_{j})-H(X_{j})\big{)},\] \[\mathbf{M}_{n} =\frac{1}{n}\sum_{j=1}^{n}\mathbf{b}_{n}^{\prime\prime}(H_{nj}^{ *})\big{(}\mathbb{H}(X_{j})-H(X_{j})\big{)}^{2},\]
where \(H_{nj}^{*}\) lies in between \(\mathbb{H}(X_{j})\) and \(H(X_{j})\). Using the second inequality in (4.4), we get
\[E\big{(}\|\mathbf{L}_{n}\|^{2}\big{)} \leq\frac{1}{n}E\Big{(}\|\mathbf{b}_{n}^{\prime}(H(X_{1}))\|^{2}\big{(} \mathbb{H}(X_{1})-H(X_{1})\big{)}^{2}\Big{)}\] \[\leq 2\pi^{2}\frac{m_{n}^{3}}{n}\sup_{t\in\mathcal{R}}\big{(} \mathbb{H}(t)-H(t)\big{)}^{2}\] \[=O_{p}(m_{n}^{3}/n^{2})=o_{p}((m_{n}n)^{-1})\]
as \(m_{n}^{4}/n=o(1)\). This shows \(\mathbf{L}_{n}=o_{p}((m_{n}n)^{-1/2})\). Using the third inequality in (4.4), one has as \(m_{n}^{6}/n=o(1)\) that
\[\|\mathbf{M}_{n}\|\leq\sqrt{2}\pi^{2}m_{n}^{5/2}\sup_{t\in\mathcal{R}}| \mathbb{H}(t)-H(t)|^{2}=O_{p}(m_{n}^{5/2}/n)=o_{p}((m_{n}n)^{-1/2}).\]
This yields \(\mathbf{M}_{n}=o_{p}((m_{n}n)^{-1/2})\). Taken together one proves (6.5). This and (4.6) imply (6.4) as \(m_{n}^{4}/n=o(1)\). Clearly, \(\mathbf{U}_{n}=\mathbf{I}_{m_{n}}\) satisfies \(|\mathbf{U}_{n}|_{o}=1=O(1)\). Peng and Schick (2005) showed that the projection of any \(h\in L_{2}(Q)\) onto \(\mathbb{A}\) uniquely exists under the assumption (4.3). Moreover, it is clear that \(b_{k}(H(x))-b_{k}(H(y)),k=1,2,\dots\) is a basis of \(\mathbb{A}\), so that \([\mathbf{u}_{\infty}]=\mathbb{A}\). We now apply Theorem 6.1 to complete the proof.
## 5 Simulations
We ran a simulation study to compare the efficiency of the EL-weighted spatial median \(\widetilde{\mathbf{m}}_{n}\) with the sample spatial median \(\mathbf{m}_{n}\) in the presence of a variety of side information. Reported in Tables 1-5 are the maximum eigenvalues of the asymptotic variance-covariance matrices and their ratios. Random samples were generated from 2- and 3-dimensional Cauchy distributions, the Student \(t(3)\) distribution with 3 degrees of freedom (df), the copula distributions (see the details in the Appendix) and the asymmetric Laplace distribution, for sample sizes \(n=50,100,200,500\). Based on \(M=2000\) repetitions, we calculated the averages of the maximum eigenvalues \(\lambda\) and \(\tilde{\lambda}\) (i.e. the spectral norms) of the asymptotic variance-covariance matrices of \(\mathbf{m}_{n}\) and \(\widetilde{\mathbf{m}}_{n}\), and the ratio \(\tilde{\lambda}/\lambda\). A ratio less than one indicates a reduction in the norm of the variance-covariance matrix of the EL-weighted spatial median from that of the sample spatial median.
For Table 1, the side information is that the componentwise medians are known. For Tables 2-5, the information is that one marginal is symmetric about the origin (\(m=1,3,5\) constraints considered), for which we looked at both a known and an unknown marginal (the latter estimated by the symmetrized EDF).
Observe that for the case of known componentwise medians, the efficiency gain of the EL-weighted spatial median over the sample spatial median exceeded 80%; for the case of a known or estimated symmetric marginal, the efficiency gain is more than 30%. All the ratios considered are substantially smaller than one, indicating substantial efficiency gains of the EL-weighted spatial depth over the sample depth. The simulation results also indicated that the componentwise median is less efficient than the spatial median, though not by much in the cases considered.
## 6 Declaration of interest statement
_The authors report there are no competing interests to declare._
|
2308.16642 | High-Precision Observable Estimation with Single Qubit Quantum Memory | The estimation of multi-qubit observables is a key task in quantum
information science. The standard approach is to decompose a multi-qubit
observable into a weighted sum of Pauli strings. The observable can then be
estimated from projective single qubit measurements according to the Pauli
strings followed by a classical summation. As the number of Pauli strings in
the decomposition increases, shot-noise drastically builds up, and the accuracy
of such estimation can be considerably compromised. Access to a single qubit
quantum memory, where measurement data may be stored and accumulated can
circumvent the build-up of shot noise. Here, we describe a many-qubit
observable estimation approach to achieve this with a much lower number of
interactions between the multi-qubit device and the single qubit memory
compared to previous approaches. Our algorithm offers a reduction in the
required number of measurements for a given target variance that scales
$N^{\frac{2}{3}}$ with the number of Pauli strings $N$ in the observable
decomposition. The low number of interactions between the multi-qubit device
and the memory is desirable for noisy intermediate-scale quantum devices. | L. A. Markovich, J. Borregaard | 2023-08-31T11:32:32Z | http://arxiv.org/abs/2308.16642v1 | # High-Precision Observable Estimation With Single Qubit Quantum Memory
###### Abstract
The estimation of multi-qubit observables is a key task in quantum information science. The standard approach is to decompose a multi-qubit observable into a weighted sum of Pauli strings. The observable can then be estimated from projective single qubit measurements according to the Pauli strings followed by a classical summation. As the number of Pauli strings in the decomposition increases, shot-noise drastically builds up, and the accuracy of such estimation can be considerably compromised. Access to a single qubit quantum memory, where measurement data may be stored and accumulated can circumvent the build-up of shot noise. Here, we describe a many-qubit observable estimation approach to achieve this with a much lower number of interactions between the multi-qubit device and the single qubit memory compared to previous approaches. Our algorithm offers a reduction in the required number of measurements for a given target variance that scales \(N^{\frac{2}{3}}\) with the number of Pauli strings \(N\) in the observable decomposition. The low number of interactions between the multi-qubit device and the memory is desirable for noisy intermediate-scale quantum devices.
**observable estimation; quantum decoherence; single qubit quantum memory; multi-qubit states.**
## 0 Introduction
Determining the expectation value of multi-qubit observables within a quantum system is a fundamental subject in numerous domains of quantum science. Notably, in condensed matter physics, materials science, quantum chemistry, and combinatorial optimization [1], the objective revolves around identifying spectral characteristics, such as the ground state energy or the lowest eigenvalue of a Hamiltonian. However, the direct estimation of the observable expectation value is not a straightforward task in these scenarios. The quantum phase estimation (QPE) algorithm [2, 3, 4, 5, 6, 7] offers a potential solution for ideal quantum processing units with extended coherence times. However, in the Noisy Intermediate-Scale Quantum (NISQ) era, QPE's implementation is hindered by coherence time limitations. To overcome this, the quantum energy (expectation) estimation (QEE) method is widely used, especially within the variational quantum eigensolver framework [8].
QEE requires just single qubit measurements and thus a minimum coherence time of the quantum device. However, it is not without its limitations. One such drawback is the accumulation of
shot noise during the estimation process, which degrades the overall precision of the estimate. In QEE, the individual Pauli strings are estimated separately, and subsequently, a linear combination of these estimates is utilized to determine the value of the observable.
Consequently, in order to estimate an observable composed of \(N\) Pauli strings with a variance of \(\eta\), each Pauli string should be estimated with a variance of \(O(\eta/N)\). The resulting sample complexity therefore scales as \(O(N^{2})\). This can pose a significant challenge since the total number of measurements will eventually be limited by the available run-time of the quantum device. It is important to note that the measurement process itself is often one of the most time-consuming operations in current quantum devices [9, 10, 11].
Recent studies have proposed strategies to minimize sample complexity by aggregating commuting sets of Pauli strings [12, 13, 14], serving as alternatives between QPE and QEE. While these approaches improve efficiency and reduce quantum resource requirements for obtaining the observable's expectation value, they face challenges. Notably, they do not address the fundamental scaling issue related to noise accumulation with the increasing number of Pauli strings in the observable decomposition [14, 15].
In our recent paper [16], we introduced a novel algorithm known as the Coherent Pauli Summation (CPS) method, which effectively mitigates the issue of shot-noise accumulation by leveraging a single-qubit quantum memory (QM). The pivotal aspect of the CPS method lies in the utilization of Quantum Signal Processing (QSP) techniques [17], to encode the mean value of Pauli strings within the phase of a single qubit QM. By circumventing the accumulation of shot noise, the CPS method achieves a significant improvement in the variance of the estimate compared to the Quantum Energy Estimation (QEE) method, scaling at \(O(N)\), where \(N\) is the number of Pauli strings in the observable decomposition. However, the QSP step necessitates multiple controlled many-qubit unitaries between the single qubit quantum memory and the many-qubit NISQ device for the encoding of each Pauli string. This can have a detrimental effect on the coherence of the quantum memory given the complexity of such operations. In general, it is desirable to limit the number of interactions with the quantum memory as much as possible to uphold the coherence of the stored information.
A single-qubit QM typically consists of a physical system, such as an individual ion [18, 19] or Rydberg atom [20], capable of storing and maintaining the quantum state of a qubit. Such a QM performs well when isolated from the environment, yielding low decoherence rates and long coherence times that preserve the stored information. Another possibility is an error-corrected QM [21, 22]. Its basic idea is to use a larger number of physical qubits to represent each logical qubit. These physical qubits are entangled with each other and form an error-correcting code, which allows the system to detect and correct errors without losing the encoded quantum information. Even if some of the physical qubits experience errors, the encoded logical qubit can be recovered with high fidelity. While error-corrected QM holds significant promise in mitigating errors and enhancing the reliability of quantum information processing, it also comes with several disadvantages, such as increased complexity and reduced processing speed. Moreover, the process of error correction itself introduces a possibility of logical errors due to imperfect gate operations or residual errors in the encoding. The CPS requires \(N\log{(N/\sqrt{\eta})}/\log{(\log{(N/\sqrt{\eta})})}\) controlled unitary operations between the NISQ qubits and the memory qubit, which is a challenging demand for current QMs.
In this article, we present an alternative approach to our CPS method. Instead of relying on QSP techniques for encoding the mean value of Pauli strings in the phase, we propose the utilization of a Taylor series approach. By employing the Taylor-based CPS method (TCPS), we achieve a variance improvement of the final estimate compared to the QEE method, scaling as \(N^{2/3}\). While the coherence time of the memory qubit required for both TCPS and the original CPS method scales linearly with \(N\), the TCPS method only requires a single controlled unitary between the memory qubit and the multi-qubit device for the encoding of a Pauli string compared to the
\(O(\log(1/\eta))\) unitaries required for the CPS method, where \(\eta\) is the target precision of the final multi-qubit observable estimate. Although the TCPS approach exhibits a slight scaling disadvantage compared to the original CPS method, it offers the advantage of fewer interactions between the single qubit memory and the multi-qubit device, which can be important for practical implementations.
The paper is organised as follows. In Section 1 we briefly outline the QEE method. In Section 2 the TCPS method is studied in detail. We compare it with the QEE method in terms of the variance of the resulting estimate and the required resources. All technical details are given in the Appendix.
## 1 QEE Technique
We start by going through the main steps of the QEE technique. Decomposition of an observable \(O\) into a weighted sum of Pauli strings
\[O=\sum_{j=1}^{N}a_{j}P_{j},\quad a_{j}\in\mathrm{R} \tag{1}\]
is a crucial step in the QEE for determining the expected value of a given observable for a particular quantum state \(|\Psi\rangle\). Here \(a_{j}\in R\) are the decomposition coefficients and \(P_{j}\) are the Pauli strings composed as tensor products of single qubit Pauli matrices and the identity. Since a set of \(d^{2}\) Pauli strings forms a complete operator basis for a Hilbert space with dimension \(d\), this decomposition is always possible.
The Pauli strings are measured sequentially or in parallel (if the strings are commuting), using single qubit projective measurements in order to provide an estimate of each Pauli string \(\langle P_{j}\rangle\equiv\langle\Psi|P_{j}|\Psi\rangle\)[8, 23]. The mean value of the observable \(O\) is then calculated by the classical summation of the latter estimates (see Fig. 1):
\[\langle O\rangle=\sum_{j=1}^{N}a_{j}\langle P_{j}\rangle. \tag{2}\]
It follows that the shot noise from the individually estimated mean values of the Pauli strings accumulates in the final estimate of \(\langle O\rangle\). If every \(\langle P_{j}\rangle\) is estimated with a variance \(\sigma^{2}(\langle\hat{P}_{j}\rangle)\) then, assuming equal weights of the Pauli strings in (1), the variance of the estimate of \(O\) is \(\sigma^{2}(\langle\hat{O}\rangle)\sim N\sigma^{2}(\langle\hat{P}_{j}\rangle)\).
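The build-up of shot noise can be made concrete with a small synthetic simulation (a sketch with made-up coefficients and mean values, not data from this work): each \(\langle P_{j}\rangle\) is estimated from a finite number of \(\pm 1\) outcomes, and the per-string variances simply add up in the weighted sum.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50                                    # number of Pauli strings in the decomposition
a = rng.normal(size=N)                    # decomposition coefficients a_j (synthetic)
p = rng.uniform(-0.9, 0.9, size=N)        # true expectation values <P_j> (synthetic)
shots = 1000                              # single-shot measurements per Pauli string

# A single shot of P_j returns +1 with probability (1 + <P_j>)/2 and -1 otherwise.
plus_counts = rng.binomial(shots, (1 + p) / 2)
p_hat = 2 * plus_counts / shots - 1                    # estimates of <P_j>
O_hat = np.sum(a * p_hat)                              # QEE estimate of <O>, Eq. (2)

# Shot-noise variance of the sum: sum_j a_j^2 (1 - <P_j>^2) / shots,
# which grows with N for a fixed number of shots per string.
var_O = np.sum(a**2 * (1 - p**2) / shots)
print(O_hat, np.sum(a * p), var_O)
```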
Since the dimension of the Hilbert space increases exponentially with the number of qubits, the number of Pauli strings \(N\) in the decomposition (1) can be very large. In particular, the encoding of fermionic Hamiltonians in a qubit lattice poses a problem for quantum simulators of fermionic systems. In [24], local fermionic Hamiltonian systems are mapped to local spin Hamiltonians (fermion-to-qubit mapping) suitable for analog quantum simulation applications. However, such transformations introduce long-range many-body interaction terms, which are beyond the scope of the current implementation and necessitate special reduction algorithms [25].
To overcome the accumulation of shot noise we employ _phase-kick back_ techniques to encode the mean value of each Pauli string into the phase of a single qubit QM, allowing the encoding of \(\langle O\rangle\) which can be directly measured.
## 2 Main Results
Let \(|\Psi_{0}\rangle\equiv V|\mathbf{0}\rangle\) be a quantum state of the NISQ device, where \(V\) is an invertible preparation circuit. By \(|\mathbf{0}\rangle\) we denote the state where all qubits are prepared in their ground states \(|0\rangle\). Our
target is to estimate the expectation value \(\langle\Psi_{0}|O|\Psi_{0}\rangle\) with a variance of \(\eta\). To this end, we define \(|\Psi_{j}\rangle\equiv P_{j}|\Psi_{0}\rangle\) and introduce the unitary operator
\[U_{P_{j}}=V\Pi_{0}V^{\dagger}P_{j}, \tag{3}\]
where \(\Pi_{0}=I-2\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\) is a projector, and \(I\) is the identity operator. This operator defines a rotation by a principal angle
\[\phi_{j}=\arccos|\langle\Psi_{0}|\Psi_{j}\rangle| \tag{4}\]
between two closed subspaces \(|\Psi_{0}\rangle\) and \(|\Psi_{j}\rangle\) of a Hilbert space. It holds that the state \(|\Psi_{0}\rangle\) is an equal superposition of eigenstates \(|\phi^{\pm}\rangle\) of \(U_{P_{j}}\) with eigenvalues \(e^{\pm i\phi}\), respectively [12, 26].
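The statement can be checked numerically on a toy example. The sketch below (our own arbitrary choice of a two-qubit preparation circuit and Pauli string, not taken from the paper) restricts \(U_{P_{j}}\) to the invariant plane spanned by \(|\Psi_{0}\rangle\) and \(P_{j}|\Psi_{0}\rangle\) and verifies that the eigenphases \(\theta\) satisfy \(|\cos\theta|=|\langle\Psi_{0}|P_{j}|\Psi_{0}\rangle|\), i.e. they encode the principal angle of Eq. (4), up to a possible shift by \(\pi\) that depends on the sign convention chosen for \(\Pi_{0}\).

```python
import numpy as np

# Two-qubit example: |Psi_0> = V|00>, P_j = X (x) Z (arbitrary illustrative choices).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

V = np.kron(H, ry(0.7))                       # preparation circuit (real for simplicity)
psi0 = V[:, 0]                                # V|00> is the first column of V
P = np.kron(X, Z)

# U_{P_j} = V Pi_0 V^dagger P_j with Pi_0 = I - 2|0><0|, i.e. (I - 2|Psi_0><Psi_0|) P_j.
U = (np.eye(4) - 2.0 * np.outer(psi0, psi0)) @ P

# Restrict U to the invariant plane spanned by |Psi_0> and P_j|Psi_0>.
q, _ = np.linalg.qr(np.column_stack([psi0, P @ psi0]))
theta = np.angle(np.linalg.eigvals(q.conj().T @ U @ q))

phi = np.arccos(abs(psi0 @ P @ psi0))         # principal angle, Eq. (4)
print(np.abs(np.cos(theta)), np.cos(phi))     # the printed values agree
```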
We introduce the state
\[\left(\sqrt{(1-\epsilon_{j}^{\prime})}|0\rangle_{p}+\sqrt{\epsilon_{j}^{ \prime}}|1\rangle_{p}\right)\otimes|\Psi_{0}\rangle,\quad\epsilon_{j}^{\prime} \in(0,1), \tag{5}\]
where the first register describes the processing ancilla. Acting on this state with the unitary operator \(|0\rangle_{p}\langle 0|\otimes I+|1\rangle_{p}\langle 1|\otimes P_{j}\), we get the following \(\epsilon^{\prime}\)-dependent state
\[|\tilde{\Psi}_{0}\rangle_{j}\equiv\sqrt{1-\epsilon_{j}^{\prime}}|0\rangle_{p }|\Psi_{0}\rangle+\sqrt{\epsilon_{j}^{\prime}}|1\rangle_{p}|\Psi_{j}\rangle. \tag{6}\]
Let us introduce \(|\tilde{\Psi}_{1}\rangle\equiv(\sigma_{x}\otimes I)|\tilde{\Psi}_{0}\rangle\) and calculate the overlap
\[|\langle\tilde{\Psi}_{0}|\tilde{\Psi}_{1}\rangle_{j}|=2\sqrt{\epsilon^{\prime }(1-\epsilon^{\prime})}|\langle\Psi_{0}|\Psi_{j}\rangle|. \tag{7}\]
To encode the weighted sum of the Pauli strings we select \(\sqrt{\epsilon_{j}^{\prime}(1-\epsilon_{j}^{\prime})}\equiv|a_{j}|\sqrt{\epsilon}\), where \(\epsilon\in(0,1)\) is a parameter.
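For concreteness, this choice fixes \(\epsilon_{j}^{\prime}\) explicitly (the following rewriting of the defining relation is ours): squaring both sides gives a quadratic equation,

\[\epsilon_{j}^{\prime}(1-\epsilon_{j}^{\prime})=a_{j}^{2}\epsilon\quad\Longrightarrow\quad\epsilon_{j}^{\prime}=\frac{1-\sqrt{1-4a_{j}^{2}\epsilon}}{2},\]

taking the smaller root so that \(\epsilon_{j}^{\prime}\in(0,1/2]\); a real solution exists provided \(\epsilon\leq 1/(4\max_{j}a_{j}^{2})\).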
Figure 1: The high-level picture of QEE and TCPS methods. In QEE the expectation value of each Pauli string is estimated by a series of projective measurements. Then all \(\langle\hat{P}_{j}\rangle\), \(j=1,\ldots,N\) are summed to obtain an estimate of the observable \(\langle O\rangle\). In the TCPS method all \(\langle\hat{P}_{j}\rangle\) are encoded in the single qubit QM by a sequential phase kick-back algorithm. Also a small amount of projective measurements of each Pauli string is done to estimate the sign of the Pauli strings and classical correction of the final estimate. After encoding of all Pauli strings, a projective measurement on the QM qubit is performed and the whole process is repeated to obtain an estimate of \(\langle O\rangle\) to the variance \(\eta\).
The rotation operators \((\tilde{U})_{P_{j}}(\epsilon)=\tilde{V}\Pi_{0}\tilde{V}^{\dagger}P_{j}\), \(\tilde{V}(P_{j}):\tilde{V}(P_{j})|0\rangle=|\tilde{\Psi}_{0}\rangle_{j}\) can be introduced to encode the Pauli strings in the phases. The operator \((\tilde{U})_{P_{j}}(\epsilon)\) defines the rotation by a principal angle
\[\tilde{\phi}_{j}=\arccos{(2\sqrt{\epsilon}|a_{j}\langle P_{j}\rangle|)} \tag{8}\]
between two closed subspaces \(|\tilde{\Psi}_{0}\rangle\) and \(|\tilde{\Psi}_{1}\rangle\) of a Hilbert space. Since this encoding method requires preparation of the eigenstates \(|\phi^{\pm}\rangle\) of the rotation operators at each iteration, it is important to mention that we can write \(|\tilde{\Psi}_{0}\rangle\) as a superposition
\[|\tilde{\Psi}_{0}\rangle=\frac{1}{\sqrt{2}}(|\tilde{\phi}_{0}^{+} \rangle+|\tilde{\phi}_{0}^{-}\rangle), \tag{9}\] \[|\tilde{\phi}_{0}^{+}\rangle\equiv(\sqrt{1-\epsilon^{\prime}}|0 \rangle+\sqrt{\epsilon^{\prime}}|1\rangle)|\phi^{+}\rangle,\quad|\tilde{\phi}_ {0}^{-}\rangle\equiv(\sqrt{1-\epsilon^{\prime}}|0\rangle+\sqrt{\epsilon^{ \prime}}e^{i\phi}|1\rangle)|\phi^{-}\rangle.\]
Hence, projecting on \(|\tilde{\phi}_{0}^{\pm}\rangle\) states is equivalent to projecting on \(|\phi^{\pm}\rangle\), respectively.
To encode the information about the mean value of the Pauli string in the phase we define a memory qubit in the state \(|\psi\rangle_{m}=|+\rangle_{m}\). The controlled version \(\text{c}(\tilde{U})_{P_{j}}(\epsilon)\) can be implemented by substituting \(P_{j}\) with its controlled version, \(\text{c}P_{j}\). Applying, for illustration, the controlled rotation \(\text{c}U_{P_{j}}\) to the state \(|\psi\rangle_{m}|\Psi_{0}\rangle\) (the same phase kick-back holds for \(\text{c}(\tilde{U})_{P_{j}}(\epsilon)\) with \(\tilde{\phi}_{j}\) in place of \(\phi_{j}\)), we get
\[cU_{P_{j}}\left|\psi\right\rangle_{m}|\Psi_{0}\rangle\longrightarrow\frac{1}{ \sqrt{2}}\left(\frac{|0\rangle_{m}+e^{i\phi_{j}}|1\rangle_{m}}{\sqrt{2}}|\phi_ {j}^{+}\rangle+\frac{|0\rangle_{m}+e^{-i\phi_{j}}|1\rangle_{m}}{\sqrt{2}}|\phi_ {j}^{-}\rangle\right), \tag{10}\]
with the information about the mean value of the Pauli string encoded in the phase, i.e.
\[|\langle\Psi_{0}|P_{j}|\Psi_{0}\rangle|=\cos{(\phi_{j})}. \tag{11}\]
Using this encoding method, we can encode the correct linear combination of the \(N\) mean values of Pauli strings sequentially in the phase of the memory qubit as depicted in Fig. 2. The final state of the memory qubit will be \((|0\rangle+e^{i\tilde{\Phi}}|1\rangle)/\sqrt{2}\), where
\[\tilde{\Phi}\equiv\sum_{j=1}^{N}\tilde{\phi}_{j}=\sum_{j=1}^{N}\arccos{(2 \sqrt{\epsilon}|a_{j}\langle P_{j}\rangle|)}. \tag{12}\]
Finally, we perform a projective measurement on the QM qubit to estimate the phase following the standard Kitaev phase estimation procedure [2]. To obtain a good estimate of \(\tilde{\Phi}\), the whole encoding process and measurement process is repeated \(M_{q}\) times.
The complication arises from the fact that, since \(|\tilde{\Psi}_{0}\rangle\) is an equal superposition of \(|\phi^{\pm}\rangle\) and \(\cos{(\cdot)}\) is an even function, we need to project onto one of the eigenstates \(|\phi^{+}\rangle\) or \(|\phi^{-}\rangle\) prior to encoding to control the sign of the phase. Note that both eigenstates work since the sign can be controlled by using either \((\tilde{U})_{P_{j}}(\epsilon)\) or \((\tilde{U})_{P_{j}}^{\dagger}(\epsilon)\), depending on the eigenstate projection. This is necessary since the end goal is to encode a given linear combination of Pauli strings in the phase of the memory qubit through sequential application of the encoding step with different Pauli strings, \(P_{j}\).
To perform the eigenstate projection, a small series of partial projections (Kitaev's quantum phase estimation (KQPE) steps [2]) is performed prior to the encoding to efficiently resolve between \(|\phi^{+}\rangle\) and \(|\phi^{-}\rangle\). This can be achieved using an auxiliary qubit as the control qubit [12] instead of the memory qubit. It suffices to do enough measurements to accurately determine the sign of the encoded phase in the auxiliary qubit, while the actual phase need not be estimated. Consequently, this can be done efficiently, with the total number of QPE steps, \(n_{QPE}\), scaling logarithmically with the total number of Pauli strings to be encoded, assuming that the mean value of each Pauli string is bounded away from zero. For details, we refer to the Appendices B-D.
Importantly, the eigenstate projection is performed using an additional auxiliary qubit and not the memory qubit. This is what allows us to reduce the interactions between the memory qubit and the many-qubit device compared to the CPS method.
The steps of our method are the following (see Fig. 1):
* First, we roughly estimate the mean value of every Pauli string \(\langle\Psi_{0}|P_{j}|\Psi_{0}\rangle\), \(j=\overline{1,N}\), by a small series of projective measurements, as is done in the QEE method. This step provides information about the sign and magnitude (bounded away from zero) of the mean values of the Pauli strings. For more details see Appendix B.
* Second, for every \(P_{j}\) (that is bounded away from zero), we do \(n_{QPE}\) KQPE steps on \(|\tilde{\Psi}_{0}\rangle\), projecting on either \(|\phi^{+}\rangle\) or \(|\phi^{-}\rangle\). If we project on \(|\phi^{+}\rangle\), we encode the mean value of the Pauli string in the phase of the memory qubit using \(c(\tilde{U})_{P_{j}}(\epsilon)\), while for \(|\phi^{-}\rangle\), we use \(c(\tilde{U})_{P_{j}}^{\dagger}(\epsilon)\). For more details see Appendix C.
Figure 2: The phase encoding in the QM is shown. The QM qubit is prepared in the \(\ket{\psi}_{m}=\ket{+}_{m}\) state. We prepare every \(\ket{\tilde{\Psi}_{0}}_{j}\) state and do \(n_{QPE}\) steps of KQPE to resolve between the \(|\phi^{+}\rangle\) and \(|\phi^{-}\rangle\) eigenstates. Afterwards we act on the state with \(\tilde{U}_{P_{j}}\) or \(\tilde{U}_{P_{j}}^{\dagger}\), respectively, to encode the phase \(\tilde{\phi}_{j}\) into the QM. We address the QM \(N\) times to do the whole round of encoding. Finally the projective measurement is performed.
For \(\epsilon\ll\,1\), the Taylor expansion is used to rewrite Eq. (12). To lowest order in \(\epsilon\), we get:
\[\langle O\rangle\approx\frac{N\pi-2\tilde{\Phi}}{4\sqrt{\epsilon}}. \tag{13}\]
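The expansion behind Eq. (13) can be spelled out as follows (our rewriting; the sign of each term is assumed to be absorbed by the eigenstate-dependent choice of \(\tilde{U}_{P_{j}}\) or \(\tilde{U}_{P_{j}}^{\dagger}\) described above). With \(z_{j}=2\sqrt{\epsilon}\,a_{j}\langle P_{j}\rangle\) and \(\arccos(z)=\pi/2-z-z^{3}/6-\dots\),

\[\tilde{\Phi}=\sum_{j=1}^{N}\arccos(z_{j})\approx\frac{N\pi}{2}-2\sqrt{\epsilon}\sum_{j=1}^{N}a_{j}\langle P_{j}\rangle=\frac{N\pi}{2}-2\sqrt{\epsilon}\,\langle O\rangle,\]

and solving for \(\langle O\rangle\) gives Eq. (13); the neglected terms are of order \(N\epsilon^{3/2}\) and are the ones addressed by the classical correction discussed next.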
The error due to higher order terms in \(\epsilon\) can, however, be corrected using additional information from the rough estimates of the mean values used to estimate the signs and magnitudes of the Pauli strings. In Appendices D-E, the correction procedure is described in detail and the final variance of the estimate \(\hat{O}\) is deduced assuming that \(M_{c}^{cor}\) projective measurements are used for the rough estimates. The variance of the corrected estimate scales as \(\sigma^{2}(\hat{O}^{TCPS})\sim N^{\frac{4}{3}}T^{-1}\), where \(T\) is the total amount of controlled unitaries used in the TCPS method. This scaling results as a trade-off between the inaccuracy of the estimate due to the Taylor expansion and the added noise from the correction procedure.
Note that in our method, the phase is only determined modulo \(2\pi\). To address the issue of phase wrapping, we adopt the _sampling_ approach introduced in [3, 27]. Instead of encoding \(\phi\) with a fixed \(\epsilon\), we employ multiple orders of sampling, denoted as \(\tilde{\epsilon}_{l}\equiv 2^{l}\sqrt{\epsilon_{0}}\), where \(l=1,2,\ldots,d_{L}\), to gradually zoom in on the value of \(\phi\). By appropriately selecting \(d_{L}\) and the total number of measurement repetitions, we achieve the same variance rate. The initial parameter \(\epsilon_{0}\) is chosen in such a way that \(\tilde{\Phi}(\epsilon_{0})<2\pi\) holds. Further detailed analyses and discussions can be found in Appendix G.
We performed a comprehensive comparison of resources between the TCPS, the QEE and CPS, as detailed in Table 1. The number of state preparation circuits required and the coherence time of the necessary processing ancillas are compared. To implement \(\tilde{U}_{P_{j}}(\epsilon)\) in the TCPS technique, the controlled sign flip operator \(\Pi_{0}\) and two state preparation circuits \(\tilde{V}\) and \(\tilde{V}^{\dagger}\) are needed. The controlled \(\Pi_{0}\) can be implemented using a series of two-qubit CNOT gates, which scales linearly with the number of qubits in the NISQ device [12, 28] and we therefore view this as having the same cost as the state preparation circuit.
In terms of time consumption, the state preparation step \(t_{prep}\) is considered the most time-consuming, which is also used to quantify the time of the encoding operations. In the QEE method, the coherence time of the NISQ device is \(t_{prep}\) as each qubit is measured after each state preparation. On the other hand, the TCPS method necessitates a modest increase in the coherence time of the NISQ device. This increase scales logarithmically with the number of Pauli strings and the target variance \(\eta\). Additionally, the TCPS method relies on a long coherence time memory qubit,
\begin{table}
\begin{tabular}{c|c||c|c||c} \hline
**Method** & **Number of state preparations** & **Qubits** & **Coherence times** & **NISQ to QM** \\ \hline QEE & \(\frac{N^{2}}{\eta}\) & Processing qubits & \(t_{prep}\) & - \\ & & of NISQ & & \\ \hline TCPS & \((\frac{N^{\frac{4}{3}}}{\eta})O(\log\left(\frac{N}{\sqrt{\eta}}\right))\) & Processing qubits & \(t_{prep}O(\log\left(\frac{N}{\sqrt{\eta}}\right))\) & \(N\) \\ & & of NISQ & & \\ & & Memory qubit & \(Nt_{prep}O(\log\left(\frac{N}{\sqrt{\eta}}\right))\) & \\ \hline CPS & \((\frac{N}{\eta})O\left(\frac{\log\left(\frac{N}{\sqrt{\eta}}\right)}{\log\log\left(\frac{N}{\sqrt{\eta}}\right)}\right)\) & Processing qubits & \(t_{prep}O(\log\left(\frac{N}{\sqrt{\eta}}\right))\) & \(NO\left(\frac{\log\left(\frac{N}{\sqrt{\eta}}\right)}{\log\log\left(\frac{N}{\sqrt{\eta}}\right)}\right)\) \\ & & of NISQ & & \\ & & Memory qubit & \(Nt_{prep}O\left(\frac{\log\left(\frac{N}{\sqrt{\eta}}\right)}{\log\log\left(\frac{N}{\sqrt{\eta}}\right)}\right)\) & \\ \hline \end{tabular}
Table 1: Comparison of resources for QEE, CPS and TCPS methods; \(t_{prep}\) - state preparation time; \(\eta\) - estimation variance.
whose coherence time is linearly dependent on the number of Pauli strings \(N\). In cases where the available resources cannot provide the required coherence time, the total Pauli sum must be divided into sub-sums and encoded in QM separately.
In return, the TCPS method provides a much better estimate of \(\langle O\rangle\) than the QEE method for a fixed number of state preparations. Comparing its variance with that of the QEE method for an equal amount of resources used in both methods, we get
\[\frac{\sigma^{2}(\hat{O}^{TCPS})}{\sigma^{2}(\hat{O}^{QEE})}\sim\frac{1}{N^{ \frac{2}{3}}}. \tag{14}\]
which shows that the TCPS method achieves a better scaling of the variance in the number of Pauli strings compared to the scaling of the QEE approach.
As we mentioned in [16], both the QEE and CPS methods are subject to the accumulation of operational errors, which poses a limitation on the accuracy of the final estimate of the observable. The presence of gate errors significantly affects the reliability of both approaches, ultimately impacting the quality of the observable estimation. However, in contrast to the CPS, the TCPS method has fewer interactions of the NISQ qubits with the QM. In the TCPS method the encoding of every Pauli string in the QM is done by only one unitary gate. That means that the number of operations between the NISQ qubits and the QM is \(N\), which is significantly smaller than in the CPS method. This fact makes the TCPS method more robust to corruption of the QM due to these interactions; however, in terms of robustness to gate errors the TCPS and the CPS are similar.
## 3 Summary
We proposed an alternative variation of our _Coherent Pauli Summation_ method recently introduced in [16], called the _Taylor Coherent Pauli Summation_ method, to estimate the expectation values of multi-qubit observables. The method uses phase kick-back techniques and a Taylor expansion to encode information from a multi-qubit processor into a single qubit quantum memory. In this way the accumulation of the shot noise that arises in the conventional QEE approach is avoided. The variance of the TCPS method outperforms the QEE approach as \(\sim 1/N^{2/3}\). For observables with a large Pauli string decomposition our method gives a significant advantage, especially if there are many non-commuting Pauli sets. If the Pauli strings can be efficiently sorted into commuting sets, they could be measured in parallel using the QEE method, while the TCPS method in its current form is not parallelizable. Thus, our method can be combined with known QEE approaches and applied in cases where sorting into commuting sets is hard (in general it is NP hard) or simply not possible.
In comparison to CPS, the TCPS approach exhibits slightly higher variance but is more robust to the logical errors of the QM. We have demonstrated that TCPS demands \(N\) controlled unitary interactions of the NISQ qubits with the QM, making it more experimentally feasible on NISQ devices than CPS.
## Funding
L.M. was partly supported by the Netherlands Organisation for Scientific Research (NWO/OCW), as part of the Quantum Software Consortium program (project number 024.003.037 / 3368). This research work was partly supported by the Roadmap for the Development of Quantum Technologies in Russian Federation, contract No. 868-1.3-15/15-2021.
## Appendix A Hadamard Test
Following the line in [12], one of the ways to efficiently collapse the state \(|\tilde{\Psi}_{0}\rangle\) into one of the eigenstates \(|\phi^{\pm}\rangle\) is to run the Hadamard test (Kitaev's QPE) circuit [2] shown in Fig. 3. We do not need to run the full QPE: as shown in Appendix C, a small number of rounds suffices, so the state will not be ruined while we still extract the information about \(\text{sign}(\phi)\) needed to resolve between the two eigenstates with a good confidence rate.
The output state of the Hadamard circuit with \(S=I\) can be written as follows
\[\frac{1}{2}((1+e^{i\phi})|0\rangle_{p}+(1-e^{i\phi})|1\rangle_{p})|\tilde{\Psi }_{0}\rangle.\] (A15)
The probabilities of measuring \(|0\rangle\) and \(|1\rangle\) are:
\[P(X=0|\phi)=\frac{1}{2}(1+\cos{(\phi)}),\quad P(X=1|\phi)=\frac{1}{2}(1-\cos{( \phi)}),\] (A16)
which gives a precise estimate of the modulus of the phase for a sufficient number of iterations. However, the _cosine_ does not allow us to distinguish between \(\phi\) and \(-\phi\). Hence, the same Hadamard test is used, but with \(S=R_{z}(\pi/2)\), namely:
\[\frac{1}{2}((1+ie^{i\phi})|0\rangle_{p}+(1-ie^{i\phi})|1\rangle_{p})|\tilde{ \Psi}_{0}\rangle.\] (A17)
The probabilities of measuring \(|0\rangle\) and \(|1\rangle\) are:
\[P(Y=0|\phi)=\frac{1}{2}(1-\sin{(\phi)}),\quad P(Y=1|\phi)=\frac{1}{2}(1+\sin{( \phi)}),\] (A18)
respectively. Repeated measurements of these quantum circuits allow us to approximate \(P(X=0|\phi)\) and \(P(Y=0|\phi)\), from which the _sine_ and _cosine_ values, and then \(\phi\) itself, can be estimated. We denote the estimates of \(\sin{(\phi)}\) and \(\cos{(\phi)}\) by \(\hat{s}\) and \(\hat{c}\), respectively. The _tangent_ function is more robust to error than the inverse _sine_ and _cosine_ functions (see [5] for details). We write the estimate as follows:
\[\hat{\phi}=\arctan{\left(\frac{\hat{s}}{\hat{c}}\right)}.\] (A19)
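As an illustration of this estimator, the short Python sketch below samples the two Hadamard-test circuits with the outcome probabilities of Eqs. (A16) and (A18) and combines the resulting frequencies into the arctangent estimate of Eq. (A19). The binomial sampling is an idealised stand-in for running the actual circuits, and the chosen phase and shot counts are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_phase(phi, n0):
    """Sample both Hadamard-test circuits n0 times each and return the
    arctangent estimate of phi built from the outcome frequencies."""
    p_x0 = 0.5 * (1 + np.cos(phi))      # Eq. (A16), circuit with S = I
    p_y0 = 0.5 * (1 - np.sin(phi))      # Eq. (A18), circuit with S = Rz(pi/2)
    nu_x0 = rng.binomial(n0, p_x0) / n0
    nu_y0 = rng.binomial(n0, p_y0) / n0
    c_hat = 2 * nu_x0 - 1               # estimate of cos(phi)
    s_hat = 1 - 2 * nu_y0               # estimate of sin(phi)
    return np.arctan2(s_hat, c_hat)     # quadrant-aware version of Eq. (A19)

phi_true = 0.9                          # illustrative phase
for n0 in (10**2, 10**4, 10**6):
    print(n0, abs(estimate_phase(phi_true, n0) - phi_true))
```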
Since we have two outcomes (\(0\) and \(1\)), we obtain a sample of independent and identically distributed (i.i.d.) Bernoulli random variables with probability of success \(P(0|\phi)\). The probability terms are estimated by the frequency \(\hat{\nu}\) to accuracy:
\[|\hat{\nu}-P(0|\phi)|<\epsilon_{0}.\] (A20)
It is straightforward to verify that
\[|\hat{s}-\sin{(\phi)}|\leq 2\epsilon_{0},\quad|\hat{c}-\cos{(\phi)}|\leq 2 \epsilon_{0},\] (A21)
hold. We want to estimate our angle with high variance, namely
\[|\phi-\arctan{\left(\frac{\hat{s}}{\hat{c}}\right)}|\leq\epsilon_{tan}.\] (A22)
To find the connection between \(\epsilon_{tan}\) and \(\epsilon_{0}\), we consider the case \(|\hat{s}-\sin{(\phi)}|=2\epsilon_{0}\), \(|\hat{c}-\cos{(\phi)}|=2\epsilon_{0}\) when \(\phi=0\). Substituting it in Eq. (A22), we get the following bound:
\[\epsilon_{0}\leq\frac{\tan\left(\epsilon_{tan}\right)}{2(1\mp\tan\left(\epsilon_{ tan}\right))}.\] (A23)
Following [2], where \(\epsilon_{\tan}=1/16\) is used, we can conclude that \(\epsilon_{0}<1/2(1-1/\sqrt{2})\) holds. To estimate the number of measurements needed to guarantee Eq. (A20), we use Chernoff's inequality (see for example [4]):
\[1-2\exp\left(-2n_{0}\epsilon_{0}^{2}\right)\leq P\left(|\hat{\nu}_{0}-P(0| \phi)|<\epsilon_{0}\right).\] (A24)
In many applications, it is important to know what the sample size must be in order that, with probability at least \((1-\eta_{0})\), one can assert that the estimate differs from the corresponding value by an amount less than \(\epsilon_{0}\). In other words, starting from what value of \(n_{0}\) does the inequality \(2\exp\left(-2n_{0}\epsilon_{0}^{2}\right)\leq\eta_{0}\), \(\eta_{0}\in[0,1]\), hold? One can deduce that the number of measurements that guarantees this is:
\[n_{0}\geq\frac{\log\left(2/\eta_{0}\right)}{2\epsilon_{0}^{2}}.\] (A25)
Taking into account that each Hadamard test has to be done twice and \(m\) angle estimations are done to get enough statistical samples, we get in total \(2m\) estimations. Hence, in total we do
\[M_{K}\geq\frac{m\log\left(2/\eta_{0}\right)}{\epsilon_{0}^{2}}.\] (A26)
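For concreteness, the following sketch evaluates this bookkeeping numerically; the values of \(\eta_{0}\) and \(m\) are illustrative choices, not values fixed by the text.

```python
import numpy as np

eps_tan, eta_0, m = 1.0 / 16.0, 0.05, 10   # eta_0 and m are assumed values

t = np.tan(eps_tan)
eps_0 = t / (2.0 * (1.0 + t))                  # Eq. (A23), conservative sign choice
n_0 = np.log(2.0 / eta_0) / (2.0 * eps_0**2)   # Eq. (A25): shots per circuit
M_K = m * np.log(2.0 / eta_0) / eps_0**2       # Eq. (A26): total shot count
print(round(eps_0, 4), int(np.ceil(n_0)), int(np.ceil(M_K)))
```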
One can see that, due to the _sine_ and _cosine_ functions, the method performs poorly near the boundary points \(0\) and \(1\). That is why we need a preliminary check, done with a small number of projective measurements, that every \(\langle P_{j}\rangle\) lies in an interval bounded away from \(0\) and \(1\).
## Appendix B On Selecting the Amount of Projective Measurements for the Boundary Condition Check-up
It is known that QPE performs best when the phase is bounded away from \(0\) and \(2\pi\). Thus, we have to be sure that every \(\langle P\rangle\in\mathcal{I}=[0+\delta,1-\delta]\), where \(\delta>0\). That is why we roughly estimate the mean values of every Pauli string \(\langle P_{j}\rangle\), \(j=1,\ldots,N\), by a small series of projective measurements, as is done in the QEE method. Let \(\langle\hat{P}\rangle\) be an estimate of \(\langle P\rangle\) using \(n_{1}\) samples. For independent random variables bounded by the interval \([0,1]\), Hoeffding's inequality
\[P(|\langle P\rangle-\langle\hat{P}\rangle|\geq g_{1})\leq 2\exp\left(-2n_{1}g_{ 1}^{2}\right),\] (B1)
holds, where \(g_{1}>0\). The parameters \(g_{1}\) and \(n_{1}\) can be selected according to our needs of the estimation variance. Reversing Eq. (B1), we get:
\[1-2\exp\left(-2n_{1}g_{1}^{2}\right)\leq P(|\langle P\rangle-\langle\hat{P} \rangle|<g_{1}).\] (B2)
Finally, we deduce that
\[2\exp\left(-2n_{1}g_{1}^{2}\right)\leq\eta_{1},\] (B3)
holds, where \(\eta_{1}\in[0,1]\) is the desired probability of being inside of \(\langle P\rangle\in\mathcal{I}\). The total amount of projective measurements we need for \(N\) Paulis is \(M_{1}=Nn_{1}\).
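A minimal numerical illustration of this bound is given below; \(g_{1}\), \(\eta_{1}\) and \(N\) are illustrative values.

```python
import numpy as np

g_1, eta_1, N = 0.1, 0.05, 100               # illustrative accuracy, failure prob., strings
n_1 = np.log(2.0 / eta_1) / (2.0 * g_1**2)   # from Eq. (B3)
print(int(np.ceil(n_1)), int(np.ceil(N * n_1)))   # per string, and M_1 = N * n_1
```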
## Appendix C On \(\text{sign}(\phi)\) Estimation
In the previous section we selected \(n_{1}\) in such a way that, with probability \(1-\eta_{1}\), the value \(\langle P\rangle\) is in \(\mathcal{I}\). Using the part of Kitaev's QPE circuit whose action is given by Eq. (A17), we find that the probabilities of measuring \(|0\rangle\) and \(|1\rangle\) are given by Eq. (A18). If \(\phi>0\), then \(P(Y=0|\phi)<1/2\) holds. We will use this fact to find out \(\text{sign}(\phi)\).
Since \(\phi=2\arccos\left(|\langle P\rangle|\right)\), holds, we can write:
\[P(Y=0|\phi)=\frac{1}{2}(1-\sin\phi)\in[p_{0min},p_{0max}],\quad \text{where}\] \[p_{0min}=\frac{1}{2}(1-\sin\left(2\arccos\left(|1-\delta| \right)\right)),\quad p_{0max}=\frac{1}{2}(1-\sin\left(2\arccos\left(|\delta |\right)\right)),\quad\text{if}\quad\delta<1/2,\] (C1)
and
\[P(Y=1|\phi)=\frac{1}{2}(1+\sin\phi)\in[p_{1min},p_{1max}],\quad \text{where}\] \[p_{1min}=\frac{1}{2}(1+\sin\left(2\arccos\left(|\delta|\right) \right)),\quad p_{1max}=\frac{1}{2}(1+\sin\left(2\arccos\left(|1-\delta| \right)\right)),\quad\text{if}\quad\delta<1/2.\] (C2)
For example, if we select \(\delta=0.2\), then the probabilities are confined to the disjoint intervals \(P(Y=0|\phi)\in[0.02,0.3]\), \(P(Y=1|\phi)\in[0.7,0.98]\). Thus, for a reasonable selection of \(\delta\) the probabilities are spaced apart from \(1/2\).
Since our goal is to determine whether \(P(Y=0|\phi)\lessgtr 1/2\), we use Hoeffding's inequality with the following conditions:
\[1-2\exp\left(-2g_{3}^{2}(\delta)n_{QPE}\right)\leq P\left(|\hat{\nu}_{Y=0}-P(Y =0|\phi)|<g_{3}(\delta)\right),\] (C3)
where \(g_{3}(\delta)\equiv\frac{1}{2}-p_{0max}(\delta)\) and \(\hat{\nu}_{Y=0}\) is the frequency of the event \(Y=0\). Starting from what value of \(n_{QPE}\) does
\[2\exp\left(-2g_{3}^{2}(\delta)n_{QPE}\right)\leq\eta_{3},\quad\eta_{3}\in[0,1],\] (C4)
hold? One can deduce that it is bounded as follows
\[n_{QPE}\geq\frac{\log\left(2/\eta_{3}\right)}{2g_{3}^{2}(\delta)}.\] (C5)
If all \(\langle P_{j}\rangle\in\mathcal{I}\) we run the algorithm for \(N\) phases and get the total amount of QPE rounds bounded by
\[M_{QPE}\geq\frac{N\log\left(2/\eta_{3}\right)}{2g_{3}^{2}(\delta)}.\] (C6)
Finally, using \(M_{QPE}\) rounds, with probability \(1-\eta_{3}\) we will estimate \(P(Y=0|\phi)\) by \(\hat{\nu}_{Y=0}\) with enough precision to determine whether \(P(Y=0|\phi)\lessgtr\frac{1}{2}\).
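The following sketch works through this budget for the \(\delta=0.2\) example quoted above; the failure probability \(\eta_{3}\) and the number of Pauli strings \(N\) are illustrative assumptions.

```python
import numpy as np

delta, eta_3, N = 0.2, 0.05, 100   # delta from the example above; eta_3, N assumed

p0_min = 0.5 * (1 - np.sin(2 * np.arccos(1 - delta)))   # Eq. (C1)
p0_max = 0.5 * (1 - np.sin(2 * np.arccos(delta)))
g3 = 0.5 - p0_max                                       # distance from 1/2
n_qpe = np.log(2 / eta_3) / (2 * g3**2)                 # Eq. (C5)
print(round(p0_min, 2), round(p0_max, 2), round(g3, 3),
      int(np.ceil(n_qpe)), int(np.ceil(N * n_qpe)))     # last value: Eq. (C6)
```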
## Appendix D Variance Estimation
After the sign estimation done by \(n_{QPE}\) rounds of QPE we do the phase encoding in the memory qubit \(|\psi\rangle_{m}=|+\rangle_{m}\), applying \(\text{c}\tilde{U}_{P}\) or \(\text{c}\tilde{U}_{P}^{\dagger}\) as it is depicted in Fig. 4 with \(S=I\). This process
is repeated \(N\) times until all the phases connected with the Pauli strings are encoded in the long coherence time QM. The accumulated state encoded in \(|\psi\rangle_{m}\) is
\[|\tilde{\Phi}\rangle=\frac{|0\rangle_{m}+e^{i\tilde{\Phi}}|1\rangle_{m}}{\sqrt{2}}|\phi_{1}\rangle|\phi_{2}\rangle\ldots|\phi_{N}\rangle,\quad\tilde{\Phi}=\sum_{j=1}^{N}\arccos{(2\sqrt{\epsilon}|a_{j}\langle P_{j}\rangle|)}.\] (D1)
Then we apply the Hadamard gate
\[H\otimes I|\tilde{\Phi}\rangle=\frac{(1+e^{i\tilde{\Phi}})|0\rangle_{m}+(1-e^{ i\tilde{\Phi}})|1\rangle_{m}}{2}|\phi_{1}\rangle|\phi_{2}\rangle\ldots|\phi_{N}\rangle.\] (D2)
Similarly to the previous sections, the probabilities to measure \(|0\rangle\) and \(|1\rangle\) are
\[P(X=0|\tilde{\Phi})=\frac{1}{2}(1+\cos{(\tilde{\Phi})}),\quad P(X=1|\tilde{ \Phi})=\frac{1}{2}(1-\cos{(\tilde{\Phi})}).\] (D3)
If we use the encoding with \(S=R_{z}(\pi/2)\), we get
\[|\tilde{\Phi}\rangle=\frac{|0\rangle_{m}+ie^{i\tilde{\Phi}}|1\rangle_{m}}{ \sqrt{2}}|\phi_{1}\rangle|\phi_{2}\rangle\ldots|\phi_{N}\rangle.\] (D4)
Applying the Hadamard gate to the latter state, we get the following result
\[H\otimes I|\tilde{\Phi}\rangle=\frac{(1+ie^{i\tilde{\Phi}})|0\rangle+(1-ie^{ i\tilde{\Phi}})|1\rangle}{2}|\phi_{1}\rangle|\phi_{2}\rangle\ldots|\phi_{N}\rangle\] (D5)
and the probabilities to measure \(|0\rangle_{m}\) and \(|1\rangle_{m}\) are
\[P(Y=0|\tilde{\Phi})=\frac{1}{2}(1+\sin{(\tilde{\Phi})}),\quad P(Y=1|\tilde{ \Phi})=\frac{1}{2}(1-\sin{(\tilde{\Phi})}).\] (D6)
Since we cannot prepare the reverse of the preparation circuit of the state \(|\tilde{\Phi}\rangle\), we cannot use QPE to estimate the phase directly. To estimate \(P(X=0|\tilde{\Phi})\), \(P(Y=0|\tilde{\Phi})\), and then find the total accumulated phase, we need to repeat the whole encoding procedure \(M_{q}\) times, each time performing a projective measurement on the state encoded in the QM. Then, using the probability estimates, we estimate the _sine_ and _cosine_, which determine the estimate of \(\tilde{\Phi}\). To this end, we introduce the notation
\[\hat{Q}\equiv\frac{1-2\hat{P}(Y=0|\tilde{\Phi})}{2\hat{P}(X=0|\tilde{\Phi})-1 }=\hat{\tan}(\tilde{\Phi}),\] (D7)
and the variance of the estimate of \(\Phi\) is the following
\[\sigma^{2}\hat{\tilde{\Phi}}\approx\left(\frac{\partial\tilde{\Phi}}{ \partial Q}\right)^{2}\sigma^{2}\hat{Q}=\left(\frac{1}{1+Q^{2}}\right)^{2} \sigma^{2}\hat{Q}.\] (D8)
Figure 4: The phase-kick back process, encoding the mean values of the Pauli strings in the phase of the state stored in the memory qubit. Here \(S\) is identity or \(R_{z}(\pi/2)\) gate.
The variance of \(\hat{Q}\) can be written as follows
\[\sigma^{2}\hat{Q}\approx\sigma^{2}\hat{P}(X=0|\tilde{\Phi})\left(\frac{2(1-2\hat{ P}(Y=0|\tilde{\Phi}))}{(2\hat{P}(X=0|\tilde{\Phi})-1)^{2}}\right)^{2}+\sigma^{2} \hat{P}(Y=0|\tilde{\Phi})\left(\frac{2}{2\hat{P}(X=0|\tilde{\Phi})-1}\right)^{2}.\] (D9)
Since we have a Bernoulli distributed random variables, the variances are
\[\sigma^{2}\hat{P}(X=1|\tilde{\Phi})=\frac{\hat{P}(X=0|\tilde{\Phi})\hat{P}(X=1 |\tilde{\Phi})}{M_{q}},\quad\sigma^{2}\hat{P}(Y=1|\tilde{\Phi})=\frac{\hat{P}( Y=0|\tilde{\Phi})\hat{P}(Y=1|\tilde{\Phi})}{M_{q}},\] (D10)
where \(M_{q}\) is the number of re-preparations of \(|\tilde{\Phi}\rangle\). Finally, the variance (D8) can be written as follows:
\[\sigma^{2}\hat{\tilde{\Phi}}\approx\frac{3+\cos{(8\tilde{\Phi})}}{4M_{q}} \sim\frac{1}{M_{q}}.\] (D11)
Then the variance of the estimate of \(O\) is the following
\[\sigma^{2}\hat{O}\approx\left(\frac{\partial O}{\partial\phi}\right)^{2} \sigma^{2}\hat{\tilde{\Phi}}\sim\frac{1}{M_{q}\epsilon}.\] (D12)
## Appendix E Error Correction
Since we used the Taylor expansion to get rid of the _arc-cosine_ function in (D1), we introduced an error by neglecting the big-\(O\) term. Hence, we use the QEE strategy to estimate the correction terms. The corrected value is the following
\[\hat{O}_{TCPS}=\hat{O}+\frac{1}{2\sqrt{\epsilon}}\sum_{n=1}^{\infty}\frac{(2n )!(2\sqrt{\epsilon})^{2n+1}}{4^{n}(n!)^{2}(2n+1)}\hat{P}_{cor}^{2n+1},\] (E1)
where
\[\hat{P}_{cor}^{2n+1}=\sum_{i=1}^{N}|a_{i}^{2n+1}\langle\hat{P}_{i}\rangle^{2n +1}|,\] (E2)
is the sum of the Pauli string estimates obtained from projective measurements (as is done in the QEE method). The variance of the correction term can be straightforwardly written as follows
\[\sigma^{2}P_{cor}^{2n+1}=\frac{(2n+1)}{n_{c}^{cor}}\sum_{j=1}^{N}(a_{j}^{2n+1 }\langle P_{j}\rangle^{2n})^{2}(1-\langle P_{j}\rangle^{2}),\] (E3)
where \(n_{c}^{cor}\) is the number of projective measurements done for every Pauli string. Then \(M_{c}^{cor}=Nn_{c}^{cor}\) is the number of projective measurements done to estimate the \(N\) correction terms. Hence, the variance of the estimate of \(O\) is the following
\[\sigma^{2}\hat{O}_{TCPS}=\sigma^{2}\hat{O}+\frac{1}{n_{c}^{cor}}\sum_{j=1}^{N}\sum_{n=1}^{\infty}\frac{((2n)!)^{2}2^{4n}\sqrt{\epsilon}^{4n}}{2^{4n}(n!)^{4}}a_{j}^{2(2n+1)}\langle P_{j}\rangle^{4n}(1-\langle P_{j}\rangle^{2}).\] (E4)
The series in the right hand side is of the type:
\[\sum_{n=1}^{\infty}\frac{((2n)!)^{2}(2\sqrt{\epsilon}a_{j}\langle P_{j} \rangle)^{4n}}{2^{4n}(n!)^{4}}=\frac{2}{\pi}K\left(16\sqrt{\epsilon}^{4}a_{j} ^{4}\langle P_{j}\rangle^{4}\right)-1\quad\text{if}\quad\sqrt{\epsilon}^{2}a_ {j}^{2}\langle P_{j}\rangle^{2}<1/4,\] (E5)
where
\[K(t)=\int_{0}^{\pi/2}\frac{d\theta}{\sqrt{1-t\sin^{2}\theta}},\] (E6)
is the complete elliptic integral of the first kind1. Finally, we can rewrite Eq. (E4) as follows
Footnote 1: One must be careful with the notation when using these functions, because various reputable references and software packages use different conventions in the definitions of the elliptic functions. On Wikipedia \(K(k)\) is used, \(k^{2}=t\), holds.
\[\sigma^{2}\hat{O}_{TCPS}=\frac{1}{\epsilon M_{q}}+\frac{1}{n_{c}^{cor}}\sum_{j=1}^{N}\Big{[}\frac{2}{\pi}K\left(16\epsilon^{2}a_{j}^{4}\langle P_{j}\rangle^{4}\right)-1\Big{]}a_{j}^{2}(1-\langle P_{j}\rangle^{2}),\] (E7)
if
\[\epsilon<\left(4a_{j}^{2}\langle P_{j}\rangle^{2}\right)^{-1},\] (E8)
holds for all \(j=1,\ldots,N\). Let us use the following asymptotic expression:
\[K\left(t\right)\approx\frac{\pi}{2}+\frac{\pi}{8}\frac{t}{1-t}-\frac{\pi}{16}\frac{t^{2}}{1-t}.\] (E9)
This approximation has a relative error better than \(3\times 10^{-4}\) for \(t<1/4\) (\(k<1/2\)). Keeping only the first two terms is accurate to within \(0.01\) for \(t<1/4\) (\(k<1/2\)).
We can conclude that one can use (E9) if \(t\equiv\epsilon^{2}a_{j}^{4}\langle P_{j}\rangle^{4}<1/4\). Since Eq. (E8) holds, this is always true. Using this assumption, we can rewrite Eq. (E7) as follows:
\[\sigma^{2}\hat{O}_{TCPS}\sim\frac{1}{\epsilon M_{q}}+\frac{\epsilon^{2}}{n_{c}^{cor}}\sum_{j=1}^{N}a_{j}^{6}\langle P_{j}\rangle^{4}(1-\langle P_{j}\rangle^{2}).\] (E10)
We select \(\epsilon\ll 1\). Taking the derivative with respect to \(\epsilon\) and equating it to zero, we get the optimal \(\epsilon\) that minimizes the variance (E10):
\[\epsilon=\sqrt[3]{\frac{n_{c}^{cor}}{M_{q}\sum\limits_{j=1}^{N}a_{j}^{6}\langle P_{j}\rangle^{4}(1-\langle P_{j}\rangle^{2})}}\sim\left(\frac{n_{c}^{cor}}{M_{q}}\right)^{\frac{1}{3}}\frac{1}{N^{\frac{1}{3}}}=\left(\frac{M_{c}^{cor}}{M_{q}}\right)^{\frac{1}{3}}\frac{1}{N^{\frac{2}{3}}}.\] (E11)
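As a cross-check of this closed form, the sketch below minimises the approximate variance of Eq. (E10) numerically for randomly drawn coefficients and compares the result with Eq. (E11); all parameter values are illustrative, and the agreement is up to an \(O(1)\) factor, consistent with the \(\sim\) scaling.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
N, M_q, n_cor = 200, 1_000_000, 100           # illustrative resource counts
a = rng.uniform(0.1, 1.0, N)                  # illustrative Pauli weights
P = rng.uniform(-0.9, 0.9, N)                 # illustrative <P_j> values
S = np.sum(a**6 * P**4 * (1 - P**2))

var = lambda eps: 1.0 / (eps * M_q) + eps**2 * S / n_cor   # Eq. (E10)
eps_num = minimize_scalar(var, bounds=(1e-6, 0.3), method="bounded").x
eps_ana = (n_cor / (M_q * S)) ** (1.0 / 3.0)               # Eq. (E11)
print(eps_num, eps_ana, eps_num / eps_ana)    # ratio is an O(1) constant (~0.79)
```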
The standard deviation of our estimate should be greater than or equal to the error that can be made in the sign-estimation and QPE steps, namely \(\sum\limits_{j=1}^{N}2a_{j}\langle P_{j}\rangle((1-\eta_{1})\eta_{3}+\eta_{1}/2)\). The parameters \(\eta_{1}\) and \(\eta_{3}\) are defined in (B3) and (C4) as
\[\eta_{1}=\frac{2}{e^{2n_{1}g_{1}^{2}}},\quad\eta_{3}=\frac{2}{e^{2n_{QPE}g_{3}^{2}(\delta)}}.\] (E12)
Let us assume that \(\eta_{1}\) is of the same order as \(\eta_{3}\). Since \(\eta_{1}\ll 1\), we can write
\[(1-\eta_{1})\eta_{3}+\eta_{1}/2\sim\eta_{3}+\eta_{1}/2\sim e^{-2n_{QPE}g_{3}^{2}(\delta)}.\] (E13)
If we assume that all Paulis are equal with equal weights, then we can write the condition
\[\sigma^{2}\hat{O}_{TCPS}\geq 36a^{2}\langle P\rangle^{2}N^{2}e^{-4n_{QPE}g_{3}^{2}(\delta)}.\] (E14)
Finally if we define the target variance \(\sigma^{2}\hat{O}_{TCPS}=\eta\), we can conclude
\[n_{QPE}=O\left(\log\left(\frac{N}{\sqrt{\eta}}\right)\right)\] (E15)
which gives the scaling of the number of QPE steps required for our method.
## Appendix F Comparison of TCPS Method and QEE Method
In this section we compare the TCPS with the QEE. In QEE every Pauli string is measured independently and then all the results are summed up. Hence, the variance of the estimate of \(O\) is the following:
\[\sigma^{2}\hat{O}_{QEE}\equiv\sum_{i=1}^{N}\frac{a_{i}^{2}\sigma^{2}\hat{P}_{i} }{n_{c}}=\frac{N}{M_{c}}\sum_{i=1}^{N}a_{i}^{2}\left(1-\langle\Psi_{0}|P_{i}| \Psi_{0}\rangle^{2}\right)\sim\frac{N^{2}}{M_{c}},\] (F1)
where \(M_{c}=Nn_{c}\) is the amount of projective measurements done in QEE. Then
\[\frac{\sigma^{2}\hat{O}_{TCPS}}{\sigma^{2}\hat{O}_{QEE}}\sim\frac{\frac{1}{M_{ q}^{\frac{2}{3}}(n_{c}^{cor})^{\frac{1}{3}}}\left(\sum_{j=1}^{N}a_{j}^{6} \langle P_{j}\rangle^{4}(1-\langle P_{j}\rangle^{2})\right)^{\frac{1}{3}}}{ \frac{1}{n_{c}}\sum_{j=1}^{N}a_{j}^{2}(1-\langle P_{j}\rangle^{2})}.\] (F2)
Let us assume that the amount of resources needed for both methods is equal. For TCPS we have the total amount of resources \(M_{T}=M_{c}^{cor}+M_{q}(1+M_{QPE})\). Hence we select \(M_{c}=M_{T}\) and, substituting (E11) in (F2), we get
\[\frac{\sigma^{2}\hat{O}_{TCPS}}{\sigma^{2}\hat{O}_{QEE}}\sim\frac{M_{q}(1+M_{ QPE})+M_{c}^{cor}}{N^{\frac{2}{3}}M_{q}^{\frac{2}{3}}(M_{c}^{cor})^{\frac{1}{3}}} \frac{\left(\sum_{j=1}^{N}a_{j}^{6}\langle P_{j}\rangle^{4}(1-\langle P_{j} \rangle^{2})\right)^{\frac{1}{3}}}{\sum_{j=1}^{N}a_{j}^{2}(1-\langle P_{j} \rangle^{2})}.\] (F3)
Since we are interested in the rate, we consider the case when all \(P_{j}\) and \(a_{j}\) are equal for all \(j\). Then the latter ratio reduces to
\[\frac{\sigma^{2}\hat{O}_{TCPS}}{\sigma^{2}\hat{O}_{QEE}}\sim\frac{M_{q}(1+M_{ QPE})+M_{c}^{cor}}{N^{\frac{4}{3}}M_{q}^{\frac{2}{3}}(M_{c}^{cor})^{\frac{1}{3}}} \frac{\langle P\rangle^{\frac{4}{3}}}{(1-\langle P\rangle^{2})^{\frac{2}{3}}}.\] (F4)
Since (E15) holds, and selecting \(M_{c}^{cor}=M_{q}(1+M_{QPE})\) and \(M_{c}^{cor}=Nn_{c}^{cor}\), we can conclude
\[\frac{\sigma^{2}\hat{O}_{TCPS}}{\sigma^{2}\hat{O}_{QEE}}\sim\frac{1}{N^{\frac{ 2}{3}}}.\] (F5)
## Appendix G Control of Phase Wrapping
The state (D1) encoded in the memory qubit contains the phase
\[\hat{\Phi}=\sum_{j=1}^{N}\arccos\left(2\sqrt{\epsilon}|a_{j}\langle P_{j} \rangle|\right)\mod 2\pi,\] (G1)
accumulated over the \(N\) rounds of encoding. However, the phase in our method is estimated only modulo \(2\pi\). With the growth of \(N\) the phase \(\hat{\Phi}\notin[-\pi,\pi)\) and one has no information about how many times the phase wraps around \(2\pi\). To overcome this problem we can use the method introduced in [27], and improved in [3] for the purpose of gate calibration. We select \(\tilde{\epsilon}_{0}\equiv 2\sqrt{\epsilon_{0}}\) and introduce the notation
\[\phi_{0}=\tilde{\epsilon}_{0}P_{sum},\quad P_{sum}\equiv\sum_{j=0}^{N}|a_{j} \langle P_{j}\rangle|,\] (G2)
such that \(\phi_{0}<2\pi\) holds. Given a target variance \(\eta>0\) and numbers \(\alpha,\gamma\in Z^{+}\), the algorithm that outputs \(\hat{P}_{sum}\) as an estimate of \(P_{sum}\) proceeds as follows. We fix \(d_{L}=[\log_{2}1/\eta]\) and for all \(l=1,2,3,\ldots,d_{L}\) obtain estimates \(\hat{\phi}_{l}\) of \(\phi_{l}=2^{l}\tilde{\epsilon}_{0}P_{sum}\mod 2\pi\) from \(M_{l}=\alpha+\gamma(d_{L}+1-l)\) repetitions of the Hadamard test circuit with \(S=I\) and \(S=R_{z}(\pi/2)\). For \(l=1\) we set \(\hat{P}_{sum}^{(1)}\equiv\frac{\hat{\phi}_{0}}{\tilde{\epsilon}_{0}}\). For all other values of \(l\) we set \(\hat{P}_{sum}^{(l)}\) to be the (unique) number in \([\hat{P}_{sum}^{(l-1)}-\frac{\pi}{2^{l}},\hat{P}_{sum}^{(l-1)}+\frac{\pi}{2^{l}}]\) such that
\[2^{l}\tilde{\epsilon}_{0}\hat{P}_{sum}^{(l)}\equiv\hat{\phi}_{l-1},\] (G3)
holds. Finally, after \(d_{L}\) steps, \(\hat{P}_{sum}=\hat{P}_{sum}^{(d_{L})}\) is returned as the estimate of \(P_{sum}\).
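A schematic implementation of this iterative estimate is sketched below in Python. The noisy values \(\hat{\phi}_{l}\) are stand-ins for the Hadamard-test statistics, and the interval-update convention is our reading of the protocol of [27, 3] rather than a verbatim transcription.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_psum(p_sum_true, eps0_t, d_L, alpha=4, gamma=2, noise=0.02):
    """Refine theta = eps0_t * P_sum (< 2*pi) round by round: round l uses a
    noisy estimate of (2**l * theta) mod 2*pi to pick the consistent value
    closest to the running estimate, halving the ambiguity each round."""
    theta_true = eps0_t * p_sum_true
    theta_hat = None
    for l in range(d_L + 1):
        M_l = alpha + gamma * (d_L + 1 - l)              # repetitions this round
        # stand-in for the Hadamard-test estimate of (2^l * theta) mod 2*pi
        phi_hat = (2**l * theta_true
                   + rng.normal(0.0, noise / np.sqrt(M_l))) % (2 * np.pi)
        if theta_hat is None:
            theta_hat = phi_hat                          # coarse first estimate
            continue
        period = 2 * np.pi / 2**l
        base = (phi_hat / 2**l) % period                 # one consistent value
        k = np.round((theta_hat - base) / period)
        theta_hat = base + k * period                    # closest consistent value
    return theta_hat / eps0_t

p_sum = 37.3                                             # illustrative target
eps0_t = 2 * np.pi * 0.9 / p_sum                         # keeps phi_0 below 2*pi
print(estimate_psum(p_sum, eps0_t, d_L=10), p_sum)
```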
In [3] it is shown that, choosing \(\alpha>2\) and \(\gamma>0\), the variance \(\eta\) of the final estimate scales as \(\sim cM^{-1}\), where \(c\) is a constant and \(M\) is the total cost.
Then, the total amount of repetitions of the Hadamard test is \(M_{q}=2\sum\limits_{l=0}^{d_{L}}M_{l}\) and the variance of the estimate is of the rate
\[\sigma^{2}(\hat{O}_{TCPS})\sim\frac{1}{2^{2(d_{L}+1)}\epsilon M_{q}}+\frac{ \epsilon^{2}N^{2}}{M_{c}^{cor}}.\] (G4)
The optimal \(\epsilon\) selection that minimizes the variance is
\[\epsilon=\left(\frac{M_{c}^{cor}}{2^{2(d_{L}+1)}M_{q}}\right)^{\frac{1}{3}} \frac{1}{N^{\frac{2}{3}}}.\] (G5)
Then the variance (G4) can be written as follows
\[\sigma^{2}(\hat{O}_{TCPS})\sim\frac{N^{\frac{2}{3}}}{M_{q}^{\frac{2}{3}}(M_{c }^{cor})^{\frac{1}{3}}2^{\frac{4}{3}d_{L}}}.\] (G6)
The total amount of the state preparations used in the Taylor based method is \(T=(1+n_{QPE})NM_{q}+M_{c}^{cor}\) and then the final expression is the following
\[\sigma^{2}(\hat{O}_{TCPS})\sim\frac{N^{\frac{4}{3}}n_{QPE}^{\frac{2}{3}}}{T2^{\frac{4}{3}d_{L}}},\] (G7)
where we still obtain the \(1/N^{\frac{2}{3}}\) scaling of the variance relative to the QEE method.
## Appendix H Implementing TCPS
As we mentioned, the TCPS algorithm requires multi-qubit controlled gates and complex connectivity to perform. To begin with, the error rates of the controlled reflection gate \(\Pi_{0}\) and of the controlled Pauli operations required for \(\hat{U}_{P_{j}}\) must be smaller than the errors of the state preparation gate \(\tilde{V}\).
The multi-qubit controlled gates can be achieved for example on Rydberg or trapped ion devices. In ion trap systems, the controlled reflection operator can be implemented using various
techniques to achieve quantum control. One common approach involves applying laser-induced vibrational sideband transitions, where the internal states of the ions are coupled to their motional modes. By carefully engineering the laser pulses and controlling the ion trap parameters, it is possible to achieve controlled reflections [29, 30]. Moreover, the Rydberg atom array platform provides a promising avenue for the implementation of the controlled reflection operator and controlled Pauli operations, with a favorable scaling characteristic that grows linearly with the system size [31, 32].
Next, TCPS requires a long-coherence QM. Ion qubits have ultra-long coherence times [18, 19] and near-perfect qubit state initialization and detection. In [33] a single-ion \({}^{171}Yb^{+}\) QM with a coherence time of more than one hour (5500 s) is implemented. The experimental demonstration shows its applicability to NISQ devices and its robustness to various noises. For example, \(99.99\%\) detection fidelities are demonstrated in [34] and the shortest detection time \(\sim 11\mu s\) is achieved with \(99.93\%\) fidelity in [35]. For single-qubit gates, it has been shown that gate durations approach the picosecond scale and the fidelities are much higher than typical error correction requirements with both microwaves and laser beams [36]. In [37] two-qubit gates of \(35\mu s\) duration with a fidelity of \(99.94\%\) are realised on optical qubits of \({}^{40}Ca^{+}\) ions. In [38] two-qubit gates of \(1.6\mu s\) duration with a fidelity of \(99.8\%\) are realised on hyperfine qubits of \({}^{43}Ca^{+}\) ions.
Unfortunately, ion traps have strong interactions with environmental and control noise, which is a source of decoherence of the qubit states and gate operations. For example, in a fully connected ion trap system where every ion can interact with any other ion, the number of elementary operations required to implement a two-qubit gate typically scales quadratically with the number of ions. This means that the gate time, or the number of physical operations required for gate execution, increases as the square of the system size.
That is why, at the moment, atomic systems involving highly excited Rydberg states are an attractive platform for our method. In [20] it is suggested to employ Rydberg levels for interactions and ground levels for storage to achieve both fast quantum operations and long-lived memory (\(\sim 70-80\mu s\)). In [39] an architecture for quantum computing is proposed which takes small, high-fidelity, local quantum processors, places them inside an optical cavity, and quickly connects them using heralded single-photon transfers. This idea was applied to Rydberg atoms and multiple chains of trapped ions. This architecture looks promising for our protocol since it contains good Rydberg-atom-based gates connected to a long-coherence quantum ion memory.
|
2309.07466 | Codec Data Augmentation for Time-domain Heart Sound Classification | Heart auscultations are a low-cost and effective way of detecting valvular
heart diseases early, which can save lives. Nevertheless, it has been difficult
to scale this screening method since the effectiveness of auscultations is
dependent on the skill of doctors. As such, there has been increasing research
interest in the automatic classification of heart sounds using deep learning
algorithms. However, it is currently difficult to develop good heart sound
classification models due to the limited data available for training. In this
work, we propose a simple time domain approach, to the heart sound
classification problem with a base classification error rate of 0.8 and show
that augmentation of the data through codec simulation can improve the
classification error rate to 0.2. With data augmentation, our approach
outperforms the existing time-domain CNN-BiLSTM baseline model. Critically, our
experiments show that codec data augmentation is effective in getting around
the data limitation. | Ansh Mishra, Jia Qi Yip, Eng Siong Chng | 2023-09-14T06:47:21Z | http://arxiv.org/abs/2309.07466v1 | # Codec Data Augmentation for Time-domain Heart Sound Classification
###### Abstract
Heart auscultations are a low-cost and effective way of detecting valvular heart diseases early, which can save lives. Nevertheless, it has been difficult to scale this screening method since the effectiveness of auscultations is dependent on the skill of doctors. As such, there has been increasing research interest in the automatic classification of heart sounds using deep learning algorithms. However, it is currently difficult to develop good heart sound classification models due to the limited data available for training. In this work, we propose a simple time domain approach, to the heart sound classification problem with a base classification error rate of 0.8 and show that augmentation of the data through codec simulation can improve the classification error rate to 0.2. With data augmentation, our approach outperforms the existing time-domain CNN-BiLSTM baseline model. Critically, our experiments show that codec data augmentation is effective in getting around the data limitation.
heart sound classification, heart auscultation, phonocardiogram, deep learning, audio classification
## I Introduction
Cardiovascular diseases are a leading cause of death around the world [1]. Out of the many cardiovascular diseases, valvular heart disease is a common type of life-threatening disease [2] and early detection plays a key role in improving patient outcomes [3]. Many cardiac conditions, especially valvular heart diseases, are first picked up on cardiac auscultation. The purpose of cardiac auscultation is to characterize heart sounds and murmurs which can indicate CVDs. With the rise of digital stethoscopes that can convert heart sounds into digital phonocardiogram (PCG) signals for storage and analysis, there have also been efforts to perform automated classification of heart sounds. Compared to other techniques for detecting heart murmurs such as Echocardiogram, Cardiac magnetic resonance imaging, and computed tomography scans, collecting a PCG signal through a digital stethoscope has significant cost advantages [4]. As such, PCG-based classification of heart sounds remains an important avenue of research. Despite recent advances, heart sound classification research has been held back by the limited amount of clean, annotated heart murmur PCG data available to the public [6]. Although there have been some attempts to address this in recent years, with new murmur datasets being made public [7] and repositories like PhysioNet [8] that hosts this data, the amount of PCG data available pales in comparison to other audio datasets such as AudioSet [9] and speech-specific ones like VoxCeleb [10], where impressive performance has been achieved.
One of the ways the problem of limited data can be overcome is through data augmentation. Our data augmentation approach is outlined in Figure 1 and the details of the implementation of the data augmentation is outlined in Section II-B.
This work focuses on improving the performance of time-domain classifiers on the Yaseen 2018 (Y-18) dataset [5] which is popular due to its quality and balanced representation of various murmurs. Since the dataset was published, the classification error rate (CER) of models on the Y-18 dataset under a 10-fold cross-validation (CV) approach has reached as low as 0.10 in the frequency domain case. However, the best model under the time-domain approach remains at 0.68 CER.
In this paper, we report heart sound classification results obtained with the M5 model [11] using a time-domain approach, which achieves a CER of 0.8 without data augmentation. Then we use the codec simulation data augmentation approach reported in [12] and see an improvement in performance to a CER of 0.2. This outperforms the existing baseline of 0.68 and validates the use of codec simulation in augmenting PCG data.
Fig. 1: The codec data augmentation strategy. The original data set, Yaseen 2018 [5] (Y-18) is passed through a codec simulation at high compression to introduce distortions in the data to produce the Augmented Y-18 dataset.
## II Methodology
### _Yaseen Dataset_
The Yaseen Dataset [5] is a public dataset that consists of 1000 recordings of heart sounds evenly distributed across 5 categories, as shown in table I. The 5 categories are: normal (N), aortic stenosis (AS), mitral stenosis (MS), mitral regurgitation (MR) and mitral valve prolapse (MVP). The data was collected by the authors of [5] from myriad online sources and processed and aligned through downsampling to 8 kHz and conversion to a single channel. Some of these sources include medical textbooks and online websites. The length of the audio files ranges from 1 second to 4 seconds.
Compared to other public datasets such as the PASCAL 2011 [13] and CirCor Digiscope 2022 [7] datasets, the Y-18 dataset offers the advantage of being a balanced dataset across each of the categories of heart murmurs. The different categories and the differences in their waveforms are shown in Figure 2. Each of the categories has a distinct waveform which can be seen in the plot. A heartbeat consists of two peaks in the audio waveform, forming a "lub" and "dub" sound. These are referred to as the S1 and S2 peaks respectively. The S1 and S2 peaks can be clearly seen in the plot of the normal recording, while they are harder to spot in the abnormal cases.
### _Codec data augmentation_
The use of codec simulation to improve audio classification accuracy was first reported on an automatic speech recognition task by the authors of [12]. It was found that by running an audio recording through a codec simulation, Word-Error-Rate (WER) could be improved by 7.28% - 12.78% when compared to a strong baseline [12].
In this work, codec augmentation is performed using the ffmpeg package to simulate the codec. The settings for the codec simulation used is the Opus (OGG) format with bitrates of 4.5k, 5.5k, and 7.7k. We make use of high compression codec with low bitrate to increase the level of distortion in the training data so that we can improve the overall classification accuracy. The codec simulation is implemented in a two-step process in the command line as follows:
ffmpeg -i <input_file>.wav -c:a libopus -b:a <bitrate> temp_file.ogg
ffmpeg -i temp_file.ogg -ar 8000 <output_file>.wav
Fig. 3: Comparison of a sample MVP PCG signal before and after the codec data augmentation. The most compressed ogg 4.5k bitrate codec is used here for illustration and the spectrogram is plotted for an easier visualization of the differences. The original spectrogram is shown in the top image while the spectrogram after the codec simulation is shown in the bottom image. After passing through the codec there is more smearing observed throughout the spectrogram. However, it is most obvious in the area highlighted in the red box, where the initial banding pattern can almost no longer be seen due to the increase in the noise.
Fig. 2: Waveform plots of the 5 categories of heart sounds. Each of the 5 categories, Normal (Top Row), Mitral Valve Prolapse (Middle Row Left), Mitral Stenosis (Middle Row Right), Mitral Regurgitation (Bottom Row Left), and Aortic Stenosis (Bottom Row Right), has features that distinguish it from the others. From the waveforms, we can also see that in the normal heart sound the S1 and S2 sounds are clearly visible, but for the abnormal heart sounds these S1 and S2 peaks cannot always easily be visually identified, especially in the case of AS and MR shown in the bottom row.
The process can also be performed within a Python script by using the subprocess package. In this case, we used a Python script to loop over the list of chosen bitrates for all files in the Y-18 dataset to create the final augmented Y-18 dataset. While we only used the ogg codec in this study, this process can also be generalized to include more codecs.
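A minimal version of such a script is sketched below; the directory layout and file naming are placeholders, while the codec settings follow the two ffmpeg commands given above.

```python
import subprocess
from pathlib import Path

BITRATES = ["4.5k", "5.5k", "7.7k"]            # the three Opus bitrates used
SRC, DST = Path("y18_original"), Path("y18_augmented")   # placeholder folders

for wav in sorted(SRC.rglob("*.wav")):
    for bitrate in BITRATES:
        out = DST / bitrate / wav.relative_to(SRC)
        out.parent.mkdir(parents=True, exist_ok=True)
        tmp = out.with_suffix(".ogg")
        # step 1: encode with the Opus codec at a low, lossy bitrate
        subprocess.run(["ffmpeg", "-y", "-i", str(wav), "-c:a", "libopus",
                        "-b:a", bitrate, str(tmp)], check=True)
        # step 2: decode back to an 8 kHz wav file
        subprocess.run(["ffmpeg", "-y", "-i", str(tmp), "-ar", "8000", str(out)],
                       check=True)
        tmp.unlink()                           # drop the temporary ogg file
```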
The distortion created by the codec simulation can be visualized using a spectrogram. In Figure 3 we show the spectrogram of an MVP PCG signal in its original form compared to its distorted form. We can see that the codec simulation does indeed result in some smearing on the spectrogram and the loss of some of the PCG signals. This makes the task of the classifier more difficult and thus should guide it toward extracting more general features that are not impacted by the distortions.
Overall, using the codec simulation at 3 different bitrates, we create 3 additional copies of the Y-18 training dataset, resulting in 1000 original PCG recordings and 3000 augmented PCG recordings. All 4000 PCG recordings are used in the training of the model under the augmentation training regime. This set of data augmentations was selected based on the hardest settings reported by the authors of [12], which we believe serve as the strongest augmentation.
### _Model_
The model used in this work is a simple time-domain convolutional model. The model consists of 4 convolutional blocks, followed by a linear classifier layer, as shown in Figure 4. This architecture was first reported in [11] but we have adapted the number of channels and the output size of the model to be suitable for the heart sound classification task.
The size of the output layer was set to 5 to match the number of classes in the dataset. The stride of the first convolutional layer was set to 16 with a kernel size of 80 to reduce the size of the input into the rest of the model and act as a time-domain encoder. The channel dimension of the first layer was set to 32 which is increased to 64 in the third layer. The pooling and batch normalization layers are implemented as per [11]. Despite the simplicity of this model, it achieves impressive performance even when compared to a previously reported CNN-BiLSTM [14] model.
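For concreteness, a PyTorch sketch of this adapted M5 architecture is given below. The first-layer stride, kernel size, channel counts of the first and third blocks, and the number of outputs follow the description above; the kernel and pooling sizes of the remaining blocks, the use of global average pooling before the classifier, and the assumption of fixed-length (4 s at 2 kHz) inputs are our own choices for illustration.

```python
import torch
import torch.nn as nn

class M5(nn.Module):
    """Four Conv1d -> BatchNorm -> ReLU -> MaxPool blocks on the raw waveform,
    followed by global average pooling and a 5-way linear classifier."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=16),   # time-domain encoder
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 32, kernel_size=3),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3),               # channels doubled here
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 64, kernel_size=3),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, samples), raw PCG
        x = self.features(x)
        x = x.mean(dim=-1)                 # global average pooling over time
        return self.classifier(x)          # unnormalised class logits

x = torch.randn(8, 1, 8000)                # 4 s recordings at 2 kHz, fixed length
print(M5()(x).shape)                       # torch.Size([8, 5])
```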
### _Training Methodology_
For all results, we perform 10-fold cross-validation and report the average CER across all 10 folds, in alignment with the relevant comparison models in the literature. The models are trained with a softmax output and cross-entropy loss. Additionally, we downsample the recordings to 2kHz as a pre-processing step before passing the PCG signal to the model. The batch size used in the training is 5 and the optimizer used is the Adam optimizer [15] with a learning rate of 0.0005 and a weight decay of 0.0001.
On the use of data, this study compares the performance of two different training regimes: one without data augmentation and one with data augmentation. Firstly, in the unaugmented training regime, we use only the 1000 original PCG recordings from the Y-18 dataset, while in the augmented training regime, as mentioned in section II-B we used a combination of the original Y-18 dataset and the Y-18 dataset after running through the codec at 3 different bitrates.
During testing, the classification results on the original data and the data that has been run through the codec are calculated separately, to ensure compatibility with the results that have already been reported in the literature.
## III Experiments
### _Comparison with baseline_
In this section, we compare the results of the M5 models under the two training configurations with the results reported in the literature. These comparisons are reported in Table II.
The most common data sampling approach is cross-validation, with the original Y-18 authors [5] using 5-fold cross-validation for their initial baselines, although subsequent authors have used a 10-fold cross-validation approach. Aside from cross-validation, the authors of [16] use a simple split of 70% training and 30% validation data split (70-30), which is probably not advisable for a small dataset like Y18. In contrast, [14] uses an innovative multi-round transfer learning (MRTL) approach and performs numerous comparisons across multiple computer vision models and achieves very good results across all of them.
Among the models that make use of 10-fold comparisons, where the results are comparable, there are two types of approaches. The time-domain approach uses the raw audio waveform for the classification. The frequency-domain approach first converts the audio waveform into a series of spectrograms, which can be a beneficial feature engineering step to improve the model performance. However, the frequency domain approach has a small disadvantage during implementation as the Fast Fourier Transform operation can sometimes be costly depending on the approach. Nevertheless, the frequency-domain approach currently outperforms the time-domain approach. The best time-domain approach using a CNN-BiLSTM model [18] has a
CER of 0.68, while the best frequency-domain approach has a CER of only 0.1 using a Vision Transformer [14].
The M5 model using the baseline training configuration underperforms the CNN-BiLSTM model with a CER of 0.80, however, with the codec simulation augmented training dataset the M5 model can outperform the CNN-BiLSTM model. This result thus shows the importance of data augmentation and the effectiveness of our codec simulation data augmentation approach.
### _Analysis Across Codec and Original testing datasets_
In this section, we report the performance of the M5 model under both training configurations and their respective validation sets. While training is performed with both the original and augmented data, the testing is done on the original data and the codec data separately to maintain comparability with the previous models. The results of these experiments are shown in Table III.
In the training configuration with the M5 model and no data augmentation, we obtain an original CER of 0.8 and codec CER of 1.63. In this case, the model has seen the original Y-18 data but not the codec augmented data. The performance difference between these two CERs is likely due to the distortions introduced by the codec simulation.
In the training configuration with the M5 model and data augmentation, we see that both the original CER and the codec CER improves. On the codec CER, the large performance improvement from 1.63 to 0.57 is likely due to the model having now seen the codec data during its training as well. On the other hand, the improvement of the original CER shows that the codec-augmented data in the training of the model can help guide the model towards using better and more general features in the classification, which improves its generalization performance.
## IV Discussion
The result of the M5 model on the Y-18 dataset as reported in Table II shows that a simple deep convolutional neural network using the time-domain approach can be competitive with much more complicated models like the Vision Transformer. This also brings the time-domain approach to a level that is competitive with frequency-domain approaches such as the Vision Transformer. In future work, we intend to attempt classification using transformer-based time-domain approaches such as ACA-Net [20].
The high performance of all models across the literature suggests that there is room for further increasing the dataset size to make the task more difficult. One way this can be done is through training the model on the Y-18 dataset and testing the model on other heart sound datasets that have been collected. This however creates some significant issues due to out-of-domain noise profiles, but can be an avenue for further research as well.
Furthermore, it would be beneficial to evaluate the models on real-world, clinical data to assess their performance in practical settings. Clinical data often presents additional challenges such as varying recording conditions, patient demographics, and the presence of other pathological conditions. Evaluating the models under these conditions would provide a more realistic assessment of their effectiveness.
## V Conclusion
In this work we have shown that data augmentation of heart sounds through codec simulation is an effective method for improving the classification of heart sounds on the Y-18 dataset. Using the M5 model, we also show that it is possible to improve the accuracy of the time-domain classification approach to be competitive with the frequency-domain models. Specifically, our data augmentation strategy improves the CER of the M5 model from 0.8 to 0.2. On transmitted audio segments the improvement is even greater from 1.63 to 0.57. Overall this validates the codec simulation approach as an effective data augmentation approach towards addressing the problem of limited data availability in the field of heart sound classification.
Fig. 4: Detailed view of the model used in the work. The M5 model consists of 4 convolution blocks followed by the classification layer. Each convolution block consists of a single 1D convolutional layer, followed by batch normalization, a Rectified Linear Unit activation (ReLU), and max pooling. The parts of the model that have trainable parameters are indicated in blue while the non-trainable functions are indicated in grey.
## VI Acknowledgements
This research is supported by ST Engineering Mission Software & Services Pte. Ltd under a collaboration programme (Research Collaboration No: REQ0149132). We would like to acknowledge the High Performance Computing Centre of Nanyang Technological University Singapore, for providing the computing resources, facilities, and services that have contributed significantly to this work.
|
2308.00020 | Mapping out the parameter space for photoevaporation and core-powered
mass-loss | Understanding atmospheric escape in close-in exoplanets is critical to
interpreting their evolution. We map out the parameter space over which
photoevaporation and core-powered mass loss dominate atmospheric escape.
Generally, the transition between the two regimes is determined by the location
of the Bondi radius (i.e. the sonic point of core-powered outflow) relative to
the penetration depth of XUV photons. Photoevaporation dominates the loss when
the XUV penetration depth lies inside the Bondi radius ($R_{XUV}<R_B$) and
core-powered mass-loss when XUV radiation is absorbed higher up in the flow
($R_B<R_{XUV}$). The transition between the two regimes occurs at a roughly
constant ratio of the planet's radius to its Bondi radius, with the exact value
depending logarithmically on planetary and stellar properties. In general,
core-powered mass-loss dominates for lower-gravity planets with higher
equilibrium temperatures, and photoevaporation dominates for higher-gravity
planets with lower equilibrium temperatures. However, planets can transition
between these two mass-loss regimes during their evolution, and core-powered
mass loss can ``enhance'' photo-evaporation over a significant region of
parameter space. Interestingly, a planet that is ultimately stripped by
core-powered mass-loss has likely only ever experienced core-powered mass-loss.
In contrast a planet that is ultimately stripped by photoevaporation could have
experienced an early phase of core-powered mass-loss. Applying our results to
the observed super-Earth population suggests that it contains significant
fractions of planets where each mechanism controlled the final removal of the
H/He envelope, although photoevaporation appears to be responsible for the
final carving of the exoplanet radius-valley. | James E. Owen, Hilke E. Schlichting | 2023-07-31T18:00:00Z | http://arxiv.org/abs/2308.00020v2 | # Mapping out the parameter space for photoevaporation and core-powered mass-loss
###### Abstract
Understanding atmospheric escape in close-in exoplanets is critical to interpreting their evolution. We map out the parameter space over which photoevaporation and core-powered mass loss dominate atmospheric escape. Generally, the transition between the two regimes is determined by the location of the Bondi radius (i.e. the sonic point of core-powered outflow) relative to the penetration depth of XUV photons. Photoevaporation dominates the loss when the XUV penetration depth lies inside the Bondi radius (\(R_{XUV}<R_{B}\)) and core-powered mass-loss when XUV radiation is absorbed higher up in the flow (\(R_{B}<R_{XUV}\)). The transition between the two regimes occurs at a roughly constant ratio of the planet's radius to its Bondi radius, with the exact value depending logarithmically on planetary and stellar properties. In general, core-powered mass-loss dominates for lower-gravity planets with higher equilibrium temperatures, and photoevaporation dominates for higher-gravity planets with lower equilibrium temperatures. However, planets can transition between these two mass-loss regimes during their evolution, and core-powered mass loss can "enhance" photo-evaporation over a significant region of parameter space. Interestingly, a planet that is ultimately stripped by core-powered mass-loss has likely only ever experienced core-powered mass-loss. In contrast a planet that is ultimately stripped by photoevaporation could have experienced an early phase of core-powered mass-loss. Applying our results to the observed super-Earth population suggests that it contains significant fractions of planets where each mechanism controlled the final removal of the H/He envelope, although photoevaporation appears to be responsible for the final carving of the exoplanet radius-valley.
keywords: planets and satellites: atmospheres -- planets and satellites: physical evolution -- planet star interactions
## 1 Introduction
The discovery of the first close-in exoplanet around a main-sequence star (the hot Jupiter 51 Peg b, Mayor & Queloz, 1995) led to speculation that atmospheric escape may be important for driving the evolution of a planet's bulk properties (Burrows & Lunine, 1995; Baraffe et al., 2004). While it is now firmly established that atmospheric escape alone cannot cause sufficient mass-loss to affect the bulk of hot Jupiters (e.g. Hubbard et al., 2007), the discovery of lower mass planets gave rise to the idea that their primordial hydrogen dominated atmosphere could lose a significant fraction, or all their mass over a planet's billion year lifetime (e.g. Valencia et al., 2010; Owen & Jackson, 2012; Lopez et al., 2012; Ginzburg et al., 2018).
Under the assumption that all close-in, low-mass planets were born with voluminous hydrogen-dominated atmospheres that they accreted from their parent nebula, Owen & Wu (2013) studied the impact of atmospheric escape at the population level. This work demonstrated that atmospheric escape carved distinct features in the exoplanet population: firstly, the "evaporation desert" - a lack of intermediate (2-6 R\({}_{\oplus}\)) sized planets close to their host star, matching the observed hot Neptune desert (e.g. Szabo & Kiss, 2011; Lundkvist et al., 2016; Mazeh et al., 2016); and secondly a "evaporation valley" where planets which retained approximately \(\sim 1\%\) hydrogen by mass are separated in radius-period (and density) space from those planets that completely lost their atmospheres ending up as "stripped cores". This evaporation valley bears remarkable similarity to the observed exoplanet radius-gap initially identified in Kepler data (e.g. Fulton et al., 2017; Van Eylen et al., 2018; Fulton & Petigura, 2018; Ho & Van Eylen, 2023). These two features are generic in a planet population born with a hydrogen-dominated atmosphere undergoing atmospheric escape (e.g. Lopez & Fortney, 2013; Jin et al., 2014; Chen & Rogers, 2016; Ginzburg et al., 2018; Gupta & Schlichting, 2019; Wyatt et al., 2020); and are well understood to be a consequence of (any) efficient atmospheric escape and the mass-radius relationship for these planets (e.g. Owen & Wu, 2017; Gupta & Schlichting, 2019; Mordasini, 2020).
However, the details of the atmospheric escape process do matter. And although different escape models infer broadly similar properties about the underlying exoplanet population, they do differ, for example, in their inferred underlying exoplanet mass-distribution and inferred initial atmospheric mass fractions (e.g. Gupta & Schlichting, 2019, 2020; Rogers & Owen, 2021; Rogers et al., 2021). Furthermore, as we progress into the era of exoplanetary characterisation, understanding the details of the escape processes will become paramount since the fractionation of heavy species in a hydrogen-dominated outflow can be extremely sensitive to the details of the escape process
(e.g. Zahnle and Kasting, 1986). Thus, while the features imprinted in the exoplanet population due to different escape mechanisms may be subtle, the composition differences of any remaining primordial or secondary atmosphere are likely to be vastly more sensitive to the underlying physics of the escape mechanism (e.g. Misener and Schlichting, 2021).
Currently, atmospheric escape models for close-in exoplanets commonly fall into two generic classes: "photoevaporation" where the outflow is driven by heating from X-ray and UV stellar photons (e.g. Lammer et al., 2003; Yelle, 2004; Garcia Munoz, 2007; Murray-Clay et al., 2009; Owen and Jackson, 2012) and "core-powered mass-loss", where the outflow is driven by heating from the planet's cooling luminosity and stellar bolometric luminosity (e.g. Ginzburg et al., 2016; Gupta and Schlichting, 2019, 2020; Gupta et al., 2022). All the previously discussed evolutionary models only focus on the impact of one of these classes of escape models. The underlying principle of both models remains the same: heating drives a transonic, hydrodynamic outflow akin to a Parker wind (e.g. Parker, 1958). However, the outflows' mass-loss rates, temperature and ionization structures can be different, in some cases differing by orders of magnitude.
These different escape classes are not mutually exclusive: in the absence of XUV irradiation, a planet will naturally launch a core-powered outflow; on the other hand, under extreme XUV irradiation, photoevaporation will always occur as demonstrated in other areas of astrophysics (e.g. Begelman et al., 1983; Bertoldi and McKee, 1990; Owen et al., 2012). Therefore, the more pertinent question is under what conditions does atmospheric escape occur in a photoevaporative manner, and when does core-powered mass-loss occur? There has been some simulation work looking into the transition. When a temperature floor equivalent to bolometric stellar heating to the equilibrium temperature has been included (e.g. Kubyshkina et al., 2018), a transition between a bolometric outflow (at the equilibrium temperature) and photoevaporation did occur. Kubyshkina et al. (2018) found that this transition happened for lower values of the "escape parameter" (the ratio of a gas particle's binding energy to its thermal energy at the equilibrium temperature), with bolometric outflows were more persistent when the escape parameter was roughly smaller than 10. In addition, using the aiolos code that includes explicit XUV and bolometric heating from the star and planetary core, Schulik and Booth (2023) showed a calculation for a GJ 436 b-like planet that transitioned from core-powered mass-loss to photoevaporation as the XUV flux was increased relative to the bolometric flux. However, the governing physics underlying the transition between photoevaporation and core-powered mass-loss has not been studied, nor has an idea of when and where different escape mechanisms dominate both during an individual planet's lifetime and across the exoplanet population. Thus, in order to guide future expensive radiation-hydrodynamic simulations, here we use (semi-)analytical techniques to lay the physical foundations governing the transition between photoevaporation and core-powered mass-loss.
## 2 Problem construction
In order to gain insights into the problem, we consider the basic structure of a hydrogen-dominated envelope. Figure 1 shows a schematic of the temperature structure we are investigating. Deep in the planet's envelope, we model it to be convective, and due to the low internal luminosities, we approximate its structure as adiabatic. The atmosphere becomes radiative at the radiative-convective boundary (\(R_{\rm cph}\)). Again, due to the low internal luminosities, we approximate this radiative layer to be isothermal with a temperature set to the planet's equilibrium temperature (\(T_{\rm eq}\)). It is this isothermal layer that represents the outflowing core-powered mass-loss region. Below the planet's radius (\(R_{p}\), which we define as the \(\tau=1\) surface to outgoing thermal - IR - radiation), the outflow is mainly powered by the planet's internal luminosity. Above \(R_{p}\), the star's bolometric luminosity provides an additional energy source. XUV photons can penetrate the atmosphere to \(R_{\rm XUV}\), which we take to be the \(\tau=1\) surface to XUV photons. They heat the rarefied gas to high temperatures and drive a photoevaporative outflow. While only in the particular case of an XUV heated region in recombination-ionization equilibrium is the XUV region typically exactly isothermal, we can gain a lot of insights in the ensuing sections by considering the XUV heated region to be isothermal with a representative temperature \(T_{\rm pe}\).
An additional important length scale in the problem is the planet's "Bondi radius":
\[R_{B}=\frac{GM_{p}}{2c_{s}^{2}} \tag{1}\]
with \(G\) the gravitational constant, \(M_{p}\) the planet's mass and \(c_{s}\) the sound speed of the gas with a temperature equal to the equilibrium temperature. This radius represents the radius at which bolometrically heated gas becomes unbound from the planet and is equivalent to the sonic radius in a bolometrically powered isothermal outflow (i.e. the Bondi radius is equivalent to the sonic radius in the core-powered mass-loss regime).
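For reference, a short helper evaluating Eq. (1) in cgs units is sketched below; the example planet mass and equilibrium temperature are illustrative.

```python
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24        # cgs constants

def bondi_radius(M_p, T_eq, mu=2 * 1.673e-24):
    """Eq. (1): R_B = G M_p / (2 c_s^2), with c_s^2 = k_B T_eq / mu."""
    c_s2 = k_B * T_eq / mu
    return G * M_p / (2.0 * c_s2)

# e.g. a 5 Earth-mass planet at T_eq = 1000 K (illustrative numbers)
print(bondi_radius(5 * 5.972e27, 1000.0) / 6.371e8, "Earth radii")
```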
We show in the following sections that we can determine when atmospheric mass-loss transitions from core-powered to escape driven by photoevaporation by calculating the location of the Bondi radius relative to the penetration depth of XUV photons (see Figure 2). Photoevaporation dominates the loss when the XUV penetration depth lies inside the Bondi radius (\(R_{XUV}<R_{B}\)) and core-powered mass-loss when XUV radiation is absorbed higher up in the flow (\(R_{B}<R_{XUV}\)).
In this hydrogen-dominated envelope, we approximate the mean particle mass \(\mu\), as \(2m_{h}\) (with \(m_{h}\) the mass of the hydrogen atom) in the bolometrically heated region, \(\mu=m_{h}\) in the XUV heated region and \(\mu=m_{h}/2\) in any region in recombination-ionization equilibrium.
This work does not distinguish between core-powered mass-loss and "boil-off/spontaneous mass-loss" since they are both bolometrically heated outflows above the radiative-convective boundary. An evolutionary model is needed to determine the radiative-convective boundary's energy supply, allowing core-powered mass-loss and boil-off/spontaneous mass-loss to be distinguished. Throughout this work, we use core-powered mass-loss to refer to this bolometrically powered outflow since we are primarily concerned with the late-time evolution of planets after disc dispersal. Although, all our criteria could equally be applied to the transition between photoevaporation and boil-off/spontaneous mass-loss.
### 2.1 Core-powered mass-loss or photoevaporation?
Using this atmosphere/outflow structure, we can now begin to consider which mass-loss mechanism dominates. More specifically, we wish to determine whether the physics of photoevaporation or core-powered mass-loss ultimately sets the mass-loss rates.
A strongly irradiated planet that receives _no_ XUV flux from its host star will naturally produce an approximately isothermal outflow at roughly the planet's equilibrium temperature, hence a "core-powered" outflow. Without XUV radiation, this outflow can still be shut off if it becomes too rarefied and the upper atmosphere is no
longer collisional, meaning the hydrodynamic approximation is invalid. A hydrodynamic picture is applicable when the mean free path of the individual particles is smaller than the flow scale or:
\[\frac{1}{n\sigma_{\rm col}}\la\frac{\partial r}{\partial\log P}. \tag{2}\]
where \(n\) is the number density, \(\sigma_{\rm col}\) is the collisional cross-section, \(r\) is the radius from the centre of the planet and \(P\) is the gas pressure. For a hydrodynamic outflow, this condition is required to be satisfied everywhere inside the sonic point. Thus, core-powered mass-loss can be shut off if the inequality in Equation 2 becomes invalid at the sonic point\({}^{1}\). At the sonic point (\(R_{s}=R_{B}=GM_{p}/2c_{s}^{2}\)) of an isothermal Parker wind, the flow length scale is \(R_{s}/3\). Re-writing this inequality in terms of mass-loss rate (\(\dot{M}\)) we find:
Footnote 1: Since density decreases with distance from the planet and the scale height increases the inequality in Equation 2 breaks down at large distances first.
\[\dot{M}>\frac{12\mu\pi c_{s}R_{B}}{\sigma_{\rm col}}. \tag{3}\]
The core-powered mass-loss rate can be written as:
\[\dot{M}_{\rm CP}=4\pi R_{\rm p}^{2}\rho_{\rm phot}c_{s}\mathcal{M}(R_{\rm p}/R_{B}), \tag{4}\]
where \(\rho_{\rm photc}\) is the density at the planet's photosphere and \(\mathcal{M}\) is the Mach number of the flow. Now, using the fact that in hydrostatic equilibrium (an adequate approximation below the sonic point) we can write the density at the photosphere to the outgoing thermal radiation as:
\[\rho_{\rm phot}\approx\frac{g}{c_{s}^{2}\kappa_{\rm IR}}=\frac{2R_{B}}{R_{p}^ {2}\kappa_{\rm IR}}, \tag{5}\]
where \(g\) is the strength of the planet's gravitational acceleration at the photosphere and \(\kappa_{\rm IR}\) is the opacity to outgoing thermal, IR radiation. Substituting Equation 5 into Equation 4 gives \(\dot{M}_{\rm CP}=8\pi R_{B}c_{s}\mathcal{M}/\kappa_{\rm IR}\); comparing this with Equation 3, we arrive at the simple condition for the case in which the outflow is just collisional at the sonic point:
\[\mathcal{M}_{\rm phot}\ga\frac{3\sigma_{\rm IR}}{2\sigma_{\rm col}}\sim 5\times 10^{-10}\,\left(\frac{\kappa_{\rm IR}}{10^{-2}\ {\rm cm}^{2}\ {\rm g}^{-1}}\right)\left(\frac{\sigma_{\rm col}}{10^{-16}\ {\rm cm}^{2}}\right)^{-1}, \tag{6}\]
where \(\sigma_{IR}\) is the absorption cross-section to IR radiation. This "launch" Mach number is a unique function of \(R_{s}/R_{\rm p}\), or what's often called the "escape parameter" in other contexts. We find that core-powered mass-loss will shut down for \(R_{s}/R_{\rm p}\ga 14\), for canonical values of the IR opacity and collision cross-section.
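To illustrate how this threshold translates into the quoted value of \(R_{s}/R_{\rm p}\approx 14\), the sketch below solves for the escape parameter at which the launch Mach number of an isothermal Parker wind falls to the collisionality threshold of Equation 6. The Parker-wind expression used, \(\mathcal{M}\approx(R_{B}/R_{p})^{2}\exp(3/2-2R_{B}/R_{p})\), is the standard result (essentially Equation 22 below with its order-unity factor retained), and the cross-sections, opacity and \(\mu=2m_{h}\) are the canonical values assumed in the text.

```python
import numpy as np
from scipy.optimize import brentq

mH = 1.673e-24                      # g
kappa_ir  = 1.0e-2                  # cm^2 g^-1, canonical IR opacity
sigma_col = 1.0e-16                 # cm^2, canonical collisional cross-section
sigma_ir  = 2.0 * mH * kappa_ir     # IR cross-section per particle, assuming mu = 2 m_H

# Threshold launch Mach number for the flow to remain collisional (Equation 6)
mach_min = 1.5 * sigma_ir / sigma_col           # ~5e-10

# Launch Mach number of an isothermal Parker wind with sonic radius R_B,
# as a function of x = R_B/R_p (order-unity factor exp(3/2) retained)
mach_launch = lambda x: x**2 * np.exp(1.5 - 2.0 * x)

x_crit = brentq(lambda x: mach_launch(x) - mach_min, 2.0, 50.0)
print(f"core-powered mass-loss shuts down for R_B/R_p > ~{x_crit:.0f}")   # ~14
```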
However, a core-powered outflow is likely to switch to a photoevaporative one if the core-powered outflow can be penetrated by XUV radiation before the sonic point. Ultimately, this means that if:
\[n\sigma_{\rm XUV}\frac{\partial r}{\partial\log P}\la 1 \tag{7}\]
at the sonic point (with \(\sigma_{\rm XUV}\) the absorption cross-section to XUV radiation), the flow will likely become photoevaporative. This condition is essentially the same as in Equation 2, except the collision cross-section has been replaced with the cross-section to absorb ionizing photons. Since \(\sigma_{\rm XUV}\sim 10^{-18}\ {\rm cm}^{2}\) in the case of EUV photons and \(\sim 10^{-21}-10^{-22}\ {\rm cm}^{2}\) in the case of soft X-rays (for Solar metallicity gas), this means a core-powered mass-loss outflow will always be penetrated by XUV radiation interior to the sonic point before it transitions to Jeans escape. This insight is an important conclusion, as it means the breakdown of core-powered mass-loss is not primarily controlled by the outflow becoming collisionless. Thus, the transition to a collisionless Jeans escape-like outflow is more likely to occur in an XUV-heated region. This conclusion only breaks down when the XUV irradiation provides insufficient heating, as discussed in Section 2.1.3.
Therefore, the transition between core-powered mass-loss and photoevaporation will primarily arise from whether XUV photons can penetrate a core-powered mass-loss outflow sufficiently deeply to affect the mass-loss rate. As discussed by Bean et al. (2021), XUV photons can only affect the outflow if they are absorbed interior to the planet's Bondi radius, where the gas is moving sub-sonically. If they were absorbed exterior to the Bondi radius, the bolometrically heated gas would already be travelling super-sonically. Information cannot propagate upstream in a super-sonic hydrodynamic outflow, so the XUV photons would not contribute to the mass-loss rates. Thus, a planet undergoing core-powered mass-loss has its Bondi radius residing interior to the depth to which XUV photons can penetrate. A planet undergoing photoevaporation has its Bondi radius within the XUV-heated region. We also highlight the case of "enhanced" photoevaporation, where the Bondi radius resides outside \(R_{\rm XUV}\); however, the bolometrically heated region extends well beyond the planet's radius (i.e. \(R_{\rm XUV}\gg R_{p}\)). In this case, the bolometrically heated region allows the planet to intercept more stellar XUV photons, resulting in higher mass-loss rates (and is sometimes parameterised in energy-limited models via a radius enhancement factor; e.g. Lammer et al., 2003; Baraffe et al., 2004; Owen, 2019)\({}^{2}\). This penetration depth argument is similar to the discussion of X-ray compared to EUV-driven photoevaporation (e.g. Owen & Jackson, 2012), and has a well-established theoretical framework in disc photoevaporation (e.g. Johnstone et al., 1998; Richling & Yorke, 2000; Owen et al., 2012).
Footnote 2: Some previous photoevaporation evolutionary models have implicitly included this “enhanced photoevaporation”, whereas others have not.
In the following sub-sections, we expand on the above discussion,
Figure 1: A schematic of the temperature structure of a planet's envelope/atmosphere. This temperature structure is shown in units of the planet's equilibrium temperature. The temperature structure is adiabatic (thick dashed line) deep in the planet's envelope. At the radiative-convective boundary \(R_{\rm rcb}\), it becomes radiative and approximately isothermal. Once the XUV photons can penetrate the atmosphere at \(R_{\rm XUV}\), a photoevaporative outflow is launched. We also show \(T_{\rm pe}\), a representative XUV heated photoevaporative outflow temperature. The planet's radius (\(R_{p}\)) and the optical transit radius lie between \(R_{\rm rcb}\) and \(R_{\rm XUV}\).
and analytically derive the various transition criteria, laying the theoretical foundations for our later numerical computations. We provide a physically motivated discussion of these insights in Section 2.2, before proceeding with the numerical solutions.
#### 2.1.1 Penetration by XUV radiation
We have now identified one of the main reasons why core-powered mass-loss switches to a photoevaporative outflow: that the isothermal region heated by the planet's internal and stellar bolometric radiation is penetrated by XUV radiation, launching a generally more powerful photoevaporative outflow. Thus, the criterion to switch to a photoevaporative flow due to the penetration of XUV photons is approximately:
\[\int_{R_{B}}^{\infty}n_{\rm pe}(r)\sigma_{\rm XUV}{\rm d}r\lesssim 1\,, \tag{8}\]
where \(n_{\rm pe}\) is the number density in the photoevaporative region. The limiting case is determined by the point where the photoevaporative outflow is launched from the Bondi radius (\(R_{B}\), the sonic point of the core-powered mass-loss outflow)\({}^{3}\); in this case, the photoevaporative outflow is already trans-sonic due to its higher temperature. Thus, in this limiting case, we assume the outflow velocity outside \(R_{B}\) is approximately constant, and the density profile falls off as \(n_{\rm pe}\propto 1/r^{2}\). In addition, we assume that the gas is predominately neutral near \(R_{B}\) and ignore the effects of ionization (which we will treat later). Under these simplifications Equation 8 becomes:
Footnote 3: In the following, we use \(c_{s}\) to refer to the sound speed in the bolometrically heated region.
\[n_{\rm pe}(R_{B})\sigma_{\rm XUV}R_{B}=1 \tag{9}\]
Assuming that the photoevaporative outflow is travelling with a velocity equal to the sound speed in the XUV heated gas (\(c_{\rm pe}\)), then momentum balance across the transition to the photoevaporative outflow implies:
\[\rho_{\rm eq}(R_{B})c_{s}^{2}\approx 2\rho_{\rm pe}(R_{B})c_{\rm pe}^{2} \tag{10}\]
where \(\rho_{\rm pe}\) is the mass density in the photoevaporative region. It is easy to show, enforcing both mass and momentum conservation, that the bolometrically heated region is strongly sub-sonic even in the case \(R_{\rm XUV}=R_{B}\). Thus neglecting its momentum flux allows us to relate \(n_{\rm pe}\) to \(\rho_{\rm phot}\) and ultimately to the planetary properties. Again, as the bolometrically heated region is moving sub-sonically, its density profile as a function of radius (\(r\)) is given approximately by the hydrostatic solution:
\[\rho_{\rm eq}(r)=\rho_{\rm phot}\exp\left[\frac{2R_{B}}{R_{p}}\left(\frac{R_{ p}}{r}-1\right)\right] \tag{11}\]
Combining Equations 9-11 with Equation 5, we arrive at a transcendental equation that describes the transition from core-powered mass-loss to photoevaporation:
\[2\left(\frac{R_{B}}{R_{p}}\right)^{2}\exp\left[2\left(1-\frac{R_{B}}{R_{p}} \right)\right]\left(\frac{c_{s}}{c_{\rm pe}}\right)^{2}\left(\frac{\sigma_{ \rm XUV}}{\sigma_{\rm IR}}\right)=1 \tag{12}\]
where we have assumed the base of the photoevaporative outflow is atomic gas and the regions around the IR photosphere are molecular, and there is a factor two change in the mean molecular weight between
Figure 2: Schematic depicting the mass-loss regimes for "photoevaporation", "enhanced photoevaporation" and "core-powered mass-loss". A planet that is undergoing photoevaporation has its Bondi radius (\(R_{B}\)) located outside the region at which XUV photons can penetrate the envelope (\(R_{XUV}\)) such that \(R_{s}<R_{B}\). Whereas a planet undergoing core-powered mass-loss has its Bondi radius interior to the XUV penetration depth (\(R_{B}<R_{XUV}\)) and \(R_{s}=R_{B}\). Even if an outflow is primarily controlled by XUV heating and \(R_{XUV}<R_{B}\), heating from the planet's core and the bolometric luminosity of its host star can enhance the ability of a planet to absorb XUV irradiation by pushing the XUV radiation to larger heights and thus driving a more powerful photoevaporative outflow. As a result, when \(R_{XUV}\gg R_{p}\), core-powered mass-loss and photoevaporation work in concert to drive "enhanced" photoevaporative outflows (middle panel). XUV photons and optical photons from the host star are shown in blue and orange, respectively. The IR/optical radiation from the planet's interior is shown in red.
these two positions. Thus, we see the transition point can be described in terms of the ratio \(R_{B}/R_{\rm p}\) (or the escape parameter). Since any bound planet will require \(R_{\rm p}\ll R_{B}\), we can expand Equation 12 to find:
\[\frac{R_{B}}{R_{\rm p}}\approx\log\left[\sqrt{2}e\left(\frac{c_{s}}{c_{\rm pe}} \right)\sqrt{\frac{\sigma_{\rm XUV}}{\sigma_{\rm IR}}}\right]+\mathcal{O}(1) \sim 9+\mathcal{O}(1) \tag{13}\]
where the final approximate value is determined by substituting a sound-speed ratio of \(c_{\rm pe}/c_{s}\sim 5\), \(\sigma_{\rm XUV}\sim 2\times 10^{-18}\) cm\({}^{2}\) and \(\kappa_{\rm IR}=10^{-2}\) cm\({}^{2}\) g\({}^{-1}\). We caution that this and the following expansions are only good to order unity\({}^{4}\), as evidenced by the \(\sim 10-20\%\) difference between using this approximation and explicitly solving Equation 12. Thus, Equation 13 is a rough but instructive guide, especially considering the approximations made to arrive at the result. In particular, if one were to assume soft X-rays drive the photoevaporative outflow, one would find a value of \(R_{B}/R_{\rm p}\) closer to \(\sim\)6 (see Section 4.3). However, this result does imply that core-powered mass-loss is more applicable for low-mass, puffy planets, while photoevaporation applies to higher-mass, denser planets, in agreement with previous simulation results (e.g. Kubyshkina et al., 2018).
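The transcendental Equation 12 is straightforward to solve numerically and to compare against the expansion in Equation 13. The sketch below does exactly that, assuming \(c_{\rm pe}/c_{s}=5\), \(\sigma_{\rm XUV}=2\times 10^{-18}\) cm\({}^{2}\) and \(\kappa_{\rm IR}=10^{-2}\) cm\({}^{2}\) g\({}^{-1}\); converting the opacity into a per-particle cross-section via \(\sigma_{\rm IR}=2m_{h}\kappa_{\rm IR}\) is an assumption made here for illustration.

```python
import numpy as np
from scipy.optimize import brentq

mH = 1.673e-24
sigma_xuv   = 2.0e-18               # cm^2, EUV absorption cross-section
sigma_ir    = 2.0 * mH * 1.0e-2     # cm^2, assuming kappa_IR = 1e-2 cm^2/g and mu = 2 m_H
cs_over_cpe = 1.0 / 5.0             # assumed ratio of bolometric to photoevaporative sound speed
ratio = sigma_xuv / sigma_ir

# Equation 12, written as f(x) = 0 with x = R_B / R_p
eq12 = lambda x: 2.0 * x**2 * np.exp(2.0 * (1.0 - x)) * cs_over_cpe**2 * ratio - 1.0
x_exact = brentq(eq12, 1.5, 50.0)

# Equation 13: leading-order expansion, which drops the slowly varying log(x) term
x_approx = np.log(np.sqrt(2.0) * np.e * cs_over_cpe * np.sqrt(ratio))

print(f"Equation 12 (numerical): R_B/R_p = {x_exact:.1f}")
print(f"Equation 13 (expansion): R_B/R_p = {x_approx:.1f}")
```

The gap between the two values is a reminder that the expansions in this section should only be read as order-unity guides.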
#### 2.1.2 Ionization-recombination balance
In the previous case, we assumed that the gas in the vicinity of \(R_{\rm XUV}\) was predominately neutral. However, at sufficiently high EUV fluxes, the gas can become highly ionized and thus transparent to EUV photons. This allows EUV photons to penetrate deeper into the atmosphere. When the gas becomes highly ionized, recombination is frequent, and the XUV heated region reaches an ionization-recombination balance (e.g. Murray-Clay et al., 2009; Owen and Alvarez, 2016). As the recombination rate is slower than the Lyman-\(\alpha\) cooling rate, this thermostats the gas to \(\sim 10^{4}\) K. Thus, one can calculate the position of \(R_{\rm XUV}\) through a Stromgren volume argument (e.g. Bertoldi and McKee, 1990). Following Murray-Clay et al. (2009) and Owen and Alvarez (2016), the density at \(R_{\rm XUV}\) when \(R_{\rm XUV}=R_{B}\) (i.e. when the atmosphere's penetration depth to XUV photons equals the sonic radius of the core-powered mass-loss outflow) can be found by balancing ionizations with recombinations locally, using the on-the-spot approximation:
\[\frac{F_{\rm XUV}}{h\bar{\nu}}=\phi_{\rm XUV}=\int_{R_{B}}^{\infty}\alpha_{B}n_{\rm pe}^{2}(r)\mathrm{d}r \tag{14}\]
where \(\phi_{\rm XUV}\) is the ionizing flux in photons per unit time per unit area, and \(\alpha_{B}\) is the case-B recombination coefficient. The photon flux is related to the energy flux (\(F_{\rm XUV}\)) in terms of a representative photon energy (\(h\bar{\nu}\)), which we choose to be 20 eV throughout this work. Following the same steps as in Section 2.1.1, specifically assuming \(n_{\rm pe}\propto 1/r^{2}\), adopting momentum balance across \(R_{\rm XUV}\) (Equation 10) and taking the bolometrically heated region to have a hydrostatic density profile (Equation 11), we arrive at the following criterion for core-powered mass-loss to transition to photoevaporation, assuming photoevaporation takes place in the recombination limit:
\[\phi_{\rm XUV}=\frac{\alpha_{B}}{3\sigma_{IR}^{2}R_{\rm p}}\left(\frac{\mu_{ \rm eq}}{\mu_{\rm pe}}\right)^{2}\left(\frac{c_{s}}{c_{\rm pe}}\right)^{4} \left(\frac{R_{B}}{R_{\rm p}}\right)^{3}\exp\left[4\left(1-\frac{R_{B}}{R_{\rm p }}\right)\right] \tag{15}\]
where \(\mu_{\rm pe}\) and \(\mu_{\rm eq}\) are the mean-molecular weights in the photoevaporative and bolometrically heated regions, respectively. The above expression can again be solved approximately by expansion to find:
\[\frac{R_{B}}{R_{\rm p}}\approx\frac{1}{4}\log\left[\frac{9e^{4}\alpha_{B}}{64 \sigma_{IR}^{2}\phi_{\rm XUV}R_{\rm p}}\left(\frac{c_{s}}{c_{\rm pe}}\right)^ {4}\left(\frac{\mu_{\rm eq}}{\mu_{\rm pe}}\right)^{2}\right]+\mathcal{O}(1) \sim 5+\mathcal{O}(1) \tag{16}\]
where the last estimate has been evaluated for \(F_{\rm EUV}=10^{4}\) erg s\({}^{-1}\) cm\({}^{-2}\), \(\kappa_{\rm IR}=10^{-2}\) cm\({}^{2}\) g\({}^{-1}\), \(\mu_{\rm pe}/\mu_{\rm eq}=1/4\), \(c_{\rm pe}/c_{s}=5\) and \(\alpha_{B}=2.6\times 10^{-13}\) cm\({}^{3}\) s\({}^{-1}\). This value is slightly smaller than evaluated in the standard penetration case (Equation 13) and can be understood in terms of the increased ability of EUV photons to penetrate into the atmosphere at high fluxes since they can ionize the gas, resulting in a longer photon mean-free path. However, like in Equation 13, the key result remains the same: the transition between photoevaporation and core-powered mass-loss occurs approximately at constant \(R_{p}/R_{B}\), although in this case, there is an explicit, albeit logarithmic dependence on the planet's radius.
#### 2.1.3 Heating Limitation
The previous analysis in Sections 2.1.1 and 2.1.2 _assumes_ that the ionizing radiation provides sufficient heating to drive a powerful flow. However, in the limit of a low ionizing flux, it may not provide any additional heating. This behaviour is demonstrated by Schulik and Booth (2023), who found a smooth transition from a core-powered outflow to a photoevaporative outflow as the ionizing flux increased at fixed bolometric flux.
We consider this ability to switch from core-powered mass-loss to photoevaporation to be a heating requirement. Namely, that the high-energy field has sufficient energy to drive a more powerful outflow than core-powered mass-loss. Thus, the transition occurs when the mass-loss rate provided by core-powered mass-loss is comparable to photoevaporation. To explore this transition approximately, we assume the photoevaporation rate is given by the commonly used "energy-limited" model (e.g. Baraffe et al., 2004; Erkaev et al., 2007), where:
\[\dot{M}_{pe}=\eta F_{\rm XUV}\frac{\pi R_{\rm XUV}^{3}}{4GM_{p}} \tag{17}\]
where \(\eta\) is the mass-loss efficiency. Thus, equating this mass-loss rate to that given by core-powered mass-loss \(\dot{M}_{\rm CP}\), we find the transition to photoevaporation occurs at a critical flux of:
\[F_{\rm XUV}=\frac{32GM_{p}\,R_{B}c_{s}}{\eta R_{\rm XUV}^{3}\kappa_{\rm IR}} \,\mathcal{M}\left(R_{p}/R_{B}\right) \tag{18}\]
where \(R_{\rm XUV}\) is found by determining:
\[\int_{R_{\rm XUV}}^{\infty}n(r)\sigma_{\rm XUV}\mathrm{d}r=1. \tag{19}\]
Now, in the limiting case that the XUV irradiation provides insufficient heating, the density profile \(n(r)\) will simply be that of the Parker wind solution for the core-powered mass-loss outflow. For the heating limit to even be relevant, \(R_{\rm XUV}<R_{B}\); thus, we can approximate the density profile \(n(r)\) with the hydrostatic solution (Equation 11). In Appendix A, we show that Equation 19 has two limiting solutions, one for \(R_{p}/R_{B}\log(\sqrt{\sigma_{XUV}/\sigma_{IR}})\ll 1\):
\[\frac{R_{\rm XUV}}{R_{p}}\approx 1+\frac{R_{p}}{R_{B}}\log\left(\sqrt{\frac{ \sigma_{XUV}}{\sigma_{IR}}}\right) \tag{20}\]
corresponding to a dense planet, where the XUV irradiation penetrates close to \(R_{p}\). The other limiting solution, in the case \(R_{p}/R_{B}\log(\sqrt{\sigma_{XUV}/\sigma_{IR}})\gg 1\) is:
\[\frac{R_{\rm XUV}}{R_{p}}\approx\sqrt{\frac{\sigma_{XUV}}{\sigma_{IR}}}\exp\left( -\frac{R_{B}}{R_{p}}\right) \tag{21}\]
corresponding to a puffier planet where the XUV irradiation is absorbed at several planetary radii. Given typical values of the cross sections, the transition between the two solutions occurs roughly at \(R_{B}/R_{p}\sim 10\). This transition occurs before the penetration criteria given in either Equation 13 or 16. Taking \(R_{p}/R_{B}\ll 1\) (for both cases), \(\mathcal{M}(R_{p}/R_{B})\) becomes (e.g. Lamers & Cassinelli, 1999):
\[\mathcal{M}(R_{p}/R_{B})\approx\left(\frac{R_{p}}{R_{B}}\right)^{-2}\exp\left( -\frac{2R_{B}}{R_{p}}\right). \tag{22}\]
Thus, the solution to Equation 18 for the dense planet, with \(R_{\rm XUV}\approx R_{p}\) is:
\[\frac{R_{B}}{R_{p}}\approx\frac{1}{2}\log\left(\frac{108gc_{s}}{\eta\kappa_{\rm IR}F_{\rm XUV}}\right)+\mathcal{O}(1) \tag{23}\]
or for the puffier planet with \(R_{\rm XUV}\) given by Equation 21 is:
\[\frac{R_{B}}{R_{p}}\approx\log\left[\frac{\eta\kappa_{\rm IR}F_{\rm XUV}}{864gc_{s}}\left(\frac{\sigma_{XUV}}{\sigma_{IR}}\right)^{3/2}\right]+\mathcal{O}(1) \tag{24}\]
where, like in the previous sections, these approximate solutions have been obtained by expansion. We caution, though, that for high flux values this "heating limit" yields no solution and photoevaporation can occur for any planet (see Section 3). Thus, at sufficiently low XUV irradiation levels, we expect this energy limit to push the transition from core-powered mass loss to photoevaporation to denser planets. In our numerical evaluations, we find scenarios where, as a planet loses mass (and shrinks), it will transition from core-powered mass-loss to photoevaporation (due to XUV penetration), but as the XUV flux drops, it can transition back to core-powered mass-loss (see track "A" in Figure 7), before becoming photoevaporative again when the planet's atmosphere becomes thinner.
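As a quick check on where the two limiting forms for \(R_{\rm XUV}\) (Equations 20 and 21) hand over, the snippet below evaluates the logarithm that controls the switch; the cross-section values, and the conversion \(\sigma_{\rm IR}=2m_{h}\kappa_{\rm IR}\), are the same illustrative assumptions as in the previous sketches.

```python
import numpy as np

mH = 1.673e-24
sigma_xuv = 2.0e-18                  # cm^2, assumed EUV cross-section
sigma_ir  = 2.0 * mH * 1.0e-2        # cm^2, assuming kappa_IR = 1e-2 cm^2/g and mu = 2 m_H

# Equations 20 and 21 exchange validity where (R_p/R_B) * log(sqrt(sigma_XUV/sigma_IR)) ~ 1
x_switch = np.log(np.sqrt(sigma_xuv / sigma_ir))
print(f"dense/puffy hand-over at R_B/R_p ~ {x_switch:.0f}")   # of order 10

# Example: XUV penetration depth for a dense planet (Equation 20) at R_B/R_p = 15
x = 15.0
print(f"R_XUV/R_p ~ {1.0 + x_switch / x:.2f} at R_B/R_p = {x:.0f}")
```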
### 2.2 Summary
By considering what physical processes determine whether the outflow is predominantly powered by XUV heating or by bolometric heating from the interior and star, we have determined the basic criteria for which each mass-loss mechanism controls the outflow properties. The key result is that the transition is primarily controlled by the ratio of the Bondi radius to the planet's radius, with typical values in the range of 6-11. This result is in agreement with the simulations of Kubyshkina et al. (2018), which found the transition was best described in terms of the "escape parameter" (which is the same as \(R_{B}/R_{p}\) apart from an order-unity multiplicative factor). All other properties give rise to a slowly varying logarithmic dependence. The fundamental reason is that all criteria depend, either explicitly or implicitly, on XUV photons' ability to penetrate the approximately isothermal bolometrically heated atmosphere. The scale height of such an atmosphere depends only on \(R_{B}/R_{p}\) (Equation 11) and is exponential. Hence, any optical depth into such an atmosphere will naturally depend directly on \(R_{B}/R_{p}\) but logarithmically on other parameters. The logarithmic sensitivity arises for the same reason that forming planets are only logarithmically sensitive to the disc conditions (e.g. Piso et al., 2015).
We have identified that the primary transition criterion is the ability of XUV photons to penetrate interior to the Bondi radius (\(R_{B}\), the core-powered mass-loss sonic point), providing additional heating and hence higher mass-loss rates. Generally, larger planets (bigger \(R_{p}/R_{B}\)) have core-powered mass-loss outflows, while smaller planets (smaller \(R_{p}/R_{B}\)) have photoevaporative outflows. This transition can either occur for an energy-limited or recombination-limited outflow, with recombination-limited outflows becoming photoevaporative for larger planetary radii. This is because of their ability to reach high ionization fractions, reducing the optical depth to XUV photons and allowing them to penetrate deeper. At low XUV fluxes, XUV photons may be able to penetrate the outflow, but they do not provide additional heating, and the outflow can remain driven by core-powered mass-loss.
Finally, our analysis has indicated that even if an outflow is primarily controlled by XUV heating (and hence "photoevaporative"), bolometric heating from the core and star is not unimportant. Ultimately, it's this heating source that provides the energy to lift fluid parcels from the radiative-convective boundary up to \(R_{\rm XUV}\)\({}^{5}\). This bolometric heating can push the XUV absorption to greater heights, enhancing the ability of the planet to absorb XUV irradiation and driving a more powerful photoevaporative outflow. Thus, core-powered mass-loss and photoevaporation can work in concert to drive "enhanced" photoevaporative outflows, especially for planets that have just transitioned from core-powered mass-loss to photoevaporation as they cool and lose mass.
Footnote 5: It’s important to note the fundamental difference between this scenario for highly irradiated planets, where bolometric heating provides this energy and the original formalism of “energy-limited” mass-loss Watson et al. (1981); Lammer et al. (2003), where conduction of XUV irradiation provides this energy.
## 3 Approximate numerical solutions
Full radiation-hydrodynamic simulations that include the radiation from the planet's interior, the bolometric radiation from the star, and the stellar ionizing radiation are required to fully map out the parameter space. However, we can improve our analytic approach by numerically relaxing some of the assumptions. In addition, we can assess the role the bolometrically heated layer plays in enhancing photoevaporation. Specifically, in Section 2.1.1, our solution depends on the unknown sound speed in the XUV heated region for an "energy-limited" photoevaporative outflow.
To progress, we still assume an isothermal outflow for the photoevaporative region, but with a sound speed that we obtain numerically. In this simplification, we assume that the launch velocity at \(R_{\rm XUV}\) is either the one given by the trans-sonic Parker wind solution or the sound speed (whichever is smaller; this prevents unphysical supersonic launching of the wind). This requires using a generalised Parker wind model, described in Appendix B.
We then numerically integrate the photoevaporatively heated outflow's density profile to calculate the optical depth to XUV photons (i.e. we numerically solve Equation 8). For a given \(R_{\rm XUV}\), there is then a family of solutions, each with a different sound speed and hence different mass-loss rate, that satisfies the criteria that the optical depth to XUV photons throughout the photoevaporative region is unity.
Thus, we solve for the appropriate sound speed and hence \(R_{\rm XUV}\) to match the energy-limited model's mass-loss rate with an efficiency of 0.1 (Equation 17). If the photoevaporative outflow temperature we find is below the planet's equilibrium temperature, we identify the outflow as core-powered mass-loss because, while the XUV photons
can penetrate, they do not provide additional heating (the "heating limit" described in Section 2.1.3) and therefore don't enhance the outflow. It is well known that above a temperature of \(10^{4}\) K, Lyman-\(\alpha\) cooling dominates, and the outflow is no longer energy-limited (e.g. Murray-Clay et al., 2009; Owen and Alvarez, 2016). To mimic this effect, we do the following: If matching the energy-limited model requires a temperature in excess of \(10^{4}\) K, we fix the outflow's temperature to be \(10^{4}\) K and reduce the mass-loss rate below the energy-limited value. Furthermore, recombination can become important once the gas temperature has reached \(10^{4}\) K. If the time scale for a proton to recombine becomes shorter than the flow time scale, the outflow enters radiation-recombination balance (e.g. Bear and Soker, 2011) and the mass-loss has a square-root dependence on XUV flux (e.g. Murray-Clay et al., 2009). Thus, for outflows with temperatures of \(10^{4}\) K, we compare the recombination time to the flow timescale at \(R_{\rm XUV}\). If the recombination time is shorter than the flow time scale, we switch to using recombination-limited outflows. Thus, instead of solving Equation 8, we numerically solve Equation 14.
Finally, to make a connection to a real planetary structure (i.e. one with a specific photospheric radius or envelope mass), we then solve for the value of \(R_{\rm XUV}\) such that there is momentum balance across the transition from the bolometrically heated region to the photoevaporative region. We assume, as previously, that the opacity to outgoing thermal irradiation is \(\kappa_{\rm IR}=10^{-2}\) cm\({}^{2}\) g\({}^{-1}\). Since the bolometrically heated region has to be sub-sonic before the transition into the photoevaporative region (for the outflow to be identified as photoevaporation-dominated), we neglect the momentum-flux in the bolometric region and only consider the contribution from thermal pressure (as typically done in models of external disc photoevaporation - Johnstone et al., 1998; Owen and Altaf, 2021; Owen and Lin, 2023). Namely, the matching criterion to solve for \(R_{\rm XUV}\) is:
\[\rho_{\rm eq}(R_{\rm XUV})c_{s}^{2}=\rho_{\rm pe}(R_{\rm XUV})\,\left(u_{\rm pe}(R_{\rm XUV})^{2}+c_{\rm pe}^{2}\right)\,. \tag{25}\]
To solve this root-finding problem, we use the brentq method provided in scipy (Virtanen et al., 2020), both for the sound-speed and the value of \(R_{\rm XUV}\), using a relative tolerance of \(10^{-13}\). The optical depth through the outflow is computed through numerical integration using the trapezoidal method on a logarithmically spaced grid of 250 cells between \(R_{\rm XUV}\) and five times the maximum value of either \(R_{s}\) or \(R_{\rm XUV}\), assuming the optical depth at the outer boundary is zero. Since the photospheric radius does not exactly correspond to the planetary radius that an optical transit observation would measure, we also compute the planet's transit radius through direct numerical integration of our density profile, assuming it to be spherically symmetric, using an optical opacity of \(\kappa_{\rm op}=4\times 10^{-3}\) cm\({}^{2}\) g\({}^{-1}\) (e.g. Guillot, 2010). This numerical integration is performed using the adaptive Gauss-Kronrod quadrature method in quadpack, with the Python interface provided by scipy, to a relative tolerance of \(10^{-8}\).
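To make one ingredient of this procedure concrete, the heavily simplified Python sketch below numerically solves Equation 8 for the base density at which a photoevaporative outflow is marginally optically thick to XUV photons, using the same tools (a brentq root-find and a trapezoidal optical-depth integral on a logarithmic grid). The constant-velocity \(n\propto 1/r^{2}\) profile stands in for the generalised Parker wind of Appendix B, and the launch radius is an illustrative assumption; the full calculation additionally iterates on the photoevaporative sound speed and on \(R_{\rm XUV}\) via Equation 25.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import trapezoid

mH       = 1.673e-24
sigma_x  = 2.0e-18        # cm^2, assumed EUV absorption cross-section
R_launch = 2.4e10         # cm, assumed launch radius of the photoevaporative outflow

def tau_xuv(n_base):
    """Optical depth to XUV photons (Equation 8) through a constant-velocity
    outflow with n ~ 1/r^2, integrated on a logarithmic grid."""
    r = np.logspace(np.log10(R_launch), np.log10(250.0 * R_launch), 250)
    n = n_base * (R_launch / r) ** 2
    return trapezoid(n * sigma_x, r)

# Root-find for the base number density giving a marginally optically thick outflow
n_base = brentq(lambda n: tau_xuv(n) - 1.0, 1e-2, 1e12)

# Cross-check against the analytic limit n_base * sigma_XUV * R_launch = 1 (Equation 9)
print(f"numerical: n_base = {n_base:.3e} cm^-3")
print(f"analytic (Equation 9): {1.0 / (sigma_x * R_launch):.3e} cm^-3")
```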
We do not enforce mass conservation across the interface. This is because the photoevaporative outflow could be so powerful that the bolometrically heated layer cannot supply the required mass-loss rate (i.e. remaining sub-sonic while satisfying Equation 25). When this occurs, the photoevaporative outflow will slowly "eat" into the bolometrically heated layer, pushing \(R_{\rm XUV}\) to smaller values (we refer to this as "ravenous" photoevaporation). This is conceptually similar to the transition between expanding R-type and stationary D-type ionization fronts around massive stars (e.g. Spitzer, 1978). We check all our solutions for any occurrence of ravenous photoevaporation. We do not find any examples in the parameter space explored in this work, though this does not mean it never occurs in planetary mass loss.
### 3.1 Results
An example result of our calculations is shown in Figure 3 where we show the radius of the XUV penetration depth as a function of a planet's photospheric radius for planets of various masses, an equilibrium temperature of 1000 K and the ratio of the XUV to the bolometric flux of the star of \(10^{-4}\). The evolution of the characteristic radii shown in Figure 3 demonstrates the typical evolution across the parameter space. For fixed planet mass, as the photospheric radius is increased (effectively increasing atmosphere mass fraction or decreasing a planet's age), the radius of XUV penetration increases; for small dense planets this typically begins in the "energy-limited" regime\({}^{6}\); however, once the planet's gravity becomes too weak, the outflow transitions to recombination limited (e.g. Owen and Alvarez, 2016). The sharp, small drop in \(R_{\rm XUV}\) arises from the assumption that for energy-limited outflows, we ignore recombination photons, which can penetrate and ionize the planet's atmosphere decreasing \(R_{\rm XUV}\), whereas in the recombination limited case, they are fully accounted for. In reality, as one approaches the transition, the energy-limited \(R_{\rm XUV}\) would smoothly attach to the recombination-limited case. Eventually, with increasing photospheric radius, the XUV penetration depth will exceed the sonic point of the core-powered mass-loss outflow (shown by the orange point in Figure 3), and the outflow transitions from photoevaporation at small photospheric radii to core-powered mass-loss at large photospheric radii. As the planet becomes less dense, the transit radius adds a non-negligible correction to the photospheric radius. At the point of transition from core-powered mass-loss to photoevaporation, it is tens of percent larger.
Footnote 6: We note for very high-density planets, the outflow timescale will become longer than the recombination timescales and return to a recombination limited outflow (Owen and Alvarez, 2016).
Figure 3 also indicates that, as the photospheric radius increases, the XUV penetration depth is pushed to ever larger radii in a super-linear fashion. This means that while the physics of photoevaporation ultimately controls the mass-loss rates, they are enhanced by core-powered mass-loss above the value that would be found purely from using the photospheric or transit radius. As discussed above, this "enhanced" photoevaporation is ultimately driven by the energy input in the isothermal layer from both the stellar irradiation and the planet's cooling luminosity, which supplies material to a larger XUV penetration depth that can absorb a higher number of XUV photons.
We can now perform our calculation over a range of different planet masses, equilibrium temperatures and ratios of bolometric to XUV flux to determine the general conditions for the transition from photoevaporation to core-powered mass loss. In Figure 4, we show the ratio of XUV luminosity to bolometric luminosity below which the XUV irradiation would provide insufficient heating to overpower the core-powered mass-loss outflow. Thus, even if XUV photons can penetrate inside the core-powered mass-loss sonic point, the outflow will still be core-powered mass-loss below this critical value of \(L_{\rm XUV}/L_{\rm bol}\). As expected from our discussion in Section 2.1.3, we find two radii (at fixed planet mass) where one would transition from photoevaporation to core-powered mass-loss and back. However, the critical values of \(L_{\rm XUV}/L_{\rm bol}\) only reach observed values for typical late-type stars for cooler equilibrium temperatures. Thus, this heating limit is not the main transitioning criterion, although it will apply to the important case of temperate, low-mass planets, such as those identified in short-period orbits around M-dwarfs.
Having investigated the heating limit transition, we now explore the penetration limit and the role of "enhanced" photoevaporation.
In Figure 5, we show the various mass-loss regimes as a function of planet mass and radius for \(L_{\rm XUV}/L_{\rm bol}=10^{-4}\) and equilibrium temperature 1000 K. As expected, the transition occurs at roughly a fixed value of \(R_{p}/R_{B}\), where the slight increase can be explained in terms of the logarithmic dependence of \(R_{p}/R_{B}\) on \(R_{p}\) from Equation 16. For these values of the XUV and bolometric flux, the transition between core-powered mass-loss and photoevaporation occurs mainly in the recombination limit (a result that appears to hold over most of the parameter space: see Figure 6). The difference between the transit radius and the photospheric radius is a small but important correction, increasing the transition radii by several tens of percent. Finally, the region of enhanced photoevaporation, which we take to mean where \(R_{\rm XUV}\sim 2R_{p}\), encompasses a significant region of parameter space.
We now expand our range of parameters to roughly cover the range of XUV fluxes and equilibrium temperatures covered by close-in, low-mass exoplanets. The result for the transit radius at which core-powered mass-loss transitions to photoevaporation and the range of parameters in which photoevaporation is "enhanced" is shown in Figure 6. As expected, the transition occurs at roughly fixed \(R_{p}/R_{B}\) with the value ranging between \(1/5\) and \(1/8\). The slow changes in these values mirror the logarithmic dependence on XUV flux and sound speed in the bolometric region found in Equations 13 and 16. This plot also confirms the heating limit only matters in a small region of parameter space. In most of the parameter space, core-powered mass-loss will transition to recombination-limited photoevaporation, but photoevaporation becomes energy-limited everywhere at low XUV irradiation levels.
## 4 Discussion
In the absence of XUV irradiation, highly irradiated planets will undergo hydrodynamic mass-loss in the form of an isothermal wind with a temperature of order the equilibrium temperature. This core-powered mass-loss outflow gets its energy from the core and envelope's cooling luminosity as well as from the star's bolometric output. We have shown that under conditions typical of close-in, low-mass planets XUV photons can penetrate this core-powered mass-loss outflow, providing extra heating before it becomes collisionless. This is because the collision cross-section is significantly larger than the cross-section to absorb XUV photons. Thus, the breakdown of the hydrodynamic limit is not really a concern in the case of core-powered mass loss.
We have shown that the primary controlling physics determining
Figure 3: The radius to which XUV photons penetrate as a function of the planet’s photospheric radius for planets of various masses, an equilibrium temperature of 1000 K and \(L_{\rm XUV}/L_{\rm bol}=10^{-4}\). The transition between the solid and dashed lines shows where photoevaporation transitions from being energy limited to recombination limited and the XUV photons can penetrate deeper into the planet’s atmosphere (e.g. at a photospheric radius of \(\sim 5.2\) R\({}_{\oplus}\) for the 4 M\({}_{\oplus}\) case). The green dotted line indicates \(R_{B}\), and the orange point shows the transition between photoevaporation and core-powered mass loss. The purple dashed line shows the optical transit radius as a function of the planet’s photospheric radius.
whether an outflow is photoevaporative or core-powered mass-loss is the ability of XUV photons to penetrate interior to core-powered mass-loss's sonic point (\(R_{B}\)). This results in the transition criterion occurring at roughly a constant value of the ratio of the planet's radius to its Bondi radius (\(R_{p}/R_{B}\)), where the exact value depends logarithmically on planetary and stellar properties. In general, photoevaporation will be operating in the recombination limited regime when the transition from core-powered mass-loss to photoevaporation occurs; this is because the planet's gravity is weak at \(R_{B}\) by construction, resulting in a large flow length scale (and hence a long flow time scale), allowing sufficient time for protons to recombine (e.g. Owen and Alvarez, 2016). However, at low XUV irradiation levels and cool equilibrium temperatures, the transition will occur to either energy-limited photoevaporation, as the ionization rate is insufficient to reach ionization-recombination equilibrium, or the outflow can remain in the core-powered mass-loss regime for smaller planets due to insufficient XUV heating.
We have also mapped out "enhanced" photoevaporation, where, while photoevaporation sets the mass-loss rate, a sub-sonic core-powered mass-loss outflow resupplies the XUV heated region. In this case, the sub-sonic core-powered mass-loss maintains an XUV absorption radius far enough from the radiative-convective boundary, allowing the planet to absorb more of the star's XUV output. Since most photoevaporation occurs early, when the star's \(L_{\rm XUV}/L_{\rm bol}\) is \(\sim 10^{-3}\), the most common young planet, with a core mass of \(\sim 4-5\) M\({}_{\oplus}\) and radius of \(2.5-3\) R\({}_{\oplus}\) at an equilibrium temperature of \(\sim 1000\) K (e.g. Rogers and Owen, 2021), will undergo photoevaporation around the boundary of this "enhanced photoevaporation" region (Figure 6).
### 4.1 Relationship to planetary properties
In our previous discussion, we have worked analytically in terms of the planet's photosphere to outgoing thermal IR radiation (which we call the planet's radius, \(R_{p}\)) as it is well defined. Numerically, we have also determined the planet's optical transit radius, as this is the observed quantity. While similar, the transit radius is always 10s of percent larger than \(R_{p}\) at the transition boundary from core-powered mass-loss to photoevaporation. However, neither radius encapsulates the fundamental structure of the planet's envelope. The planet's radiative-convective boundary sets the transition between the adiabatic interior and the potentially outflowing atmosphere. For dense planets, this radius is similar to its photospheric and transit
Figure 4: Heating limit: The threshold \(L_{\rm XUV}/L_{\rm bol}\) for the transition between core-powered mass-loss and photoevaporation due to insufficient XUV heating, shown for different planetary equilibrium temperatures. The shaded brown regions represent Earth-like "rocky" cores with no atmosphere. The contours show \(L_{\rm XUV}/L_{\rm bol}\) at values of \(10^{-6}\) and \(10^{-5}\). Typically, there are either two radii at which the transition occurs at a given mass or none, as discussed in Section 2.1.3. Given that typical values of \(L_{\rm XUV}/L_{\rm bol}\) are in the range of \(10^{-6}\) to \(10^{-3}\) for most late-type stars, this heating limit only applies in a narrow region of parameter space at cool equilibrium temperatures.
Figure 5: The various mass-loss regimes as a function of planet mass and radius for an equilibrium temperature of 1000 K and \(L_{\rm XUV}/L_{\rm bol}=10^{-4}\). The transition between core-powered mass-loss (at large radii) and photoevaporation at small radii typically occurs at fixed \(R_{p}/R_{B}\). The wide-spaced dotted lines show contours of constant \(R_{p}/R_{B}\) for 1/5, 1/6 & 1/8. The dashed line shows the transition from recombination limited to energy-limited photoevaporation, indicating core-powered mass-loss transitions to recombination-limited photoevaporation for these parameters. The narrow dotted lines show lines of constant \(R_{\rm XUV}/R_{p}\), where planets above the blue dotted line with \(R_{\rm XUV}=2R_{p}\) are representative of those undergoing “enhanced” photoevaporation (highlighted by the shaded blue region). The kink in this line occurs when photoevaporation transitions from energy limited to recombination limited. The shaded brown regions represent Earth-like “rocky” cores with no atmosphere.
radius; however, for puffier planets, it can be different, and it is a time-evolving quantity. Therefore, to relate our results to planetary structure, we express the transition between core-powered mass-loss and photoevaporation in terms of the envelope mass fraction.
To do this we use mesa models (Paxton et al., 2011, 2013, 2015). The models are set up in an identical way to those described in Owen (2020) for Earth-like core compositions. Our ratios of XUV to bolometric fluxes are converted to planetary ages using the empirical relations of Rogers et al. (2023b) for a Solar mass star, where values of \(L_{\rm XUV}/L_{\rm bol}\) of \(10^{-3}\), \(10^{-4}\) and \(10^{-5}\) correspond to ages of approximately 100 Myr, 500 Myr and 2 Gyr, respectively. The envelope mass fractions at which core-powered mass-loss transitions to photoevaporation are shown in Figure 7, where core-powered mass-loss dominates for lower-mass planets with comparatively larger envelope mass fractions (i.e. the upper-right area in Figure 7), and photoevaporation dominates for higher-mass planets with comparatively lower envelope mass fractions (i.e. the lower-left area in Figure 7). If the typical close-in planet is born with a few-percent envelope mass fraction around a 4 M\({}_{\oplus}\) core, then we see that for equilibrium temperatures \(\gtrsim 1500\) K core-powered mass-loss dominates the bulk of the mass loss, while for cooler planets photoevaporation dominates the bulk of the mass-loss.
Figure 7 allows us to assess the mass-loss histories of various planets. The grey, dot-dashed lines show the trajectories of planets with 1, 2, 4 & 8 M\({}_{\oplus}\) cores. These trajectories are essentially vertical because the envelope mass is only a small fraction of the planet's total mass, with noticeable curvature only at envelope mass fractions above 10%. The plotted trajectories cross the mass-loss transition (moving from higher envelope mass fractions to lower envelope mass fractions) from core-powered mass-loss to photoevaporation. Thus, there are three typical planetary pathways: (i) a hot, low-mass planet (e.g. the \(T_{\rm eq}=1500\) K, 2 M\({}_{\oplus}\) core) will start in the core-powered mass-loss
Figure 6: The transit radius at which core-powered mass-loss (at large radii) transitions to photoevaporation (at small radii) for different equilibrium temperatures and XUV fluxes. This transition occurs to either recombination-limited photoevaporation (magenta, dash-dotted line) or energy-limited photoevaporation (green, solid line). The grey, dotted lines show lines of constant fractions of \(R_{B}\), and the dashed line shows the region above which we consider photoevaporation to be "enhanced". The kinks in these lines occur when photoevaporation transitions from energy limited to recombination limited. The shaded brown regions represent Earth-like "rocky" cores with no atmosphere.
regime and is expected to continue in this mass-loss regime until it is stripped of its envelope (e.g. track C in Figure 7). (ii) For the same equilibrium temperature (i.e., \(T_{\rm eq}=1500\) K), a planet with a core mass \(\sim 4\) M\({}_{\oplus}\) born with an envelope mass fraction of a few per cent or more will start off undergoing core-powered mass-loss, but as its envelope loses mass and shrinks it will transition to photoevaporation and will continue to be stripped by photoevaporation until the entire envelope is lost (e.g. track D). (iii) Finally, a planet born with a core mass of \(\sim 8\) M\({}_{\oplus}\) will begin life undergoing photoevaporation, and its mass-loss remains photoevaporative throughout (e.g. track E in Figure 7). Importantly, the general trends mean a planet that is stripped by core-powered mass-loss has likely only ever experienced core-powered mass-loss, whereas a planet stripped by photoevaporation could have experienced an early phase of core-powered mass loss or could have been entirely photoevaporative.
We point out two complications to the above summary, demonstrated by trajectories "A" and "B" in Figure 7. Trajectory "A" shows a planet which begins by undergoing core-powered mass-loss, then switches to photoevaporation through the penetration limit; however, as it continues to lose mass and as the XUV flux decreases, it enters core-powered mass-loss again due to the heating limit. If it continues to lose mass, it will again transition back to a photoevaporative outflow. Alternatively, trajectory "B" shows a planet that begins in core-powered mass-loss before transitioning to photoevaporation at an envelope mass fraction of \(\sim 1\)% at the age of a few hundred Myr; however, if it remained with a mass-fraction of \(\sim 1\)%, then as the XUV flux drops, the transition to photoevaporation shifts to smaller envelope mass fractions, resulting in the planet switching back to core-powered mass-loss as it evolves over time. If it loses enough mass through core-powered mass-loss, it could again return to a photoevaporative outflow.
Thus, while it is possible for a planet to switch multiple times in its life, the standard outcome is that a planet either undergoes exclusively core-powered mass-loss (low masses, high temperatures) or photoevaporation (moderate core masses and initial envelope mass fractions), or switches from core-powered mass-loss to photoevaporation (moderate core masses and high initial envelope mass fractions). While our above trajectories are indicative, before full evolutionary calculations are performed, we cannot determine how far down these trajectories an individual planet may make it in a few billion years. For example, planets may "stall" on these trajectories when mass-loss becomes evolutionarily unimportant.
### 4.2 Mass-loss across the exoplanet population
Using our results, we can sketch out an evolutionary pathway for the close-in, low-mass exoplanet population. Assuming the planets are embedded in their parent protoplanetary disc, in similar orbits before disc dispersal, their radii will fill their Bondi radii (\(\sim R_{B}\)), or in some cases, their Hill radii. As disc dispersal begins, the rapid depressurisation of their atmospheres will trigger "boil-off/spontaneous mass-loss", a period of rapid mass-loss and shrinking (e.g. Ikoma & Hori, 2012; Owen & Wu, 2016; Ginzburg et al., 2016). Boil-off/spontaneous mass-loss will then transition to a core-powered mass-loss outflow\({}^{7}\), as the planets are initially large, \(\sim 10\) R\({}_{\oplus}\) (e.g. Owen, 2020). Figure 7 indicates a "typical" planet (i.e. few percent envelope mass fraction, core mass of 4-5 M\({}_{\oplus}\) and equilibrium temperature of 1000 K - e.g. Rogers & Owen, 2021; Rogers et al., 2023) will have transitioned to
Figure 7: The hydrogen-dominated envelope mass fraction at which core-powered mass-loss (at large envelope mass fractions) transitions to photoevaporation (at small envelope mass fractions), shown for different ages (100 Myrs-blue-dashed, 500 Myrs-yellow-solid and 2 Gyrs-green-dotted lines). The three panels correspond to equilibrium temperatures of 500K, 1000K and 1500K from left to right, respectively. Atmospheric mass-loss is dominated by core-powered mass-loss for lower-mass planets with large envelope mass fractions and transitions to photoevaporation-dominated for larger-mass planets and lower envelope mass-fractions, as shown by the blue, yellow and green lines and indicated by the red and blue arrows in the middle panel. The exact location of the transition between these two regimes depends on the equilibrium temperature, as can be seen by comparing the results shown in the three panels. The shaded region bounded by the green dashed line represents the region where XUV photons can penetrate the core-powered mass-loss outflow but provide insufficient heating, arising from the \(L_{\rm XUV}/L_{\rm bol}=10^{-5}\) contour in Figure 4 for an equilibrium temperature of 500 K. The grey, dot-dashed lines with the arrows indicate the trajectory of planets with core masses of 1, 2, 4 & 8 M\({}_{\oplus}\). These trajectories display the rich diversity in mass-loss regimes and histories; illustrative examples are labelled "A" through "E". In general, the planet mass-loss trajectories indicate that a planet can transition from core-powered mass-loss to photoevaporation but not vice-versa. However, trajectories "A" and "B" indicate special cases discussed in the text. Trajectory "D" represents an example where atmospheric loss transitions from core-powered mass-loss to photoevaporation, and trajectories "C" and "E" show cases where a planet's mass loss is dominated entirely by core-powered mass-loss (C) or photoevaporation (E), respectively.
photoevaporation by the age of 100 Myr, and it is likely photoevaporation will dominate its subsequent mass-loss. Hotter planets can remain undergoing core-powered mass loss to higher planet masses and for longer, whereas photoevaporation will typically dominate for cooler planets.
Since the trajectory of most planets in Figure 7 will essentially be vertically downwards, planets can start losing mass in the core-powered mass-loss regime and transition into photoevaporation to be completely stripped, or remain undergoing core-powered mass-loss until their envelopes are completely removed at high equilibrium temperatures and low core masses. However, as discussed above, it is rare to find a scenario where a photoevaporating planet transitions back to core-powered mass-loss during its mass-loss history and is subsequently completely stripped. Future evolutionary calculations incorporating both models could explore the possibility that very low-mass, temperate planets can transition between the two mass-loss regimes multiple times during their lifetime.
One of the insights we can infer from our work is which mass-loss mechanism is responsible for the final "stripping" of the envelopes. We accomplish this by investigating the core masses at which core-powered mass-loss transitions to photoevaporation for a negligible envelope mass fraction (specifically \(10^{-4}\)). Unsurprisingly, this transition scales with \(R_{p}/R_{B}\), and we show this boundary in terms of planet equilibrium temperature and radius compared to the observed exoplanet population in Figure 8. This indicates that the observed super-earth planet population contains a significant fraction of planets where the final removal of the H/He envelope was controlled by either photoevaporation or core-powered mass-loss. However, since the exoplanet radius gap borders the photoevaporative region, it is likely that photoevaporation was responsible for the mass loss of observed super-earths located in its direct vicinity. For sub-Neptunes with hydrogen-dominated atmospheres, planets just above the radius-gap host a few-percent atmospheres by mass (e.g. Wolfgang & Lopez, 2015; Owen & Wu, 2017). Thus, even though core-powered mass-loss and photoevaporation can create the observed radius gap in isolation (e.g. Rogers et al., 2023a), the black lines in Figure 8 indicate it was likely that photoevaporation was responsible for the final carving of the radius gap, setting its topography observed today. This is because the planetary cores straddled by the black lines are expected to have transitioned from core-powered mass-loss to photoevaporation once they reached envelope mass fractions of a few percent on 10-100 Myr timescales.
Distinguishing between the mass-loss mechanisms responsible for the final "stripping" of the envelopes is important as photoevaporation and core-powered mass-loss may imprint different final atmospheric compositions during the removal of the last amounts of hydrogen. For example, core-powered mass-loss can leave small residual hydrogen in the atmosphere (e.g. Misener & Schlichting, 2021), and photoevaporation can drag heavy elements along with it (e.g. Zahnle & Kasting, 1986). This opens up the possibility for future observational tests of these two mass-loss scenarios.
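For readers wishing to reproduce the observational comparison in Figure 8, a minimal sketch along the following lines can retrieve planet radii and equilibrium temperatures from the NASA exoplanet archive; the TAP query string and the column names (pl_rade, pl_eqt in the pscomppars table) are assumptions about the archive interface and should be checked against its documentation.

```python
import pandas as pd

# Assumed NASA Exoplanet Archive TAP endpoint and pscomppars column names
url = ("https://exoplanetarchive.ipac.caltech.edu/TAP/sync?"
       "query=select+pl_rade,pl_eqt+from+pscomppars"
       "+where+pl_rade+is+not+null+and+pl_eqt+is+not+null&format=csv")

planets = pd.read_csv(url)

# Restrict to the small-planet regime discussed here (radii below ~4 Earth radii)
small = planets[planets["pl_rade"] < 4.0]
print(small.describe())
```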
### 4.3 Role of X-rays
Young stars emit a significant fraction of their XUV output in the X-rays (e.g. Jackson et al., 2012; Chadney et al., 2015; King & Wheatley, 2021). Thus, photoevaporation can be driven by the X-rays rather than the EUV, as predominately assumed in our previous calculations. While the controlling physics is similar to the penetration limit for energy-limited flows described above\({}^{8}\), the major difference is that the cross-section to the absorption of X-rays is significantly smaller, \(\sim 10^{-22}\) cm\({}^{2}\). This increases the penetration depth of X-rays, resulting in photoevaporation taking over from core-powered mass-loss at larger planetary radii than for EUV irradiation. To assess the impact, we remake Figure 5 for the case of X-ray irradiation in Figure 9. This shows the fundamentals are similar, just the transition is shifted to slightly larger radii due to the logarithmic dependence on the
Figure 8: The radii of small detected transiting planets shown as a function of equilibrium temperature. Those planets where the final hydrogen-dominated atmosphere is likely removed by photoevaporation lie above the solid, yellow line and the super-earths where the final stripping likely proceeded by core-powered mass-loss lie below the solid line. The black lines correspond to planets that transitioned from core-powered mass-loss to photoevaporation when their envelope mass fractions equalled 0.03 at 3 Myr (black dotted) and 100 Myr (black dot-dashed). The position of the radius-gap from Owen & Wu (2017) is shown as the red dashed line. Exoplanet data was downloaded from the NASA exoplanet archive (Akeson et al., 2013) on 22/3/2023.
Figure 9: The same as Figure 5, but instead of EUV photons penetrating the outflow, X-ray photons with a cross-section of \(\sigma_{\rm XUV}=3\times 10^{-22}\) cm\({}^{2}\) are used. This shifts the transition to photoevaporation to slightly larger planetary radii.
absorption cross-section to high-energy photons (Equation 13). One major difference is the reduction of parameter space occupied by "enhanced" X-ray photoevaporation, as the higher penetration of X-ray photons results in \(R_{\rm XUV}\) sitting closer to the planet's radius, a result previously discussed in Owen & Jackson (2012). However, as discussed in previous works on X-ray photoevaporation, the isothermal outflow approximation can be fairly poor (Owen & Jackson, 2012). Thus, further simulation work is required to explore the transition between X-ray photoevaporation and core-powered mass loss.
### Limitations and directions for future work
Our aim here has been to lay the theoretical foundations to assess the role core-powered mass-loss and photoevaporation play in shaping the exoplanet population. However, to understand the basic physics shaping the problem, we have simplified the models. For example, we treat photoevaporation as either an energy-limited outflow modelled with a constant sound speed or as recombination limited. We also do not attempt to smoothly transition between the two cases nor consider how efficiency evolves with planetary properties. Furthermore, we treat core-powered mass loss as an isothermal outflow occurring at the planet's equilibrium temperature. This approximation neglects the fact that variations in opacity between the infrared and optical wavelengths can lead to heating of the approximated isothermal region above the nominal equilibrium temperature, which could lead to faster mass-loss rates. However, the treatment of core-powered mass-loss in this paper also neglects the energy limit (e.g. Ginzburg et al., 2018), where the cooling luminosity at the radiative-convective boundary is insufficient to resupply gas to \(R_{p}\). We also do not model the transition from boiloff/spontaneous mass-loss (e.g. Ikoma & Hori, 2012; Owen & Wu, 2016; Ginzburg et al., 2016) to core-powered mass-loss and photoevaporation explicitly. Realistic radiation-hydrodynamic simulations (e.g. Garcia Munoz, 2007; Murray-Clay et al., 2009; Owen & Jackson, 2012; Kubyshkina et al., 2018, 2018) indicate that while the above represents a broad-brush approach to the problem, it neglects many of the details which can change the mass-loss rates by order-unity factors. In particular, it's worth reiterating that constant-efficiency energy-limited photoevaporation is inconsistent with the slope of the radius-gap, while the radiation-hydrodynamic simulations are consistent (Van Eylen et al., 2018).
Furthermore, while we have used our results to sketch various evolutionary histories, identifying that boil-off and core-powered mass-loss dominate early before switching to photoevaporation in many cases, without evolutionary calculations it is unclear how much mass-loss occurs in each regime. Nonetheless, since pure photoevaporation (e.g. Owen & Wu, 2017; Jin & Mordasini, 2018; Wu, 2019; Owen & Adams, 2019; Rogers & Owen, 2021; Rogers et al., 2023) and core-powered mass-loss models (e.g. Ginzburg et al., 2018; Gupta & Schlichting, 2019, 2020; Gupta et al., 2022) give conceptually similar results for the origin and physical properties of the close-in exoplanet populations, we expect these results to be robust.
Finally, we have identified that observed terrestrial planets may have received their final atmospheric stripping from either core-powered mass-loss or photoevaporation. Properties of the atmospheres of these terrestrial planets are beginning to be observed; for example, LHS 3844b (Kreidberg et al., 2019), GJ 1252b (Crossfield et al., 2022) and Trappist-1b (Greene et al., 2023) all sit in the region likely to have been finally stripped by photoevaporation. More theoretical work on the residual atmospheres left behind, in concert with continued observations, should be able to test the roles of mass loss from hydrogen-dominated primary atmospheres in controlling the secondary atmospheres of hot rocky exoplanets.
## 5 Conclusions
We have studied how atmospheric escape from close-in planets transitions from core-powered mass-loss to photoevaporation. By focusing on (semi-)analytic methods, we have provided physical insights that should help guide future, expensive radiation-hydrodynamic simulations necessary to fully map out the mass-loss rates across the planet and stellar parameters. Our main results are as follows:
1. A planetary outflow will occur in the core-powered mass-loss regime if it cannot be penetrated by XUV photons interior to the Bondi radius or if the XUV photons provide insufficient heating. Across most of the planetary parameter space, the penetration of XUV photons interior to the Bondi radius sets the transition between the two outflow regimes.
2. The transition between core-powered mass-loss and photoevaporation occurs at roughly constant \(R_{p}/R_{B}\) (or escape parameter), where \(R_{B}\) is the Bondi radius, the sonic point of a core-powered mass-loss outflow. This ratio takes a value of roughly \(1/5-1/9\), with the exact value only being logarithmically sensitive to stellar and planetary parameters.
3. Thus, core-powered mass-loss dominates for hot, puffy planets, while photoevaporation dominates for denser, cooler planets. Typically, core-powered mass-loss transitions to recombination-limited EUV photoevaporation.
4. Under most situations, a planet can transition from core-powered mass-loss to photoevaporation as it evolves, but not vice-versa. This means that a planet that is completely stripped by core-powered mass-loss will only have ever experienced core-powered mass-loss.
5. The observed close-in exoplanet population includes planets that only ever experienced core-powered mass-loss, planets that only ever experienced photoevaporation, and planets that transitioned from core-powered mass-loss to photoevaporation.
6. Even when the mass-loss is photoevaporative, core-powered mass-loss can "enhance" photoevaporation over a significant region of parameter space.
7. Observed, rocky terrestrial planets are likely to have been stripped by core-powered mass-loss at high equilibrium temperatures and low mass, whereas they were _finally_ stripped by photoevaporation at cooler temperatures and higher masses.
8. Applying our results to the observed super-Earth population indicates that it contains significant fractions of planets where the final removal of the H/He envelope occurred in both regimes.
9. Photoevaporation was likely responsible for the final carving of the exoplanet radius-valley, setting its topography; however, core-powered mass-loss/boil-off should have played a role earlier in these planets' evolution, depending on their initial hydrogen inventories.
Since core-powered mass-loss and photoevaporation both operate in the observed parameter space, work needs to be done to incorporate a combined model into evolutionary calculations to explore exoplanet demographics, similar to the core-powered mass-loss only (e.g. Gupta & Schlichting, 2019, 2020) and photoevaporation only works (e.g. Wu, 2019; Rogers & Owen, 2021; Rogers et al., 2021).
## Acknowledgements
This research has been supported by NASA's Exoplanet Research Program (XRP) under grant number 80NSSC21K0392. JEO is also supported by a Royal Society University Research Fellowship. JEO
has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 853022, PEVAP). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising.
## Data Availability
The work underlying this article will be shared on reasonable request to the corresponding author.
|
2309.03600 | A Novel Immersed Boundary Approach for Irregular Topography with
Acoustic Wave Equations | Irregular terrain has a pronounced effect on the propagation of seismic and
acoustic wavefields but is not straightforwardly reconciled with structured
finite-difference (FD) methods used to model such phenomena. Methods currently
detailed in the literature are generally limited in scope application-wise or
non-trivial to apply to real-world geometries. With this in mind, a general
immersed boundary treatment capable of imposing a range of boundary conditions
in a relatively equation-agnostic manner has been developed, alongside a
framework implementing this approach, intending to complement emerging
code-generation paradigms. The approach is distinguished by the use of
N-dimensional Taylor-series extrapolants constrained by boundary conditions
imposed at some suitably-distributed set of surface points. The extrapolation
process is encapsulated in modified derivative stencils applied in the vicinity
of the boundary, utilizing hyperspherical support regions. This method ensures
boundary representation is consistent with the FD discretization: both must be
considered in tandem. Furthermore, high-dimensional and vector boundary
conditions can be applied without approximation prior to discretization. A
consistent methodology can thus be applied across free and rigid surfaces with
the first and second-order acoustic wave equation formulations. Application to
both equations is demonstrated, and numerical examples based on analytic and
real-world topography implementing free and rigid surfaces in 2D and 3D are
presented. | Edward Caunt, Rhodri Nelson, Fabio Luporini, Gerard Gorman | 2023-09-07T09:51:05Z | http://arxiv.org/abs/2309.03600v1 | # A Novel Immersed Boundary Approach for Irregular Topography with Acoustic Wave Equations
###### Abstract
Irregular terrain has a pronounced effect on the propagation of seismic and acoustic wavefields but is not straightforwardly reconciled with structured finite-difference (FD) methods used to model such phenomena. Methods currently detailed in the literature are generally limited in scope application-wise or non-trivial to apply to real-world geometries. With this in mind, a general immersed boundary treatment capable of imposing a range of boundary conditions in a relatively equation-agnostic manner has been developed, alongside a framework implementing this approach, intending to complement emerging code-generation paradigms. The approach is distinguished by the use of N-dimensional Taylor-series extrapolants constrained by boundary conditions imposed at some suitably-distributed set of surface points. The extrapolation process is encapsulated in modified derivative stencils applied in the vicinity of the boundary, utilizing hyperspherical support regions. This method ensures boundary representation is consistent with the FD discretization: both must be considered in tandem. Furthermore, high-dimensional and vector boundary conditions can be applied without approximation prior to discretization. A consistent methodology can thus be applied across free and rigid surfaces with the first and second-order acoustic wave equation formulations. Application to both equations is demonstrated, and numerical examples based on analytic and real-world topography implementing free and rigid surfaces in 2D and 3D are presented.
## Introduction
Irregular topography introduces considerable complexity to geophysical wavefield models, creating complex path effects which turn clean, defined reflections into cascades of overlapping arrivals. Interactions with topography cause waves to diffract and scatter, focus and defocus (Takemura et al., 2015; Reinoso et al., 1996; Boore, 1972; Griffiths and Bollinger, 1979): effects that must be encapsulated by any attempt to simulate the behaviour of such wavefields (Borisov et al., 2018). Whilst early numerical experiments demonstrated its capacity to markedly affect recorded data (Boore, 1973), the proliferation of wave-equation-based workflows, most notably full-waveform inversion (FWI) and reverse-time migration (RTM), has placed a sharpened focus on understanding topographic effects.
Topographic effects have been explored in contexts including FWI (Bleibinhaus and Rondenay, 2009), understanding seismic wave scattering (Takemura et al., 2015), infrasound propagation problems (Kim and Lees, 2014; Fee et al., 2021), and are crucial for the emerging field of teleseismic FWI (Monteiller et al., 2013, 2015). It is recognised that poor topography implementation adversely affects model accuracy (e.g. Monteiller et al., 2013; Li and Yao, 2020; Borisov et al., 2018), often severely (Nuber et al., 2016), although this may be acceptable in some applications (Bleibinhaus and Rondenay, 2009). When applied to imaging problems, unrealistic propagation paths result in artefacts in the processed image (Bleibinhaus and Rondenay, 2009; Nuber et al., 2016; Borisov et al., 2018) which risk being mistaken for real geological features, whilst models of ground motion have been found to significantly underestimate local amplification factors if topography is omitted (Reinoso et al., 1996).
Unstructured meshes suitably conformed to the topography offer an immediate solution. Finite-element methods (FEMs) have been demonstrated as effective for modelling wave propagation in the presence of variable topography (e.g. Zhebel et al., 2014; Dupros et al., 2010; Borisov et al., 2018; Liu et al., 2014; Mulder and Shamasundar, 2016), and have been applied to include complex terrain in models of earthquake peak ground acceleration (Gals et al., 2008), in shallow structure characterization applications (Romdhane et al., 2011), and to imaging problems such as full waveform inversion (FWI) (Roberts et al., 2021; Shin et al., 2013; Monteiller et al., 2015). Unstructured finite-difference methods (FDMs) have also been developed (Takekawa et al., 2015; Martin et al., 2015), and successfully used to perform FWI on synthetic datasets (Wu et al., 2021).
However, structured FD methods dominate in geophysical wavefield modelling and processing applications, including seismic reverse-time migration (RTM) (Fletcher et al., 2009), full-waveform inversion (FWI) (Warner et al., 2013), and source localisation (Mckee et al., 2014). This is not without reason: they eschew potentially cumbersome grid-generation algorithms (Brehm and Fasel, 2013), which become increasingly problematic at large scales (Slotnick et al., 2014) and require a priori knowledge of seismic velocities (Roberts et al., 2021, 20). The requisite information is potentially unavailable in practice or is iteratively updated in outer-loop problems, necessitating repeated mesh adaptation. Furthermore, the geometry of geological structures is prone to generating ill-conditioned sliver elements as units get pinched out, resulting in an unreasonably small timestep (Roberts et al., 2021). Automated mesh generation for seismic wave propagation in models containing arbitrary horizons remains an open problem, and thus unstructured approaches are rarely used in production. Structured FD methods are simple to implement, with relatively low computational footprints (Liu and Sen, 2009), even before considering the suite of known optimizations (Luporini et al., 2019; Louboutin et al., 2019). However, in the presence of irregular topography, accurately representing the sharp, uneven material discontinuity on regular grids can be problematic (Zeng et al., 2012; Mulder, 2017; Gao et al., 2015). Ideally, one would want to accurately represent complex, curvilinear topography as a sharp interface whilst retaining the advantages of structured grids.
Air- or vacuum-layer approaches, achieved through a low-impedance layer at the surface (Schultz, 1997; Zeng et al., 2012), trade accuracy for simplicity in mimicking the free surface. Satisfactory results can be obtained with careful implementation (Zeng et al., 2012), although error analyses and numerical experiments indicate approximate sub-second-order convergence in space, regardless of interior discretization (Zhebel et al., 2014; Symes and Vdovina, 2009) and potentially egregious (Graves, 1996; Zahradnik et al., 1993) error when applied to complex geometries. Suppression of spurious scattering requires heavy oversampling (Bohlen and Saenger, 2006), and smoothing or stabilisation routines are often required to stabilise the surface treatment (Bartel et al., 2000; Zeng et al., 2012; Vieira et al., 2018). Whilst such approaches can be improved with locally-refined subgrids (Oprsal and Zahradnik, 1999; Tavelli et al., 2019), such refinement is non-trivial (Lai and Peskin, 2000; Goldstein et al., 1993), often requiring careful filtering and interpolation routines (Zhang et al., 2013). Staircased image methods marginally improve on this (Robertsson, 1996): variations have been explored by several authors (e.g. Robertsson (1996); Boore (1972); Hayashi et al. (2001); Ohminato and Chouet (1997); Ripperger et al. (2003)), used to study macro-scale topographic scattering effects (Takemura et al., 2015; Nakamura et al., 2012) and volcanic source location (Kim and Lees, 2014; Fee et al., 2021). The stepped boundary still generates diffractions (Muir et al., 1992; Hu, 2016), and a time shift proportional to the difference between the interface and computational grids (Symes and Vdovina, 2009). A more accurate image method can be achieved via coordinate transform (e.g. Petersson and Sjogreen, 2015) such that the surface is a horizontal plane in the iteration space (Zhang and Chen, 2006; Hestholm and Ruud, 2002). This approach is widely adaptable, and various wave equations have been solved with such schemes (e.g. Zhang and Chen, 2006; Zhang et al., 2012; Hestholm and Ruud, 1994; Hestholm et al., 1999; Hestholm and Street, 2000; Sun et al., 2017; de la Puente et al., 2014), demonstrating a high degree of accuracy, on par with FEMs (Zhang et al., 2012). However, a smooth conformal mapping may be challenging to obtain or yield locally small cells, limiting the maximum timestep (Shragge, 2014) in much the same manner as the aforementioned sliver elements in FEM.
An alternative approach, popular in computational fluid dynamics (CFD) contexts (see Mittal et al., 2008; Seo and Mittal, 2011; Dong et al., 2010; Vargas et al., 2008 for examples), is the immersed-boundary method. This approach embeds a curvilinear interface within a Cartesian grid by locally modifying the finite-difference operators in the vicinity of the boundary (Brehm and Fasel, 2013; Brehm et al., 2015; Mulder, 2017). This approach has seen several variations for second-order acoustic wave equation formulations (Zhang et al., 2013; Mulder, 2017; Li and Yao, 2020), with some extending to first-order formulations (Mulder and Huiskes, 2017; Hu, 2016), and even the isotropic elastic wave equation with some success (Lombard et al., 2008; Almhuidib and Toksoz, 2015; Gao et al., 2015). The standard Cartesian grid and equations are retained and complex geometries including sharp edges and concavities can be represented.
To date, most applications of such schemes in geophysical wave modelling scenarios have made use of problem-specific approximations prior to discretization to impose boundary conditions. For example, imposing a suitable 1D approximation of the boundary conditions. It has been repeatedly demonstrated that such approximations yield satisfactory results (Hu, 2016; Mulder, 2017; Almuhaidib and Toksoz, 2015; Li et al., 2022) and their usage is widespread. Whilst such approaches offer intuitive parallels with conventional image methods, the condition imposed may not, even at its limit, strictly equal the true
boundary condition, instead merely being suitably similar. This is particularly apparent for vector boundary conditions such as those applied to the velocity fields at the free surface. Hu (2016) addressed this by decomposing velocity field components into tangential and radial components in a local cylindrical coordinate system, whilst Mulder and Huiskes (2017) assumed locally vertical or horizontal boundaries, incurring numerical error.
Such approximations can be suitable on a case-by-case basis, but are reliant upon domain knowledge and do not necessarily generalise straightforwardly: it is not immediately clear as to their applicability across multiple wave equations. Indeed, these approximations may be inherently limiting, precluding adaption to more complex physics (wave equations featuring transverse isotropy or elastic wave equations for example). Given the trajectory towards increasingly complex physics and geometries, this paper sets out to develop a generalised immersed boundary approach, such that a consistent methodology can be applied in a relatively problem-agnostic manner.
This approach aligns with current trends in geophysical modelling: domain-specific languages (DSLs) and automatic code generation are increasingly prevalent, with projects such as Devito (Luporini et al., 2019; Louboutin et al., 2019) and Firedrake (Rathgeber et al., 2016) leveraging high-level abstractions to generate low-level FDM and FEM solver kernels from symbolic partial differential equations (PDEs). Abstracting low-level aspects of implementation achieves a separation of concerns between the overarching problem and its underlying implementation (Rathgeber et al., 2016), enabling domain specialists to focus on the application level. High-level interfaces substantially reduce development time (Louboutin et al., 2019), whilst sophisticated optimization routines in the lowering process yield high-performance, portable code (Luporini et al., 2019; Louboutin et al., 2019). Such approaches hinge on a general method for solving a given problem so that suitable abstractions can be developed; this cannot be easily achieved with the application-specific immersed boundary implementations developed to date.
Developing a generalised mathematical approach enables abstraction, which in turn lends itself to automation and thus code generation. Furthermore, direct generalisation to other wave equations widens potential application, spanning a range of geophysical problems. The generalised nature of the method presented greatly simplifies inclusion of immersed boundaries in geophysical models, as a suitable treatment can be devised according to the prescribed formula to suit the physical problem at hand.
This paper is structured as follows: an immersed boundary approach supporting the imposition of multidimensional and vector boundary conditions is outlined, followed by its application to a selection of equations and boundary conditions of interest. This mathematical approach forms the backbone of Schism: a plugin for Devito used to implement the examples shown in this paper. The numerical accuracy of the approach proposed is explored through convergence testing of the resultant treatment, and several geophysically-relevant test cases based on the first and second-order formulations of the acoustic wave equation are explored. Note that whilst the demonstrations within this paper focus solely on variants of the acoustic wave equation, the method is nominally equally suited to pseudoacoustic wave equations featuring vertical and tilted transverse isotropy (VTI and TTI), alongside elastic wave equations. Further exploration of these areas is planned in subsequent publications.
## General approach for constructing immersed boundary derivative operators
### Constructing Finite-Difference Approximations
In this section, the discretization of continuous fields in the vicinity of non-grid-conforming boundaries is outlined. The reduction of continuous functions to a discrete set of values is key to all manner of numerical models. In the case of FDMs, these points constitute nodes on a grid, and updating these discrete values according to some governing equation approximates the evolution of the underlying continuous function. Function values located at these points constrain a basis with some specified error relative to the continuous function, diminishing as resolution increases.
Understanding the evolution of these functions necessitates the calculation of derivatives; to this end the aforementioned basis is used to form approximations of the derivative operators, discretizing the continuous equation. As with the basis, these will have some error relative to their true value. To outline how these approximations are made in the typical case, consider some function \(f\) in a 1D space; approximating with an \(M\)th-order Taylor series expansion at some point \(x_{0}\) yields
\[f(x)=\sum_{m=0}^{M}\frac{(x-x_{0})^{m}}{m!}\frac{\partial^{m}f(x_{0})}{ \partial x^{m}}+O((x-x_{0})^{M+1}), \tag{1}\]
concisely represented as
\[\mathbf{a}\cdot\boldsymbol{\delta}=f(\mathrm{x}), \tag{2}\]
where
\[\mathbf{a}^{T}=\left(1,(x-x_{0}),...,\frac{(x-x_{0})^{M}}{M!}\right), \tag{3}\]
and
\[\boldsymbol{\delta}^{T}=\left(f(x_{0}),\frac{\partial f}{\partial x}(x_{0}),...,\frac{\partial^{M}f}{\partial x^{M}}(x_{0})\right). \tag{4}\]
For some even \(M\), expansions at \(M+1\) discrete points, labelled \(x_{-M/2}\) through \(x_{M/2}\), enable the formation of the linear system
\[\mathbf{A}\boldsymbol{\delta}=\begin{pmatrix}f(x_{-M/2})\\ \vdots\\ f(x_{M/2})\end{pmatrix}. \tag{5}\]
To illustrate, for \(M=2\), the system is as follows
\[\begin{pmatrix}1&x_{-1}-x_{0}&\frac{(x_{-1}-x_{0})^{2}}{2}\\ 1&0&0\\ 1&x_{1}-x_{0}&\frac{(x_{1}-x_{0})^{2}}{2}\end{pmatrix}\begin{pmatrix}f(x_{0})\\ \frac{\partial f}{\partial x}(x_{0})\\ \frac{\partial^{2}f}{\partial x^{2}}(x_{0})\end{pmatrix}=\begin{pmatrix}f(x_{-1})\\ f(x_{0})\\ f(x_{1})\end{pmatrix}, \tag{6}\]
which can be rearranged in the form
\[\mathbf{A}^{-1}\begin{pmatrix}f(x_{-M/2})\\ \vdots\\ f(x_{M/2})\end{pmatrix}=\boldsymbol{\delta}, \tag{7}\]
thereby obtaining derivatives as some weighted sum of discrete function values: a stencil. With the assumption of a regular grid, relative positions of points become fixed regardless of where the derivative is being taken, yielding
\[\begin{pmatrix}1&-1&\frac{1}{2}\\ 1&0&0\\ 1&1&\frac{1}{2}\end{pmatrix}\begin{pmatrix}f(x_{0})\\ \Delta x\frac{\partial f}{\partial x}(x_{0})\\ \Delta x^{2}\frac{\partial^{2}f}{\partial x^{2}}(x_{0})\end{pmatrix}=\begin{pmatrix}f(x_{-1})\\ f(x_{0})\\ f(x_{1})\end{pmatrix}, \tag{8}\]
for the above case, the inverse being
\[\begin{pmatrix}0&1&0\\ -\frac{1}{2}&0&\frac{1}{2}\\ 1&-2&1\end{pmatrix}\begin{pmatrix}f(x_{-1})\\ f(x_{0})\\ f(x_{1})\end{pmatrix}=\begin{pmatrix}f(x_{0})\\ \Delta x\frac{\partial f}{\partial x}(x_{0})\\ \Delta x^{2}\frac{\partial^{2}f}{\partial x^{2}}(x_{0})\end{pmatrix}. \tag{9}\]
One may observe that the leftmost matrix contains weights for FD stencils of every derivative order up to that of the basis.
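As a concrete illustration of this construction (not part of the implementation described in this paper), the short NumPy sketch below assembles the matrix of Equation 8 for a chosen even order \(M\) and inverts it, recovering the central-difference weights of Equation 9; the function and variable names are illustrative only.

```python
import numpy as np
from math import factorial

def centred_stencil_weights(M, dx=1.0):
    """Build the Taylor-series matrix of Equation 5 on M + 1 equispaced
    points and invert it. Row m of the returned matrix holds the FD
    weights approximating the m-th derivative at the central point."""
    offsets = np.arange(-M // 2, M // 2 + 1) * dx
    A = np.array([[x**m / factorial(m) for m in range(M + 1)] for x in offsets])
    return np.linalg.inv(A)

weights = centred_stencil_weights(2)
print(weights[1])  # ~[-0.5, 0.0, 0.5]: the first-derivative stencil of Equation 9
print(weights[2])  # ~[ 1.0, -2.0, 1.0]: the second-derivative stencil
```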
For higher dimensions, a suitably higher-dimensional polynomial is required, formed as a product of per-dimension 1D series of the form
\[f(x)=\sum_{m=0}a_{m}x^{m}, \tag{10}\]
in 3D yielding
\[f(\mathrm{x})=\sum_{m=0}\sum_{n=0}\sum_{l=0}a_{mnl}x^{m}y^{n}z^{l}. \tag{11}\]
Given this, the process outlined above is extensible to higher dimensions as in Takekawa et al. (2015). In the pursuit of consistency and accuracy, it is crucial to maintain consistent error throughout any numerical scheme; thus polynomials used in such schemes must be of an order matching the spatial discretization employed. To this end, for an order \(M\) spatial discretization, the 3D expansion is truncated as follows:
\[f(\mathrm{x})=\sum_{m=0}^{M_{x}}\sum_{n=0}^{M_{y}}\sum_{l=0}^{M_{z}}a_{mnl}x^{m}y^{n}z^{l}+O(|\Delta\mathrm{x}|^{(M+1)}), \tag{12}\]
subject to
\[M_{x}+M_{y}+M_{z}=M, \tag{13}\]
thus removing any terms with order greater than \(M\).
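The set of exponent tuples retained by this truncation can be enumerated mechanically; the following sketch (illustrative only, with hypothetical names) does so, giving 15 terms for a fourth-order basis in 2D and 35 in 3D.

```python
from itertools import product

def truncated_basis_indices(M, ndims=3):
    """Exponent tuples (m, n, l, ...) retained in the truncated expansion of
    Equation 12, i.e. all terms whose total order does not exceed M."""
    return [idx for idx in product(range(M + 1), repeat=ndims) if sum(idx) <= M]

print(len(truncated_basis_indices(4, ndims=2)))  # 15 terms for M = 4 in 2D
print(len(truncated_basis_indices(4, ndims=3)))  # 35 terms for M = 4 in 3D
```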
An N-dimensional Taylor series is a robust candidate for such applications, retaining similarities to the aforementioned 1D case and being suitably problem-agnostic. In 3D, this is given by
\[f(\mathrm{x})=\sum_{m=0}\sum_{n=0}\sum_{l=0}\frac{(x-x_{0})^{m}(y-y_{0})^{n}(z-z_{0})^{l}}{m!n!l!}\delta_{m,n,l}, \tag{14}\]
where
\[\delta_{m,n,l}=\frac{\partial^{m+n+l}}{\partial x^{m}\partial y^{n}\partial z ^{l}}f(\mathrm{x}_{0}). \tag{15}\]
Truncating this expansion as in Equation 13, the polynomial expansion at points within the support region can be represented as a matrix-vector multiplication, as in Equation 2. Reflecting the higher dimensionality and corresponding increased number of terms, \(\mathbf{a}\) is given by
\[\begin{split}\mathbf{a}^{T}=\bigg{(}1,(x-x_{0}),...,(z-z_{0}), \frac{(x-x_{0})^{2}}{2},\\ (x-x_{0})(y-y_{0}),...,\frac{(z-z_{0})^{2}}{2},...,\\ \frac{(x-x_{0})^{M}}{M!},...,\frac{(z-z_{0})^{M}}{M!}\bigg{)}, \end{split} \tag{16}\]
whilst
\[\begin{split}\boldsymbol{\delta}^{T}=\bigg{(}1,\frac{\partial}{ \partial x},...,\frac{\partial}{\partial z},\frac{\partial^{2}}{\partial x^{2} },\\ \frac{\partial^{2}}{\partial x\partial y},...,\frac{\partial^{2}}{ \partial z^{2}},...,\\ \frac{\partial^{M}}{\partial x^{M}},...,\frac{\partial^{M}}{ \partial z^{M}}\bigg{)}f(\mathrm{x}_{0}).\end{split} \tag{17}\]
As in the 1D case, expansions taken at some set of points are used to form a linear system. However, with increased dimensionality comes increased flexibility regarding the distribution of these points. Consider function values discretized at a set of points distributed in an arbitrary manner. In one dimension it is reasonably simple to construct a polynomial expansion of any desired order to fit this data, provided the number of linearly-independent data points equals or exceeds the coefficients in the truncated expansion. However, when constraining the higher-dimensional basis, this linear independence requires suitable point distribution.
A hyperspherical support region footprint, similar to that used by Takekawa et al. (2015), is proposed as a means of ensuring a suitable distribution of points with which to constrain the basis.
Figure 1: A selection of 2D support region footprints with radii of 1.5, 2.5, and 3.5 respectively. The vertical black cross designates the stencil centre (the position at which the stencil is applied), green crosses are interior points, and the dotted black line shows the extent of the support region.
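A footprint of this kind can be generated by simple distance thresholding of grid offsets about the stencil centre. The sketch below (plain NumPy, illustrative rather than the implementation used in this work) constructs 2D footprints for the radii shown in Figure 1.

```python
import numpy as np

def support_footprint(radius, ndims=2):
    """Integer grid offsets lying within a hyperspherical support region of
    the given radius (in grid increments), centred on the stencil position."""
    r = int(np.floor(radius))
    axes = [np.arange(-r, r + 1)] * ndims
    offsets = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, ndims)
    return offsets[np.linalg.norm(offsets, axis=1) <= radius]

for radius in (1.5, 2.5, 3.5):
    print(radius, len(support_footprint(radius)))  # 9, 21 and 37 points respectively
```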
### Approximating Boundary Conditions
In 3D, meaningful linear boundary conditions imposed upon a single field (boundary conditions imposed on multiple fields will be discussed in due course) have the general form
\[\sum_{m=0}^{M_{x}}\sum_{n=0}^{M_{y}}\sum_{l=0}^{M_{z}}\alpha_{mnl}(\mathrm{x_{b} })\frac{\partial^{m+n+l}f}{\partial x^{m}\partial y^{n}\partial z^{l}}(\mathrm{x_ {b}})=g(\mathrm{x_{b}}) \tag{20}\]
where \(M_{x}\), \(M_{y}\), and \(M_{z}\) are as in Equation 13. Conditions of this form will not yield the trivial expression \(0=g(\mathrm{x_{b}})\) when approximated with the basis. The coefficient \(\alpha_{mnl}(\mathrm{x_{b}})\) may vary with position, although for the applications considered in this paper, these are constant.
Substituting \(f\) for its Taylor-series approximation yields
\[\sum_{m=0}^{M_{x}}\sum_{n=0}^{M_{y}}\sum_{l=0}^{M_{z}}\sum_{i=0}^{M_{x}}\sum_{j =0}^{M_{y}}\sum_{k=0}^{M_{z}}\beta_{mnlijk}\frac{\partial^{i+j+k}f}{\partial x ^{i}\partial y^{j}\partial z^{k}}(\mathrm{x_{0}})=g(\mathrm{x_{b}}), \tag{21}\]
where
\[\beta_{mnlijk}=\alpha_{mnl}(\mathrm{x_{b}})\frac{\partial^{m+n+l}}{\partial x ^{m}\partial y^{n}\partial z^{l}}\frac{(x-x_{0})^{i}(y-y_{0})^{j}(z-z_{0})^{k} }{i!j!k!}. \tag{22}\]
As is the case with Taylor-series approximations at interior points, this is expressible as a dot product:
\[\left(\sum_{m=0}^{M_{x}}\sum_{n=0}^{M_{y}}\sum_{l=0}^{M_{z}}\alpha_{mnl}( \mathrm{x_{b}})\frac{\partial^{m+n+l}}{\partial x^{m}\partial y^{n}\partial z ^{l}}\mathrm{a}\right)\cdot\mathbf{\delta}=g(\mathrm{x_{b}}). \tag{23}\]
Note that the derivative vector is identical to that used for Taylor-series expansion at interior points, and thus some set of interior and boundary constraints can be encapsulated as the multiplication of the derivative vector by some matrix of coefficients. There is no distinction between interior and boundary constraints: it is apparent that interior constraints can be formed from Equation 20, and both can be represented as rows of the linear system.
With individual constraints for both interior points and boundary conditions, a linear system can be obtained with which to fit the basis, not unlike that in Equation 5. Figure 4 shows the effect of adding free-surface boundary conditions to the truncated stencil shown in Figure 3; the matrix is once again overdetermined, and thus nominally has sufficient information to constrain the derivatives. Note however, that it may be the case that such a matrix is still not full-rank. For example, particular boundary constraints may contain redundant information, as seen in the lowermost rows of the matrix or lack information regarding particular derivatives.
With interior function values and some \(J\) boundary conditions imposed at the boundary points \(\mathrm{X_{b}}\), the linear system
\[\mathbf{A}\mathbf{\delta}=\begin{pmatrix}f(\mathbf{X})\\ g_{1}(\mathbf{X_{b}})\\ \vdots\\ g_{J}(\mathbf{X_{b}})\end{pmatrix}, \tag{24}\]
can be formed, with rows of the form given in Equations 2 and 23, as shown in Figure 4. \(\mathbf{X}\) is the set of interior points used to construct the extrapolant and vectors \(g_{1}(\mathbf{X_{b}})\) through \(g_{J}(\mathbf{X_{b}})\) contain forcing values corresponding to each boundary condition and forcing point.
As a tangible example, consider constructing a second-order extrapolant in 1D from two interior points (labelled \(x_{1}\) and \(x_{2}\)) and a boundary point upon which the boundary
Figure 4: The effect of adding boundary constraints to the truncated support region shown in Figure 3. Boundary points where conditions are imposed are shown as hollow green crosses. Solid green dots show the centre of FD cells containing a boundary point; the normal from this point to the boundary is shown as a green line. Boundary conditions increase in order towards the bottom of the matrix. Note that the highest-order boundary conditions here are invariant with position and thus redundant.
Figure 3: The effect of truncation of the stencil footprint shown in Figure 2 by a 45° boundary. Note that the points removed from the stencil correspond to rows removed from **A**.
conditions \(f(x_{b})=0\) and \(\frac{\partial^{2}f}{\partial x^{2}}(x_{b})=0\) are imposed. This particular case yields the linear system
\[\begin{pmatrix}1&(x_{1}-x_{0})&\frac{(x_{1}-x_{0})^{2}}{2}\\ 1&(x_{2}-x_{0})&\frac{(x_{2}-x_{0})^{2}}{2}\\ 1&(x_{b}-x_{0})&\frac{(x_{b}-x_{0})^{2}}{2}\\ 0&0&1\end{pmatrix}\begin{pmatrix}f(x_{0})\\ \frac{\partial f}{\partial x}(x_{0})\\ \frac{\partial^{2}f}{\partial x^{2}}(x_{0})\end{pmatrix}=\begin{pmatrix}f(x_{1})\\ f(x_{2})\\ 0\\ 0\end{pmatrix}. \tag{25}\]
Whilst higher-order accuracy and higher dimensionality increase the size of this system (making it somewhat unwieldy to show here) the process of constructing this system remains the same. Note here that whilst this particular system is overdetermined, removing a single interior point from this example gives the approach detailed in Mulder (2017) and Mulder and Huiskes (2017) in which all boundary constraints are selected then supplemented with interior constraints to obtain a square system for which an inverse can be found.
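A minimal sketch of assembling and fitting this example system, using placeholder coordinates and function values, is given below (plain NumPy; the helper name and values are illustrative, and the least-squares fit anticipates the pseudoinverse discussed in the following section rather than reproducing the implementation used in this work).

```python
import numpy as np
from math import factorial

def taylor_row(x, x0, M=2):
    """Taylor monomials of Equation 3 evaluated at x, expanded about x0."""
    return np.array([(x - x0)**m / factorial(m) for m in range(M + 1)])

# Placeholder coordinates: expansion point, two interior points and a boundary point
x0, x1, x2, xb = 0.0, -1.0, -2.0, 0.4

A = np.vstack([taylor_row(x1, x0),          # interior constraint at x1
               taylor_row(x2, x0),          # interior constraint at x2
               taylor_row(xb, x0),          # boundary condition f(xb) = 0
               [0.0, 0.0, 1.0]])            # boundary condition f''(xb) = 0 (truncated basis)

rhs = np.array([1.0, 2.0, 0.0, 0.0])        # placeholder f(x1), f(x2), then zero forcing
delta = np.linalg.pinv(A) @ rhs             # least-squares estimate of (f, f', f'') at x0
```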
The process of selecting a suitable set of interior and boundary conditions is worthy of some consideration here. Whilst the method used by Mulder (2017); Mulder and Huiskes (2017) yields easy-to-invert systems and is intuitive in 1D, in higher dimensions more choices are available.
To preserve the Cartesian topology of the grid (thereby allowing all points to be directly indexed), boundary points are taken as the intersection of the boundary normal at a grid point and the boundary, assuming this lies within a point-centred hyperrectangle whose sides correspond to a single grid increment in each dimension. Such a support region is shown in Figure 5. From here, a cutoff parameter \(\eta\) is defined. Interior points where a boundary point lies within a gridpoint-centred hyperrectangle with sides of length \(2\eta\Delta x_{n}\) (where \(\Delta x_{n}\) is the grid increment in the \(n^{\text{th}}\) dimension) are excluded for the purposes of constructing the extrapolant to ensure stability. Note that whilst suitable values of \(\eta\) have been empirically determined in this work, more rigorous analysis would be beneficial (although this may be challenging, as in Mulder and Huiskes, 2017). An initial support region radius of \((M+1)/2\) is selected, expanded incrementally if insufficient information to constrain the extrapolation is contained therein.
If the basis is sufficiently constrained, \(rank(A)\) will equal the number of expansion terms. The required rank is \(M+1\) in 1D, \((M+1)(M+2)/2\) in 2D, and \((M+1)(M+2)(M+3)/6\) in 3D respectively, the latter two corresponding to the \((M+1)^{\text{th}}\) triangular and tetrahedral numbers respectively.
If one does not intend to be so strict about maintaining formal order, the order of the polynomial basis can be reduced instead, as in Mulder (2017); Mulder and Huiskes (2017). It is anticipated that this may be advisable with some boundary conditions and geometries to avoid stencil footprints becoming excessively large in edge cases.
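The rank test and incremental growth of the support region described above can be sketched as follows; this is illustrative only, with `build_system` a hypothetical callable returning the constraint matrix and right-hand side for a given support radius.

```python
import numpy as np
from math import comb

def required_rank(M, ndims):
    """Number of terms in the truncated order-M basis: M + 1 in 1D,
    (M+1)(M+2)/2 in 2D and (M+1)(M+2)(M+3)/6 in 3D."""
    return comb(M + ndims, ndims)

def fit_or_grow(build_system, M, ndims, radius):
    """Expand the support radius until the constraint matrix contains enough
    information to determine every derivative in the basis, then fit it."""
    A, rhs = build_system(radius)
    while np.linalg.matrix_rank(A) < required_rank(M, ndims):
        radius += 1.0
        A, rhs = build_system(radius)
    return np.linalg.pinv(A) @ rhs
```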
### Reconciliation with the Interior Numerical Discretization
Whilst the method described above can be used to directly obtain FD stencils, sudden switches in derivative approximation may lead to instability, and it is thus desirable to retain the interior discretization throughout the computational domain. The truncation of interior operators applied in the vicinity of the boundary can be addressed by using the boundary-constrained N-dimensional polynomial basis to approximate function values at required exterior points. The process of projecting this basis onto some set of required exterior points \(\mathbf{X_{e}}\) can be represented by the matrix-vector multiplication
\[\mathbf{B}\boldsymbol{\delta}=\bar{f}(\mathbf{X_{e}}), \tag{26}\]
where \(\mathbf{B}\) contains the terms associated with each derivative in the Taylor series evaluated at the respective points in \(\mathbf{X_{e}}\). Derivatives are approximated as
\[\boldsymbol{\delta}=\mathbf{A}^{+}\begin{pmatrix}f(\mathbf{X})\\ g_{1}(\mathbf{X_{b}})\\ \vdots\\ g_{J}(\mathbf{X_{b}})\end{pmatrix}, \tag{27}\]
where \(\mathbf{A}^{+}\) is the Moore-Penrose pseudoinverse of \(\mathbf{A}\). Note that this is the inverse of Equation 24.
Using the 1D example shown in Equation 25, assume two exterior points, designated \(x_{3}\) and \(x_{4}\) are required to complete the stencil operator applied at the specified position. \(\mathbf{B}\) will then be of the form
\[\mathbf{B}=\begin{pmatrix}1&(x_{3}-x_{0})&\frac{(x_{3}-x_{0})^{2}}{2}\\ 1&(x_{4}-x_{0})&\frac{(x_{4}-x_{0})^{2}}{2}\end{pmatrix}, \tag{28}\]
Figure 5: Construction of the support region in the vicinity of the boundary. Symbols are as in Figures 1 and 4. The solid black line represents the boundary surface. Hollow black crosses are considered to be outside or too close to the boundary for the purposes of constructing the extrapolant (as defined in the proceeding section).
and thus Equation 26 will yield approximations of \(f(x_{3})\) and \(f(x_{4})\) as functions of \(f(x_{1})\), \(f(x_{2})\) (the other entries in the right-hand side vector of Equation 25 are zero). More generally, this process yields exterior function values as functions of interior values and boundary conditions.
Substituting these into the interior stencil as necessary, a modified version of the interior operator is obtained. This operator has a support region consisting of the union of interior points in the original stencil (\(\mathbf{X_{i}}\)) and those in the circular stencils used for extrapolation, plus boundary points where conditions are enforced.
A stencil expression approximating some derivative of \(f\) can be expressed as
\[\mathbf{w}\cdot f(\mathbf{X})=d, \tag{29}\]
where \(d\) is some arbitrary derivative and \(\mathbf{w}\) contains the stencil weights corresponding to each function value in the vector \(f(\mathbf{X})\). Separating the corresponding vector of stencil weights \(\mathbf{w}\) into two subvectors \(\mathbf{w_{i}}\) containing weights for points on the interior and \(\mathbf{w_{e}}\) for points on the exterior, the expression for the modified stencil \(d\) can be obtained via
\[\begin{pmatrix}\mathbf{w_{i}}\\ \mathbf{w_{e}}\end{pmatrix}\cdot\begin{pmatrix}f(\mathbf{X_{i}})\\ \tilde{f}(\mathbf{X_{e}})\end{pmatrix}=d. \tag{30}\]
Note that by constructing modified operators in this manner, the extrapolated values are used without being explicitly calculated, with the extrapolation process baked into the stencil coefficients. Furthermore, derivative stencils centred at different gridpoints will have independent extrapolations. This independence and the local nature of these extrapolation operators ensure that the linear systems constructed remain relatively small and simplify the process of obtaining a solution to the system (both criteria prioritised by Hu, 2016).
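The folding of the extrapolation into the stencil coefficients (Equations 26-30) amounts to a handful of matrix products. The sketch below is illustrative only: the names are hypothetical and the interior constraint rows of \(\mathbf{A}\) are assumed to precede the boundary rows.

```python
import numpy as np

def exterior_contribution(w_ext, A, B, rhs_bc):
    """Express the exterior values required by a truncated interior stencil
    in terms of interior values and boundary forcing (Equations 26-30).

    w_ext  : stencil weights attached to the exterior points X_e
    A      : constraint matrix of Equation 24 (interior rows first)
    B      : Taylor monomials evaluated at the exterior points (Equation 26)
    rhs_bc : boundary forcing values g(X_b) (all zero for a free surface)
    """
    E = B @ np.linalg.pinv(A)                # f(X_e) ~= E @ [f(X); g(X_b)]
    n_int = A.shape[0] - len(rhs_bc)         # number of interior constraint rows
    w_support = w_ext @ E[:, :n_int]         # effective weights on the support points X
    forcing = w_ext @ E[:, n_int:] @ rhs_bc  # constant term from boundary forcing
    return w_support, forcing
```

The modified operator is then the sum of the untouched interior weights \(\mathbf{w_{i}}\) applied to \(\mathbf{X_{i}}\), the returned weights applied to the support points, and the forcing term, with the two point sets merged over their union as described above.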
## Application to the 2nd-order acoustic wave equation
Application to the 2nd-order acoustic wave equation represents a suitable first step for the detailed method. Containing only a single time-variant field and equation, it has reduced computational cost and implementation complexity compared to other formulations and wave equations; simpler equations also yield simpler boundary conditions. Whilst only P-wave components can be propagated and more complex physics such as anisotropy and viscoelasticity commonly desired for seismic imaging are omitted, applications remain in medical imaging (Guasch et al., 2020) and infrasound studies (Kim and Lees, 2014). Furthermore, it offers a simple platform to test the applicability of the method to various boundary conditions. Doing so paves the way for further diverse boundary conditions introduced with more complex physics.
The equation itself is given as
\[\frac{\partial^{2}p}{\partial t^{2}}=c^{2}\nabla^{2}p+f, \tag{31}\]
containing a single time-dependent variable \(p\), and parameterised via a wavespeed \(c\) which varies in space alone. There is also an additional forcing term \(f\) which in the context of seismic simulation takes the form of some point source or set thereof.
In seismic applications, this equation is typically discretized with an explicit timestepping scheme, replacing the time derivative with a second-order centred-difference approximation:
\[\frac{p^{t+1}-2p^{t}+p^{t-1}}{\Delta t^{2}}=c^{2}\nabla^{2}p+f. \tag{32}\]
Rearranging for pressure at the forward timestep,
\[p^{t+1}=2p^{t}-p^{t-1}+\Delta t^{2}c^{2}\nabla^{2}p+\tilde{f}, \tag{33}\]
is obtained, where \(\tilde{f}=f\Delta t^{2}\). This is the update equation used to estimate the field at the next iteration.
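To make the timestepping concrete, the following sketch (plain NumPy, independent of the Devito/Schism implementation referenced elsewhere in this paper) applies the update of Equation 33 on a regular 2D grid using a standard fourth-order Laplacian; near an immersed boundary the second-derivative stencils would be replaced by the modified operators described above. All parameter values are illustrative.

```python
import numpy as np

def step(p_prev, p_curr, c, dt, dx, f=0.0):
    """One application of Equation 33 with a fourth-order Laplacian on the
    interior of a regular 2D grid (boundary rings are left untouched)."""
    lap = np.zeros_like(p_curr)
    w = np.array([-1/12, 4/3, -5/2, 4/3, -1/12]) / dx**2  # 4th-order 2nd-derivative weights
    nx, ny = p_curr.shape
    for k, wk in zip(range(-2, 3), w):
        lap[2:-2, 2:-2] += wk * (p_curr[2 + k:nx - 2 + k, 2:-2]
                                 + p_curr[2:-2, 2 + k:ny - 2 + k])
    return 2*p_curr - p_prev + dt**2 * c**2 * lap + dt**2 * f

p_prev = np.zeros((201, 201))
p_curr = np.zeros_like(p_prev)
p_curr[100, 100] = 1.0                       # crude impulsive initial condition
p_next = step(p_prev, p_curr, c=1500.0, dt=1e-4, dx=5.0)
```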
Spatial derivatives are all contained within the Laplacian; for the purposes of an immersed boundary, modified 2nd-derivative operators will need to be generated for each dimension, assuming the boundary is not axially aligned.
Dependent upon the side from which the wave approaches the surface, the reflection coefficient is near 1 or -1: the former when approaching from above and the latter when approaching from below. We will consider the latter case for now. In this scenario, the topography for all intents and purposes represents an irregular free surface, upon which the condition
\[p(t,\mathbf{x_{b}})=0, \tag{34}\]
is to be applied. Using Equation 31, incrementally higher-order boundary conditions can be derived, these being \(\nabla^{2}p(t,\mathbf{x_{b}})=0\), \(\nabla^{4}p(t,\mathbf{x_{b}})=0\), and so forth.
## Free-Surface Boundary Constraints
By substituting the polynomial basis into these boundary conditions in the place of pressure, suitable approximations can be formed. For boundary conditions of higher order than the spatial discretization, this will yield the trivial expression \(0=0\), and thus such conditions can be discarded.
Considering a fourth-order discretization in 2D, substituting the corresponding basis into the zeroth-order
\[\frac{\partial^{4}p}{\partial x^{4}}(x_{0},y_{0})+2\frac{\partial^{4}p}{ \partial x^{2}\partial y^{2}}(x_{0},y_{0})+\frac{\partial^{4}p}{\partial y^{4}}(x _{0},y_{0})=0, \tag{37}\]
respectively. Each of these equations can be expressed as in Equation 2, with \(\mathbf{\delta}\) given by
\[\begin{split}\boldsymbol{\delta}^{T}=\bigg{(}p(x_{0},y_{0}),\frac{\partial p}{\partial x}(x_{0},y_{0}),\frac{\partial p}{\partial y}(x_{0},y_{0}),\\ \frac{\partial^{2}p}{\partial x^{2}}(x_{0},y_{0}),\frac{\partial^{2}p}{\partial x\partial y}(x_{0},y_{0}),\frac{\partial^{2}p}{\partial y^{2}}(x_{0},y_{0}),\\ \frac{\partial^{3}p}{\partial x^{3}}(x_{0},y_{0}),\frac{\partial^{3}p}{\partial x^{2}\partial y}(x_{0},y_{0}),\frac{\partial^{3}p}{\partial x\partial y^{2}}(x_{0},y_{0}),\\ \frac{\partial^{3}p}{\partial y^{3}}(x_{0},y_{0}),\frac{\partial^{4}p}{\partial x^{4}}(x_{0},y_{0}),\frac{\partial^{4}p}{\partial x^{3}\partial y}(x_{0},y_{0}),\\ \frac{\partial^{4}p}{\partial x^{2}\partial y^{2}}(x_{0},y_{0}),\frac{\partial^{4}p}{\partial x\partial y^{3}}(x_{0},y_{0}),\frac{\partial^{4}p}{\partial y^{4}}(x_{0},y_{0})\bigg{)}.\end{split} \tag{38}\]
For the zeroth-order condition,
\[\begin{split}\mathbf{a}^{T}=\bigg{(}1,(x_{b}-x_{0}),(y_{b}-y_{0}),\frac{(x_{b}-x_{0})^{2}}{2},\\ (x_{b}-x_{0})(y_{b}-y_{0}),\frac{(y_{b}-y_{0})^{2}}{2},\frac{(x_{b}-x_{0})^{3}}{6},\\ \frac{(x_{b}-x_{0})^{2}(y_{b}-y_{0})}{2},\frac{(x_{b}-x_{0})(y_{b}-y_{0})^{2}}{2},\\ \frac{(y_{b}-y_{0})^{3}}{6},\frac{(x_{b}-x_{0})^{4}}{24},\frac{(x_{b}-x_{0})^{3}(y_{b}-y_{0})}{6},\\ \frac{(x_{b}-x_{0})^{2}(y_{b}-y_{0})^{2}}{4},\frac{(x_{b}-x_{0})(y_{b}-y_{0})^{3}}{6},\\ \frac{(y_{b}-y_{0})^{4}}{24}\bigg{)},\end{split} \tag{39}\]
whilst the Laplacian and Biharmonic conditions similarly correspond to
\[\begin{split}\mathbf{a}^{T}=\bigg{(}0,0,0,1,0,1,(x_{b}-x_{0}),(y_{b}-y_{0}),(x_{b}-x_{0}),\\ (y_{b}-y_{0}),\frac{(x_{b}-x_{0})^{2}}{2},(x_{b}-x_{0})(y_{b}-y_{0}),\\ \frac{(x_{b}-x_{0})^{2}+(y_{b}-y_{0})^{2}}{2},(x_{b}-x_{0})(y_{b}-y_{0}),\\ \frac{(y_{b}-y_{0})^{2}}{2}\bigg{)}\end{split} \tag{40}\]
and
\[\mathbf{a}^{T}=(0,0,0,0,0,0,0,0,0,0,1,0,2,0,1) \tag{41}\]
respectively. The right-hand side will be zero.
With Equations 39-41, three rows can be constructed for each boundary point used to constrain the basis. Note that the row corresponding with the fourth-order condition is invariant in space, and so will contain the same information irrespective of boundary point location.
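For reference, coefficient vectors such as those in Equations 39-41 can be generated mechanically by applying each boundary operator to the truncated Taylor monomials symbolically. The SymPy sketch below is illustrative only; its term ordering is arbitrary and must simply match the ordering adopted for the derivative vector.

```python
import sympy as sp

x, y, x0, y0, xb, yb = sp.symbols('x y x0 y0 xb yb')
M = 4
terms = [(m, n) for m in range(M + 1) for n in range(M + 1) if m + n <= M]

# Truncated 2D Taylor monomials about (x0, y0), one per entry of the derivative vector
monos = [(x - x0)**m * (y - y0)**n / (sp.factorial(m) * sp.factorial(n)) for m, n in terms]

def condition_row(op):
    """Apply a linear differential operator to each monomial and evaluate the
    result at the boundary point, yielding one row of the constraint matrix."""
    return [sp.simplify(op(u).subs({x: xb, y: yb})) for u in monos]

row_dirichlet  = condition_row(lambda u: u)                                        # cf. Equation 39
row_laplacian  = condition_row(lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2))      # cf. Equation 40
row_biharmonic = condition_row(lambda u: sp.diff(u, x, 4)
                               + 2*sp.diff(u, x, 2, y, 2) + sp.diff(u, y, 4))      # cf. Equation 41
```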
### Application to Example Geometry
To exemplify this process, consider the case of a fourth-order stencil approximating \(\frac{\partial^{2}p}{\partial y^{2}}\), truncated by an arc-shaped boundary, as shown in Figure 6. It is clear that it will not be possible to form this stencil as one would in free space since two of the requisite points lie outside the physical domain.
To rectify this, a circular support region of radius \(2.5\Delta x\) is extended from the stencil centre, as in Figure 7. This
support radius encircles 5 boundary points, determined via the previously-discussed criteria. Given a cutoff of \(\eta=0.5\) to prevent instability related to the boundary forcing, 11 interior points are also available for purposes of constructing the 2D extrapolant. Each boundary point will have three matrix rows associated with it, one for each boundary condition imposed at that point, whilst each interior point will correspond to a single row of \(\mathbf{A}\).
The set of interior points used to fit the basis is shown in Figure 8a alongside the resultant submatrix containing the constraints applied at this set of points. Note however that these boundary constraints are not necessarily unique (most prominently the zero biharmonic condition as aforementioned), and may contain overlapping information: whilst the resultant matrix has more rows than columns, it is not necessarily full-rank.
In this particular case, the rank is equal to the number of columns, implying an overdetermined system. To obtain approximations of the derivatives and thus fit the basis, a Moore-Penrose pseudoinverse is used. A weighted least-squares approach was briefly explored to prioritise particular boundary conditions or points but was found to be prone to generating ill-conditioned linear systems whilst having minimal discernible benefit.
Continuing the pressure field onto the pair of required exterior points requires the construction of \(\mathbf{B}\) as in Equation 26, evaluating the basis at points \((0,1)\) and \((0,2)\), and constructing matrix rows in the prescribed manner. Note that as all boundary forcing values are zero in this case, corresponding columns of \(\mathbf{B}\mathbf{A}^{+}\) can effectively be ignored for the purpose of constructing the stencil. Applying weights for the interior stencil, the modified boundary operator for this particular point is obtained, as shown in Figure 10.
### Reduction to 1D
Through particular choices made when applying the method described, other, equally feasible stencils can be obtained, including the 1D approximation detailed in Mulder (2017). Considering the 1D case, the aforementioned free-surface boundary conditions reduce to
\[p(x_{b})=0,\quad\frac{\partial^{2}p}{\partial x^{2}}(x_{b})=0,\quad\frac{ \partial^{4}p}{\partial x^{4}}(x_{b})=0,\quad... \tag{42}\]
and so forth. Suppose some case is encountered where a boundary truncates a stencil as in Figure 6. Selecting the \(M/2\) closest points to the boundary (at distance greater than \(\eta\Delta x\)), \(\mathbf{A}\) can be constructed such that it is square, enabling derivatives to be obtained by inverting this matrix. Thus the method described by Mulder (2017) can be considered a sub-case of the overarching method described in this paper, albeit distinct from the specific approach taken for the examples shown. The optimal manner by which to delineate the support region and solve the linear system is worthy of future attention - the approach taken by this study is by no means optimal, merely aiming for simplicity and robustness.
Figure 6: The stencil footprint of a 4th-order-accurate \(\frac{\partial^{2}p}{\partial y^{2}}\) stencil truncated by an arc-shaped boundary. The bold black cross is the stencil position, whilst pale green crosses are other interior points within the stencil. Values at both hollow black crosses are required by the stencil but are located outside the computational domain.
Figure 7: Support region for a 2D polynomial fitted with a combination of available interior points and boundary points.
Comparing the proposed approach based on an N-dimensional basis with circular support to an immersed boundary implementation based on per-dimension 1D extrapolations for a 2D second-order acoustic example featuring a sinusoidal free surface, shown in Figure 11, both approaches yield visually similar results. However, some minor unevenness is observable in the trailing edge of the reflected wave when 1D extrapolations are used, this area appearing smoother when N-dimensional extrapolation is used.
### Convergence testing
To examine the convergence behaviour of the proposed approach, a setup initially presented by Mulder (2017) and subsequently used in Mulder and Huiskes (2017) (the second example in both) was replicated, as shown in Figure 12a. An exact solution exists for this example, allowing for the error in any numerical solution to be evaluated. Figure 12b shows the convergence behaviour of a scheme based on N-dimensional extrapolations, compared against a scheme based on axially-aligned 1D extrapolations.
For a fourth-order spatial discretization, the reduction in observed error with respect to grid increment was found to be initially just short of fourth-order, flattening around a grid increment of 0.02 as the spatial error is eclipsed by second-order timestepping error for finer grids. As the timestep was set at 10% of the critical value, this implies that the error introduced by the immersed boundary was minimal in all cases and that in many cases, topography implementation will cease to be the accuracy bottleneck when this immersed boundary treatment is applied. Reducing the timestep enabled the continuation of the approximate fourth-order trend to finer grids, but accumulation of floating-point error again resulted in a similar flattening albeit at a smaller grid increment. The immersed boundary approach based on N-dimensional extrapolations was found to yield reduced error versus that based on 1D extrapolations for all grid increments tested, particularly at finer resolutions, albeit with similar convergence behaviour.
Figure 11: Snapshots at 400ms, 500ms, 600ms, and 700ms of a wavefront reflecting off a sinusoidal hill upon which a free surface has been imposed. Note that zero Dirichlet boundary conditions have been imposed on all other edges of the domain. Subfigures a, c, e, and g feature an immersed boundary based on a 1D extrapolation scheme, whilst subfigures b, d, f, and h use an N-dimensional basis with circular support. Wavefield amplitudes are normalised against the maximum absolute value in each subfigure for clarity. This convention is continued henceforth.
Figure 10: Comparison of the standard stencil footprint and weights to that of the modified boundary operator for the case illustrated in Figure 6. The colourbar indicates the value of the stencil weight at each point.
## Extension to multiple fields and the 1st-order acoustic wave equation
The method detailed yields accurate results for equations concerning a single field, whilst enabling higher-dimensional conditions to be imposed on the boundary. Another appeal of this approach is the readiness with which it is extended to cases where multiple fields are present. The acoustic wave equation can alternatively be formulated as a coupled system of pressure and particle velocity, introducing a spatially-variant density parameter, capturing density-dependent amplitude variations. These equations take the form
\[\frac{\partial p}{\partial t}=\rho c^{2}\mathbf{\nabla}\cdot\mathbf{v}+f,\quad \frac{\partial\mathbf{v}}{\partial t}=\frac{1}{\rho}\mathbf{\nabla}p, \tag{43}\]
where \(\mathbf{v}\) is particle velocity and \(\rho\) is density. This formulation introduces additional fields in the form of components of the particle velocity vector. Given the aforementioned free-surface condition imposed on the pressure field, it is apparent from Equation 43 that the condition
\[\mathbf{\nabla}\cdot\mathbf{v}(t,\mathbf{x_{b}})=0 \tag{44}\]
must also be imposed. Whilst boundary conditions considered prior to now have concerned some property of a single field at the boundary, when multiple fields are present in a model, boundary conditions specifying some relationship between these may be present. Each vector component can be approximated by an independent polynomial basis in free space, but at the edge of the domain, they will require construction such that this relationship is respected. As will be highlighted - this extension can be naturally handled by the method described.
The following section considers boundary conditions of this type more generally, before homing in on the application of this approach to the particle velocity free surface.
### Boundary Conditions Spanning Multiple Fields
Where multiple fields are present, individual Taylor Series are used to approximate each. Supposing \(K\) separate fields, labelled \(f_{1}\) through \(f_{K}\) are present within a model: ignoring boundary conditions for now, the polynomial fitting process can be expressed as
\[\begin{pmatrix}\mathbf{A_{1}}&&\\ &\ddots&\\ &&\mathbf{A_{K}}\end{pmatrix}\begin{pmatrix}\mathbf{\delta_{1}}\\ \vdots\\ \mathbf{\delta_{K}}\end{pmatrix}=\begin{pmatrix}f_{1}(\mathbf{X_{1}})\\ \vdots\\ f_{K}(\mathbf{X_{K}})\end{pmatrix}, \tag{45}\]
where \(\mathbf{A_{k}}\) is the matrix containing coefficients associated with each derivative in the polynomial expansion approximating field \(f_{k}\) at the set of points \(\mathbf{X_{k}}\). The vector \(\mathbf{\delta_{k}}\) contains the various derivatives of \(f_{k}\), analogous to the single-field case. The left-hand matrix is block-diagonal, and thus the linear system can be split into \(K\) smaller systems, each to be solved individually. In the case that boundary conditions concern only a single field apiece, this remains true, and one can still separate the system in this manner. The intuitive implication of this is the independence of polynomial expansions approximating each field in the absence of any constraints which would otherwise link them.
However, if boundary conditions impose some particular relationship between fields, then it is apparent that the resultant polynomial approximations of one of these fields will require information regarding the other fields present in the boundary condition for it to be respected. As in the single-field case, each boundary condition will be approximated with Taylor-series expansions, although
Figure 12: The exact solution at \(t=0\) is shown in the left subfigure. The solid black line is the free surface, the left and right sides of the domain have periodic boundary conditions applied, and the wavefield is mirrored across the lower boundary. A comparison of the convergence behaviour of the method proposed to one using 1D approximations is shown on the right.
note that in this case, each field present within the constraint will have its own expansion. Each of these series contains derivatives of its respective function, contained within the corresponding \(\boldsymbol{\delta_{k}}\) as in Equation 45. As such, the Taylor-series approximation of a constraint specifying some relationship between multiple fields will contain derivatives of multiple fields, and thus when represented in the previously-detailed dot-product form, the vector \(\boldsymbol{\delta}\) will consist of multiple \(\boldsymbol{\delta_{k}}\).
In the context of the linear system shown in Equation 45, rows corresponding to such boundary conditions will span multiple previously-separate blocks, thereby linking them accordingly. As the process of fitting the extrapolant relies on the inversion of the left-hand-side matrix, it follows that these linked blocks will need to be inverted in tandem as they are no longer separated. As a boundary condition row in such a scenario maps multiple \(\boldsymbol{\delta_{k}}\) onto a single boundary forcing value, and each of the previously-separate blocks maps its respective \(\boldsymbol{\delta_{k}}\) onto the corresponding \(f_{k}(\mathbf{X_{k}})\), the inverse of the resultant block will consequently map all \(f_{k}(\mathbf{X_{k}})\) present onto any given derivative.
As a simple example, consider some pair of 1D functions \(f\) and \(h\) upon which the condition
\[f(x_{b})+h(x_{b})=0, \tag{46}\]
is to be applied at the boundary. Approximating this constraint with 2nd-order Taylor series expanded around some \(x_{0}\) yields
\[\begin{split}& f(x_{0})+(x_{b}-x_{0})\frac{\partial f}{\partial x}(x_{0})+\frac{(x_{b}-x_{0})^{2}}{2}\frac{\partial^{2}f}{\partial x^{2}}(x_{0}) \\ &+h(x_{0})+(x_{b}-x_{0})\frac{\partial h}{\partial x}(x_{0})\\ &+\frac{(x_{b}-x_{0})^{2}}{2}\frac{\partial^{2}h}{\partial x^{2}}(x_{0})=0.\end{split} \tag{47}\]
Representing this as a dot product of two vectors
\[\begin{pmatrix}\mathbf{a_{f}}\\ \mathbf{a_{h}}\end{pmatrix}\cdot\begin{pmatrix}\boldsymbol{\delta_{f}}\\ \boldsymbol{\delta_{h}}\end{pmatrix}=0 \tag{48}\]
where
\[\mathbf{a_{f}}^{T}=\mathbf{a_{h}}^{T}=\left(1,(x_{b}-x_{0}),\frac{(x_{b}-x_{0})^{2}}{2}\right), \tag{49}\]
\[\boldsymbol{\delta_{f}}^{T}=\left(f(x_{0}),\frac{\partial f}{\partial x}(x_{0}),\frac{\partial^{2}f}{\partial x^{2}}(x_{0})\right), \tag{50}\]
and
\[\boldsymbol{\delta_{h}}^{T}=\left(h(x_{0}),\frac{\partial h}{\partial x}(x_{0}),\frac{\partial^{2}h}{\partial x^{2}}(x_{0})\right). \tag{51}\]
To avoid confusion, note that \(\mathbf{a_{f}}=\mathbf{a_{h}}\) results from the particular boundary condition applied here and will generally not be the case, depending upon the constraints applied. A single interior point of \(f\) at some position \(x\) contributes a row of the same form, with \(\mathbf{a_{f}}\) evaluated at that point and the \(h\) block zero:
\[\begin{pmatrix}\mathbf{a_{f}}\\ 0\end{pmatrix}\cdot\begin{pmatrix}\boldsymbol{\delta_{f}}\\ \boldsymbol{\delta_{h}}\end{pmatrix}=f(x). \tag{52}\]
Approximations of \(h\) at interior points can be formed in a similar manner. Assembling values of \(f\) and \(h\) at some set of points into a linear system using the above methodology, it is apparent that the row corresponding to the boundary condition links two otherwise separate blocks pertaining to interior points of \(f\) and \(h\). A more thorough discussion on the effect of such boundary conditions on the structure of the linear system can be found in Appendix A.
Solving the resultant linear system gives derivatives of the basis (now multiple bases) at the expansion point as in the single-field case. Given these, the function can be continued beyond the boundary to obtain values at exterior points required by interior stencils. This is achieved in the same manner as the single-field case. These function values will be some weighted sum of boundary forcing values and interior values of all fields linked to the function of interest via boundary constraints. Consequently, any stencil formed using these approximated values will span all of these fields as well.
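To make the preceding assembly concrete, the following minimal Python sketch (our own illustration rather than the Schism implementation; the choice of \(f=\sin\), \(h=\cos\) and the point positions are arbitrary test data) fits two 1D fields with independent 2nd-order Taylor bases around \(x_{0}\), couples them through a single boundary row of the form (46), solves the overdetermined system with a Moore-Penrose pseudoinverse, and evaluates the fitted basis at an exterior point:

```python
import numpy as np

x0, xb = 2.2, 3*np.pi/4          # expansion point and boundary position
xf = np.array([1.9, 2.0, 2.1])   # interior points carrying values of f
xh = np.array([1.8, 1.95, 2.15]) # interior points carrying values of h

def taylor_row(x):               # (1, (x-x0), (x-x0)^2/2): one Taylor basis row
    return np.array([1.0, x - x0, 0.5*(x - x0)**2])

zero = np.zeros(3)
A = np.vstack([                  # rows: f points, h points, boundary condition
    *[np.concatenate([taylor_row(x), zero]) for x in xf],
    *[np.concatenate([zero, taylor_row(x)]) for x in xh],
    np.concatenate([taylor_row(xb), taylor_row(xb)]),   # f(xb) + h(xb) = 0
])
b = np.concatenate([np.sin(xf), np.cos(xh), [0.0]])

delta = np.linalg.pinv(A) @ b    # (delta_f, delta_h): derivatives at x0
print(delta[:3], (np.sin(x0), np.cos(x0), -np.sin(x0)))  # compare f-block

# The fitted basis can then be evaluated beyond the boundary, e.g. at an
# exterior point x_e, supplying the values needed by interior stencils.
x_e = xb + 0.1
f_extrap = taylor_row(x_e) @ delta[:3]
```

The coupling is visible in the pseudoinverse: the recovered derivatives of \(f\) are weighted sums of the sampled values of both fields and of the boundary forcing value.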
With this approach, the need to devise application-specific approximations is removed, enabling the application of a consistent method across a wide range of boundary conditions. As such, for some given set of derivative operators and boundary conditions (within the parameters discussed above), a scheme of this class can be generated.
### The Particle Velocity Free-Surface
Returning to the zero-divergence boundary condition to be imposed on the particle velocity vector, approximating this with a second-order basis gives
\[\begin{split}&\frac{\partial v_{x}}{\partial x}(x_{0},y_{0})+(x_{b}-x_{0})\frac{\partial^{2}v_{x}}{\partial x^{2}}(x_{0},y_{0})\\ &+(y_{b}-y_{0})\frac{\partial^{2}v_{x}}{\partial x\partial y}(x_{0},y_{0})+\frac{\partial v_{y}}{\partial y}(x_{0},y_{0})\\ &+(x_{b}-x_{0})\frac{\partial^{2}v_{y}}{\partial x\partial y}(x_{0},y_{0})+(y_{b}-y_{0})\frac{\partial^{2}v_{y}}{\partial y^{2}}(x_{0},y_{0})=0,\end{split} \tag{53}\]
yielding the characteristic row of \(\mathbf{A}\):
\[\begin{split}\mathbf{a}^{T}=\bigg{(}0,1,0,(x_{b}-x_{0}),(y_{b}-y_ {0}),0,\\ 0,0,1,0,(x_{b}-x_{0}),(y_{b}-y_{0})\bigg{)}.\end{split} \tag{54}\]
There will be one such row for every boundary point. Returning to the geometry shown in Figure 7, boundary conditions are imposed at a single set of points for all fields to ensure consistency, and as such the particle velocity boundary condition will be imposed on these same points. Whilst the support region will nominally be of a smaller radius in this case and only contain the middle three boundary points, it happens that for this particular geometry, there is insufficient information within this radius, and thus it must be expanded to that shown in Figure 7 to constrain the basis.
To prevent errors and nonphysical effects introduced by overextended velocity field extrapolations, \(\eta\) is set to zero for both velocity fields. As such, the support region for each particle velocity component is as shown in Figure 13 alongside the corresponding \(\mathbf{A}\). The structure of \(\mathbf{A}^{+}\) is shown in Figure 14:
it is apparent that this matrix is dense. Given it maps values of both velocity components and boundary forcing values onto derivatives of both basis functions, each of these derivatives will be approximated as a weighted sum of values of both components. Consequently, any extrapolations using these bases will also be in terms of both \(v_{x}\) and \(v_{y}\) and thus stencils completed using these extrapolations will span both these fields too.
Note that a staggered scheme is used to prevent the emergence of checkerboard instability when solving this particular formulation. As such the base stencil used to construct the modified FD operators will be backward-staggered.
The necessity of expanding the support region in this case is worth revisiting. The particle velocity free-surface and accompanying higher-order conditions \(\nabla^{2}\boldsymbol{\nabla}\cdot\mathbf{v}(\mathbf{x_{b}})=0\), \(\nabla^{4}\boldsymbol{\nabla}\cdot\mathbf{v}(\mathbf{x_{b}})=0\), and so forth concern multiple fields. Consequently, the number of boundary constraints within a support region of some given radius is likely to be low relative to the number of fields, requiring an enlarged support region in some cases.
### Convergence Testing
To explore the convergence behaviour of the boundary treatment devised, the previous setup was replicated for testing with the 1st-order acoustic wave equation. As before, a 4th-order spatial discretization was used and the timestep was set to 10% of the critical timestep.
As in the previous test, initial convergence is just short of 4th-order, before gradually flattening around a grid increment of 0.01, at which point convergence is around 2nd-order as timestepping error saturates the solution. Note that convergence is somewhat less smooth beyond this point, with a handful of blackspots where error is anomalously high compared to the prevailing trend. However, broadly speaking, the maximum error in the scheme continues to fall as the discretization is refined. Investigation of these anomalies found them to be specific to very particular grid sizes (adding or removing a single node is sufficient to prevent the more prominent spike), although the reason for this remains unclear.
Figure 14: The structure of the Moore-Penrose pseudoinverse of \(\mathbf{A}\) shown in Figure 13.
Figure 13: Assembling \(\mathbf{A}\) for the particle velocity fields. Horizontal arrows correspond with \(v_{x}\) points used to construct the extrapolant, whilst vertical arrows are the respective \(v_{y}\) points. The black cross designates the expansion point. Note that only the subgrids of concern are shown for clarity. Note how the interior points of each field correspond with an independent block, whilst the boundary condition rows span both.
Figure 15: Convergence of the numerical scheme for the 1st-order formulation of the acoustic wave equation using the same setup shown in Figure 12a. Note that scales are slightly different in this figure.
However, it appears that the error introduced by the boundary is rapidly eclipsed by other sources as the discretization is refined, implying that it is unlikely to be a significant source of error in practical applications.
## 2 Implementation
Given a set of symbolic equations that hold on the boundary surface and a discretized signed-distance function (SDF) encapsulating the boundary position, a suitable numerical scheme and thus modified stencils implementing the immersed boundary can be automatically devised. A framework to do so, Schism, was developed as a plugin for Devito. This was done not only to expedite and simplify the implementation of the following test cases but also to explore synergies between this generalised immersed boundary method and code generation. Due to the high-level nature of the abstractions created, the introduction of an immersed boundary to a numerical model written in Devito can be achieved with only a handful of additional lines of code and a qualitative understanding of what is being done behind the scenes.
This enabled all the examples shown in this paper to be implemented with a common codebase - only the top-level model specification is changed between examples. The unprecedented flexibility of this approach enables a wide range of geophysical scenarios to be condensed into an understandable, repurposable form, enabling maximum code reuse. Whilst a comprehensive overview of the mechanisms by which this was achieved is beyond the scope of this paper, the proceeding examples all leverage this functionality.
## 3 Examples
Reflecting the wide range of relevant geophysical applications, we present a suite of examples showcasing our approach. These examples are designed to resemble particular problems of interest and are based on real-world topography.
### 2D 2nd-order acoustic free-surface
The first example is based on an East-West profile across the summit of Mount St Helens, Washington, USA. The summit collapse during the 1980 eruption and subsequent lava dome formation within the crater resulted in near-vertical crater walls and a mixture of concavity and convexity on the crater floor. As a stratovolcano, Mount St Helens has steep, uneven flanks rising over a kilometre from the surrounding landscape, making it an ideal stress test for the method proposed.
The topographic surface was represented internally as an SDF discretised onto the FD grid. This representation has several advantages; its mathematical properties lend themselves to straightforward geometry handling, and it ensures that the resolution of the surface and resultant accuracy of the surface representation within the numerical scheme are consistent with the interior discretization. Note however that this representation enables the surface to be located with much finer precision than the FD discretization, despite its discretization on the same grids, and the SDF can be constructed from extremely high-resolution digital elevation models (DEMs) without the requirement to downsample or alias the raw data to match the FD grid. This is beneficial for real-world applications where such data obtained from satellites and drones may be structured or unstructured (or compounds of multiple such datasets), heavily oversampled versus the discretization required for numerical-dispersion-free wavefield propagation, and typically with a vertical precision in the order of centimetres.
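As a rough sketch of how such an SDF may be discretized from an elevation profile (our own illustration, not the Schism implementation; the grid dimensions, the toy Gaussian profile and the sign convention of positive values inside the physical domain are assumptions), two Euclidean distance transforms of the interior mask can be combined:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

nx, nz, dx = 401, 201, 30.0      # grid shape and spacing in metres (assumed)
x = np.arange(nx)*dx
z = np.arange(nz)*dx
elevation = 1500.0 + 800.0*np.exp(-((x - 6000.0)/1500.0)**2)   # toy topography

# Mask of grid points lying below the topographic surface (physical domain)
interior = z[None, :] < elevation[:, None]

# Grid-level distances to the opposite region, combined into a signed field:
# positive inside the domain, negative above the surface. This is accurate to
# within roughly a grid increment; finer precision would require evaluating
# distances against the raw DEM rather than the gridded mask.
d_in = distance_transform_edt(interior, sampling=dx)
d_out = distance_transform_edt(~interior, sampling=dx)
sdf = np.where(interior, d_in, -d_out)
```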
Material properties are kept consistent throughout the model domain, such that all topographic interactions observed are a product of the boundary treatment rather than any material discontinuity: a convention continued throughout the examples presented. Variable material parameters are a separable concern and are essentially trivial to implement (particularly with the abstraction layers used in this study). Furthermore, their introduction runs the risk of inconsistency between implicit interfaces represented by material parameters and the explicit interface encapsulated by the immersed boundary. On this basis, it is not recommended to include any material contrast at the surface.
As aforementioned, a free-surface boundary condition is imposed on the pressure field on the topographic surface. The remaining edges of the computational domain have zero Dirichlet boundary conditions imposed for simplicity. In practice, one would apply a damping boundary condition of choice along these edges, but again this is a separable concern and straightforwardly combined with the method presented in this paper.
A Ricker wavelet with a peak frequency of 8Hz is injected below the lava dome, at an elevation of 1250m, in a location loosely reminiscent of tremors induced by shallow magma movement within the conduit. Placing the source close to the surface maximises observed interaction between wavefield and topography. \(\Delta x=\Delta y=30m\) and the Courant number is set to 0.5. The spatial discretization used is fourth-order accurate: a precedent continued to all other examples shown.
We see in Figure 16 that the uneven topography results in several distinct reflections, with further minor reverberations trailing the main wavefront. Also apparent is the focusing and defocusing effect of concave and convex topography respectively, and the diffraction of the wavefront around obstructions. This yields a much more complex series of arrivals than would be observed for a flat surface, although the horizontally-propagating wavefront is only mildly distorted, in agreement with previously-published findings.
### 2D 2nd-order acoustic rigid-surface
The free surface is not the only boundary condition of geophysical interest in the context of wavefield propagation. For acoustic waves propagating in the air, the Earth represents an extremely dense and essentially immobile surface: zero particle velocity at the interface corresponds to a rigid surface. The Mount St Helens profile used in the previous example once again features, although the surface now forms the lower bound of the domain.
In an imitation of a typical infrasound propagation scenario, a 1Hz Ricker source is placed at an elevation of 2600m above the lava dome. Located only slightly above the crater rim, this location was chosen to capture reverberation within the crater without trapping the majority of the wave within.
Figure 17 shows the propagating wavefield, including the multiple distinct reflected wavefronts. Note the reversed polarity of these wavefronts versus those found in the previous example. Considerable reverberation within the crater is observed, alongside the wavefront diffracting over obstacles (most notably the crater rim).
### 3D 2nd-order acoustic free-surface
Whilst in some cases, wave propagation within a 3D physical domain can be adequately approximated along some suitable transect, this generally relies on some kind of consistency within the omitted dimension. Such approximations are possibly less suitable in the setting of a volcanic edifice, containing strong topographic variation along all directions: particularly true for Mount St Helens due to the collapsed northern flank. In such settings, full 3D modelling may be necessary to achieve realistic wavefield propagation.
The setup (besides the obvious) for this example was much the same as the prior 2D free-surface example. Figure 18 shows the results of wavefield interaction with the volcanic topography. The first arrivals radiate outwards with minimal interruption, diffracting around obstacles in their path, with complex frills of reflected wavefronts from larger-scale surfaces forming a layered series of distinct arrivals, leaving minor reverberations in their wake. With only the first arrivals remaining relatively unscathed, this illustrates the error in assuming that a flat surface will adequately reproduce wavefield behaviour observed in rough terrain.
Wavefield slices shown in Figure 19 are at first glance similar to those of their 2D approximation in Figure 16: the position, shape, and relative amplitude of major wavefronts are the same. On closer inspection however, greater complexity emerges: the main wavefronts contain additional reflections and smaller-scale differences are visible,
Figure 16: Snapshots of the wavefield interacting with a free surface at 375ms, 750ms, 1125ms, and 1500ms. Celerity is 2.5km/s throughout the model. The black line designates the isosurface \(s(x,y)=0\) on the SDF, coinciding with the surface. The wavefield shown in each snapshot is normalised for clarity.
Figure 17: Snapshots of the wavefield interacting with a rigid surface at 2.5s, 5s, 7.5s, and 10s. Celerity is 350m/s throughout the model. Parameters are altered in this example to better reflect infrasound propagation problems to which this boundary condition is applicable. The wavefield shown in each snapshot is normalised for clarity.
Figure 18: Render of the 3D free-surface wavefield at 1125ms and topography. Slices of the wavefield are shown aligned and diagonal to each compass direction for clarity. Wavefield transparency is scaled with amplitude to emphasise the wavefronts.
particularly around the crater rim where some out-of-plane reverberations make themselves known.
### 3D 2nd-order acoustic rigid-surface
The previous rigid-surface example is similarly extended to 3D, the results of which are shown in Figure 20. In this case, likely due to the concavity of the geometry in the vicinity of the source, even more pronounced out-of-plane reflections are observed, particularly within the confines of the caldera where complex and protracted reverberations can be clearly seen. Again, the most prominent reflected wavefronts become more confused in 3D, exhibiting less coherency and continuity due to the wide range of paths taken by its constituent reflections.
These effects are particularly apparent when comparing Figure 21 to Figure 17, demonstrating the strongly 3D nature of wavefield propagation in this scenario.
### 2D 1st-order acoustic free-surface
As aforementioned, the detailed immersed boundary approach is equally applicable to the 1st-order formulation of the acoustic wave equation, and to this end, the setup from the prior 2D 2nd-order acoustic free-surface example is revisited, the results of which are shown in Figure 22.
The same reflection geometry as in the 2nd-order example
Figure 21: Snapshots of the wavefield interacting with the 3D rigid surface at 2.5s, 5s, 7.5s, and 10s. The transect is chosen to match that used for the 2D examples (slices on the x-z plane). Celerity is 350m/s throughout the model. The wavefield shown in each snapshot is normalised for clarity.
Figure 20: Render of the 3D rigid-surface wavefield at 7.5s and topography. Slices of the wavefield are shown aligned and diagonal to each compass direction for clarity. Wavefield transparency is scaled with amplitude as before.
Figure 19: Slices through the wavefield interacting with the 3D free-surface on the profile used for the 2D examples (the x-z plane). Snapshots were taken at 375ms, 750ms, 1125ms, and 1500ms respectively. Celerity is 2.5km/s throughout the model. The black line designates the isosurface \(s(x,y,z)=0\) on the SDF, coinciding with the surface. The wavefield shown in each snapshot is normalised for clarity.
is observed, with the additional particle velocity fields highlighting the partitioning of energy between horizontal and vertical particle motion, dependent on the orientation of the reflector. The success of the vector boundary condition implementation is clear in the clean, artefact-free reflections observed.
### 3D 1st-order acoustic free-surface
As with the previous examples, the 2D model is extended to 3D to demonstrate the capabilities of this approach, once again reusing the prior parameterisations.
From Figure 23, it is apparent that the pattern of radiation is identical to that of the 2nd-order formulation (note that the wavelet shape changes between the two, as the source time series was kept the same). Figure 24 shows snapshots of the wavefields as the wave propagates. The y particle velocity field exemplifies the strongly 3D nature of interaction with topography, with reflected energy directed into the page clearly apparent. As with the 2nd-order formulation, reflections become more complex and confused in 3D due to the nature of the topography.
For this run, it was found that particle velocity stencils did become somewhat larger than would be desirable at a small handful of points, highlighting limitations of the Taylor series as a basis. It is anticipated that with an improved choice of basis (and potentially linear system setup and solver), the support region could be reduced to a more manageable size. Alternatively, the construction of a more targeted support region may aid in alleviating this issue. However, for this particular run, a basis-order-reduction strategy (as used by Mulder, 2017) was used at points where insufficient information was present to construct the extrapolant in an effort to rein in this stencil growth.
## 5 Conclusions
A general immersed boundary treatment is presented, enabling a consistent methodology to be applied across multiple wave equations and boundary conditions. The boundary is encapsulated by modified FD stencils with a circular support region and spatially-variant coefficients, using an N-dimensional Taylor-series extrapolation scheme to continue the field beyond the edge of the domain. As the approach proposed naturally accommodates the implementation of higher-dimensional and vector boundary conditions, it is not necessary to make any application-specific approximations to the boundary. The efficacy of this approach was demonstrated via convergence tests and a range of numerical examples featuring real-world topography, implementing both free and rigid surfaces with the first and second-order acoustic wave equations in 2D and 3D. The one-size-fits-all nature of the method presented enabled the development of a high-level framework, Schism, allowing all of these examples to be implemented via a broadly common codebase. This approach to immersed boundary implementation synergises with emerging code-generation approaches to FD kernel implementation, which was leveraged throughout this paper.
## 6 Code availability and reproducibility
Schism is an open-source codebase and can be found at github.com/devitocodes/schism. This repository contains all the examples shown in this paper (alongside others),
Figure 23: Render of the 3D pressure wavefield at 1125ms and topography. Slices of the wavefield are shown aligned and diagonal to each compass direction for clarity. Wavefield transparency is scaled with amplitude to emphasise the wavefronts.
Figure 24: Snapshots of the pressure (left), x particle velocity (middle left), y particle velocity (middle right), and z particle velocity (right) wavefields at 375ms, 750ms, 1125ms, and 1500ms. The y-axis is oriented into the page. Celerity is 2.5km/s throughout the model, density is homogeneous throughout. The black line designates the isosurface \(s(x,y,z)=0\) on the SDF, coinciding with the surface. The wavefields shown in each snapshot are normalised for clarity.
and a suite of unit tests. Schism can be installed from this repository as a Python module using Pip, or alternatively, a Dockerfile to run the code is also included. The codebase at the time of publication is archived on Zenodo at [https://zenodo.org/record/8167794](https://zenodo.org/record/8167794).
## Acknowledgements
We wish to thank Wim Mulder for providing assistance with replicating his exact solutions, and Tim MacArthur for exploring the capabilities of our codebase in his own experiments. We also extend thanks to the rest of the Devito team and wider community for their feedback and support, without which this work would not have been possible. This work was funded as part of EPSRC DTP training grant EP/R513052/1.
## Appendix A Boundary effect on matrix structure
The boundary serves both to truncate the support region (as no data lies beyond it) and to introduce additional constraints to the polynomial fitting. This profoundly alters the linear system that must be solved to fit the polynomial, particularly where boundary conditions linking fields are present: in the case of the particle velocity boundary condition for the free surface for example.
In free space, the stencil has an uninterrupted circular footprint consisting entirely of interior points. The corresponding matrix has a block-diagonal structure, as shown in Figure A-1, with each block corresponding to one of the fields. Each block can be inverted separately, meaning that the resultant polynomial extrapolations have no impact on one another. Note that splitting the matrix up in this manner considerably reduces the computational cost of finding the matrix inverse (or pseudoinverse as required), and it is thus desirable to do so where possible.
Inserting a 45\({}^{\circ}\) free-surface boundary cutting across our support region, we see in Figure A-2 that both the internal structure of the blocks and the overarching structure of the matrix are altered. Most notably, the inclusion of particle velocity boundary conditions has led to the merging of the corresponding blocks, meaning that they must be solved together, and the resulting polynomial extrapolations will be dependent on both fields. As the pressure boundary conditions only concern the pressure field, this block retains its independence as before, although some rows are lost as they correspond to now-external points, and several additional rows are added by the introduction of boundary conditions.
Whilst the matrices corresponding to such support regions will have consistently overdetermined blocks in free space, this may not be the case when boundaries are introduced. If a block becomes underdetermined, the radius of the support regions for functions contained therein can be expanded, thereby adding additional rows to the block until sufficient information is present.
Figure A-1: A circular support region in free space and its corresponding matrix structure for the first-order acoustic wave equation (nonzero elements are black, zero elements are white). Staggered particle velocity subgrids are omitted to prevent excessive cluttering. Individual blocks are highlighted in green, and from top-left to bottom-right correspond to pressure, horizontal particle velocity, and vertical particle velocity. It is clear that this matrix can be split into three smaller systems which can be solved individually; each polynomial can be fitted independently of the others. |
2309.13413 | Integrable sigma models with complex and generalized complex structures | Using the general method presented by Mohammedi \cite{NM} for the
integrability of a sigma model on a manifold, we investigate the conditions for
having an integrable deformation of the general sigma model on a manifold with
a complex structure. On a Lie group, these conditions are satisfied by using
the zeros of the Nijenhuis tensor. We then extend this formalism to a manifold,
especially a Lie group, with a generalized complex structure. We demonstrate
that, for the examples of integrable sigma models with generalized complex
structures on the Lie groups $\mathbf{A_{4,8}}$ and $\mathbf{A_{4,10}}$, under
special conditions, the perturbed terms of the actions are identical to the WZ
terms. | A. Rezaei-Aghdam, A. Taghavi | 2023-09-23T15:55:08Z | http://arxiv.org/abs/2309.13413v2 | # Integrable sigma models with complex and generalized complex structures
###### Abstract
We investigate the conditions for having an integrable deformation of the general sigma model on a manifold with a complex structure; on a Lie group, these conditions are satisfied (using the zeros of the Nijenhuis tensor). Then we extend this formalism to a manifold (and especially a Lie group) with a generalized complex structure.
## 1 Introduction
Two-dimensional integrable \(\sigma\) models and their deformations have always attracted considerable attention from the early days of their study [1, 2]. An integrable deformation of the principal chiral model on SU(2) was first presented in [3, 4, 5] (for a general Lie group see [6]). The Yang-Baxter (or \(\eta\)) deformation of the chiral model, as a generalization of [4, 5], was introduced by Klimcik [7, 8, 9]. Furthermore, the \(\lambda\) deformation proposed in [10] is a generalization of [3]. The Yang-Baxter integrable deformation [7] is based on R-operators that satisfy the (modified) classical Yang-Baxter equation ((m)CYBE) [11, 12]. The integrable sigma model on a Lie group with complex structure was also studied in [13] as a special case of the Yang-Baxter sigma model [7]. Recently, Mohammedi proposed a deformation of \(WZW\) models using two invertible linear operators [14]; this model contains the Yang-Baxter deformation as a special case.
Here we try to construct an integrable sigma model on a general manifold with a complex structure. We obtain conditions under which the model is integrable, and we show that in the special case where the manifold is a Lie group these conditions are satisfied automatically. Then, using this method, we construct an integrable sigma model on a manifold (and especially on a Lie group) with a generalized complex structure [15, 16]. The plan of the paper is as follows:
In section 2 we review the general method presented by Mohammedi in [17] for the integrability of a sigma model on a manifold. In section 3 we present an integrable sigma model on a manifold with a complex structure on it; then we apply the method of section 2 to investigate the integrability of the model and present two examples. In section 4 we construct an integrable sigma model on a general Lie group with a complex structure; in this case, the integrability conditions are automatically satisfied as a consequence of the Nijenhuis condition. We also give two examples on the \(\mathbf{A_{4,8}}\) (Heisenberg) and \(\mathbf{G_{6,23}}\) Lie groups. Then in section 5, as a generalization, we present an integrable sigma model on a manifold and a Lie group with generalized complex structures. Furthermore, we give two examples on the \(\mathbf{A_{4,8}}\) and \(\mathbf{A_{4,10}}\) (Nappi-Witten) Lie groups at the end of that section. In section 6.1 we perturb the \(WZW\) model using the generalized complex structure on the corresponding Lie group. To compare our work with that of Mohammedi [14], we present the generalized complex structure formulas on a metric Lie algebra in the operator formalism. Some concluding remarks are given in section 7.
## 2 Review of the zero curvature representation and integrability conditions for non-linear sigma models
In this section, to fix notation, we first review some aspects of the general formalism introduced by Mohammedi [17] for the integrability of a non-linear sigma model on a manifold. Consider the following two-dimensional sigma model action:
\[S=\int_{\Sigma}\ dzd\overline{z}(G_{\mu\nu}(x)+B_{\mu\nu}(x))\partial x^{\mu} \overline{\partial}x^{\nu}, \tag{1}\]
where \(x^{\mu}(z,\overline{z})\) (\(\mu=1,2,...,d\)) are coordinates of the \(d\)-dimensional manifold \(M\), with \(G_{\mu\nu}\) and \(B_{\mu\nu}\) the invertible metric and anti-symmetric tensor fields on it, and \((z,\overline{z})\) are coordinates of the worldsheet \(\Sigma\). The equations of motion of this model have the following form:
\[\overline{\partial}\partial x^{\lambda}+\Omega^{\lambda}{}_{\mu\nu}\partial x^{\mu}\overline{\partial}x^{\nu}=0, \tag{2}\]
where
\[\Omega^{\lambda}{}_{\mu\nu}=\Gamma^{\lambda}{}_{\mu\nu}-H^{\lambda}{}_{\mu \nu}, \tag{3}\]
where \(\Gamma^{\lambda}{}_{\mu\nu}\) are the Christoffel symbols and the components of the torsion \(H^{\lambda}{}_{\mu\nu}\) are given by
\[H^{\lambda}{}_{\mu\nu}=\frac{1}{2}G^{\lambda\eta}(\partial_{\eta}B_{\mu\nu}+ \partial_{\nu}B_{\eta\mu}+\partial_{\mu}B_{\nu\eta}). \tag{4}\]
According to [17], one can construct the following linear system, whose consistency condition (a zero curvature representation) is equivalent to the equations of motion (2)
\[[\partial+\partial x^{\mu}.\alpha_{\mu}(x)]\psi=0,\]
\[[\overline{\partial}+\overline{\partial}x^{\nu}.\beta_{\nu}(x)]\psi=0, \tag{5}\]
where the matrices \(\alpha_{\mu}\) and \(\beta_{\mu}\) are functions of the coordinates \(x^{\mu}\). The compatibility condition of this linear system yields the equations of motion if the matrices \(\alpha_{\mu}(x)\) and \(\beta_{\mu}(x)\) satisfy the following relation [17]
\[\partial_{\mu}\beta_{\nu}-\partial_{\nu}\alpha_{\mu}+[\alpha_{\mu},\beta_{\nu} ]=\Omega^{\lambda}{}_{\mu\nu}\mu_{\lambda}, \tag{6}\]
such that with \(\beta_{\mu}-\alpha_{\mu}=\mu_{\mu}\) the equation (6) can be rewritten as
\[F_{\mu\nu}=-(\nabla_{\mu}\mu_{\nu}-\Omega^{\lambda}{}_{\mu\nu}\mu_{\lambda}), \tag{7}\]
where the field strength \(F_{\mu\nu}\) and covariant derivative with respect to the matrices \(\alpha_{\mu}\) are given as follows:
\[F_{\mu\nu}=\partial_{\mu}\alpha_{\nu}-\partial_{\nu}\alpha_{\mu}+[\alpha_{\mu },\alpha_{\nu}]\hskip 28.452756pt,\hskip 28.452756pt\nabla_{\mu}V=\partial_{\mu}V+[ \alpha_{\mu},V]. \tag{8}\]
In addition, by splitting the symmetric and anti-symmetric parts of (7), we have the following relations [17]1:
Footnote 1: Note that the equation (9) is a gauged version of a matrix-valued Killing equation. Indeed, if \([\alpha_{\mu},\mu_{\nu}]+[\alpha_{\nu},\mu_{\mu}]=0\), then this equation simplifies to the Killing equation \(\partial_{\mu}\mu_{\nu}+\partial_{\nu}\mu_{\mu}-2T^{\lambda}{}_{\mu\nu}\mu_{\lambda}=0\)[17].
\[0=\nabla_{\mu}\mu_{\nu}+\nabla_{\nu}\mu_{\mu}-2\Gamma^{\lambda}{}_{\mu\nu}\mu _{\lambda}, \tag{9}\]
\[F_{\mu\nu}=-\frac{1}{2}(\nabla_{\mu}\mu_{\nu}-\nabla_{\nu}\mu_{\mu})-H^{ \lambda}{}_{\mu\nu}\mu_{\lambda}. \tag{10}\]
In this manner, the integrability condition of the sigma model (1) is equivalent to finding the matrices \(\alpha_{\mu}\) and \(\mu_{\mu}\) such that they satisfy the relation (7) or the relations (9) and (10).
## 3 Integrable sigma model with complex structure on manifold
Here we will try to construct an integrable sigma model on a manifold with a complex structure. Let \(M\) be a differentiable manifold; the pair \((M,J)\) is called an almost complex manifold if there exists a tensor field \(J\) of type \((1,1)\) such that at each point \(p\) of \(M\), \(J_{p}^{2}=-1\); the tensor field \(J\) is called the almost complex structure. An almost complex structure \(J\) on a manifold \(M\) is integrable if and only if its Nijenhuis tensor vanishes [18]
\[N(X,Y)=0\hskip 28.452756pt,\hskip 28.452756pt\forall X,Y\in\chi(M), \tag{11}\]
where \(\chi(M)\) is the set of vector fields on \(M\) and the Nijenhuis tensor \(N:\chi(M)\otimes\chi(M)\longrightarrow\chi(M)\) is given by
\[N(X,Y)=J^{2}[X,Y]-J[JX,Y]-J[X,JY]+[JX,JY]. \tag{12}\]
In the coordinate basis, i.e. the basis \(e_{\mu}\) and \(dx^{\mu}\) for vectors and dual vectors (forms) on \(M\), the almost complex structure and Nijenhuis tensor are presented as \(J=J^{\mu}{}_{\nu}e_{\mu}\otimes dx^{\nu}\) and \(N=N^{\lambda}{}_{\mu\nu}e_{\lambda}\otimes dx^{\mu}\otimes dx^{\nu}\) respectively, and the integrability condition (11) can be rewritten as follows:
\[N^{\mu}{}_{\nu k}=J^{\lambda}{}_{\nu}\partial_{\lambda}J^{\mu}{}_{k}-J^{\lambda}{}_{k}\partial_{\lambda}J^{\mu}{}_{\nu}-J^{\mu}{}_{\lambda}\partial_{\nu}J^{\lambda}{}_{k}+J^{\mu}{}_{\lambda}\partial_{k}J^{\lambda}{}_{\nu}=0, \tag{13}\]
also the relation \(J^{2}=-1\) can be given as
\[J^{\mu}{}_{\lambda}J^{\lambda}{}_{\nu}=-\delta^{\mu}{}_{\nu}. \tag{14}\]
Now on the manifold \(M\) with coordinates \(x^{\mu}\), metric \(g_{\mu\nu}\) and a complex structure \(J^{\mu}{}_{\nu}\), we propose the following deformed sigma model with complex structure:
\[S=\int\hskip 5.690551ptdzd\overline{z}(g_{\mu\nu}+kg_{\mu\lambda}J^{\lambda}_{ \nu})\partial x^{\mu}\overline{\partial}x^{\nu}, \tag{15}\]
where \(k\) is a constant parameter. In the following, we will prove that this model is integrable if the complex structure \(J^{\mu}{}_{\nu}\) satisfies some extra conditions. For this we use the method mentioned in the previous section. By comparison of this action with (1) we see that
\[G_{\mu\nu}=g_{\mu\nu}+\frac{k}{2}(g_{\mu\lambda}J^{\lambda}{}_{\nu}+g_{\nu \lambda}J^{\lambda}{}_{\mu}), \tag{16}\]
\[B_{\mu\nu}=\frac{k}{2}(g_{\mu\lambda}J^{\lambda}{}_{\nu}-g_{\nu\lambda}J^{\lambda} {}_{\mu}). \tag{17}\]
For invertibility of the metric \(G_{\mu\nu}\) we assume the following form for \(G^{\mu\nu}\):
\[G^{\mu\nu}=ag^{\mu\nu}+b(g^{\mu\lambda}J^{\nu}{}_{\lambda}+g^{\nu\lambda}J^{ \mu}{}_{\lambda}), \tag{18}\]
where \(a\) and \(b\) are constant parameters. Now using \(G^{\mu\lambda}G_{\lambda\nu}=\delta^{\mu}{}_{\nu}\) and \(J^{\mu}{}_{\lambda}J^{\lambda}{}_{\nu}=-\delta^{\mu}{}_{\nu}\), we obtain the following conditions for \(J\) and parameters \(a\) and \(b\)
1) If \((J+g^{-1}J^{t}g)^{2}=\pm(J+g^{-1}J^{t}g)\), then we have two cases for the matrix form2 of \(J\) and \(g\), such that one can have:
Footnote 2: Here \(J\) and \(g\) are matrix forms of \(J^{\mu}{}_{\nu}\) and \(g_{\mu\nu}\) respectively.
1.1) \(J=-g^{-1}J^{t}g\),
1.2) \(J=-g^{-1}J^{t}g\pm I\).
For case (1.1) we have \(a=1\) and the value of \(b\) is arbitrary, while for case (1.2) we have \(a=1\) and the following two values for \(b\):
\[b=-\frac{k}{k+2}\ \ \ \ {\rm or,}\ \ \ \ b=\frac{k}{k-2}, \tag{19}\]
2) If \((J+g^{-1}J^{t}g)^{2}=I\), then we have
\[a=\frac{4}{4-k^{2}}\ \ \ \,\ \ \ \ b=-\frac{2k}{4-k^{2}}. \tag{20}\]
Furthermore if \((J+g^{-1}J^{t}g)^{2}=-I\), then we have
\[a=\frac{4}{4+k^{2}}\ \ \ \ \,\ \ \ \ b=-\frac{2k}{4+k^{2}}. \tag{21}\]
In the following, we use the case (1.1), i.e. \(J=-g^{-1}J^{t}g\) (the Hermitian condition [18], [35]), or
\[J^{\mu}{}_{\nu}=-g^{\mu\lambda}J^{\rho}{}_{\lambda}g_{\rho\nu}, \tag{22}\]
with \(a=1\) and arbitrary \(b\); then we have \(G_{\mu\nu}=g_{\mu\nu}\). To obtain the integrability of the sigma model (15), one can consider the matrices \(\alpha_{\mu}\) and \(\mu_{\mu}\) as
\[\alpha_{\mu}=\lambda_{1}c_{\lambda}J^{\lambda}{}_{\mu}\ \ \ \,\ \ \ \ \mu_{\mu}=\lambda_{2}d_{\lambda}J^{\lambda}{}_{\mu}, \tag{23}\]
or, using the vielbein formalism (\(e_{\mu}=\widehat{e}_{\beta}e^{\beta}{}_{\mu},dx^{\mu}=e_{\alpha}{}^{\mu}\widehat{\theta}{}^{\alpha},e^{\alpha}{}_{\mu}e_{\beta}{}^{\mu}=\delta^{\alpha}{}_{\beta},e^{\alpha}{}_{\mu}e_{\alpha}{}^{\nu}=\delta_{\mu}{}^{\nu}\)) [18], one can rewrite the matrices \(\alpha_{\mu}\) and \(\mu_{\mu}\) as follows
\[\alpha_{\mu}=\lambda_{1}c_{\alpha}J^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}\ \ \ \,\ \ \ \ \mu_{\mu}=\lambda_{2}d_{\alpha}J^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}, \tag{24}\]
with
\[J^{\mu}{}_{\nu}=e_{\alpha}{}^{\mu}J^{\alpha}{}_{\beta}e^{\beta}{}_{\nu}, \tag{25}\]
where we assume \(c_{\alpha}\), \(d_{\alpha}\) are square matrices with constant elements, \(\lambda_{1},\lambda_{2}\) are constant parameters and \(J^{\alpha}{}_{\beta}\) are constant algebraic complex structure coefficients. Now, by considering \(\mu_{\mu}=-2\alpha_{\mu}\), the symmetric part of the integrability condition of the sigma model, i.e. equation (9), is given by
\[\lambda_{2}d_{\alpha}J^{\alpha}{}_{\beta}(-\partial_{\mu}e^{\beta}{}_{\nu}+ \partial_{\nu}e^{\beta}{}_{\mu}-2\Gamma^{\beta}{}_{\delta\gamma}e^{\delta}{}_ {\mu}e^{\gamma}{}_{\nu})=0, \tag{26}\]
where \(\Gamma^{\beta}{}_{\delta\gamma}\) has the following form [18]
\[\Gamma^{\beta}{}_{\delta\gamma}=e^{\beta}{}_{\nu}e_{\beta}{}^{\mu}(\partial_{ \mu}e_{\gamma}{}^{\nu}+e_{\gamma}{}^{\lambda}\Gamma^{\nu}{}_{\mu\lambda}), \tag{27}\]
so by applying the Maurer-Cartan equation 3[18]
Footnote 3: Here \(C^{\gamma}{}_{\alpha\beta}\) are functions of coordinates on the manifold and are not constant.
\[C^{\gamma}{}_{\alpha\beta}=e^{\gamma}{}_{\nu}(e_{\alpha}{}^{\mu}\partial_{\mu}e _{\beta}{}^{\nu}-e_{\beta}{}^{\mu}\partial_{\mu}e_{\alpha}{}^{\nu}), \tag{28}\]
the relation (26) can be rewritten as
\[\lambda_{2}d_{\alpha}J^{\alpha}{}_{\beta}(C^{\beta}{}_{\delta\gamma}-2\Gamma^ {\beta}{}_{\delta\gamma})e^{\delta}{}_{\mu}e^{\gamma}{}_{\nu}=0. \tag{29}\]
Then by inserting the torsion of the sigma model (15)
\[H^{\lambda}{}_{\mu\nu}=\frac{k}{2}(C^{\delta}{}_{\alpha\beta}J^{\gamma}{}_{ \delta}+C^{\gamma}{}_{\delta\alpha}J^{\delta}{}_{\beta}+C^{\gamma}{}_{\beta \delta}J^{\delta}{}_{\alpha})e^{\alpha}{}_{\mu}e^{\beta}{}_{\nu}e_{\gamma}{}^ {\lambda}, \tag{30}\]
and using the Nijenhuis condition (13) in the vielbein formalism, i.e.
\[C^{\gamma}{}_{\beta\alpha}-C^{\gamma}{}_{\delta\sigma}J^{\delta}{}_{\beta}J^ {\sigma}{}_{\alpha}+C^{\delta}{}_{\sigma\alpha}J^{\gamma}{}_{\delta}J^{ \sigma}{}_{\beta}+C^{\delta}{}_{\beta\sigma}J^{\gamma}{}_{\delta}J^{\sigma}{} _{\alpha}=0, \tag{31}\]
we have the following relation for the antisymmetric part of the integrability condition, i.e. equ (10)
\[\lambda_{2}(d_{\delta}d_{\sigma}-d_{\sigma}d_{\delta})J^{\delta}{}_{\alpha}J^ {\sigma}{}_{\beta}-2kd_{\gamma}C^{\gamma}{}_{\sigma\delta}J^{\delta}{}_{ \alpha}J^{\sigma}{}_{\beta}=0. \tag{32}\]
In this manner, we obtain algebraic relations (29) and (32) as integrability conditions of the deformed sigma model (15) with complex structure on the manifold \(M\). Let us investigate some examples.
### 3.1 Examples
Here we consider two examples for the integrable sigma model (15).
**a**) As the first example, we consider the 2d sausage model. The metric of the 2d sausage model is given by [20]
\[ds^{2}=h(\frac{dr^{2}}{(1-r^{2})(1+\chi^{2}r^{2})}+\frac{1-r^{2}}{1+\chi^{2}r^ {2}}d\phi^{2}), \tag{33}\]
The integrability of the sausage model was conjectured in [20] and then proven in [21]. When the parameter \(\chi\) is zero, the metric corresponds to a round \(S^{2}\) of radius \(\sqrt{h}\). For real values of \(\chi\) it is proven that the sausage model is a Yang-Baxter sigma model [22]. For the case \(\chi^{2}=-1\) the metric (33) reduces to the metric of the \(\frac{SO(1,2)}{SO(2)}\) gauged \(WZW\) model [23]. Here we will try to deform this model with a complex structure \(J\) and show that the deformed model is integrable. The tensor field \(J\) that satisfies the integrable complex structure conditions (13), (14) and the Hermitian condition (22) can be obtained as follows:
\[(J^{\mu}{}_{\nu})=\left(\begin{array}{cc}0&1-r^{2}\\ \frac{-1}{1-r^{2}}&0\end{array}\right), \tag{34}\]
where the vielbein and the algebraic components of \(J^{\alpha}{}_{\beta}\) are given as
\[(e^{\alpha}{}_{\mu})=\left(\begin{array}{cc}\sqrt{\frac{h}{(1-r^{2})(1+\chi ^{2}r^{2})}}&0\\ 0&\sqrt{\frac{1-r^{2}}{1+\chi^{2}r^{2}}}\end{array}\right)\ \ \ \,\ \ \ \ \ (J^{\alpha}{}_{\beta})=\left( \begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{35}\]
In this way one can check that the integrability conditions (29) and (32) are satisfied by setting \(\chi^{2}=-1\), provided the arbitrary matrices \(d_{\delta}\) and \(d_{\gamma}\) commute with one another. In this case the integrable sigma model (15) can be written as
\[S=\int dzd\overline{z}[\frac{1}{(1-r^{2})(1+\chi^{2}r^{2})}\partial r\overline {\partial}r+\frac{1-r^{2}}{1+\chi^{2}r^{2}}\partial\varphi\overline{\partial }\varphi+\frac{k}{(1+\chi^{2}r^{2})}(\partial r\overline{\partial}\varphi- \partial\varphi\overline{\partial}r)]. \tag{36}\]
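The defining properties of this complex structure are straightforward to verify; the following short sympy sketch (our own check, with the convention that the matrix row labels the upper index) confirms conditions (14) and (22) and the vanishing of the Nijenhuis tensor (13) for the metric (33) and the tensor (34):

```python
import sympy as sp

r, phi, h, chi = sp.symbols('r phi h chi', real=True)
x = [r, phi]

g = sp.diag(h/((1 - r**2)*(1 + chi**2*r**2)),
            h*(1 - r**2)/(1 + chi**2*r**2))            # metric (33)
J = sp.Matrix([[0, 1 - r**2],
               [-1/(1 - r**2), 0]])                     # complex structure (34)

assert sp.simplify(J*J + sp.eye(2)).is_zero_matrix      # Eq. (14)
assert sp.simplify(J + g.inv()*J.T*g).is_zero_matrix    # Hermitian condition (22)

# Nijenhuis tensor components N^mu_{nu k} of Eq. (13)
def nijenhuis(J, x):
    d = len(x)
    return [[[sp.simplify(sum(
        J[l, n]*sp.diff(J[m, k], x[l]) - J[l, k]*sp.diff(J[m, n], x[l])
        - J[m, l]*sp.diff(J[l, k], x[n]) + J[m, l]*sp.diff(J[l, n], x[k])
        for l in range(d)))
        for k in range(d)] for n in range(d)] for m in range(d)]

assert all(c == 0 for plane in nijenhuis(J, x) for row in plane for c in row)
```

Note that in two dimensions the Nijenhuis tensor of any almost complex structure vanishes identically, so the non-trivial content of the check is the Hermiticity condition (22).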
**b**) As the second example, we consider a four-dimensional manifold with the following spherical metric [24]
\[ds^{2}=f(r)dr^{2}+e(r)r^{2}d\theta^{2}+e(r)r^{2}sin^{2}\theta d\phi^{2}+h(r)dz^ {2}, \tag{37}\]
where \(f(r)\), \(e(r)\) and \(h(r)\) are arbitrary functions of \(r\). For this metric the integrable complex structure \(J\) which satisfies the relations (13), (14) and (22) can be obtained as
\[(J^{\lambda}{}_{\mu})=\left(\begin{array}{cccc}0&0&0&-\sqrt{\frac{h(r)}{f(r)} }\\ 0&0&-sin\theta&0\\ 0&\frac{1}{sin\theta}&0&0\\ \sqrt{\frac{f(r)}{h(r)}}&0&0&0\end{array}\right), \tag{38}\]
such that the veilbein and algebraic components of \(J^{\alpha}{}_{\beta}\) are as follows:
\[(e^{\alpha}{}_{\mu})=\left(\begin{array}{cccc}\sqrt{f(r)}&0&0&0\\ 0&\sqrt{e(r)r^{2}}&0&0\\ 0&0&\sqrt{e(r)r^{2}sin^{2}\theta}&0\\ 0&0&0&\sqrt{h(r)}\end{array}\right)\ \ \ \,\ \ \ \ (J^{\alpha}{}_{\beta})=\left( \begin{array}{cccc}0&0&0&-1\\ 0&0&-1&0\\ 0&1&0&0\\ 1&0&0&0\end{array}\right). \tag{39}\]
Now, by choosing the matrices \(d_{2}=d_{3}=0\) and arbitrary values for the matrices \(d_{1}\) and \(d_{4}\) (such that \(d_{1}\) and \(d_{4}\) commute with each other), \(h^{\prime}(r)=0\) (\(h(r)=D\)) and \(e(r)=\frac{C}{r^{2}}\), the metric (37) and complex structure (38) will satisfy (29) and (32), so this model is integrable. The action of this model is given by:
\[S=\int dzd\overline{z}[f(r)\partial r\overline{\partial}r+C(\partial\theta \overline{\partial}\theta+sin^{2}\theta\partial\phi\overline{\partial}\phi)+D \partial z\overline{\partial}z \tag{40}\]
\[+k\sqrt{f(r)D}(\partial r\overline{\partial}z-\partial z\overline{\partial}r) +kCsin\theta(\partial\theta\overline{\partial}\phi-\partial\phi\overline{ \partial}\theta)]. \tag{41}\]
## 4 Integrable sigma model with complex structure on Lie group
In the case that \(M\) is a Lie group \(G\), using the vielbein formalism
\[\forall g\in G\ \ \ \,\ \ \ \ (g^{-1}\partial g)^{\alpha}=e^{\alpha}{}_{\mu} \partial x^{\mu}, \tag{42}\]
\[g_{\mu\nu}=e^{\alpha}{}_{\mu}g_{\alpha\beta}e^{\beta}{}_{\nu}\ \ \ \,\ \ \ \ J^{\mu}{}_{\nu}=e_{\alpha}{}^{\mu}J^{\alpha}{}_{\beta}e^{\beta}{}_{\nu}, \tag{43}\]
the action (15) can be rewritten as follows:4
Footnote 4: Note that this model is a special case of the Yang-Baxter sigma model (with \(J^{2}=-1\)) [8] and was also studied in [13]; here, in order to generalize this model to the case of a generalized complex structure, we present it in this form and also give a new proof of its integrability and new examples.
\[S=\int(g^{-1}\partial g)^{\alpha}(g_{\alpha\beta}+kg_{\alpha\delta}J^{\delta}{ }_{\beta})(g^{-1}\overline{\partial}g)^{\beta}dzd\overline{z}, \tag{44}\]
where \(g_{\alpha\beta}\) is the metric on the Lie algebra \(\mathfrak{g}\) of the Lie group \(G\), \(J^{\alpha}{}_{\beta}\) is an endomorphism of \(\mathfrak{g}\), i.e. \(J:\mathfrak{g}\longrightarrow\mathfrak{g}\), and the indices \(\alpha,\beta,...\) are Lie algebra indices. Now one can repeat the calculations of the previous section with the following ansatz
\[\alpha_{\mu}=\lambda_{1}J^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}T_{\alpha}\ \ \ \,\ \ \ \ \mu_{\mu}=\lambda_{2}J^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}T_{\alpha}, \tag{45}\]
where \(T_{\alpha}\) are the basis of the Lie algebra \(\mathfrak{g}\) with the commutation relations as
\[[T_{\alpha},T_{\beta}]=f^{\gamma}{}_{\alpha\beta}T_{\gamma}, \tag{46}\]
where \(f^{\gamma}{}_{\alpha\beta}\) are the structure constants of the Lie algebra that satisfy the Maurer-Cartan equation5
Footnote 5: Indeed, in eq (28) the coordinate-dependent functions \(C^{\gamma}{}_{\alpha\beta}\) are replaced with the structure constants \(f^{\gamma}{}_{\alpha\beta}\).
\[f^{\gamma}{}_{\alpha\beta}=e^{\gamma}{}_{\nu}(e_{\alpha}{}^{\mu}\partial_{\mu }e_{\beta}{}^{\nu}-e_{\beta}{}^{\mu}\partial_{\mu}e_{\alpha}{}^{\nu}). \tag{47}\]
Furthermore by using (43) the relations (14) and (22) can be replaced with the following algebraic form 6
\[J^{\alpha}{}_{\beta}J^{\beta}{}_{\gamma}=-\delta^{\alpha}{}_{\gamma}, \tag{48}\]
\[J^{\alpha}{}_{\beta}=-g_{\beta\gamma}J^{\gamma}{}_{\delta}g^{\delta\alpha}, \tag{49}\]
where \(g_{\alpha\beta}\) is the ad-invariant metric that satisfies [33]:
\[f^{\gamma}{}_{\alpha\beta}g_{\gamma\delta}+f^{\gamma}{}_{\alpha\delta}g_{ \gamma\beta}=0. \tag{50}\]
Then using the Maurer-Cartan equation and the algebraic form of Nijenhuis condition (13)
\[f^{\gamma}{}_{\beta\alpha}-J^{\sigma}{}_{\beta}J^{\delta}{}_{\alpha}f^{\gamma }{}_{\sigma\delta}+J^{\gamma}{}_{\sigma}J^{\delta}{}_{\alpha}f^{\sigma}{}_{ \beta\delta}+J^{\sigma}{}_{\beta}J^{\gamma}{}_{\delta}f^{\delta}{}_{\sigma \alpha}=0, \tag{51}\]
after some calculation one can conclude that the relation (9) is automatically satisfied7, and that the relation (10) reduces to the algebraic Nijenhuis relation (51) by setting \(\lambda_{1}=-\frac{\lambda_{2}}{2}\) and \(\lambda_{2}=-2k\). In this manner, we have shown that the sigma model (44) is integrable if and only if the endomorphism \(J^{\alpha}{}_{\beta}\) is a Hermitian complex structure on \(\mathfrak{g}\). As in the previous section, \(k\) is a spectral parameter and from (5) we have the following Lax pair:
Footnote 7: Note that eq (9) shows that the \(J^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}T_{\alpha}\) are Killing vectors of the metric \(g_{\alpha\beta}\).
\[[\partial+kJ^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}T_{\alpha}\partial x^{\mu}]\psi =0,\]
\[[\overline{\partial}-kJ^{\alpha}{}_{\beta}e^{\beta}{}_{\mu}T_{\alpha} \overline{\partial}x^{\mu}]\psi =0. \tag{52}\]
Note that here the matrix forms of Lie algebra bases \(T_{\alpha}\) play the role of matrix forms of \(\alpha_{\mu}\) and \(\beta_{\mu}\).
### 4.1 Examples
**c)** We first consider an example on the four-dimensional Heisenberg Lie group \({\bf H_{4}}\). The Lie algebra of this Lie group is isomorphic to \({\bf A_{4,8}}\) in the classification of four-dimensional real Lie algebras [25]. We have the following commutation relations for the \(A_{4,8}\) Lie algebra [26]:
\[[P_{2},T]=P_{2}\ \ \ \,\ \ \ \ [P_{2},J]=P_{1}\ \ \ \,\ \ \ \ [T,J]=J. \tag{53}\]
To calculate the vielbein we parameterize the corresponding Lie group \({\bf H_{4}}\) with coordinates \(x^{\mu}=\{x,y,u,v\}\), so that its elements \(g\) can be written as:
\[g=e^{xT_{1}}e^{yT_{2}}e^{vT_{4}}e^{uT_{3}}, \tag{54}\]
with the generators \(T_{\alpha}=\{P_{1},P_{2},J,T\}\). Thus the vielbein (42) and the ad-invariant metric (50) have the following forms [11, 26]
\[(e^{\alpha}{}_{\mu})=\left(\begin{array}{cccc}1&ue^{v}&0&0\\ 0&e^{v}&0&0\\ 0&0&1&u\\ 0&0&0&1\end{array}\right)\,\ (g_{\alpha\beta})=\left(\begin{array}{cccc}0&0&0&-k _{0}\\ 0&0&k_{0}&0\\ 0&k_{0}&0&0\\ -k_{0}&0&0&0\end{array}\right), \tag{55}\]
where \(k_{0}\) is an arbitrary real constant. Now, using the following complex structure of this Lie algebra [35]
\[(J^{\alpha}{}_{\beta})=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0\end{array}\right), \tag{56}\]
one can construct the action (44) as
\[S=k_{0}\int dzd\overline{z}\{-(\partial x\overline{\partial}v+\partial v \overline{\partial}x)+e^{v}(u\partial y\overline{\partial}v-u\partial v \overline{\partial}y+\partial u\overline{\partial}y+\partial y\overline{ \partial}u)\]
\[+k(\partial v\overline{\partial}u-\partial u\overline{\partial}v)-ue^{v}( \partial y\overline{\partial}v-\partial v\overline{\partial}y)+ke^{v}( \partial y\overline{\partial}x-\partial x\overline{\partial}y)\}. \tag{57}\]
On the other hand, we know that the \(WZW\) model on a Lie group \(G\) is defined as
\[S_{WZW}=\frac{K}{4\pi}\int_{\Sigma}dzd\overline{z}\;e^{\alpha}{}_{\mu}g_{\alpha \beta}e^{\beta}{}_{\nu}\;\partial x^{\mu}\overline{\partial}x^{\nu}+\frac{K}{2 4\pi}\int_{B}\;\;d^{3}\sigma\varepsilon^{ijk}e^{\alpha}{}_{\mu}g_{\alpha\delta }f^{\delta}{}_{\beta\gamma}e^{\beta}{}_{\nu}e^{\gamma}{}_{\lambda}\partial_{i} x^{\mu}\partial_{j}x^{\nu}\partial_{k}x^{\lambda}, \tag{58}\]
where the worldsheet \(\Sigma\) is the boundary of 3-dimensional bulk \(B\) (with coordinates \(\sigma^{i}\)). Indeed the first line of the model (57) is the action of the \(WZW\) model for Heisenberg Lie group \({\bf H_{4}}\) (with \(K=4\pi\)) [26] and the second line of this action is the perturbed term for the \(WZW\) action. In this manner, the integrable sigma model with complex structure on Heisenberg Lie group \({\bf H_{4}}\) is equivalent to the integrable perturbed \(WZW\) sigma model. Note that our model (57) is similar to the model \(R_{V}\) (with \(\rho=0\)) in [11] but with other group representations.
**d)** As another example, we construct the model (44) on the six-dimensional Lie group \({\bf G_{6,23}}\). The \({\bf g_{6,23}}\) Lie algebra has the following commutation relations [27, 28]:
\[[T_{2},T_{3}]=T_{1}\;\;\;\;\;,\;\;\;\;[T_{2},T_{6}]=T_{3}\;\;\;\;,\;\;\;\;[T_ {3},T_{6}]=T_{4}. \tag{59}\]
Now, using \(g=e^{x_{1}T_{1}}e^{x_{2}T_{2}}e^{x_{3}T_{3}}e^{x_{4}T_{4}}e^{x_{5}T_{5}}e^{x_{6}T_{6}}\) as a Lie group element with coordinates \(x^{\mu}=\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) and the generators \(\{T_{1},T_{2},T_{3},T_{4},T_{5},T_{6}\}\), the algebraic metric and vielbein related to this Lie algebra are given as [28]:
\[(e^{\alpha}{}_{\mu})=\left(\begin{array}{cccccc}1&x_{3}&0&0&0&0\\ 0&1&0&0&0&0\\ 0&x_{6}&1&0&0&0\\ 0&\frac{1}{2}x_{6}^{2}&x_{6}&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{array}\right)\;,\;(g_{\alpha\beta})=\left(\begin{array}{cccccc }0&0&0&0&0&m_{2}\\ 0&m_{1}&0&m_{2}&m_{3}&m_{4}\\ 0&0&-m_{2}&0&0&0\\ 0&m_{2}&0&0&0&0\\ 0&m_{3}&0&0&m_{5}&m_{6}\\ m_{2}&m_{4}&0&0&m_{6}&m_{7}\end{array}\right), \tag{60}\]
where \(m_{1},...,m_{7}\) are arbitrary real parameters. Using (48)-(51) and the metric (60), after some calculation the algebraic integrable Hermitian complex structure \(J\) can be obtained as follows:
\[(J^{\alpha}{}_{\beta})=\left(\begin{array}{cccccc}-a&0&0&\frac{a^{2}+1}{b} &0&0\\ 0&-a&0&0&0&-\frac{a^{2}+1}{b}\\ 0&0&0&0&-\frac{1}{c}&0\\ -b&0&0&a&0&0\\ 0&0&c&0&0&0\\ 0&b&0&0&0&a\end{array}\right), \tag{61}\]
where \(a\), \(b\) and \(c\) are arbitrary real parameters and \(m_{3}=0,m_{4}=\frac{am_{1}}{b},m_{5}=-\frac{m_{2}}{c^{2}},m_{6}=0,m_{7}=\frac{ (a^{2}+1)m_{1}}{b^{2}}\). Then our two-dimensional integrable sigma model (44) can be written as:
\[S=\int dzd\overline{z}\{m_{1}(\partial x_{2}\overline{\partial}x_{2}+\frac{(a^ {2}+1)}{b^{2}}\partial x_{6}\overline{\partial}x_{6})+m_{2}(-\partial x_{3} \overline{\partial}x_{3}-\frac{1}{c^{2}}\partial x_{5}\overline{\partial}x_{ 5}+\partial x_{6}\overline{\partial}x_{1}+\partial x_{1}\overline{\partial}x_ {6}+\partial x_{4}\overline{\partial}x_{2}+\partial x_{2}\overline{\partial}x_{ 4})\]
\[+\frac{bm_{2}x_{3}+am_{1}}{b}(\partial x_{6}\overline{\partial}x_{2}+\partial x _{2}\overline{\partial}x_{6})-m_{2}bk(\partial x_{2}\overline{\partial}x_{1} -\partial x_{1}\overline{\partial}x_{2})-m_{2}ka(\partial x_{6}\overline{ \partial}x_{1}-\partial x_{1}\overline{\partial}x_{6})\]
\[-m_{2}kx_{6}a(\partial x_{3}\overline{\partial}x_{2}-\partial x_{2}\overline{ \partial}x_{3})-m_{2}ka(\partial x_{4}\overline{\partial}x_{2}-\partial x_{2} \overline{\partial}x_{4})-\frac{m_{2}}{c}kx_{6}(\partial x_{5}\overline{ \partial}x_{2}-\partial x_{2}\overline{\partial}x_{5})\]
\[+\frac{m_{2}}{c}k(\partial x_{3}\overline{\partial}x_{5}-\partial x_{5} \overline{\partial}x_{3})+\frac{m_{2}}{b}k(1+a^{2})(\partial x_{6}\overline{ \partial}x_{4}-\partial x_{4}\overline{\partial}x_{6})+\frac{km_{2}}{b}x_{6}(1+ a^{2})(\partial x_{6}\overline{\partial}x_{3}-\partial x_{3}\overline{\partial}x_{6})\]
\[+\frac{k}{2b}(m_{2}x_{6}^{2}(1+a^{2})+2m_{1}-2m_{2}abx_{3})(\partial x_{6} \overline{\partial}x_{2}-\partial x_{2}\overline{\partial}x_{6})\}, \tag{62}\]
One can compare this model with the \(WZW\) model (58) on \({\bf g_{6,23}}\), which is given by:
\[S_{WZW_{96,23}}=\frac{K}{4\pi}\int dzd\overline{z}\{m_{1}(\partial x_{2} \overline{\partial}x_{2}+\frac{(a^{2}+1)}{b^{2}}\partial x_{6}\overline{ \partial}x_{6})+m_{2}(-\partial x_{3}\overline{\partial}x_{3}-\frac{1}{c^{2}} \partial x_{5}\overline{\partial}x_{5}+\partial x_{6}\overline{\partial}x_{1}+ \partial x_{1}\overline{\partial}x_{6}\]
\[+\partial x_{4}\overline{\partial}x_{2}+\partial x_{2}\overline{\partial}x_{4})+ \frac{bm_{2}x_{3}+am_{1}}{b}(\partial x_{6}\overline{\partial}x_{2}+\partial x_ {2}\overline{\partial}x_{6})-m_{2}x_{3}(\partial x_{2}\overline{\partial}x_{6} -\partial x_{6}\overline{\partial}x_{2})\big{\}}. \tag{63}\]
Indeed, the model (62) can be considered as a perturbation of the \(WZW\) model with the following perturbation term, i.e.
\[S=S_{WZW_{g_{6,23}}}+\int dzd\overline{z}\{-m_{2}bk(\partial x_{2}\overline{\partial}x_{1}-\partial x_{1}\overline{\partial}x_{2})-m_{2}ka(\partial x_{6}\overline{\partial}x_{1}-\partial x_{1}\overline{\partial}x_{6})\]
\[-m_{2}kx_{6}a(\partial x_{3}\overline{\partial}x_{2}-\partial x_{2}\overline{ \partial}x_{3})-m_{2}ka(\partial x_{4}\overline{\partial}x_{2}-\partial x_{2} \overline{\partial}x_{4})-\frac{m_{2}}{c}kx_{6}(\partial x_{5}\overline{ \partial}x_{2}-\partial x_{2}\overline{\partial}x_{5})\]
\[+\frac{m_{2}}{c}k(\partial x_{3}\overline{\partial}x_{5}-\partial x_{5} \overline{\partial}x_{3})+\frac{m_{2}}{b}k(1+a^{2})(\partial x_{6}\overline{ \partial}x_{4}-\partial x_{4}\overline{\partial}x_{6})+\frac{km_{2}}{b}x_{6}( 1+a^{2})(\partial x_{6}\overline{\partial}x_{3}-\partial x_{3}\overline{ \partial}x_{6})\]
\[+\frac{k}{2b}(m_{2}x_{6}^{2}(1+a^{2})+2m_{1})(\partial x_{6}\overline{\partial }x_{2}-\partial x_{2}\overline{\partial}x_{6})\}, \tag{64}\]
by setting \(K=4\pi\) and \(ka=-1\). In addition, one can investigate the conformal invariance of this model up to one loop; the one-loop \(\beta\)-function equations [29] are given by:
\[B^{g}{}_{\mu\nu}=-\alpha^{{}^{\prime}}[R_{\mu\nu}-(H^{2})_{\mu\nu}+\nabla_{ \mu}\nabla_{\nu}\phi]=0,\]
\[B^{B}{}_{\mu\nu}=-\alpha^{{}^{\prime}}[-\nabla^{\lambda}H_{\lambda\mu\nu}+H_{ \mu\nu}{}^{\lambda}\nabla_{\lambda}\phi]=0,\]
\[B^{\phi}{}_{\mu\nu}=-\alpha^{{}^{\prime}}[R-\frac{1}{3}H^{2}-\frac{1}{2} \nabla^{2}\phi+\frac{1}{2}(\nabla\phi)^{2}]=0, \tag{65}\]
where \(H^{2}{}_{\mu\nu}=H_{\mu\rho\lambda}H^{\rho\lambda}{}_{\nu}\) and \(H^{2}=H_{\mu\rho\lambda}H^{\mu\rho\lambda}\). The \(Ricci\) components (\(R_{\mu\nu}\)) of this sigma model are zero and the torsion components are given as follows:
\[H^{1}{}_{25}=\frac{k}{2c}\ \,\ \ H^{4}{}_{56}=\frac{k}{2c}\ \,\ \ H^{5}{}_{26}=\frac{kc}{2}, \tag{66}\]
so all of the one-loop \(\beta\)-function equations are satisfied with \(\phi=const\). In the following section we will generalize our model (44) from the complex structure to the model with generalized complex structure.
## 5 Integrable sigma model with generalized complex structure
Let us first give a short review of the concepts and notations related to generalized complex structures [15, 16].
### Review of generalized complex structure
A generalized complex structure on a manifold \(M\) (of even dimension) is an endomorphism \({\cal J}:TM\oplus T^{*}M\longrightarrow TM\oplus T^{*}M\) such that the inner product \(\langle\ ,\ \rangle\) on \(TM\oplus T^{*}M\) is invariant under \({\cal J}\)
\[\forall X,Y\in TM\,\ \xi,\eta\in T^{*}M:\ \ \ \ \ \ \ \ \langle{\cal J}(X+\xi),{\cal J}(Y+\eta)\rangle=\langle X+\xi,Y+\eta\rangle, \tag{67}\]
and \({\cal J}^{2}=-1\). Using the Courant bracket on smooth sections of \(TM\oplus T^{*}M\) [30]
\[[X+\xi,Y+\eta]_{C}=[X,Y]+L_{X}\eta-L_{Y}\xi-\frac{1}{2}d_{M}[i_{X}\eta-i_{Y} \xi], \tag{68}\]
the generalized complex structure \({\cal J}\) is integrable if the generalized Nijenhuis tensor vanishes [15, 16]
\[N_{\cal J}(X+\xi,Y+\eta)=[X+\xi,Y+\eta]_{C}+{\cal J}[X+\xi,{\cal J}(Y+\eta)]_ {C}+{\cal J}[{\cal J}(X+\xi),(Y+\eta)]_{C}-[{\cal J}(X+\xi),{\cal J}(Y+\eta)] _{C}=0. \tag{69}\]
One can consider the almost generalized complex structure in the following block form [16]:
\[{\cal J}=\left(\begin{array}{cc}J&P\\ Q&-J^{*}\end{array}\right), \tag{70}\]
where \(J=J^{\mu}{}_{\nu}\partial_{\mu}\otimes dx^{\nu}\), \(P=P^{\mu\nu}\partial_{\mu}\wedge\partial_{\nu}\) and \(Q=Q_{\mu\nu}dx^{\mu}\wedge dx^{\nu}\). By applying the above block form to \(\mathcal{J}^{2}=-1\), we have the following relations for the tensors \(J\), \(P\) and \(Q\):
\[P^{\nu k}+P^{k\nu}=0\ \ \ \,\ \ \ \ Q_{\nu k}+Q_{k\nu}=0, \tag{71}\]
\[J^{\nu}{}_{\mu}J^{\mu}{}_{k}+P^{\nu\mu}Q_{\mu k}+\delta^{\nu}{}_{k}=0, \tag{72}\]
\[J^{\nu}{}_{\mu}P^{\mu k}+J^{k}{}_{\mu}P^{\mu\nu}=0, \tag{73}\]
\[Q_{\nu\mu}J^{\mu}{}_{k}+Q_{k\mu}J^{\mu}{}_{\nu}=0. \tag{74}\]
Furthermore, by using the Courant bracket definition (68), the integrability condition (69) of the generalized complex structure can be written as the following tensor relations [31]:
\[\mathbf{A}^{\nu k\mu}=P^{\nu\lambda}\partial_{\lambda}P^{k\mu}+P^{k\lambda} \partial_{\lambda}P^{\mu\nu}+P^{\mu\lambda}\partial_{\lambda}P^{\nu k}=0, \tag{75}\]
\[\mathbf{B}^{k\mu}{}_{\nu}=J^{\lambda}{}_{\nu}\partial_{\lambda}P^{k\mu}+P^{k \lambda}(\partial_{\nu}J^{\mu}{}_{\lambda}-\partial_{\lambda}J^{\mu}{}_{\nu}) -P^{\mu\lambda}(\partial_{\nu}J^{k}{}_{\lambda}-\partial_{\lambda}J^{k}{}_{ \nu})-\partial_{\nu}(J^{k}{}_{\lambda}P^{\lambda\mu})=0, \tag{76}\]
\[\mathbf{C}^{\mu}{}_{\nu k}=J^{\lambda}{}_{\nu}\partial_{\lambda}J^{\mu}{}_{k} -J^{\lambda}{}_{k}\partial_{\lambda}J^{\mu}{}_{\nu}-J^{\mu}{}_{\lambda} \partial_{\nu}J^{\lambda}{}_{k}+J^{\mu}{}_{\lambda}\partial_{k}J^{\lambda}{}_ {\nu}+P^{\mu\lambda}(\partial_{\lambda}Q_{\nu k}+\partial_{\nu}Q_{k\lambda}+ \partial_{k}Q_{\lambda\nu})=0, \tag{77}\]
\[\mathbf{D}_{\nu k\mu}=J^{\lambda}{}_{\nu}(\partial_{\lambda}Q_{k\mu}+\partial _{k}Q_{\mu\lambda}+\partial_{\mu}Q_{\lambda k})+J^{\lambda}{}_{k}(\partial_{ \lambda}Q_{\mu\nu}+\partial_{\mu}Q_{\nu\lambda}+\partial_{\nu}Q_{\lambda\mu}) +J^{\lambda}{}_{\mu}(\partial_{\lambda}Q_{\nu k}+\partial_{\nu}Q_{k\lambda}+ \partial_{k}Q_{\lambda\nu})\]
\[-\partial_{\nu}(Q_{k\lambda}J^{\lambda}{}_{\mu})-\partial_{k}(Q_{\mu\lambda}J ^{\lambda}{}_{\nu})-\partial_{\mu}(Q_{\nu\lambda}J^{\lambda}{}_{k})=0, \tag{78}\]
these relations are necessary conditions for the integrability of generalized complex structure.
### Construction of the integrable sigma model
#### 5.2.1 Model on manifold
Now using the components of generalized complex structure on \(TM\) and \(T^{*}M\) we propose the following sigma model action on the manifold \(M\) with coordinates \(x^{\mu}\) and metric \(g_{\mu\nu}\):
\[S=\int\ \ dzd\overline{z}(g_{\mu\nu}+kg_{\mu\lambda}J^{\lambda}{}_{\nu}+k^{{}^{ \prime}}Q_{\mu\nu}+k^{{}^{\prime\prime}}g_{\mu\lambda}P^{\lambda\gamma}g_{\gamma \nu})\partial x^{\mu}\overline{\partial}x^{\nu}, \tag{79}\]
where \(k\), \(k^{{}^{\prime}}\) and \(k^{{}^{\prime\prime}}\) are non-zero constants. Note that by comparing (79) with (1), \(G_{\mu\nu}\) and \(B_{\mu\nu}\) (the symmetric and antisymmetric parts) of this model have the following forms:
\[G_{\mu\nu}=g_{\mu\nu}+\frac{k}{2}(g_{\mu\lambda}J^{\lambda}{}_{\nu}+g_{\nu \lambda}J^{\lambda}{}_{\mu}), \tag{80}\]
\[B_{\mu\nu}=\frac{k}{2}(g_{\mu\lambda}J^{\lambda}{}_{\nu}-g_{\nu\lambda}J^{ \lambda}{}_{\mu})+k^{{}^{\prime}}Q_{\mu\nu}+k^{{}^{\prime\prime}}g_{\mu\lambda} P^{\lambda\gamma}g_{\gamma\nu}, \tag{81}\]
the metric \(G_{\mu\nu}\) must be invertible (\(G^{\mu\lambda}G_{\lambda\nu}=\delta^{\mu}{}_{\nu}\)), so we assume that \(G^{\mu\nu}\) is given by
\[G^{\mu\nu}=ag^{\mu\nu}+b(g^{\mu\lambda}J^{\nu}{}_{\lambda}+g^{\nu\lambda}J^{ \mu}{}_{\lambda}), \tag{82}\]
with constants \(a\) and \(b\); similar to section 3, this yields the same conditions (19-21). By imposing \(a=1\) with \(b\) arbitrary and requiring that \(J^{\mu}{}_{\nu}\) satisfies the Hermitian condition
\[J^{\mu}{}_{\nu}=-g^{\mu\lambda}J^{\rho}{}_{\lambda}g_{\rho\nu}, \tag{83}\]
then \(G_{\mu\nu}\) and \(B_{\mu\nu}\) have the following form:
\[G_{\mu\nu}=g_{\mu\nu}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ B_{\mu\nu}=kg_{\mu\lambda}J^{\lambda}{}_{\nu}+k^{{}^{\prime}}Q_{\mu\nu}+k^{{}^{ \prime\prime}}g_{\mu\lambda}P^{\lambda\gamma}g_{\gamma\nu}. \tag{84}\]
Now by using relations (71-78), one can obtain conditions on the tensors \(J\), \(P\), \(Q\). For integrability of the model (79) one must impose the further condition (7) (or conditions (9) and (10)) on (84). When \(M\) is a Lie group these conditions are much simpler than for a general manifold \(M\), so we consider the model on a Lie group.
#### 5.2.2 Model on Lie group
In the case that \(M\) is a Lie group \(G\), using the vielbein formalism
\[\forall g\in G\ \ \ \,\ \ \ \ (g^{-1}\partial g)^{\alpha}=e^{\alpha}{}_{\mu} \partial x^{\mu}, \tag{85}\]
the algebraic structure (43) and
\[P^{\mu\nu}=e_{\alpha}{}^{\mu}P^{\alpha\beta}e_{\beta}{}^{\nu}\ \ \ \ \ \ \ \,\ \ \ \ \ \ \ \ \ \ Q_{\mu\nu}=e^{\alpha}{}_{\mu}Q_{\alpha\beta}e^{\beta}{}_{\nu}, \tag{86}\]
then the action (79) can be rewritten as
\[S=\int dzd\overline{z}(g^{-1}\partial g)^{\alpha}(g_{\alpha\beta}+kg_{\alpha \delta}J^{\delta}{}_{\beta}+k^{{}^{\prime}}Q_{\alpha\beta}+k^{{}^{\prime\prime} }g_{\alpha\delta}P^{\delta\sigma}g_{\sigma\beta})(g^{-1}\overline{\partial}g )^{\beta}. \tag{87}\]
Now by applying the formalism of ref. [17], as mentioned in section 2, one can investigate the integrability conditions of this sigma model. For this model, the Christoffel symbols and torsion are given by
\[\Gamma^{\lambda}{}_{\mu\nu}=\frac{1}{2}(\partial_{\mu}e^{\alpha}{}_{\nu}+ \partial_{\nu}e^{\alpha}{}_{\mu})e_{\alpha}{}^{\lambda}, \tag{88}\]
\[H^{\lambda}{}_{\mu\nu}=\frac{1}{2}[k(f^{\alpha}{}_{\delta\gamma}J^{\delta}{}_ {\beta}-f^{\alpha}{}_{\delta\beta}J^{\delta}{}_{\gamma}-f^{\delta}{}_{\beta \gamma}J^{\alpha}{}_{\delta})+k^{\prime}(f^{\delta}{}_{\gamma\sigma}Q_{ \delta\beta}g^{\alpha\sigma}+f^{\delta}{}_{\sigma\beta}Q_{\delta\gamma}g^{ \alpha\sigma}-f^{\delta}{}_{\gamma\beta}Q_{\delta\sigma}g^{\alpha\sigma})\]
\[+k^{\prime\prime}(f^{\alpha}{}_{\delta\gamma}P^{\delta\sigma}g_{\sigma\beta}- f^{\alpha}{}_{\delta\beta}P^{\delta\sigma}g_{\sigma\gamma}-f^{\delta}{}_{ \gamma\beta}P^{\sigma\alpha}g_{\delta\sigma})]e^{\sigma}{}_{\mu}e^{\sigma^{ \prime}}{}_{\nu}e_{\alpha}{}^{\lambda}, \tag{89}\]
where \(f^{\alpha}{}_{\beta\sigma}\) are the structure constants of the Lie algebra \(\mathfrak{g}\). We assume \(\alpha_{\mu}\) and \(\beta_{\mu}\) have the following forms:
\[\alpha_{\mu}=(\lambda_{1}J^{\alpha}{}_{\gamma}+\lambda_{2}P^{\alpha\delta}g_{\delta\gamma}+\lambda_{3}g^{\alpha\delta}Q_{\delta\gamma})e^{\gamma}{}_{\mu}T_{\alpha}\ \,\ \ \ \beta_{\mu}=(\lambda^{\prime}_{1}J^{\alpha}{}_{\gamma}+\lambda^{\prime}_{2}P^{\alpha\delta}g_{\delta\gamma}+\lambda^{\prime}_{3}g^{\alpha\delta}Q_{\delta\gamma})e^{\gamma}{}_{\mu}T_{\alpha}, \tag{90}\]
where \(\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda^{\prime}_{1},\lambda^{\prime}_{2},\lambda^{\prime}_{3}\}\) are arbitrary spectral parameters and \(T_{\alpha}\) are the basis elements of the Lie algebra \(\mathfrak{g}\), which satisfy the commutation relations (46). One can rewrite the conditions of the generalized complex structure (71-74) and its integrability (75-78) as the following algebraic relations [32]
\[P^{\alpha\beta}+P^{\beta\alpha}=0\ \ \ \,\ \ \ \ Q_{\alpha\beta}+Q_{\beta\alpha}=0, \tag{91}\]
\[J^{\alpha}{}_{\delta}J^{\delta}{}_{\beta}+P^{\alpha\delta}Q_{\delta\beta}+ \delta^{\alpha}{}_{\beta}=0, \tag{92}\]
\[J^{\alpha}{}_{\delta}P^{\delta\beta}+J^{\beta}{}_{\delta}P^{\delta\alpha}=0, \tag{93}\]
\[Q_{\alpha\delta}J^{\delta}{}_{\beta}+Q_{\beta\delta}J^{\delta}{}_{\alpha}=0, \tag{94}\]
\[{\bf A}^{\alpha\beta\gamma}=f^{\alpha}{}_{\delta\sigma}P^{\beta\sigma}P^{ \gamma\delta}+f^{\gamma}{}_{\delta\sigma}P^{\beta\delta}P^{\alpha\sigma}+f^{ \beta}{}_{\delta\sigma}P^{\alpha\delta}P^{\gamma\sigma}=0, \tag{95}\]
\[{\bf B}^{\beta\gamma}{}_{\alpha}=f^{\delta}{}_{\sigma\alpha}P^{\beta\sigma}J^ {\gamma}{}_{\delta}+f^{\delta}{}_{\alpha\sigma}P^{\sigma\sigma}J^{\delta}{}_{ \delta}+f^{\gamma}{}_{\sigma\delta}P^{\beta\delta}J^{\sigma}{}_{\alpha}+f^{ \beta}{}_{\sigma\delta}P^{\gamma\sigma}J^{\delta}{}_{\alpha}=0, \tag{96}\]
\[{\bf C}^{\alpha}{}_{\beta\gamma}=f^{\alpha}{}_{\beta\gamma}-f^{\alpha}{}_{ \delta\sigma}J^{\delta}{}_{\beta}J^{\sigma}{}_{\gamma}-f^{\delta}{}_{\gamma \sigma}J^{\alpha}{}_{\delta}J^{\sigma}{}_{\beta}+f^{\delta}{}_{\beta\sigma}J^{ \alpha}{}_{\delta}J^{\sigma}{}_{\gamma}+f^{\delta}{}_{\sigma\beta}P^{\alpha \sigma}Q_{\delta\gamma}+f^{\delta}{}_{\gamma\sigma}P^{\alpha\sigma}Q_{\delta \beta}=0, \tag{97}\]
\[{\bf D}_{\alpha\beta\gamma}=f^{\delta}{}_{\alpha\sigma}J^{\sigma}{}_{\beta}Q_{ \delta\gamma}+f^{\delta}{}_{\gamma\sigma}J^{\sigma}{}_{\beta}Q_{\alpha\delta}+f^{ \delta}{}_{\gamma\sigma}J^{\sigma}{}_{\alpha}Q_{\delta\beta}+f^{\delta}{}_{ \beta\sigma}J^{\sigma}{}_{\alpha}Q_{\gamma\delta}+f^{\delta}{}_{\beta\sigma}J^{ \sigma}{}_{\gamma}Q_{\delta\alpha}+f^{\delta}{}_{\alpha\sigma}J^{\sigma}{}_{ \gamma}Q_{\beta\delta}=0, \tag{98}\]
where \({\bf C}^{\alpha}{}_{\beta\gamma}\) is the generalized Nijenhuis equation. Now we use the integrability conditions of the generalized complex structure (91-98) in the integrability condition (7) of the generalized complex sigma model (87); in this manner we conclude the relation
\[A_{1}+B_{1}+C_{1}+D_{1}+E=0. \tag{99}\]
Here, from the integrability condition (7), \(C_{1}\) has the following form
\[C_{1}=\lambda_{1}(\lambda_{1}+\lambda^{\prime}_{1})f^{\alpha}{}_{\delta\sigma}J^ {\delta}{}_{\gamma}J^{\sigma}{}_{\beta}+k\frac{\lambda^{\prime}_{1}}{2}(f^{ \delta}{}_{\gamma\beta}J^{\sigma}{}_{\delta}J^{\alpha}{}_{\sigma}-f^{\delta}{}_{ \gamma\sigma}J^{\alpha}{}_{\delta}J^{\sigma}{}_{\beta}+f^{\delta}{}_{\beta \sigma}J^{\alpha}{}_{\delta}J^{\sigma}{}_{\gamma})\]
\[+k^{\prime}\frac{\lambda^{\prime}_{2}}{2}(f^{\delta}{}_{\sigma\beta}P^{\alpha \sigma}Q_{\delta\gamma}+f^{\delta}{}_{\gamma\sigma}P^{\alpha\sigma}Q_{\delta \beta}+f^{\delta}{}_{\beta\gamma}P^{\alpha\sigma}Q_{\delta\sigma})), \tag{100}\]
then using the relation (97) one can obtain the following different conditions for the term \(C_{1}\):
\[1)\ {\bf if}\ \lambda_{1}(\lambda_{1}+\lambda_{1}^{\prime})=k\frac{\lambda_{1}^{ \prime}}{2}=k^{\prime}\frac{\lambda_{2}^{\prime}}{2}\ \ \ \ \ {\bf then}\ \ \ \ C_{1}=0, \tag{101}\]
\[2)\ {\bf if}\ \lambda_{1}(\lambda_{1}+\lambda_{1}^{\prime})=k\frac{\lambda_{1}^{ \prime}}{2}\ \ \ {\bf then}\ \ C_{1}=(k\frac{\lambda_{1}^{\prime}}{2}-k^{\prime}\frac{ \lambda_{1}^{\prime}}{2})(f^{\delta}{}_{\beta\sigma}P^{\alpha\sigma}Q_{\delta \gamma}+f^{\delta}{}_{\sigma\gamma}P^{\alpha\sigma}Q_{\delta\beta}+f^{\delta}{ }_{\beta\gamma}P^{\alpha\sigma}Q_{\sigma\delta}), \tag{102}\]
\[3)\ {\bf if}\ \lambda_{1}(\lambda_{1}+\lambda_{1}^{\prime})=k^{\prime}\frac{ \lambda_{2}^{\prime}}{2}\ \ \ {\bf then}\ \ C_{1}=(k\frac{\lambda_{1}^{\prime}}{2}-k^{\prime}\frac{ \lambda_{2}^{\prime}}{2})(f^{\delta}{}_{\sigma\gamma}J^{\alpha}{}_{\delta}J^ {\sigma}{}_{\beta}+f^{\delta}{}_{\sigma\beta}J^{\alpha}{}_{\delta}J^{\sigma} +f^{\delta}{}_{\beta\gamma}P^{\alpha\sigma}Q_{\sigma\delta}), \tag{103}\]
\[4)\ {\bf if}\ \ k\frac{\lambda_{1}^{\prime}}{2}=k^{\prime}\frac{\lambda_{2}^{ \prime}}{2}\
\[+k\frac{\lambda_{2}^{\prime}}{2}(f^{\delta}{}_{\sigma\beta}P^{\alpha\sigma}J^{ \delta^{\prime}}{}_{\delta}g_{\delta^{\prime}\gamma}-f^{\delta}{}_{\sigma\gamma}P ^{\alpha\sigma}J^{\delta^{\prime}}{}_{\delta}g_{\delta^{\prime}\beta}). \tag{112}\]
Finally, by using the integrability condition (98) and considering the following expression for the \(D_{1}\) term
\[D_{1}=\lambda_{1}(\lambda_{3}+\lambda_{3}^{\prime})f^{\alpha}{}_{\delta\sigma} J^{\delta}{}_{\gamma}g^{\sigma\sigma^{\prime}}Q_{\sigma\sigma}+\ \lambda_{3}(\lambda_{1}+\lambda_{1}^{\prime})f^{\alpha}{}_{\delta\sigma}J^{ \delta}{}_{\beta}g^{\sigma\sigma^{\prime}}Q_{\sigma^{\prime}\gamma}+k\frac{ \lambda_{3}^{\prime}}{2}(f^{\delta}{}_{\sigma\gamma}J^{\sigma}{}_{\beta}g^{ \alpha\sigma^{\prime}}Q_{\sigma^{\prime}\delta}\]
\[-f^{\delta}{}_{\sigma\beta}J^{\sigma}{}_{\gamma}g^{\alpha\sigma^{\prime}}Q_{ \sigma^{\prime}\delta}-f^{\delta}{}_{\beta\gamma}J^{\sigma}{}_{\delta}g^{ \alpha\sigma^{\prime}}Q_{\sigma^{\prime}\sigma})+k^{\prime}\frac{\lambda_{1}^ {\prime}}{2}(-f^{\delta}{}_{\sigma\gamma}J^{\alpha}{}_{\delta^{\prime}}g^{ \delta^{\prime}\sigma}Q_{\delta\beta}-f^{\delta}{}_{\beta\sigma}J^{\alpha}{}_ {\delta^{\prime}}g^{\delta^{\prime}\sigma}Q_{\delta\gamma}-f^{\delta}{}_{ \gamma\beta}J^{\alpha}{}_{\sigma}g^{\sigma\sigma^{\prime}}Q_{\delta\sigma^{ \prime}}), \tag{113}\]
then one can obtain the following different conditions for the \(D_{1}\) term
\[1)\ {\bf if}\ \lambda_{1}(\lambda_{3}+\lambda_{3}^{\prime})=\lambda_{3}( \lambda_{1}+\lambda_{1}^{\prime})=k\frac{\lambda_{3}^{\prime}}{2}=k^{\prime} \frac{\lambda_{1}^{\prime}}{2}\ \ \ \ \ {\bf then}\ \ \ \ \ D_{1}=0, \tag{114}\]
\[2)\ {\bf if}\ \lambda_{1}(\lambda_{3}+\lambda_{3}^{\prime})=\lambda_{3}( \lambda_{1}+\lambda_{1}^{\prime})=k\frac{\lambda_{3}^{\prime}}{2}\]
\[{\bf then}\ \ \ D_{1}=(k^{\prime}\frac{\lambda_{1}^{\prime}}{2}-k\frac{\lambda_{ 3}^{\prime}}{2})(f^{\delta}{}_{\beta\sigma}Q_{\delta\gamma}J^{\sigma}{}_{ \delta^{\prime}}g^{\alpha\delta^{\prime}}-f^{\delta}{}_{\gamma\sigma}Q_{\delta \beta}J^{\sigma}{}_{\delta^{\prime}}g^{\alpha\delta^{\prime}}+f^{\delta}{}_{ \beta\gamma}J^{\sigma^{\prime}}{}_{\delta}Q_{\sigma\sigma^{\prime}}g^{\alpha \sigma}), \tag{115}\]
\[3)\ {\bf if}\ \lambda_{1}(\lambda_{3}+\lambda_{3}^{\prime})=\lambda_{3}( \lambda_{1}+\lambda_{1}^{\prime})=k^{\prime}\frac{\lambda_{1}^{\prime}}{2}\]
\[{\bf then}\ \ \ D_{1}=(k\frac{\lambda_{3}^{\prime}}{2}-k^{\prime}\frac{\lambda_{1}^ {\prime}}{2})(f^{\delta}{}_{\sigma\gamma}J^{\sigma}{}_{\beta}Q_{\delta^{ \prime}\delta}g^{\alpha\delta^{\prime}}-f^{\delta}{}_{\sigma\beta}J^{\sigma}{ }_{\gamma}Q_{\delta^{\prime}\delta}g^{\alpha\delta^{\prime}}+f^{\delta}{}_{ \gamma\beta}J^{\delta^{\prime}}{}_{\delta}Q_{\sigma\delta^{\prime}}g^{\alpha \sigma}), \tag{116}\]
\[4)\ {\bf if}\ \frac{\lambda_{3}^{\prime}}{2}=k^{\prime}\frac{\lambda_{1}^ {\prime}}{2}\]
\[{\bf then}\ \ \ \ D_{1}=(\lambda_{1}(\lambda_{3}+\lambda_{3}^{\prime})-k\frac{ \lambda_{3}^{\prime}}{2})f^{\delta}{}_{\sigma\sigma^{\prime}}g^{\alpha\sigma} J^{\sigma^{\prime}}{}_{\gamma}Q_{\delta\beta}+(k\frac{\lambda_{3}^{\prime}}{2}- \lambda_{3}(\lambda_{1}+\lambda_{1}^{\prime}))f^{\delta}{}_{\sigma\sigma^{ \prime}}g^{\alpha\sigma}J^{\sigma^{\prime}}{}_{\beta}Q_{\delta\gamma}. \tag{117}\]
In addition to the above terms \(A_{1},B_{1},C_{1}\) and \(D_{1}\), we have the following expression for the \(E\) term, which is obtained from the integrability condition (7) as follows:
\[E=\lambda_{2}(\lambda_{3}+\lambda_{3}^{\prime})f^{\alpha}{}_{\sigma \delta}P^{\sigma\sigma^{\prime}}g_{\sigma^{\prime}\gamma}g^{\delta\delta^{ \prime}}Q_{\delta^{\prime}\beta}+\lambda_{3}(\lambda_{2}+\lambda_{2}^{\prime})f ^{\alpha}{}_{\sigma\delta}g^{\sigma\sigma^{\prime}}Q_{\sigma^{\prime}\gamma}P^{ \delta\delta^{\prime}}g_{\delta^{\prime}\beta}+\lambda_{3}(\lambda_{3}+\lambda_ {3}^{\prime})f^{\alpha}{}_{\sigma\delta}g^{\sigma\sigma^{\prime}}Q_{\sigma^{ \prime}\gamma}g^{\delta\delta^{\prime}}Q_{\delta^{\prime}\beta}\]
\[+k^{\prime}\frac{\lambda_{3}^{\prime}}{2}(f^{\delta}{}_{\sigma\gamma}P^{ \sigma\sigma^{\prime}}g_{\sigma^{\prime}\beta}g^{\alpha\delta^{\prime}}Q_{ \delta^{\prime}\delta}-f^{\delta}{}_{\sigma\beta}P^{\alpha\sigma^{\prime}}g_{ \sigma^{\prime}\gamma}g^{\alpha\delta^{\prime}}Q_{\delta^{\prime}\delta}-f^{ \delta}{}_{\gamma\beta}P^{\sigma\sigma^{\prime}}g_{\delta\sigma}g^{\alpha\delta^{ \prime}}Q_{\delta^{\prime}\delta^{\prime}})\]
\[+k^{\prime}\frac{\lambda_{3}^{\prime}}{2}(-f^{\delta}{}_{\sigma\gamma}g^{ \sigma\sigma^{\prime}}Q_{\delta\beta}g^{\alpha\delta^{\prime}}Q_{\delta^{\prime} \sigma^{\prime}}+f^{\delta}{}_{\sigma\beta}g^{\sigma\sigma^{\prime}}Q_{\delta \gamma}g^{\alpha\delta^{\prime}}Q_{\delta^{\prime}\sigma^{\prime}}-f^{\delta}{}_ {\gamma\beta}g^{\sigma\sigma^{\prime}}Q_{\delta\sigma^{\prime}}-f^{\delta}{}_{ \gamma\beta}g^{\sigma\sigma^{\prime}}Q_{\delta\sigma}g^{\alpha\delta^{\prime}}Q_{ \delta^{\prime}\sigma^{\prime}})\]
\[-k^{\prime\prime}\frac{\lambda_{2}^{\prime}}{2}f^{\delta}{}_{\gamma\beta}P^{ \delta^{\prime}\sigma^{\prime}}g_{\delta\delta^{\prime}}P^{\alpha\sigma}g_{\sigma \sigma^{\prime}}+(\lambda_{1}+\frac{\lambda_{1}^{\prime}}{2})f^{\delta}{}_{ \beta\gamma}J^{\alpha}{}_{\delta}+(\lambda_{2}+\frac{\lambda_{2}^{\prime}}{2})f^{ \delta}{}_{\beta\gamma}P^{\alpha\sigma}g_{\sigma\delta}+(\lambda_{3}+\frac{ \lambda_{3}^{\prime}}{2})f^{\delta}{}_{\beta\gamma}g^{\alpha\sigma}Q_{\sigma \delta}. \tag{118}\]
So, to investigate the integrability of the sigma model (87) with generalized complex structure, it is sufficient to show that the generalized complex structure satisfies the Hermitian condition (49) and that the sum of one of the equations (101-104), one of the equations (106,107), one of the equations (109,112), one of the equations (114-117), and (118) is zero (i.e. (99)).
Let us investigate these conditions for some examples.
**Example e)** As the first example, we consider the generalized complex structure on the four-dimensional real Lie group \({\bf A_{4,8}}\). By using (91)-(98) one can obtain the following algebraic forms [32]:
\[(J^{\alpha}{}_{\beta})=\left(\begin{array}{cccc}0&0&-1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&1&0&0\end{array}\right)\,\ (Q_{\alpha\beta})=\left(\begin{array}{cccc}0&1&0&-a \\ -1&0&a&0\\ 0&-a&0&-1\\ a&0&1&0\end{array}\right),\ (P^{\alpha\beta})=\left(\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right), \tag{119}\]
then, by using (55), one can construct the integrable sigma model (87) as follows:
\[S_{A_{4,8}}=\int dzd\overline{z}[-k_{0}(\partial v\overline{\partial}x+\partial x \overline{\partial}v)+k_{0}e^{v}(\partial u\overline{\partial}y+\partial y \overline{\partial}u+u\partial y\overline{\partial}v-u\partial v\overline{ \partial}y)\]
\[+(k^{\prime}+kk_{0})(\partial v\overline{\partial}u-\partial u\overline{ \partial}v)+(k^{\prime}-kk_{0})e^{v}(\partial x\overline{\partial}y-\partial y \overline{\partial}x)-k_{0}e^{v}(u\partial y\overline{\partial}v-u\partial v \overline{\partial}y)\]
\[+k^{\prime}a(\partial v\overline{\partial}x-\partial x\overline{\partial}v) +k^{\prime}ae^{v}(\partial y\overline{\partial}u-\partial u\overline{ \partial}y)], \tag{120}\]
where, to investigate the integrability of this generalized complex sigma model with metric \(g_{\alpha\beta}\) (55), we note that because \(P=0\) the relations (105) and (108) vanish automatically. Then, by choosing the conditions (101) and (114), it is sufficient to check that the condition (118) is satisfied; indeed this condition is satisfied by setting
\[\lambda_{1}^{\prime}=-2\lambda_{1},\lambda_{3}^{\prime}=-2\lambda_{3}. \tag{121}\]
Furthermore, by comparing this model with the \(WZW\) model on \(H_{4}\) [26] and re-scaling \(K=4\pi\), the second and third lines of the action (120) are the perturbation terms of the \(WZW\) action. From this point of view the model (120) is an integrable perturbed \(WZW\) model.
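For the reader's convenience, the purely algebraic relations (91)-(94) can be checked mechanically for the data in (119). The following is only an illustrative sketch in Python/SymPy (not part of the original derivation), valid for arbitrary real \(a\); the conditions (95)-(98), which involve the structure constants of \({\bf A_{4,8}}\), are not checked here.

```python
import sympy as sp

a = sp.symbols('a', real=True)
# Algebraic data of Example e), Eq. (119), on the Lie group A_{4,8}
J = sp.Matrix([[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]])
Q = sp.Matrix([[0, 1, 0, -a], [-1, 0, a, 0], [0, -a, 0, -1], [a, 0, 1, 0]])
P = sp.zeros(4, 4)
zero = sp.zeros(4, 4)

# (91): antisymmetry of P and Q
assert sp.simplify(P.T + P) == zero and sp.simplify(Q.T + Q) == zero
# (92): J^2 + P*Q + 1 = 0
assert sp.simplify(J * J + P * Q + sp.eye(4)) == zero
# (93): J*P antisymmetric (trivially satisfied here, since P = 0)
assert sp.simplify((J * P).T + J * P) == zero
# (94): Q*J antisymmetric
assert sp.simplify((Q * J).T + Q * J) == zero
print("relations (91)-(94) hold for arbitrary real a")
```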
**Example f)** As another example, we construct the generalized complex structure on the four-dimensional real Lie group \({\bf A_{4,10}}\) [25] with the following Lie algebra commutators (Nappi-Witten group) [33]
\[[J,P_{i}]=\varepsilon_{ij}P_{j}\ \ \ \ \,\ \ \ \ [P_{i},P_{j}]=\varepsilon_{ij}T\ \ \ \ \,\ \ \ \ [T,J]=[T,P_{i}]=0, \tag{122}\]
this Lie algebra is a generalization of the 2D Poincaré algebra, which is recovered by setting \(T=0\). We parameterize the corresponding Lie group \({\bf A_{4,10}}\) with coordinates \(x^{\mu}=\{v,a_{1},a_{2},u\}\) and consider the following parametrization for the group element [33]
\[g=e^{\Sigma a_{i}P_{i}}e^{uJ+vT}, \tag{123}\]
with the generators \(T_{\alpha}=\{T,P_{1},P_{2},J\}\), so the vielbein and ad-invariant metric (50) for this Lie algebra are as follows:
\[(e^{\alpha}{}_{\mu})=\left(\begin{array}{cccc}1&\frac{1}{2}a_{2}&-\frac{1}{ 2}a_{1}&0\\ 0&cosu&sinu&0\\ 0&-sinu&cosu&0\\ 0&0&0&1\end{array}\right)\,\ (g_{\alpha\beta})=\left(\begin{array}{cccc}0&0&0&k_{ 0}\\ 0&k_{0}&0&0\\ 0&0&k_{0}&0\\ k_{0}&0&0&0\end{array}\right), \tag{124}\]
where \(k_{0}\) is a constant. The corresponding components of the generalized complex structure (which satisfy the relations (91)-(98)) are as follows
\[(J^{\alpha}{}_{\beta})=\left(\begin{array}{cccc}a&-ab&b&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&a\end{array}\right)\,\ (Q_{\alpha\beta})=\left(\begin{array}{cccc}0&0&0&1+a^{2}\\ 0&0&0&-ba^{2}-b\\ 0&0&0&0\\ -1-a^{2}&ba^{2}+b&0&0\end{array}\right),\ (P^{\alpha\beta})=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ -1&0&0&0\end{array}\right), \tag{125}\]
where \(a\) and \(b\) are real constants; in order to satisfy the Hermitian condition (22) we set \(a=b=0\). One can construct the sigma model (87) as follows
\[S_{A_{4,10}}=\int dzd\overline{z}[k_{0}(\partial v\overline{\partial}u+ \partial u\overline{\partial}v+\partial a_{1}\overline{\partial}a_{1}+ \partial a_{2}\overline{\partial}a_{2}-\frac{a_{1}}{2}(\partial u\overline{ \partial}a_{2}+\partial a_{2}\overline{\partial}u)\]
\[+\frac{a_{2}}{2}(\partial u\overline{\partial}a_{1}+\partial a_{1}\overline{ \partial}u))+\frac{a_{1}}{2}(k^{\prime}-k^{\prime\prime}k_{0}^{2})(\partial u \overline{\partial}a_{2}-\partial a_{2}\overline{\partial}u)+\frac{a_{2}}{2}(k ^{\prime}-k^{\prime\prime}k_{0}^{2})(\partial a_{1}\overline{\partial}u- \partial u\overline{\partial}a_{1})\]
\[+kk_{0}(\partial a_{1}\overline{\partial}a_{2}-\partial a_{2}\overline{ \partial}a_{1})+(k^{\prime}-k^{\prime\prime}k_{0}^{2})(\partial v\overline{ \partial}u-\partial u\overline{\partial}v)], \tag{126}\]
this model is integrable if we use the sum of the conditions (104), (106), (112), (117) and the relation (118) in (99); we conclude that the following conditions must be satisfied
\[-\lambda_{1}(\lambda_{1}+\lambda_{1}^{\prime})+\frac{k}{2}\lambda_{1}^{\prime}- \lambda_{2}^{\prime}\frac{k_{0}^{2}k^{\prime\prime}}{2}+\lambda_{3}^{\prime}( \frac{k^{\prime\prime}}{2}-\frac{k^{\prime}}{2k_{0}^{2}})+k_{0}\lambda_{2}- \frac{\lambda_{3}}{k_{0}}+\frac{1}{2}(k_{0}\lambda_{2}^{\prime}-\frac{\lambda_ {3}^{\prime}}{k_{0}})=0,\]
\[(k_{0}\lambda^{\prime}_{2}-\frac{\lambda^{\prime}_{3}}{k_{0}})(\lambda_{1}(\lambda_ {1}+\lambda^{\prime}_{1})-\frac{k}{2}\lambda^{\prime}_{1})+\lambda^{\prime}_{1}( \lambda_{1}+\frac{\lambda^{\prime}_{1}}{2})=0,\]
\[(k_{0}\lambda^{\prime}_{2}-\frac{\lambda^{\prime}_{3}}{k_{0}})\lambda_{1}=(k_{0 }\lambda_{2}-\frac{\lambda_{3}}{k_{0}})\lambda^{\prime}_{1}. \tag{127}\]
Now comparing our model with the \(WZW\) model on \({\bf A_{4,10}}\) Lie group [33]8
Footnote 8: Note that our ad-invariant metric (124) is different from [33], so our \(WZW\) model on \({\bf A_{4,10}}\) is different from [33].
\[S_{WZW_{44,10}}=\frac{Kk_{0}}{4\pi}\int dzd\overline{z}[\partial v\overline{ \partial}u+\partial u\overline{\partial}v+\partial a_{1}\overline{\partial}a_ {1}+\partial a_{2}\overline{\partial}a_{2}-\frac{a_{1}}{2}(\partial u \overline{\partial}a_{2}+\partial a_{2}\overline{\partial}u)+\frac{a_{2}}{2 }(\partial u\overline{\partial}a_{1}+\partial a_{1}\overline{\partial}u)\]
\[+\frac{a_{1}}{2}(\partial a_{2}\overline{\partial}u-\partial u\overline{ \partial}a_{2})+\frac{a_{2}}{2}(\partial u\overline{\partial}a_{1}-\partial a _{1}\overline{\partial}u)], \tag{128}\]
one can rewrite the sigma model (126) as the following perturbed \(WZW\) model
\[S=S_{WZW_{44,10}}+\int dzd\overline{z}\{k_{0}(\partial u\overline{\partial}v- \partial v\overline{\partial}u)+kk_{0}(\partial a_{1}\overline{\partial}a_{2 }-\partial a_{2}\overline{\partial}a_{1})\}\;, \tag{129}\]
where \(k^{\prime}-k^{\prime\prime}k_{0}^{2}=-k_{0}\) and \(K=4\pi\). Furthermore, one can show that the models (120) and (126) are conformally invariant up to one loop. The \({\bf A_{4,8}}\) example (120) has only one non-zero component of the \(Ricci\) tensor, \(Ric_{44}=-\frac{1}{2}\), and six non-zero components of the torsion \(H^{\lambda}{}_{\mu\nu}\)
\[H^{1}{}_{21}=\frac{e^{v}(k^{\prime}-kk_{0})}{2k_{0}},H^{1}{}_{32}=\frac{e^{v}k ^{\prime}a}{2k_{0}},H^{2}{}_{42}=\frac{k^{\prime}a}{2k_{0}}\]
\[H^{3}{}_{41}=\frac{(k^{\prime}-kk_{0})}{2k_{0}},H^{3}{}_{43}=-\frac{k^{\prime} a}{2k_{0}},H^{4}{}_{42}=\frac{e^{v}(k^{\prime}-kk_{0})}{2k_{0}}\;, \tag{130}\]
by setting \(k^{\prime}=kk_{0}\), \(ka=\pm 1\) and \(\phi=const\), all the components of the \(\beta\) functions (65) are equal to zero. For the sigma model on the Lie group \({\bf A_{4,10}}\) (126) we have only one non-zero component of the \(Ricci\) tensor, \(Ric_{44}=\frac{1}{2}\), and the non-zero components of the torsion are given as
\[H^{1}{}_{32}=\frac{(k^{\prime}-k^{\prime\prime}k_{0}^{2})}{2k_{0}},H^{1}{}_{42 }=-\frac{a_{1}(k^{\prime}-k^{\prime\prime}k_{0}^{2})}{4k_{0}},H^{1}{}_{43}=- \frac{a_{2}(k^{\prime}-k^{\prime\prime}k_{0}^{2})}{4k_{0}},\]
\[H^{2}{}_{43}=\frac{(k^{\prime}-k^{\prime\prime}k_{0}^{2})}{2k_{0}},H^{3}{}_{42 }=-\frac{(k^{\prime}-k^{\prime\prime}k_{0}^{2})}{2k_{0}}, \tag{131}\]
this model satisfies the \(\beta\)-function equations (65) by setting \(k^{\prime}=k^{\prime\prime}k_{0}^{2}\pm k_{0}\) and \(\phi=const\).
## 6 Perturbed \(WZW\) model with generalized complex structure
As a generalization of the perturbed chiral model (87), we propose a perturbed \(WZW\) model with generalized complex structure defined by the following action
\[S=S_{WZW}+\frac{K}{4\pi}\int dzd\overline{z}e^{\alpha}{}_{\mu}(kg_{\alpha\delta}J^{\delta}{}_{\beta}+k^{\prime}Q_{\alpha\beta}+k^{\prime\prime}g_{\alpha\delta}P^{\delta\sigma}g_{\sigma\beta})e^{\beta}{}_{\nu}\partial x^{\mu}\overline{\partial}x^{\nu}, \tag{132}\]
such that the torsion of the above model is the sum of the torsion of the \(WZW\) action (58), given as
\[H_{\mu\nu\lambda}=Kg_{\alpha\delta}f^{\delta}{}_{\beta\gamma}e^{\alpha}{}_{\mu }e^{\beta}{}_{\nu}e^{\gamma}{}_{\lambda}\;, \tag{133}\]
and the torsion of the perturbation part, which is obtained from (89). To investigate the conditions of integrability for this model we assume that the \(\alpha_{\mu}\) and \(\beta_{\mu}\) matrices are given as follows
\[\alpha_{\mu}=(\lambda_{1}J^{\alpha}{}_{\gamma}+\lambda_{2}P^{\alpha\delta}g_{ \delta\gamma}+\lambda_{3}g^{\alpha\delta}Q_{\delta\gamma}+\lambda_{4}\delta^{ \alpha}{}_{\gamma})e^{\gamma}{}_{\mu}T_{\alpha},\]
\[\mathcal{P}^{t}=-\mathcal{P}\qquad,\qquad\quad\mathcal{Q}^{t}=-\mathcal{Q}, \tag{139}\]
\[J^{2}+\mathcal{P}\mathcal{Q}+I=0, \tag{140}\]
\[(J\mathcal{P})^{t}=-(J\mathcal{P}), \tag{141}\]
\[(J\mathcal{Q})^{t}=-(J\mathcal{Q}), \tag{142}\]
\[[\mathcal{P}(X),\mathcal{P}(Y)]-\mathcal{P}[X,\mathcal{P}(Y)]-\mathcal{P}[ \mathcal{P}(X),Y]=0, \tag{143}\]
\[[\mathcal{P}(X),J(Y)]-[J(X),\mathcal{P}(Y)]=-J^{t}([\mathcal{P}(X),Y]+[X, \mathcal{P}(Y)]), \tag{144}\]
\[[X,Y]-[J(X),J(Y)]+J[J(X),Y]+J[X,J(Y)]+\mathcal{P}[X,\mathcal{Q}(Y)]+\mathcal{P}[ \mathcal{Q}(X),Y]=0, \tag{145}\]
\[[J(X),\mathcal{Q}(Y)]+[\mathcal{Q}(X),J(Y)]+J^{t}([\mathcal{Q}(X),Y]+[X, \mathcal{Q}(Y)])-\mathcal{Q}([J(X),Y]+[X,J(Y)])=0, \tag{146}\]
where the transpose of an operator \(O\) is defined as
\[\forall X,Y\in\mathfrak{g}\;,\;\;\;\;\;<OX,Y>=<X,O^{t}Y>. \tag{147}\]
Furthermore, one can rewrite our sigma model (87) in the following form:
\[S=\int dzd\overline{z}[<g^{-1}\partial g,g^{-1}\overline{\partial}g>+<g^{-1} \partial g,(kJ-k^{\prime}\mathcal{Q}-k^{\prime\prime}\mathcal{P})g^{-1} \overline{\partial}g>], \tag{148}\]
and the integrability conditions (101-117) and (118) (for the model (87)) can be rewritten as the operator condition (99). In this form, one can compare our model with the model presented in [14]. By comparing (148) with the model (22) of [14] (with \(\lambda=0\) in (22)), we have \(Q^{\prime-1}=I+kJ-k^{\prime}\mathcal{Q}-k^{\prime\prime}\mathcal{P}\) and \(P^{\prime}=Q^{\prime t}\) in (22) of [14]9; of course, the comparison of (99) with relation (21) of [14] is very complicated (because of the inverse action on the operator \(Q^{\prime}\)). Here we consider the result of example (**e**) as follows:
Footnote 9: To prevent confusion, here we use the operators \(P^{\prime}\) and \(Q^{\prime}\) instead of the \(P\) and \(Q\) of [14].
\[Q^{\prime-1}=I+kJ-k^{\prime}\mathcal{Q}-k^{\prime\prime}\mathcal{P}=\left( \begin{array}{cccc}1-\frac{k^{\prime}a}{k_{0}}&0&-k-\frac{k^{\prime}}{k_{0}} &0\\ 0&1-\frac{k^{\prime}a}{k_{0}}&0&-k-\frac{k^{\prime}}{k_{0}}\\ k-\frac{k^{\prime}}{k_{0}}&0&1+\frac{k^{\prime}}{k_{0}}&0\\ 0&k-\frac{k^{\prime}}{k_{0}}&0&1+\frac{k^{\prime}}{k_{0}}\end{array}\right), \tag{149}\]
and one can check that the above operators \(Q^{\prime}\) and \(P^{\prime}=Q^{\prime t}\) satisfy relation (21) of [14]. In this manner, our model on a metric Lie algebra only is equivalent to that of [14], such that the operators \(J,\mathcal{P},\mathcal{Q}\) must satisfy the relations of the generalized complex structure (139-146) and the following operator form of the condition (118) (with \(P=0\) and (121)):
\[\lambda_{3}(\lambda_{3}+\lambda_{3}^{\prime})[\mathcal{Q}X,\mathcal{Q}Y]+ \frac{k^{\prime}}{2}\lambda_{3}^{\prime}(\mathcal{Q}^{2}[X,Y]-\mathcal{Q}[X, \mathcal{Q}Y]-\mathcal{Q}[\mathcal{Q}X,Y])=0. \tag{150}\]
One can also carry out this analysis for the perturbed \(WZW\) model (132).
## 7 Conclusion
We obtained conditions for having an integrable deformation of a general sigma model on a manifold equipped with a complex structure and showed that these conditions are automatically satisfied on a Lie group (using the vanishing of the Nijenhuis torsion). We then extended this formalism to models on a manifold (Lie group) equipped with a generalized complex structure. We compared our results with those of Mohammedi [14]. One can investigate similar constructions with the Poisson structure and Poisson-Nijenhuis structure [34] on a Lie group [35], and construct integrable deformations of sigma models with these structures. These works are under investigation.
2309.05271 | AutoFuse: Automatic Fusion Networks for Deformable Medical Image
Registration | Deformable image registration aims to find a dense non-linear spatial
correspondence between a pair of images, which is a crucial step for many
medical tasks such as tumor growth monitoring and population analysis.
Recently, Deep Neural Networks (DNNs) have been widely recognized for their
ability to perform fast end-to-end registration. However, DNN-based
registration needs to explore the spatial information of each image and fuse
this information to characterize spatial correspondence. This raises an
essential question: what is the optimal fusion strategy to characterize spatial
correspondence? Existing fusion strategies (e.g., early fusion, late fusion)
were empirically designed to fuse information by manually defined prior
knowledge, which inevitably constrains the registration performance within the
limits of empirical designs. In this study, we depart from existing
empirically-designed fusion strategies and develop a data-driven fusion
strategy for deformable image registration. To achieve this, we propose an
Automatic Fusion network (AutoFuse) that provides flexibility to fuse
information at many potential locations within the network. A Fusion Gate (FG)
module is also proposed to control how to fuse information at each potential
network location based on training data. Our AutoFuse can automatically
optimize its fusion strategy during training and can be generalizable to both
unsupervised registration (without any labels) and semi-supervised registration
(with weak labels provided for partial training data). Extensive experiments on
two well-benchmarked medical registration tasks (inter- and intra-patient
registration) with eight public datasets show that our AutoFuse outperforms
state-of-the-art unsupervised and semi-supervised registration methods. | Mingyuan Meng, Michael Fulham, Dagan Feng, Lei Bi, Jinman Kim | 2023-09-11T07:05:02Z | http://arxiv.org/abs/2309.05271v1 | # AutoFuse: Automatic Fusion Networks for Deformable Medical Image Registration
###### Abstract
Deformable image registration aims to find a dense non-linear spatial correspondence between a pair of images, which is a crucial step for many medical tasks such as tumor growth monitoring and population analysis. Recently, Deep Neural Networks (DNNs) have been widely recognized for their ability to perform fast end-to-end registration. However, DNN-based registration needs to explore the spatial information of each image and fuse this information to characterize spatial correspondence. This raises an essential question: what is the optimal fusion strategy to characterize spatial correspondence? Existing fusion strategies (e.g., early fusion, late fusion) were empirically designed to fuse information by manually defined prior knowledge, which inevitably constrains the registration performance within the limits of empirical designs. In this study, we depart from existing empirically-designed fusion strategies and develop a data-driven fusion strategy for deformable image registration. To achieve this, we propose an Automatic Fusion network (AutoFuse) that provides flexibility to fuse information at many potential locations within the network. A Fusion Gate (FG) module is also proposed to control how to fuse information at each potential network location based on training data. Our AutoFuse can automatically optimize its fusion strategy during training and can be generalizable to both unsupervised registration (without any labels) and semi-supervised registration (with weak labels provided for partial training data). Extensive experiments on two well-benchmarked medical registration tasks (inter- and intra-patient registration) with eight public datasets show that our AutoFuse outperforms state-of-the-art unsupervised and semi-supervised registration methods.
Keywords: Deformable Image Registration, Data-driven Fusion, Unsupervised Learning, Semi-supervised Learning.
## 1 Introduction
Image registration is a fundamental requirement for medical image analysis and has been an active research focus for decades [1, 2]. Image registration spatially aligns medical images acquired from different patients, time-points, or scanners, which is a crucial step for a variety of medical tasks such as tumor growth monitoring and population analysis [3]. Due to anatomy variations among patients or pathological changes such as tumor growth, medical images usually carry non-linear local deformations, especially for complex organs such as the cerebral cortex in the brain [4]. Therefore, different from the common natural image registration tasks (e.g., panorama stitching [5]) that aim to minimize the global misalignment caused by parallax, medical image registration heavily relies on deformable registration and this motivates the current research focus [2, 6]. For example, many medical image registration studies assume that the images have been affinely aligned after image preprocessing to remove global misalignment and thereby mainly focus on deformable registration with non-linear local deformations [7-16].
Deformable image registration aims to find a dense non-linear spatial correspondence (transformation) between a pair of fixed and moving images. Through the spatial transformation, the moving image can be warped to align with the fixed image. Traditional registration methods usually formulate deformable image registration as a time-consuming iterative optimization problem [17, 18]. Recently, deep registration methods based on Deep Neural Networks (DNNs) have been widely used for their ability to perform fast end-to-end registration [3, 6]. These methods learn a mapping from image pairs to spatial transformations based on training data,
which have shown superior performance on both registration accuracy and speed when compared to traditional registration methods and hence are regarded as state-of-the-art methods [10-15].
Early deep registration methods train DNNs in a fully supervised setting and require ground truth transformations between images as the labels [19, 20]. However, ground truth transformations are unavailable. Therefore, imperfect transformation labels (estimated by traditional methods) or synthetic image pairs (with artificial transformations) are used as alternatives, which inevitably introduces inherited registration errors (from imperfect labels) or human experimental bias (from synthetic data) [14]. To remove the reliance on labels, recent deep registration methods have been developed to use image similarity metrics (e.g., mean square error) to train DNNs in a fully unsupervised setting [7-15]. In addition, weak anatomical labels (e.g., organ segmentation masks) have been used to complement the registration process and have been shown to improve registration performance [7]. Such weak labels enable DNNs to be trained in a semi-supervised setting where both unlabeled and weakly-labeled data are used [21-28].
To achieve end-to-end registration, deep registration methods need to explore the spatial information of each image and fuse this information to characterize spatial correspondence. Registration performance can be limited by poorly designed fusion strategies that hinder the characterization of spatial correspondence between images, while carefully designed fusion strategies can optimize the fusion of spatial information and greatly improve registration performance. This raises an essential question for image registration: _What is the optimal fusion strategy to characterize the spatial correspondence between images?_ In addition, for semi-supervised registration with weak anatomical labels, anatomical segmentation can be jointly performed to improve image registration [21-28]. For this aim, semi-supervised deep registration methods also need to explore task-specific information for each task (registration and segmentation) and fuse this information to leverage the synergy between the two tasks. This raises another essential question: _What is the optimal fusion strategy to leverage the synergy between joint registration and segmentation?_ For the first question, deep registration methods commonly adopt early fusion [7-11], middle fusion [13, 16, 29], or late fusion [30], as shown in Fig. 1(a-c). For the second question, semi-supervised deep registration methods usually adopt loss fusion [21-23], feature fusion [24-26], or input fusion [27, 28], as shown in Fig. 1(d-f). Recent studies designed sophisticated fusion strategies and demonstrated that fusion strategies are a major influential factor to improve image registration [12, 15, 31-34, 51, 52]. Nevertheless, these sophisticated fusion strategies were empirically designed to fuse information by manually defined prior knowledge, which inevitably constrains the registration performance within the limits of empirical designs. The possible search space for fusion strategies is too large to be manually searched and the optimal fusion strategy could vary depending on data (e.g., medical images acquired from different scanners/organs or with different ranges of deformations), which inherently limits the development of empirically-designed fusion strategies.
Fig. 1: Illustration of existing empirically-designed fusion strategies (a-f) and our data-driven fusion strategy (g) in unsupervised and semi-supervised settings. (a) Early fusion: \(I_{f}\) and \(I_{m}\) are concatenated as input. (b) Middle fusion: \(I_{f}\) and \(I_{m}\) enter separate encoders with intermediate features fused. (c) Late fusion: \(I_{f}\) and \(I_{m}\) enter separate networks with resultant features fused. (d) Loss fusion: \(\psi\) and \(S_{f}/S_{m}\) are mutually constrained by joint loss functions. (e) Feature fusion: multi-task networks are used with partial feature shared. (f) Input fusion: \(S_{f}/S_{m}\) are fed as input for registration. (g) Data-driven fusion: fusion strategy is optimized during training. Legend: FG = Fusion Gate (FG) module; \(I_{f}=\) fixed image; \(I_{m}=\) moving image; \(\psi=\) registration output; \(S_{f}/S_{m}=\) segmentation output for \(I_{f}/I_{m}\) (only for semi-supervised registration).
In this study, we depart from the empirically-designed fusion strategies and develop a data-driven fusion strategy for deformable image registration, generalized to both unsupervised and semi-supervised registration. To achieve this, we propose (i) an Automatic Fusion network (AutoFuse) that provides flexibility to fuse information at many potential locations within the network, and (ii) a Fusion Gate (FG) module to control how to fuse information at each potential network location. As shown in Fig. 1(g), the FG modules enable our AutoFuse to optimize its fusion strategy based on training data during training, thus making a fundamental shift from existing empirically-designed fusion strategies to a data-driven fusion strategy. Moreover, our data-driven fusion strategy can be generalized to both Convolutional Neural Network (CNN) and transformer architectures and produce consistent improvements in the both CNN and transformer variants of AutoFuse. To the best of our knowledge, our AutoFuse is the first deep registration method that departs from empirically-designed fusion strategies and introduces a data-driven fusion strategy for both unsupervised and semi-supervised deformable image registration. Experiments on two well-benchmarked medical registration tasks (3D inter-patient brain image registration and 4D intra-patient cardiac image registration) with eight public datasets show that our AutoFuse outperforms the state-of-the-art unsupervised and semi-supervised registration methods.
## 2 Related Work
### Unsupervised Medical Image Registration
Unsupervised registration methods are widely adopted as they do not require any data annotations. Traditional methods usually formulate deformable registration as an iterative optimization problem, which iteratively updates the spatial transformation for each image pair to maximize the similarity metrics between images [17, 18]. To avoid the time-consuming iterative optimization in traditional methods, unsupervised deep registration methods have been proposed [7-15], in which DNNs were globally optimized to produce spatial transformations that maximize the similarity metrics of a set of training data and then were employed on unseen testing data without the need for further optimization.
Most existing unsupervised deep registration methods adopted the basic early, middle, or late fusion strategies [7-11, 13, 29, 30]. Recently, some sophisticated fusion strategies have also been used to improve unsupervised registration [12, 15, 31-34]. Chen et al. [31] proposed to use cross-stitch units [35] to extract and fuse features for affine registration. Also for affine registration, Chen et al. [32] proposed a dual-channel squeeze-fusion-excitation co-attention module to fuse information. For deformable registration, Zhang et al. [12] proposed a Dual Transformer Network (DTN) that adopted both early and middle fusion strategies. Shi et al. [15] used a transformer network (named XMorpher) that includes dual parallel feature extraction networks with information fused by cross-attention. More recently, Ma et al. [33] proposed a Separate Encoding Network (SEN) that includes three parallel encoders to extract features (from separate images and concatenated image pairs) and fuse these features at multiple scales. Chen et al. [34] proposed a dual-stream transformer-based network (named TransMatch) that extracts features through symmetrical dual encoders with self-attention and then adopts cross-attention to realize feature matching and fusion. However, as we have mentioned, these sophisticated fusion strategies are empirically-designed and inherently limited.
### Semi-supervised Medical Image Registration
In addition to fully unsupervised registration, anatomical information has been introduced to improve registration. As anatomical segmentation labels can be used to evaluate whether images are aligned well in anatomy, segmentation labels hence can be regarded as weak labels for registration. Early weakly-supervised deep registration methods leveraged the weak labels to directly supervise the network's training and achieved better performance than their unsupervised counterparts [7, 16, 36]. However, these methods required segmentation labels to be available for all training data and thus cannot leverage the easily accessible unlabeled data as an enhancement for training. To ease the reliance on segmentation labels, semi-supervised deep registration methods were proposed, where only a small number of weak labels together with unlabeled data are required for training [21-28].
Semi-supervised registration methods usually perform joint registration and segmentation to leverage their synergy. For this aim, existing semi-supervised methods commonly adopted loss, feature, or input fusion strategies. Loss fusion is crucial and almost all semi-supervised methods adopted this strategy to some extent [21-28]. With loss fusion, the segmentation outputs can serve as weak anatomical labels to constrain the registration outputs, while the registration outputs can augment segmentation labels to constrain the segmentation outputs. The loss fusion strategy can be adopted alone [21-23] or jointly with the feature fusion strategy [24-26] or input fusion strategy [27, 28]. Recently, there have been sophisticated fusion strategies proposed for semi-supervised registration [51, 52]. Khor et al. [51] proposed a deformable registration network (named AC-DMiR) that realizes anatomically-constrained attention-guided feature fusion to maximize the information flow between the registration and segmentation tasks. Ma et al. [52] proposed a global-local transformation network (GL-Net) that implements both input and loss fusion through a region similarity constraint. However, these fusion strategies for semi-supervised registration are also empirically-designed and inherently limited.
## 3 Method
Image registration aims to find a spatial transformation \(\psi\) that warps a moving image \(I_{m}\) to a fixed image \(I_{f}\), so that the warped image \(I_{m\psi}=I_{m}\circ\psi\) is spatially aligned with the fixed image \(I_{f}\). In this study, we assume that the \(I_{m}\) and \(I_{f}\) are two single-channel, grayscale volumes defined in a 3D spatial domain \(\Omega\subset\mathbb{R}^{3}\), which is consistent with common medical image registration studies [7-16, 33, 34]. The \(\psi\) is parameterized as a diffeomorphic deformation field to ensure the invertibility and topology-preservation of spatial transformations [8, 9, 15, 17]. We parametrized the deformable registration problem as a function \(\mathcal{R}_{\theta}(I_{f},I_{m})=\psi\) using our AutoFuse (detailed in Section 3.1). The \(\theta\) is a set of learnable parameters that can be learned in the fully unsupervised setting (detailed in Section 3.2) or in the semi-supervised setting (detailed in Section 3.3).
### Automatic Fusion Networks (AutoFuse)
Fig. 2 shows the architecture of our AutoFuse, which consists of three branches with a U-Net-style encoder-decoder structure. Two branches, denoted by \(B_{m}\) and \(B_{f}\), extract features from \(I_{m}\) and \(I_{f}\) separately, while the other branch, denoted by \(B_{\textit{fuse}}\), first extracts features from the concatenated \(I_{m}\) and \(I_{f}\) and then fuses the features from \(B_{m}\) and \(B_{f}\) via FG modules (detailed in Section 3.1.1).
Fig. 2: Overview of the proposed Automatic Fusion network (AutoFuse), including the architecture of (a) AutoFuse, (b) Fusion Gate (FG) modules, and (c) Efficient Large Kernel (ELK) blocks. Note that the skip connections of each branch are omitted in this figure for the sake of the clarity. The branches \(B_{m}\) and \(B_{f}\) share the same weights.
Each branch consists of successive Conv blocks and each Conv block is composed of two 3\(\times\)3\(\times\)3 convolutional layers followed by LeakyReLU activation (with parameter 0.2) and instance normalization. The encoder uses average pooling layers to reduce the resolution of feature maps, while the decoder uses upsampling layers to increase the resolution of feature maps. FG modules are embedded after the Conv blocks of \(B_{\mathit{fuse}}\) for feature fusion. Formally, let \(F_{\mathit{m}}^{i}\), \(F_{\mathit{f}}^{i}\), and \(F_{\mathit{fuse}}^{i}\) be the features from the \(i^{th}\) Conv block of \(B_{\mathit{m}}\), \(B_{\mathit{f}}\), and \(B_{\mathit{fuse}}\). Let \(\mathcal{F}_{i}\) be the FG module after the \(i^{th}\) Conv block and \(F_{\mathcal{F}}^{i}\) be the fused features from the \(\mathcal{F}_{i}\). We can derive \(F_{\mathcal{F}}^{i}=\mathcal{F}_{i}(F_{\mathit{m}}^{i},F_{\mathit{f}}^{i},F_{\mathit{fuse}}^{i})\), and the \(F_{\mathcal{F}}^{i}\) will be fed into the next Conv block of \(B_{\mathit{fuse}}\) for later fusion. Skip connections are used between the encoder and decoder of each branch. For \(B_{\mathit{fuse}}\), the outputs of FG modules are propagated to the decoder through skip connections. The \(B_{\mathit{m}}\) and \(B_{\mathit{f}}\) share weights and their kernel numbers are half of those of \(B_{\mathit{fuse}}\). The convolutional kernel numbers used in our experiments are presented in Appendix A.
To obtain the diffeomorphic deformation field \(\psi\), the output of \(B_{\mathit{fuse}}\) is fed into two parallel convolutional layers to produce a mean map \(\mu\) and a variance map \(\Sigma\). Then, a stationary velocity field \(\nu\) is sampled from the \(\mu\) and \(\Sigma\), and is converted into the \(\psi\) through seven steps of scaling and squaring integration [8]. Following [8], the feature maps of \(B_{\mathit{fuse}}\) are upsampled three times in the decoder, and the sampling and integration operations are performed at half of the original scale, which ensures the network can fit into the GPU memory. Accordingly, the FG module is also not used at the original scale.
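As an illustration of the scaling-and-squaring step, the following is a minimal PyTorch sketch of how a stationary velocity field can be integrated into a deformation field by repeated self-composition; the displacement parameterization (in voxel units), the trilinear interpolation settings, and the helper names are assumptions made for illustration rather than the implementation used in AutoFuse.

```python
import torch
import torch.nn.functional as F

def warp(displacement, source):
    """Warp `source` by a dense displacement field (N, 3, D, H, W), in voxels."""
    _, _, d, h, w = displacement.shape
    base = torch.stack(torch.meshgrid(torch.arange(d), torch.arange(h),
                                      torch.arange(w), indexing="ij")).float()
    coords = base.to(displacement.device).unsqueeze(0) + displacement
    scale = torch.tensor([d - 1.0, h - 1.0, w - 1.0],
                         device=displacement.device).view(1, 3, 1, 1, 1)
    grid = 2.0 * coords / scale - 1.0                    # normalize to [-1, 1]
    grid = grid.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]   # grid_sample wants (x, y, z)
    return F.grid_sample(source, grid, align_corners=True)

def scaling_and_squaring(velocity, steps=7):
    """Approximate the group exponential of a stationary velocity field."""
    flow = velocity / (2 ** steps)
    for _ in range(steps):
        flow = flow + warp(flow, flow)                   # compose the flow with itself
    return flow
```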
When our AutoFuse is employed for semi-supervised registration, the outputs of \(B_{\mathit{m}}\) and \(B_{\mathit{f}}\) are fed into a softmax-activated convolutional layer to produce segmentation masks \(S_{\mathit{m}}\) and \(S_{\mathit{f}}\). The \(S_{\mathit{m}}\) and \(S_{\mathit{f}}\) are propagated back and concatenated with the \(I_{\mathit{m}}\) and \(I_{\mathit{f}}\) as the input of \(B_{\mathit{fuse}}\).
#### 3.1.1 Fusion Gate (FG) Module
To selectively fuse the features from different branches, we propose the FG module based on attention mechanisms. As shown in Fig. 2(b), for the FG module \(\mathcal{F}_{i}\), the \(F_{\mathit{m}}^{i}\) and \(F_{\mathit{f}}^{i}\) are first fused as \(F_{\mathit{mf}}^{i}\) using a 3\(\times\)3\(\times\)3 convolutional layer. Then, the \(F_{\mathit{mf}}^{i}\) and \(F_{\mathit{fuse}}^{i}\) are concatenated to learn two adaptive weight maps \(w_{\mathit{mf}}^{i}\) and \(w_{\mathit{fuse}}^{i}\) by two 1\(\times\)1\(\times\)1 convolutional layers and Softmax function. With the \(w_{\mathit{mf}}^{i}\) and \(w_{\mathit{fuse}}^{i}\), the weighted summation of \(F_{\mathit{mf}}^{i}\) and \(F_{\mathit{fuse}}^{i}\) is fed into an Efficient Large Kernel (ELK) block (detailed in Section 3.1.2) for feature refinement. Finally, we can derive \(F_{\mathcal{F}}^{i}=ELK(w_{\mathit{mf}}^{i}F_{\mathit{mf}}^{i}+w_{\mathit{fuse }}^{i}F_{\mathit{fuse}}^{i})\) as the output of \(\mathcal{F}_{i}\).
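A minimal PyTorch sketch of the FG module is given below. The channel dimension of the weight maps, the use of channel concatenation to feed \(F_{m}^{i}\) and \(F_{f}^{i}\) into the 3\(\times\)3\(\times\)3 convolution, and passing the refinement block (an ELK block in AutoFuse, sketched in Section 3.1.2) as a constructor argument are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Fusion Gate: adaptively fuses pairwise features (F_m, F_f) with the
    fusion-branch features F_fuse, then refines the result."""
    def __init__(self, channels, refine=None):
        super().__init__()
        # 3x3x3 convolution that fuses F_m^i and F_f^i into F_mf^i
        self.pair_conv = nn.Conv3d(2 * channels, channels, 3, padding=1)
        # two 1x1x1 convolutions producing logits for the adaptive weight maps
        self.logit_mf = nn.Conv3d(2 * channels, channels, 1)
        self.logit_fuse = nn.Conv3d(2 * channels, channels, 1)
        # refinement block (an ELK block in the paper); identity if not given
        self.refine = refine if refine is not None else nn.Identity()

    def forward(self, f_m, f_f, f_fuse):
        f_mf = self.pair_conv(torch.cat([f_m, f_f], dim=1))
        joint = torch.cat([f_mf, f_fuse], dim=1)
        # softmax over the two sources, so the weight maps sum to 1 voxel-wise
        w = torch.softmax(torch.stack([self.logit_mf(joint),
                                       self.logit_fuse(joint)]), dim=0)
        fused = w[0] * f_mf + w[1] * f_fuse     # adaptive weighted summation
        return self.refine(fused)
```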
Through the proposed FG module, our AutoFuse can automatically optimize its fusion strategy during training. All FG modules serve as gates to control how to fuse information in the AutoFuse. In the unsupervised setting, early, middle, and late fusion can be potentially employed as needed. For example, if all the learned \(w_{\mathit{mf}}^{i}\) are zero maps, the \(F_{\mathcal{F}}^{i}\) will be fully determined by \(F_{\mathit{fuse}}^{i}\) and the AutoFuse will employ early fusion; if the learned \(w_{\mathit{mf}}^{i}\) are one maps in the encoder and zero maps in the decoder, the AutoFuse will employ middle fusion. However, as the learned \(w_{\mathit{mf}}^{i}\) and \(w_{\mathit{fuse}}^{i}\) are not explicitly encouraged to be zero or one maps, the AutoFuse usually will employ all early, middle, and late fusion but impose different weights. Similarly in the semi-supervised setting, loss, feature, and input fusion also can be potentially employed as needed. We expect our AutoFuse can search over a large possible space of fusion strategies and finally find an optimal strategy based on training data.
#### 3.1.2 Efficient Large Kernel (ELK) Block
The ELK block is a memory-efficient variant of the Large Kernel (LK) block proposed by Jia et al. [10]. Jia et al. [10] employed large kernel (5\(\times\)5\(\times\)5) convolution in LK blocks to increase the effective receptive field of a vanilla U-Net [24] and showed that the U-Net with LK blocks (LKU-Net) can outperform the recent state-of-the-art transformer-based registration method, TransMorph [11]. We followed Jia et al.'s study [10] but modified the original LK block into the memory-efficient ELK block. To reduce memory consumption, the original large kernel convolutional layer is replaced by three parallel convolutional layers with large kernels only
in one direction. As shown in Fig. 2(c), there are four parallel convolutional layers with a kernel size of 3\(\times\)3\(\times\)3, 5\(\times\)1\(\times\)1, 1\(\times\)5\(\times\)1, and 1\(\times\)1\(\times\)5 in each ELK block. The outputs of these four parallel layers are concatenated and fed into a 1\(\times\)1\(\times\)1 convolutional layer for integration. An identity shortcut is used between the input and output of each ELK block.
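A matching PyTorch sketch of the ELK block is given below; only the kernel configuration, the 1\(\times\)1\(\times\)1 integration layer, and the identity shortcut are specified above, so the activation choice and its placement are assumptions.

```python
import torch
import torch.nn as nn

class ELKBlock(nn.Module):
    """Efficient Large Kernel block: four parallel convolutions, channel-wise
    concatenation, 1x1x1 integration, and an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv_d = nn.Conv3d(channels, channels, (5, 1, 1), padding=(2, 0, 0))
        self.conv_h = nn.Conv3d(channels, channels, (1, 5, 1), padding=(0, 2, 0))
        self.conv_w = nn.Conv3d(channels, channels, (1, 1, 5), padding=(0, 0, 2))
        self.integrate = nn.Conv3d(4 * channels, channels, 1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        branches = torch.cat([self.conv3(x), self.conv_d(x),
                              self.conv_h(x), self.conv_w(x)], dim=1)
        return self.act(x + self.integrate(branches))  # identity shortcut
```

For example, `FusionGate(32, refine=ELKBlock(32))` reproduces the FG-then-ELK pipeline sketched in Section 3.1.1, under the same assumptions.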
#### 3.1.3 Transformer-based Variant
Transformers have been widely adopted in many medical image applications for their capabilities to capture long-range dependency [53]. Recently, transformers have also been used for image registration and have shown superior performance compared to their CNN counterparts [11]. To explore the generalizability of the proposed data-driven fusion strategy in transformer-based networks, we propose a transformer-based variant of AutoFuse, named AutoFuse-Trans, by replacing its Conv blocks with Swin transformer blocks [54]. The first Conv block of each branch is replaced by a patch embedding layer with a patch size of 2\(\times\)2\(\times\)2 [54], where the image input is converted into sequence embeddings at half of the original image scale and then enters the following Swin transformer blocks. For feature downsampling and upsampling, the average pooling layers and upsampling layers are replaced by patch merging layers and patch expanding layers [55]. The detailed architectural settings, including window size, embedding dimensions, and attention head numbers, are presented in Appendix A.
### Unsupervised Learning
In the unsupervised setting, the learnable parameters \(\theta\) of our AutoFuse are optimized using an unsupervised loss \(\mathcal{L}_{\text{uns}}\) that does not require labels. The \(\mathcal{L}_{\text{uns}}\) consists of two terms \(\mathcal{L}_{sim}\) and \(\mathcal{L}_{reg}\), where the \(\mathcal{L}_{sim}\) is an image similarity term that penalizes the differences between the warped image \(I_{m\cdot\psi}\) and the fixed image \(I_{f}\), while the \(\mathcal{L}_{reg}\) is a regularization term that encourages smooth and invertible diffeomorphic transformations \(\psi\).
For the \(\mathcal{L}_{sim}\), we adopt negative local normalized cross-correlation (NCC), a similarity metric that has been widely used in deformable image registration methods [7-14]. For the \(\mathcal{L}_{reg}\), we first adopt the Kullback-Leibler divergence (KL) between the true and approximate posteriors based on the predicted mean map \(\mu\) and variance map \(\Sigma\)[8], in which a smoothing precision parameter \(\lambda\) is used to balance the registration accuracy and transformation smoothness. In addition, as the \(\psi\) is not invertible at the voxel \(p\) where the Jacobian determinant is negative (i.e., \(|J_{\psi}(p)|\leq 0\)) [38], we also adopt a Jacobian Determinant-based loss (JD) [4] to explicitly penalize the negative Jacobian determinants of \(\psi\). Consequently, the \(\mathcal{L}_{\text{uns}}\) is finally defined as:
\[\mathcal{L}_{\text{uns}}=-NCC(I_{f},I_{m\cdot\psi})+\sigma KL_{\lambda}(\mu,\Sigma)+\mu JD(\psi), \tag{1}\]
where the \(\sigma,\lambda\) and \(\mu\) are three regularization parameters. In the experiments, we only adjusted \(\lambda\) and \(\mu\) but fixed \(\sigma\) as 0.01 to make the value of KL term close to the NCC term.
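For illustration, the composition of Eq. (1) could be sketched as below; `ncc`, `kl_loss`, and `jd_loss` stand for the NCC, KL, and JD terms and are assumed helper functions rather than calls from a specific library.

```python
def unsupervised_loss(warped, fixed, mean_map, var_map, disp,
                      sigma=0.01, lam=50.0, mu=1e-5):
    """L_uns = -NCC(I_f, I_m.psi) + sigma * KL_lambda(mu, Sigma) + mu * JD(psi)."""
    similarity = -ncc(fixed, warped)                      # image similarity term
    regularity = sigma * kl_loss(mean_map, var_map, lam)  # smoothness via the KL term
    folding = mu * jd_loss(disp)                          # penalize negative Jacobian determinants
    return similarity + regularity + folding
```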
### Semi-supervised Learning
In the semi-supervised setting, the learnable parameters \(\theta\) of our AutoFuse are optimized using a semi-supervised loss \(\mathcal{L}_{semi}\) that requires segmentation labels to be available for part of the training data. The \(\mathcal{L}_{semi}\) is defined as:
\[\mathcal{L}_{semi}=\mathcal{L}_{\text{uns}}+\alpha\mathcal{L}_{seg}+\beta \mathcal{L}_{\text{fuse}}, \tag{2}\]
where the \(\mathcal{L}_{seg}\) is a segmentation term that constrains the predicted segmentation masks \(S_{m}\) and \(S_{f}\), while the \(\mathcal{L}_{\text{fuse}}\) is a fusion term that imposes mutual constraints between registration and segmentation (i.e., loss fusion). The \(\alpha\) and \(\beta\) are two balancing parameters that were set to be 1 as default in our experiments.
For the \(\mathcal{L}_{seg}\), we calculate the sum of Dice [39] and Focal losses [40] (denoted by \(FocalDice\) ). Let \(L_{m}\) and \(L_{f}\) be the segmentation labels of \(I_{m}\) and \(I_{f}\). The \(\mathcal{L}_{seg}\) is defined as:
\[\mathcal{L}_{seg}=FocalDice(L_{m},S_{m})+FocalDice\big{(}L_{f},S_{f}\big{)}. \tag{3}\]
For the \(\mathcal{L}_{fuse}\), we warp the \(S_{m}\) and \(L_{m}\) to be \(S_{m\cdot\psi}=S_{m}\circ\psi\) and \(L_{m\cdot\psi}=L_{m}\circ\psi\), and calculate the Dice and Focal losses between the warped and fixed segmentation masks. The \(\mathcal{L}_{fuse}\) is defined as:
\[\mathcal{L}_{fuse}=FocalDice\big{(}L_{f},S_{m\cdot\psi}\big{)}+ FocalDice\big{(}S_{f},L_{m\cdot\psi} \big{)}+FocalDice\big{(}L_{f},L_{m\cdot\psi}\big{)}. \tag{4}\]
The \(\mathcal{L}_{semi}\) can adapt to semi-supervised image pairs that consist of a labeled image and an unlabeled image. For example, if the \(L_{m}\) is unavailable and the \(L_{m}\)-related terms in Eq.(3) and Eq.(4) are invalid (not calculated), the \(L_{f}\) still can serve as a coarse segmentation label to constrain the \(S_{m}\), while the \(S_{m}\) can serve as an anatomical label (with the \(L_{f}\)) to constrain the \(\psi\). With this design, both labeled and unlabeled data can be effectively used for training.
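A sketch of how Eqs. (2)-(4) combine, including the handling of a missing moving-image label described above, is shown below; `focal_dice` and `warp` are assumed helpers, not part of a specific library.

```python
def semi_supervised_loss(loss_uns, s_m, s_f, l_f, psi, l_m=None, alpha=1.0, beta=1.0):
    """L_semi = L_uns + alpha * L_seg + beta * L_fuse (Eq. (2))."""
    seg = focal_dice(l_f, s_f)                      # Eq. (3), fixed-image part
    fuse = focal_dice(l_f, warp(s_m, psi))          # Eq. (4), warped prediction vs. fixed label
    if l_m is not None:                             # moving-image label available
        seg = seg + focal_dice(l_m, s_m)
        fuse = fuse + focal_dice(s_f, warp(l_m, psi)) + focal_dice(l_f, warp(l_m, psi))
    return loss_uns + alpha * seg + beta * fuse
```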
## 4 Experimental Setup
### Datasets and Preprocessing
We evaluated our AutoFuse with two well-benchmarked medical registration tasks (3D inter-patient brain image registration and 4D intra-patient cardiac image registration), which involved a total of eight public medical image datasets:
For inter-patient brain image registration, we adopted seven public 3D brain Magnetic Resonance Imaging (MRI) image datasets that have been widely used for brain image registration evaluation [3, 6]. We collected 414 brain MRI images with segmentation labels from OASIS [41] and randomly split them into 314, 20, and 80 images for training, validation, and testing. We also collected 2,656 unlabeled brain MRI images from four public datasets, ADNI [42], ABIDE [43], ADHD [44], and IXI [45], and used them for training. This results in a large semi-supervised training set consisting of 2,656 unlabeled and 314 labeled images. In addition, we used two public brain MRI datasets with segmentation labels, Mindboggle [46] and Buckner [47], for independent testing. The Mindboggle and Buckner datasets contain 100 and 40 images, which were merely used for testing and fully independent from training and validation. A total of 35, 62, and 110 anatomical structures were segmented as labels in the OASIS, Mindboggle, and Buckner datasets. In the unsupervised setting, the segmentation labels were fully independent from the training process and were merely used for evaluation. In the semi-supervised setting, we preprocessed the OASIS segmentation labels to reduce the label channels for more efficient training, where the symmetric anatomical structures in the left and right brain hemispheres were merged following [21] and this resulted in 19 remaining anatomical structures. Examples of segmentation labels are provided in Fig. 3, which delineate the labeled anatomical structures with different colors. Note that only the preprocessed OASIS labels are available for a small proportion (\(\sim\)10%) of training set, while the registration performance is evaluated with the original segmentation labels on three different testing sets (OASIS, Mindboggle, and Buckner). We followed the existing literatures [7-15] and performed inter-patient registration for evaluation, where we randomly picked 100 image pairs from each of the OASIS, Mindboggle, and Buckner testing sets, resulting in a total of 300 testing image pairs. We performed standard brain MRI preprocessing steps, including brain extraction, intensity normalization, and affine registration, with FreeSurfer [47] and FLIRT [48]. All images were affine-transformed and resampled to align with the MNI-152 brain template [49] with 1mm isotropic voxels, which were then cropped into 144\(\times\)192\(\times\)160 voxels.
Fig. 3: Examples of the segmentation labels used for semi-supervised learning (a) and evaluation on the OASIS (b), Mindboggle (c), and Buckner (d) datasets. The labeled anatomical structures are delineated with different colors. 2D cross-sectional slices are visualized for illustration.
For intra-patient cardiac image registration, we adopted the public ACDC dataset [56] that contains 4D cardiac cine-MRI images of 150 patients. Each 4D cine-MRI image contains tens of 3D MRI frames acquired from different time-points including the End-Diastole (ED) and End-Systole (ES) frames. The ED was defined as the first frame when the mitral valve was closed and the ES was defined as the first frame when the aortic valve was closed. ED and ES delineate the two ends of cardiac cycle and show the largest deformation in a cardiac cycle [57]. The ACDC dataset provides 100 cine-MRI images in the training set and 50 cine-MRI images in the testing set, where we further randomly divided the training set into 90 and 10 cine-MRI images for training and validation. Three anatomical regions, including left ventricular cavity, right ventricular cavity, and myocardium, were segmented as labels in the ED and ES frames of each cine-MRI image, resulting in a semi-supervised dataset consisting of labeled ED/ES frames and unlabeled non-ED/ES frames. The examples of cardiac cine-MRI are provided in Fig. 4, where the three labeled anatomical regions are colored in the ED and ES frames. The segmentation labels were used for semi-supervised learning and evaluation, which were fully independent from the training process in the unsupervised setting. Following [58, 59], we aim to register the ED and ES frames of the same patient. The intra-patient ED and ES frames were registered with each other (ED-to-ES and ES-to-ED), where 100 testing image pairs were derived from the testing set. All cine-MRI frames were resampled with a voxel spacing of 1.5\(\times\)1.5\(\times\)3.15 mm and cropped to 128\(\times\)128\(\times\)32 around the center. The voxel intensity was normalized to range [0, 1] through max-min normalization.
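A minimal NumPy sketch of the intensity normalization and center cropping applied to each cine-MRI frame is given below (the resampling step is omitted); the function name is ours.

```python
import numpy as np

def preprocess_frame(frame, crop=(128, 128, 32)):
    # max-min normalization of voxel intensities to [0, 1]
    frame = (frame - frame.min()) / (frame.max() - frame.min() + 1e-8)
    # crop to 128x128x32 voxels around the center
    starts = [(s - c) // 2 for s, c in zip(frame.shape, crop)]
    slices = tuple(slice(st, st + c) for st, c in zip(starts, crop))
    return frame[slices]
```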
### Implementation Details
We implemented the AutoFuse using PyTorch on an NVIDIA V100 GPU with 32 GB memory. We used an ADAM optimizer with a learning rate of 0.0001 and a batch size of 1. We set \(\lambda=50\) and \(\mu=10^{-5}\) to ensure that the percentage of voxels with negative Jacobian determinants is less than 0.05% (refer to the regularization analysis in Section 5.1.3). Our code is publicly available at [https://github.com/MungoMeng/Registration-AutoFuse](https://github.com/MungoMeng/Registration-AutoFuse).
For inter-patient brain image registration, the AutoFuse was trained for a total of 100,000 iterations with inter-patient image pairs. In the unsupervised setting, the training image pairs were randomly picked from the training set. In the semi-supervised setting, the AutoFuse was first trained for 50,000 iterations using labeled images only and then was trained for another 50,000 iterations with semi-supervised image pairs that consist of a labeled image and an unlabeled image. Validation was performed after every 1,000 training iterations and the model achieving the highest validation result was preserved for final testing.
For intra-patient cardiac image registration, the AutoFuse was trained for a total of 50,000 iterations with intra-patient image pairs. In the unsupervised setting, the AutoFuse was first trained for 40,000 iterations with image pairs randomly picked from the training set (including both ED/ES and non-ED/ES frames), and then was trained for another 10,000 iterations with image pairs consisting of ED/ES frames only. In the semi-supervised setting, the AutoFuse was first trained for 40,000 iterations with semi-supervised image pairs that consist of a labeled ED/ES frame and an unlabeled non-ED/ES frame, and then was also trained for another 10,000 iterations with image pairs consisting of ED/ES frames only.
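The staged schedules above (e.g., the semi-supervised brain-registration schedule) could be organized as in the sketch below; `model`, `sample_labeled_pair`, `sample_semi_pair`, and `validate_and_checkpoint` are placeholders we assume exist elsewhere.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # batch size of 1

for it in range(100_000):
    # semi-supervised brain setting: labeled pairs for the first 50,000 iterations,
    # then pairs consisting of a labeled and an unlabeled image
    batch = sample_labeled_pair() if it < 50_000 else sample_semi_pair()
    loss = model.compute_loss(*batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (it + 1) % 1_000 == 0:
        validate_and_checkpoint(model)  # keep the model with the best validation result
```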
Figure 4: Examples of the cine-MRI images in the ACDC dataset. From left to right are the ED frame, the intermediate frames from ED to ES, and the ES frame. The three labeled anatomical regions are colored in the ED and ES frames. 2D cross-sectional slices are visualized for illustration.
### Comparison Methods
Our AutoFuse was extensively compared to the state-of-the-art registration methods. In the unsupervised setting, two traditional methods and eight deep registration methods were compared. The two traditional methods are SyN [17] and NiftyReg [18], and we ran them using cross-correlation as the similarity measure. The eight deep registration methods are VoxelMorph (VM) [7], Diffeomorphic VoxelMorph (DifVM) [8], LKU-Net [10], TransMorph [11], DTN [12], XMorpher [15], SEN [33], and TransMatch [34]. The VM and DifVM are two commonly benchmarked methods for medical image registration [9-15]. The LKU-Net and TransMorph are two state-of-the-art methods that employ LK blocks and transformers to improve VM. The DTN, XMorpher, SEN, and TransMatch are four recent methods that employ sophisticated fusion strategies, which have been discussed in Section 2.1.
In the semi-supervised setting, six deep registration methods were compared, including weakly-supervised VM (WS-VM) [7], RSegNet [23], JPR-Net [25], PC-Reg [28], AC-DMiR [51], and GL-Net [52]. The WS-VM is the weakly-supervised variant of VM, which was trained only with labeled data. The RSegNet is a semi-supervised method that employs loss fusion alone, while the JPR-Net and PC-Reg further incorporated feature fusion and input fusion, respectively. The AC-DMiR and GL-Net are two recent semi-supervised methods that employ sophisticated fusion strategies as discussed in Section 2.2.
We followed the corresponding references to implement the deep registration methods with two modifications: (i) We adopted NCC as the similarity loss for all deep registration methods for a fair comparison, and (ii) The JPR-Net was modified to adapt to 3D images, where spherical operations were replaced by 3D operations while network topology was unchanged.
### Experimental Settings
Our AutoFuse was compared with the existing unsupervised and semi-supervised registration methods for both brain and cardiac image registration. We adopted standard evaluation metrics for medical image registration [4, 7-16, 21-28]: The registration accuracy was evaluated using the Dice similarity coefficients (DSC) between the warped and fixed segmentation labels, while the smoothness and invertibility of spatial transformations were evaluated using the percentage of Negative Jacobian Determinants (NJD). Generally, a higher DSC and a lower NJD indicate a better registration performance. A two-sided \(P\) value less than 0.05 is considered to indicate a statistically significant difference between the DSC of two methods.
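The two metrics can be computed as in the NumPy sketch below, where DSC is shown for a single binary structure and NJD is the percentage of voxels with a non-positive Jacobian determinant of \(\psi(x)=x+u(x)\); both functions are our own illustrations.

```python
import numpy as np

def dsc(warped_label, fixed_label):
    """Dice similarity coefficient for one binary anatomical structure."""
    inter = np.sum((warped_label > 0) & (fixed_label > 0))
    return 2.0 * inter / (np.sum(warped_label > 0) + np.sum(fixed_label > 0))

def njd_percentage(disp):
    """disp: displacement field u with shape (3, X, Y, Z)."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]   # du_i/dx_j
    jac = np.stack([np.stack(g, axis=0) for g in grads], axis=0)       # (3, 3, X, Y, Z)
    jac = jac + np.eye(3).reshape(3, 3, 1, 1, 1)                       # J_psi = I + du/dx
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return 100.0 * np.mean(det <= 0)
```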
In the unsupervised setting, we performed an ablation study where our AutoFuse was compared to baseline methods that employ the basic early, middle, and late fusion strategies (denoted by EarlyFuse, MidFuse, and LateFuse). The EarlyFuse, MidFuse, and LateFuse have the same encoder-decoder architecture as AutoFuse but use different fusion strategies. We also built a baseline method that excludes FG modules and fuses image features at multiple scales (denoted by MultiFuse). The MultiFuse directly sums \(F^{i}_{mf}\) and \(F^{i}_{fuse}\) without using the FG modules. For a fair comparison, we purposely adjusted the kernel numbers of each baseline method, so that all the baseline methods have similar or higher parameter numbers than the AutoFuse. The detailed architectural settings of these baseline methods are provided in Appendix B. In addition, we performed a regularization analysis on the parameters \(\lambda\) and \(\mu\) to explore the trade-off between registration accuracy (DSC) and transformation invertibility (NJD). For comparison, we also implemented an AutoFuse variant parameterizing the \(\psi\) as a displacement field (denoted by AutoFuse-disp), which excludes the diffeomorphic constraints and the KL loss term in Eq. (1).
In the semi-supervised setting, we also performed an ablation study where feature fusion, input fusion, or FG modules was removed in baseline methods. To achieve this, we removed the relevant connections or layers but did not alter the overall topology of AutoFuse. The detailed architectural settings of these baseline methods are provided in Appendix B. In addition, we performed a semi-supervised learning analysis to explore the impacts of data annotations on registration performance, in which varying numbers of labeled images were used for semi-supervised learning. For the auxiliary segmentation task, the performance of our AutoFuse is reported quantitatively and qualitatively in Appendix C.
Furthermore, we performed a qualitative comparison between our AutoFuse and the existing registration methods in both unsupervised and semi-supervised settings. We also provided a statistical interpretation on the proposed data-driven fusion strategy by calculating the mean values of the adaptive weight maps in FG modules.
## 5 Results
### Unsupervised Registration Evaluation
#### 5.1.1 Comparison with Existing Methods
Table 1 shows the quantitative comparison between the AutoFuse and existing registration methods for inter-patient brain image registration in the unsupervised setting. We report the DSC and NJD results on three testing sets that involve different anatomical structures for evaluation (OASIS, Mindboggle, and Buckner). Our AutoFuse achieved significantly higher DSC results (_P_<0.05) than all the comparison methods across the three testing sets. Our AutoFuse-Trans further improved the DSC results over the original AutoFuse, which outperformed the comparison methods by a larger margin. As a diffeomorphic registration method, our AutoFuse also achieved the best NJD results. The compared diffeomorphic methods (DifVM and DTN) also achieved similar NJD results to our AutoFuse, but their DSC results were significantly worse (_P_<0.05). Compared with diffeomorphic methods, non-diffeomorphic methods (VM, TransMorph, LKU-Net, SEN, XMorpher, and TransMatch) achieved competitive performance in DSC, but their NJD results were obviously worse (~2.0% vs ~0.05%). In addition, the runtime results show that our AutoFuse is much faster than the traditional registration methods (SyN and NiftyReg) while being similar to the existing deep registration methods.
Table 2 shows the quantitative comparison between the AutoFuse and existing registration methods for intra-patient cardiac image registration in the unsupervised setting. For cardiac image registration, diffeomorphic methods (DifVM and DTN) showed advantages in both DSC and NJD. For example, the DifVM achieved better DSC and NJD results than its non-diffeomorphic counterpart (VM). Our AutoFuse and AutoFuse-Trans, as diffeomorphic registration methods, also showed advantages in NJD and achieved significantly higher DSC results (_P_<0.05) than all the comparison methods.
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{OASIS (\%)} & \multicolumn{2}{c|}{Mindboggle (\%)} & \multicolumn{2}{c|}{Buckner (\%)} & \multicolumn{2}{c|}{Runtime (second)} \\ \cline{2-9} & DSC \(\uparrow\) & NJD \(\downarrow\) & DSC \(\uparrow\) & NJD \(\downarrow\) & DSC \(\uparrow\) & NJD \(\downarrow\) & CPU \(\downarrow\) & GPU \(\downarrow\) \\ \hline Before registration & 61.0\({}^{*}\) & / & 34.7\({}^{*}\) & / & 40.6\({}^{*}\) & / & / & / \\ \hline SyN [17] & 76.2\({}^{*}\) & 0.204 & 52.8\({}^{*}\) & 0.216 & 56.4\({}^{*}\) & 0.147 & 3427 & / \\ NiftyReg [18] & 78.7\({}^{*}\) & 0.246 & 56.7\({}^{*}\) & 0.264 & 61.0\({}^{*}\) & 0.188 & 159 & / \\ \hline VM [7] & 77.3\({}^{*}\) & 1.997 & 55.2\({}^{*}\) & 2.532 & 58.9\({}^{*}\) & 2.220 & **2.84** & **0.32** \\ DifVM [8] & 78.4\({}^{*}\) & 0.041 & 52.8\({}^{*}\) & 0.043 & 57.4\({}^{*}\) & 0.039 & 2.92 & 0.34 \\ LKU-Net [10] & 79.3\({}^{*}\) & 1.792 & 57.4\({}^{*}\) & 2.217 & 61.2\({}^{*}\) & 1.992 & 3.34 & 0.42 \\ TransMorph [11] & 79.2\({}^{*}\) & 1.908 & 57.1\({}^{*}\) & 2.400 & 60.8\({}^{*}\) & 2.183 & 3.68 & 0.45 \\ DTN [12] & 78.9\({}^{*}\) & 0.060 & 56.1\({}^{*}\) & 0.060 & 60.1\({}^{*}\) & 0.056 & 3.31 & 0.41 \\ XMorpher [15] & 79.0\({}^{*}\) & 1.688 & 57.0\({}^{*}\) & 2.228 & 61.0\({}^{*}\) & 1.800 & 4.18 & 0.47 \\ SEN [33] & 78.2\({}^{*}\) & 1.976 & 56.2\({}^{*}\) & 2.370 & 60.4\({}^{*}\) & 2.115 & 3.17 & 0.36 \\ TransMatch [34] & 79.8\({}^{*}\) & 1.411 & 57.6\({}^{*}\) & 1.520 & 62.0\({}^{*}\) & 1.306 & 3.06 & 0.35 \\ \hline AutoFuse (ours) & 80.8 & 0.025 & 59.0 & 0.031 & 63.5 & 0.024 & 3.26 & 0.38 \\ AutoFuse-Trans (ours) & **81.3** & **0.024** & **59.8** & **0.029** & **64.1** & **0.022** & 5.87 & 0.65 \\ \hline \end{tabular}
* **Bold**: the best result in each column. *: _P_<0.05, in comparison to AutoFuse. \(\uparrow\): the higher is better. \(\downarrow\): the lower is better.
\end{table}
Table 1: Quantitative comparison for unsupervised brain image registration.
#### 5.1.2 Ablation Study
Table 3 shows the DSC results of the ablation study for inter-patient brain image registration in the unsupervised setting. The NJD results are omitted as all methods adopted the same regularization settings and achieved similar NJD results. The parameter numbers are also reported in Table 3, where the four baseline methods have similar or higher parameter numbers than our AutoFuse. Among the baseline methods, the MultiFuse achieved the highest DSC results, followed by MidFuse, EarlyFuse, and LateFuse. Our AutoFuse outperformed the MultiFuse and achieved significantly higher DSC results (\(P\)<0.05) than all the baseline methods. Note that ELK blocks were not used by AutoFuse in this ablation study for a fair comparison.
#### 5.1.3 Regularization Analysis
Table 4 shows the validation results of the AutoFuse using different regularization settings for inter-patient brain image registration in the unsupervised setting. The AutoFuse-disp with \(\mu\) = 0 did not impose any explicit constraints on the negative Jacobian determinants and obtained the worst NJD. Using the JD loss (set \(\mu=10^{-5}\)) enabled the AutoFuse-disp to achieve a better NJD with a slightly degraded DSC. By incorporating diffeomorphic constraints, our AutoFuse showed the potential to outperform the baseline AutoFuse-disp on both DSC and NJD, but its DSC results were dramatically degraded as the \(\lambda\) increased.
\begin{table}
\begin{tabular}{|c|c c|c c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ACDC (\%)} & \multicolumn{2}{c|}{Runtime (second)} \\ \cline{2-5} & DSC \(\uparrow\) & NJD \(\downarrow\) & CPU \(\downarrow\) & GPU \(\downarrow\) \\ \hline Before registration & 59.0\({}^{*}\) & / & / & / \\ \hline SyN [17] & 74.7\({}^{*}\) & 0.154 & 401 & / \\ \hline VM [7] & 75.4\({}^{*}\) & 0.440 & **0.36** & **0.04** \\ DifVM [8] & 77.3\({}^{*}\) & 0.051 & 0.49 & 0.06 \\ LKU-Net [10] & 77.0\({}^{*}\) & 0.427 & 0.80 & 0.08 \\ TransMorph [11] & 76.9\({}^{*}\) & 0.497 & 0.89 & 0.09 \\ DTN [12] & 77.8\({}^{*}\) & 0.088 & 0.77 & 0.08 \\ XMorpher [15] & 76.5\({}^{*}\) & 0.353 & 1.25 & 0.11 \\ SEN [33] & 75.9\({}^{*}\) & 0.452 & 0.43 & 0.06 \\ TransMatch [34] & 77.0\({}^{*}\) & 0.256 & 0.40 & 0.06 \\ \hline AutoFuse (ours) & 79.6 & **0.036** & 0.72 & 0.07 \\ AutoFuse-Trans (ours) & **80.2** & 0.045 & 1.66 & 0.15 \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison for unsupervised cardiac image registration.
\begin{table}
\begin{tabular}{|c|c|c c c|} \hline Method & Parameter Num & OASIS (\%) & Mindboggle (\%) & Buckner (\%) \\ \hline EarlyFuse & 8.06M & 78.2\({}^{*}\) & 56.0\({}^{*}\) & 60.3\({}^{*}\) \\ MidFuse & 8.17M & 78.3\({}^{*}\) & 56.2\({}^{*}\) & 60.5\({}^{*}\) \\ LateFuse & 8.17M & 77.6\({}^{*}\) & 55.3\({}^{*}\) & 59.4\({}^{*}\) \\ MultiFuse & 9.71M & 78.6\({}^{*}\) & 56.5\({}^{*}\) & 61.1\({}^{*}\) \\ \hline AutoFuse\({}^{\ddagger}\) (ours) & 8.75M & **80.0** & **58.5** & **62.9** \\ \hline \end{tabular}
\end{table}
Table 3: DSC results of the ablation study for unsupervised brain image registration.
The AutoFuse with \(\lambda=50\) and \(\mu=0\) achieved the best DSC result with a NJD \(>\) 1.0%, while the AutoFuse with \(\lambda=400\) and \(\mu=0\) achieved the worst DSC result with a NJD \(<\) 0.05%. Our AutoFuse with \(\lambda=50\) and \(\mu=10^{-5}\) achieved the overall best validation results, which obtained the second-highest DSC with a NJD \(<\) 0.05%. Compared to increasing the \(\lambda\), adding the JD loss (set \(\mu=10^{-5}\)) enabled our AutoFuse to achieve better NJD with a smaller decrease in DSC.
### Semi-supervised Registration Evaluation
#### 5.2.1 Comparison with Existing Methods
Table 5 shows the quantitative comparison between the AutoFuse and existing registration methods for inter-patient brain image registration in the semi-supervised setting. Consistent with the unsupervised results in Table 1, our AutoFuse also achieved the best DSC and NJD results for semi-supervised registration across three testing sets (OASIS, Mindboggle, and Buckner). Compared to the unsupervised setting, our AutoFuse gained 3.7%, 1.2%, and 1.5% DSC improvements (\(P\)\(<\)0.05) in the semi-supervised setting, which enabled our AutoFuse to achieve significantly higher DSC results (\(P\)\(<\)0.05) than the compared semi-supervised registration methods. In addition, our AutoFuse-Trans also improved the DSC results over the original AutoFuse in the semi-supervised setting and thus outperformed the comparison methods by a larger margin. The runtime results show that our AutoFuse required similar runtime to the state-of-the-art semi-supervised deep registration methods.
Table 6 shows the quantitative comparison between the AutoFuse and existing registration methods for intra-patient cardiac image registration in the semi-supervised setting.
\begin{table}
\begin{tabular}{|c|c|c c|} \hline \multicolumn{2}{|c|}{Method} & DSC (\%) \(\uparrow\) & NJD (\%) \(\downarrow\) \\ \hline \multirow{2}{*}{AutoFuse-disp} & \(\mu=0\) & 80.8 & 1.741 \\ & \(\mu=10^{-5}\) & 80.5 & 0.059 \\ \hline AutoFuse (ours) & \(\lambda=50\), \(\mu=0\) & **81.4** & 1.074 \\ \hline \end{tabular}
\end{table}
Table 4: Validation results of the AutoFuse using different regularization settings for unsupervised brain image registration.
Also consistent with the unsupervised results in Table 2, our AutoFuse and AutoFuse-Trans achieved the best DSC and NJD results for intra-patient cardiac image registration in the semi-supervised setting, with significantly higher DSC results (_P_\(<\)0.05) than all the comparison methods. Compared to the unsupervised setting, our AutoFuse gained 6.1% DSC improvement (_P_\(<\)0.05) for cardiac image registration in the semi-supervised setting.
#### 5.2.2 Ablation Study
Table 7 presents the DSC results of the ablation study for inter-patient brain image registration in the semi-supervised setting. The NJD results are omitted as all methods adopted the same regularization settings and achieved similar NJD results. Compared to employing loss fusion alone, incorporating feature and input fusion both improved the DSC results, where feature fusion contributed to larger DSC improvements than input fusion. The baseline method that employed all loss, feature, and input fusion achieved the highest DSC results among baseline methods. Nevertheless, our AutoFuse (the last row in Table 7) achieved significantly higher DSC results (_P_\(<\)0.05) than all the baseline methods. Also, ELK blocks were not used in this ablation study for a fair comparison.
#### 5.2.3 Semi-supervised Learning Analysis
Fig. 5 delineates the DSC results of the AutoFuse trained with different numbers of labeled images for inter-patient brain image registration in the semi-supervised setting. A total of 2656 unlabeled images and 314 labeled images (from OASIS) were used in this analysis, where different numbers of labeled images with or without unlabeled images were used to train our AutoFuse. As shown in Fig. 5, leveraging both labeled and unlabeled images enabled higher DSC results than leveraging unlabeled images alone, in which the DSC results became higher as the number of labeled images increased. In addition, compared to using labeled images alone, leveraging both unlabeled and labeled images also enabled our AutoFuse to achieve better DSC results.
\begin{table}
\begin{tabular}{|c|c c|c c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ACDC (\%)} & \multicolumn{2}{c|}{Runtime (second)} \\ \cline{2-5} & DSC \(\uparrow\) & NJD \(\downarrow\) & CPU \(\downarrow\) & GPU \(\downarrow\) \\ \hline Before registration & 59.0\({}^{*}\) & / & / & / \\ \hline WS-VM [7] & 79.8\({}^{*}\) & 0.427 & **0.36** & **0.04** \\ RSegNet [23] & 83.0\({}^{*}\) & 0.089 & 0.45 & 0.05 \\ JPR-Net [25] & 82.6\({}^{*}\) & 0.442 & 0.52 & 0.06 \\ PC-Reg [28] & 82.5\({}^{*}\) & 0.117 & 1.36 & 0.14 \\ AC-DMiR [51] & 82.9\({}^{*}\) & 0.419 & 1.57 & 0.15 \\ GL-Net [52] & 83.2\({}^{*}\) & 0.385 & 1.24 & 0.13 \\ \hline AutoFuse (ours) & 85.7 & 0.062 & 1.02 & 0.12 \\ AutoFuse-Trans (ours) & **86.2** & **0.047** & 2.21 & 0.23 \\ \hline \end{tabular}
\end{table}
Table 6: Quantitative comparison for semi-supervised cardiac image registration.
\begin{table}
\begin{tabular}{|c c c c|c c c|} \hline Loss fusion & Feature fusion & Input fusion & Fusion Gate\({}^{1}\) & OASIS & Mindboggle & Buckner \\ \hline \(\surd\) & \(\times\) & \(\times\) & \(\times\) & 0.805\({}^{*}\) & 0.566\({}^{*}\) & 0.611\({}^{*}\) \\ \(\surd\) & \(\surd\) & \(\times\) & \(\times\) & 0.818\({}^{*}\) & 0.576\({}^{*}\) & 0.623\({}^{*}\) \\ \(\surd\) & \(\times\) & \(\surd\) & \(\times\) & 0.811\({}^{*}\) & 0.571\({}^{*}\) & 0.618\({}^{*}\) \\ \(\surd\) & \(\surd\) & \(\surd\) & \(\times\) & 0.820\({}^{*}\) & 0.580\({}^{*}\) & 0.626\({}^{*}\) \\ \hline \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & **0.837** & **0.595** & **0.645** \\ \hline \end{tabular}
\end{table}
Table 7: DSC results of the ablation study for semi-supervised brain image registration.
### Qualitative Comparison
Fig. 6 and Fig. 7 present the qualitative comparison between the AutoFuse and existing registration methods for brain and cardiac image registration, in which the AutoFuse-Un and AutoFuse-Semi denote the AutoFuse in the unsupervised and semi-supervised settings. As shown in Fig. 6 and Fig. 7, compared to the existing unsupervised and semi-supervised registration methods, the results produced by our AutoFuse are more consistent with the fixed image, resulting in cleaner error maps for both benchmark tasks.
### Model Interpretation
Table 8 shows the mean values of the adaptive weight maps \(w^{i}_{\textit{fuse}}\) in FG modules, where the mean values of \(w^{i}_{\textit{mf}}\) are omitted as the sum of \(w^{i}_{\textit{mf}}\) and \(w^{i}_{\textit{fuse}}\) is equal to 1. The mean values are 0.5 initially and then are optimized during training, which could be regarded as indicators representing the ratio of information that each FG module fuses from different sources (i.e., the \(B_{m}\)/\(B_{f}\) branches or the \(B_{\textit{fuse}}\) branch). For example, a mean value of \(w^{i}_{\textit{fuse}}\) lower than 0.5 indicates that the FG module \(\mathcal{F}_{i}\) tends to fuse less information (\(<\)50%) from the \(F^{i}_{\textit{fuse}}\) in \(B_{\textit{fuse}}\). As shown in Table 8, our AutoFuse trained with the same training data for brain image registration was evaluated on different testing sets and performed subtly different fusion behaviors for each testing set during inference. In addition, our AutoFuse tended to fuse less information from the \(B_{\textit{fuse}}\) at the FG modules with lower feature resolutions.
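The Table 8 statistics can be reproduced by averaging each learned \(w^{i}_{\textit{fuse}}\) map over a testing set during inference, as in the sketch below; `fg_weight_maps` is an assumed hook that returns the weight maps of all FG modules for one image pair.

```python
import numpy as np

def mean_fusion_weights(model, test_pairs):
    totals, count = None, 0
    for moving, fixed in test_pairs:
        maps = fg_weight_maps(model, moving, fixed)          # list of w_fuse^i volumes
        means = np.array([float(m.mean()) for m in maps])    # one mean value per FG module
        totals = means if totals is None else totals + means
        count += 1
    return totals / count   # averaged over the testing set, as reported in Table 8
```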
## 6 Discussion
Our main findings are (i) Our AutoFuse outperformed the state-of-the-art unsupervised and semi-supervised registration methods across two benchmark tasks of brain and cardiac image registration,
\begin{table}
\begin{tabular}{|c|c c c c c c c|} \hline Testing set & \(w^{2}_{\textit{fuse}}\) & \(w^{3}_{\textit{fuse}}\) & \(w^{4}_{\textit{fuse}}\) & \(w^{5}_{\textit{fuse}}\) & \(w^{6}_{\textit{fuse}}\) & \(w^{7}_{\textit{fuse}}\) & \(w^{8}_{\textit{fuse}}\) \\ \hline OASIS & 0.546 & 0.384 & 0.212 & 0.163 & 0.187 & 0.322 & 0.507 \\ \hline Mindboggle & 0.559 & 0.375 & 0.205 & 0.184 & 0.206 & 0.331 & 0.518 \\ \hline Buckner & 0.548 & 0.388 & 0.211 & 0.166 & 0.182 & 0.315 & 0.510 \\ \hline \end{tabular}
\end{table}
Table 8: The mean value of the adaptive weight maps in FG modules during inference on different testing sets.
Figure 7: Qualitative comparison for unsupervised (upper row) and semi-supervised (bottom row) cardiac image registration. The exemplified image pair is obtained from the ACDC testing set with the labeled anatomical regions in color. Below each image is an error map that shows the differences in segmentation labels between the corresponding image and the fixed image. A cleaner map indicates a better registration. The arrows highlight the places where our AutoFuse outperforms the existing registration methods.
(ii) Our data-driven fusion strategy outperformed existing empirically-designed fusion strategies in both the unsupervised and semi-supervised settings, (iii) Our AutoFuse can leverage both labeled and unlabeled training data to improve the registration performance through semi-supervised learning, and (iv) Our AutoFuse can learn an adaptive fusion strategy based on training data and showed good generalizability to different unseen testing datasets by self-adapting its fusion strategy during inference.
Both the quantitative and qualitative comparisons on two benchmark tasks demonstrate that our AutoFuse outperformed the state-of-the-art unsupervised and semi-supervised registration methods, including the recent registration methods that employ sophisticated empirically-designed fusion strategies (e.g., TransMatch [34], AC-DMiR [51], etc.). In the quantitative comparison for brain image registration (Table 1 and Table 5), the Mindboggle and Buckner datasets are more difficult than the OASIS dataset as they were used for independent testing and included more anatomical structures for evaluation. This incurred lower DSC results before/after registration on the Mindboggle and Buckner datasets. Moreover, there usually exists a trade-off between DSC and NJD in inter-patient brain image registration [8, 9]. Imposing diffeomorphic constraints on spatial transformations can improve smoothness and invertibility, but this unavoidably limits the flexibility of spatial transformations and tends to degrade registration accuracy. Nevertheless, our AutoFuse, as a diffeomorphic registration method, achieved both the best DSC and NJD results, which demonstrates that our AutoFuse can perform accurate registration with highly smooth and invertible spatial transformations. In the quantitative comparison for cardiac image registration (Table 2 and Table 6), diffeomorphic registration methods showed advantages as intra-patient cardiac images carry invertible deformations without topology corruption. As a diffeomorphic method, our AutoFuse achieved better registration performance than all the comparison methods, including the compared diffeomorphic registration methods (DifVM and DTN). In addition, we found that leveraging transformer blocks in the AutoFuse produced consistent improvements for both unsupervised and semi-supervised registration on two benchmark tasks, which suggests that our data-driven fusion strategy can be generalizable to both CNN and transformer architectures.
In the ablation study for unsupervised registration (Table 3), the LateFuse achieved the worst DSC results, which suggests that late fusion is insufficient to characterize the complex spatial correspondence between images. The EarlyFuse and MidFuse fuse information at earlier stages and thus achieved better DSC results than the LateFuse. The MultiFuse fused information at all scales, which enabled it to achieve better DSC results than other baseline methods. Nevertheless, the MultiFuse is still limited by an empirically-designed fusion strategy that restricts the information fusion to manually-defined prior knowledge. Our AutoFuse leveraged FG modules to realize a data-driven fusion and thus achieved the best DSC results among all the methods. It should be noted that, for a fair comparison, we purposely adjusted each baseline method, so that they have similar or higher parameter numbers than AutoFuse (refer to Appendix B for detailed architectural settings). Moreover, ELK blocks were not used in this ablation study. These results suggest that the improvements of AutoFuse did not result from the use of extra parameters or ELK blocks, and our data-driven fusion strategy contributed to these improvements while not adding extra parameters.
In the ablation study for semi-supervised registration (Table 7), our AutoFuse also achieved better DSC results than all the baseline methods that employ the existing loss, feature, and input fusion strategies. ELK blocks were not used in this ablation study for a fair comparison and all the baseline methods share the same three-branch encoder-decoder architecture as the AutoFuse (detailed in Appendix B). These results further suggest that the improvements of AutoFuse indeed resulted from the use of our data-driven fusion strategy, and our data-driven fusion strategy outperformed the existing loss, feature, and input fusion strategies and their combinations in the semi-supervised setting. We also found that, compared to the unsupervised setting, our AutoFuse achieved significantly higher DSC results in the semi-supervised setting, which demonstrates that our AutoFuse can effectively leverage the anatomical information in segmentation labels to improve registration. The semi-supervised learning analysis (Fig. 5) also validates this finding, which shows that leveraging both labeled and unlabeled training data enabled our AutoFuse to achieve better registration performance than using labeled or unlabeled training data alone.
In addition, the quantitative comparison for brain image registration (Table 1 and Table 5) shows that our AutoFuse was well-generalized to two independent testing sets (Buckner and Mindboggle) and produced consistent improvements. This is attributed to
the fact that our AutoFuse can learn an adaptive fusion strategy based on training data. As shown in the model interpretation (Table 8), the AutoFuse adapted its fusion strategy for different testing sets during inference. Furthermore, we identified that the AutoFuse-learned fusion strategy can provide insights to facilitate the understanding of fusion strategy design. For example, as shown in Table 8, the trained AutoFuse tended to fuse less information from the fusion branch (\(B_{\textit{fuse}}\)) and grab more information from the two feature extraction branches (\(B_{m}\) and \(B_{f}\)) at the FG modules with lower feature resolutions. This reveals an insight that the afore-fused inter-image spatial information (in the fusion branch) has deteriorated after downsampling operations and, therefore, the unfused intra-image spatial information (in the two feature extraction branches) is needed to rebuild the spatial correspondence between images. This insight is consistent with the recent sophisticated fusion strategies [33, 34], where the spatial information of each image was extracted via separate encoders and then fused at multiple scales to explore their spatial correspondence. Nevertheless, compared to these empirically-designed fusion strategies, our data-driven fusion strategy provides more flexibility to optimize the fusion strategy based on data, where the information can be fused using the learned weight maps at each scale.
Our study has a few limitations. First, by using a data-driven fusion strategy, we expect that our AutoFuse can search over a large search space to find the optimal fusion strategy based on training data. However, despite the current search space being vast and including commonly-adopted fusion strategies, this study mainly focused on the optimization of fusion location (i.e., where to fuse the information within the network?), while the fusion operations were not fully investigated. In our future study, we will investigate further inclusion of other fusion operations (e.g., cross-attention transformers [15, 34]) into the search space. Moreover, this study mainly focused on brain and cardiac image registration. Our AutoFuse can be further validated with other registration applications (e.g., intra-patient lung registration and glioma registration [60]).
## 7 Conclusion
In this study, we have outlined a data-driven fusion strategy in an Automatic Fusion Network (AutoFuse) for deformable image registration. Unlike existing deep registration methods that adopt empirically-designed fusion strategies, our AutoFuse employs Fusion Gate (FG) modules to control the information fusion, which optimizes its fusion strategy based on training data for both unsupervised and semi-supervised registration. Extensive experiments on two well-benchmarked medical registration tasks (inter-patient brain image registration and intra-patient cardiac image registration) show that our AutoFuse outperforms state-of-the-art unsupervised and semi-supervised registration methods on both registration accuracy and transformation invertibility.
## Appendix A Architecture Details
Table A1 presents the kernel numbers of AutoFuse used in the experiments. In the ablation studies, ELK blocks were not used while other kernel numbers were unchanged. In addition, Table A2 presents the architectural setting of AutoFuse-Trans used in the experiments, including embedding dimensions, attention head numbers, and window size. These settings were empirically chosen to fit into our current GPU memory, which could be adjusted to fit into other computing devices.
\begin{table}
\begin{tabular}{|c|c c c c c c c c c|} \hline \(i\) = & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \(F_{m}^{i}\) / \(F_{f}^{i}\) & 16 & 32 & 32 & 64 & 64 & 64 & 32 & 32 & / \\ \hline \(F_{\textit{fuse}}^{i}\) & 32 & 64 & 64 & 128 & 128 & 128 & 64 & 64 & 64 \\ \hline \(F_{\mathcal{F}}^{i}\) & / & 64 & 64 & 128 & 128 & 128 & 64 & 64 & / \\ \hline ELK & / & 32 & 32 & 64 & 64 & 64 & 32 & 32 & / \\ \hline \end{tabular}
* The last row (ELK) presents the kernel numbers of the four parallel convolutional layers in each ELK block.
\end{table}
Table A1: Kernel numbers of the AutoFuse used in the experiments.
## Appendix B Baseline Methods in Ablation Studies
Fig. A1 illustrates the architecture of the baseline methods used in the ablation studies, including eight baseline methods in the unsupervised and semi-supervised settings. The kernel numbers of each baseline method are also shown in Fig. A1.
## Appendix C Segmentation Performance
Fig. A2 presents the segmentation masks predicted by our AutoFuse in the semi-supervised setting, in which the predicted segmentation masks are highly consistent with the ground-truth segmentation labels. Quantitatively, our AutoFuse achieved DSCs of 0.886 and 0.863 for segmentation on the OASIS and ACDC testing sets. These segmentation results imply that our AutoFuse can well identify the anatomical information in images and leverage this information to improve registration.
## Acknowledgements
This work was supported by Australian Research Council (ARC) under Grant DP200103748. The computations in this paper were run on the \(\pi\) 2.0 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.
2309.15062 | Fine structure splitting cancellation in highly asymmetric InAs/InP droplet epitaxy quantum dots | We find the single exciton's fine structure splitting (FSS), which splits its degenerate ground state manifold into singlets, nearly vanishes in highly asymmetric quantum dots due to the cancellation of splitting effects with markedly different origin. The dots simulated are those that emerge on top of etch pits through the droplet epitaxy growth process; these etch pit dots break square ($C_{4v}$) spatial symmetry, which has been previously associated with small FSS. Configuration interaction calculations predict a vanishing FSS at a specific finite etch pit displacement from the center of the dot, for a structure far from square symmetry. We thus predict that highly asymmetric quantum dots may still display negligible fine structure splitting, providing new avenues for high-fidelity generation of indistinguishable, polarization entangled photon pairs on demand. | N. R. S. van Venrooij, A. R. da Cruz, R. S. R. Gajjella, P. M. Koenraad, Craig E. Pryor, Michael E. Flatté | 2023-09-26T16:45:03Z | http://arxiv.org/abs/2309.15062v1 |
# Fine structure splitting cancellation in highly asymmetric InAs/InP droplet epitaxy quantum dots
###### Abstract
We find the single exciton's fine structure splitting (FSS), which splits its degenerate ground state manifold into singlets, nearly vanishes in highly asymmetric quantum dots due to the cancellation of splitting effects with markedly different origin. The dots simulated are those that emerge on top of etch pits through the droplet epitaxy growth process; these etch pit dots break square (\(C_{4v}\)) spatial symmetry, which has been previously associated with small FSS. Configuration interaction calculations predict a vanishing FSS at a specific finite etch pit displacement from the center of the dot, for a structure far from square symmetry. We thus predict that highly asymmetric quantum dots may still display negligible fine structure splitting, providing new avenues for high-fidelity generation of indistinguishable, polarization entangled photon pairs on demand.
Optically-active quantum dots embedded in a solid-state matrix, which enables gate control (_e.g._ via strain tuning), can provide on-demand emission of indistinguishable, polarization-entangled photon pairs as well as other elements of quantum technologies[1; 2; 3; 4]. The fidelity of polarization-entangled photon pairs emitted from the dot, however, depends on the energetic splitting (so-called fine-structure splitting, or FSS) between two "bright" exciton states[1; 5; 6]. Lowering the dot symmetry through growth kinetics from square (\(C_{4v}\)) to asymmetric (\(C_{2v}\)), combined with the electron-hole exchange interaction[7; 8], commonly provides the main source of this splitting [2; 7]. Stranski-Krastanov (SK) growth[9] relies on surface strain to form quantum dots, therefore producing dots with highly elongated bases and large FSS [10]. Some growth techniques, such as droplet epitaxy (DE)[11], regularly produce embedded quantum dots with near \(C_{4v}\) symmetry [10; 12; 13]; many such dots are more symmetric and have smaller FSS than their SK counterparts [10; 12]. However, DE growth suffers from the formation of etch pits: secondary structures at the base of the dot [12; 14; 15] which can break the structural symmetry of the dot and potentially increase the FSS. The complex interaction of the quantum dot structure, exchange integrals, and the electron and hole wavefunctions forming the exciton produces several competing effects that have precise names within the literature[16; 17], including long-range exchange, short-range exchange, and band mixing terms; few calculations have attempted to include all relevant terms in the FSS calculation or discuss their interplay.
Here we identify an unexpected cancellation between FSS terms, denoted in the literature as (i) bulk band mixing, (ii) electric dipole, and (iii) short-range exchange, that emerge from the Hartree-Fock Hamiltonian evaluated to second order in the electron-hole spatial separation. As a consequence certain highly asymmetric dots exhibit negligible fine structure splitting. Our theoretical model utilizes an eight-band \(\mathbf{k}\cdot\mathbf{p}\) envelope function theory to calculate the bound electron and hole wavefunctions for realistic quantum dot geometries[18]. A configuration interaction (CI) calculation built from these states generates the single exciton energies and wavefunctions[19]. We find that simple etch pit structures break the exciton wavefunction symmetry and usually increase the FSS, however certain highly asymmetric etch pit positions may be beneficial and reduce the FSS to near zero.
A schematic of the simulated quantum dot and etch pit is in Fig. 1; the dot has a base length of 24 nm, a height of 7 nm, facets on the (101), (\(\bar{1}01\)), (011) and (0\(\bar{1}1\)) planes, and a diagonal baselength parallel to the (001) plane, coinciding with inferences from scanning tunneling microscopy (STM) measurements[15]. Because this work is primarily focused on the effects of etch pits on the excitonic fine structure we consider truncated square base quantum dots and neglect piezoelectric effects. For such dots all the \(C_{4v}\)-symmetry-breaking effects are directly related to the positioning of the etch pit. The etch pit shape was a truncated pyramid[20] with a base length of 8 nm and a height of 3 nm. The truncation of the pyramid was introduced due to insufficiently definite data on the etch pit shape, because the shape of a truncated pyramid is easier to parameterize, and as it roughly approaches the etch pit shape measured experimentally using cross sectional STM [15]. The quantum dot and etch pit geometries are projected on a cubic grid with a grid size of 1 nm\({}^{3}\). To study the effect that the etch pit position has on the quantum dot fine structure the etch pit was shifted along the diagonal to the corner of the quantum dot base in increments of \(\sqrt{2}\) nm.
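A NumPy sketch of how such a geometry can be rasterized on the 1 nm\({}^{3}\) grid is given below; the 45-degree facet slopes of the etch pit and the particular 3\(\sqrt{2}\) nm displacement are illustrative assumptions.

```python
import numpy as np

def pyramid_mask(X, Y, Z, center, base, height, sign=+1):
    """Truncated pyramid with 45-degree sidewalls; sign=-1 gives a pit below z=0."""
    h = sign * Z
    inside = (h >= 0) & (h <= height)
    half = base / 2.0 - h                      # cross-section shrinks with height
    return inside & (np.abs(X - center[0]) <= half) & (np.abs(Y - center[1]) <= half)

coords = np.arange(40) - 20.0                  # 1 nm grid spanning the dot base
X, Y, Z = np.meshgrid(coords, coords, np.arange(-4, 8), indexing="ij")
shift = 3 * np.sqrt(2)                         # etch-pit displacement along the base diagonal (nm)
pit_center = (shift / np.sqrt(2), shift / np.sqrt(2))
dot = pyramid_mask(X, Y, Z, (0.0, 0.0), base=24, height=7, sign=+1)   # 24 nm base, 7 nm tall
pit = pyramid_mask(X, Y, Z, pit_center, base=8, height=3, sign=-1)    # 8 nm base, 3 nm deep
```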
The lowest energy single particle excited states of the dot correspond to adding an electron to the conduction band ("electron") or removing an electron from the valence band ("hole"). The zone-center Bloch functions of the host material are a complete set of states, hence any conduction or valence band states at finite crystal momentum can be expressed as linear combinations of them. The dot's single particle states are calculated using an eight band \(k\cdot p\) envelope model described in Ref. [18]. Discrete states in the quantum dot are expressed in terms of spatially-varying coefficients (envelope functions) of the Bloch functions associated with the bulk conduction and valence zone-center Bloch functions (below referred to as conduction or valence contributions). Each of these electron and hole states has two spin orientations, and these four states form the smallest basis for a lowest-energy exciton manifold. In the absence of spin-dependent effects the electron and hole spin degeneracy would produce four degenerate excitons at zero magnetic field. However, these four states are split through electronic interactions, which are dominated by the exchange interaction. We will thus focus on the four nondegenerate lowest-energy states of a single exciton, including the two spin states of the electron and hole constituents but excluding the biexciton and charged exciton states[1; 19].
Figure 1: Schematic diagram of the quantum dot (red) and etch pit (blue). Both structures are discretized on a grid with grid spacings of \(1\times 1\times 1\) nm\({}^{3}\).
We orient our discussion of symmetry breaking terms relative to the highest-symmetry case: spherical quantum dots. These have a spin degenerate lowest-energy electron state (\(S=1/2\)) and a four-fold degenerate lowest-energy hole state described by total angular momentum (spin and orbit) \(J=3/2\)[21; 22]. Without spin-dependent effects this electron and hole degeneracy produces eight exciton states, corresponding to \(J=2\) and \(J=1\). The symmetry of DE quantum dots is much lower; ideal DE quantum dots are truncated pyramids with a perfectly square base, described by \(C_{4v}\) symmetry, which breaks the four fold degeneracy of the lowest-energy hole. Depending on the quantum dot dimensions either the heavy hole (HH) or the light hole (LH) has lower energy. It is reported that short and wide quantum dots have the HH close to the band gap whereas tall and narrow dots have the LH close to the band gap [23]. This work will solely focus on the latter, which are more commonly grown.
These ideal DE quantum dots, in the presence of electron-electron interaction, exhibit four nondegenerate exciton eigenstates composed out of combinations of the conduction band electrons and HH-band dominant holes. Among these excitons there is a (near) degenerate high energy pair with total angular momentum corresponding to \(J\approx 1\) and a (near) degenerate low energy pair with total angular momentum corresponding to \(J\approx 2\). The order occurs because the electron-hole exchange interaction becomes repulsive as \(J\to 1\) and attractive as \(J\to 2\). These total angular momentum values imply that all excitons have allowed optical transitions, however the oscillator strengths of the high energy excitons greatly exceed those of the low energy excitons, and thus the high pair is often denoted as the bright pair whereas the low energy pair is referred to as the dim pair. Self assembled quantum dots will always have some effective elongation in one of the base diagonals due to strain-induced effects like piezoelectric fields [7]. This lowering of the symmetry further breaks the degeneracy of the high energy excitons and introduces a fine structure splitting.
The electron and hole wavefunctions in this study are computed using strain-dependent eight band envelope function theory on a real space grid. The QD electron and hole states can be written as a product between Bloch waves [\(u(\mathbf{r})\)] and spatially-varying envelope functions \(\{F(\mathbf{r}),G(\mathbf{r})\}\). The envelopes themselves are approximately constant within one unit cell of the grid and depend on the discrete grid coordinate \(\mathbf{R}\). The Bloch functions vary over a unit cell and are periodic, so they depend solely on the continuous coordinate \(\mathbf{\bar{r}}\) within a unit cell. The position vector may then be written as \(\mathbf{r}=\mathbf{R}+\mathbf{\bar{r}}\). We then compute the confined electron and hole states, which depend on the composition of the quantum dot and its geometry. The electron and hole wavefunctions are
\[\psi_{e}(\mathbf{R},\mathbf{\bar{r}}) =\sum_{i=1}^{8}F_{i}(\mathbf{R})u_{i}(\mathbf{\bar{r}}), \tag{1}\] \[\psi_{h}(\mathbf{R},\mathbf{\bar{r}}) =\sum_{j=1}^{8}G_{j}(\mathbf{R})u_{j}(\mathbf{\bar{r}}),\]
with \(\psi_{e}\) and \(\psi_{h}\) the electron and hole wavefunctions respectively, \(F_{i}\) the electron envelope functions, \(G_{j}\) the hole envelope functions, and \(u_{i}\) the Bloch functions corresponding to each band (using the basis of Ref. [24], see supplementary material[25])[26; 27; 28; 29]. To calculate the eigenenergy of an exciton and to account for the antisymmetrization requirement of a two-body fermionic wavefunction, the wavefunctions in Eq. (1) are combined into the two-particle Slater determinants [30; 19]
\[\Psi_{\ell}(\mathbf{R}_{1},\mathbf{\bar{r}}_{1},\mathbf{R}_{2},\mathbf{\bar{r}}_{2})=\frac{1}{\sqrt{2}}\Big{[}\psi_{e}(\mathbf{R}_{1},\mathbf{\bar{r}}_{1})\psi_{h}(\mathbf{R}_{2},\mathbf{\bar{r}}_{2})-\psi_{e}(\mathbf{R}_{2},\mathbf{\bar{r}}_{2})\psi_{h}(\mathbf{R}_{1},\mathbf{\bar{r}}_{1})\Big{]}=\frac{1}{\sqrt{2}}\sum_{i,j=1}^{8}\Big{[}F_{i}(\mathbf{R}_{1})u_{i}(\mathbf{\bar{r}}_{1})G_{j}(\mathbf{R}_{2})u_{j}(\mathbf{\bar{r}}_{2})-F_{i}(\mathbf{R}_{2})u_{i}(\mathbf{\bar{r}}_{2})G_{j}(\mathbf{R}_{1})u_{j}(\mathbf{\bar{r}}_{1})\Big{]}. \tag{2}\]
An upper bound on the eigenenergy of the exciton is obtained from the expectation value of the Hartree-Fock Hamiltonian,
\[\hat{\mathcal{H}}_{HF}=E_{e}+E_{h}+\frac{e^{2}}{4\pi\epsilon_{0}\epsilon_{\infty}}\frac{1}{\|\Delta\mathbf{R}+\Delta\mathbf{\bar{r}}\|}, \tag{3}\]
with \(E_{e}\) and \(E_{h}\) the eigenenergies of the electron and hole, \(e\) the elementary charge, \(\epsilon_{0}\) the vacuum permittivity, \(\epsilon_{\infty}\) the high frequency (greater than phonon excitation energies) dielectric constant, \(\Delta\mathbf{R}=\mathbf{R}_{1}-\mathbf{R}_{2}\) and \(\Delta\mathbf{\bar{r}}=\mathbf{\bar{r}}_{1}-\mathbf{\bar{r}}_{2}\). For a two-particle fermionic wavefunction the Coulomb interaction can be split into the Hartree contribution \(J\) and the exchange contribution \(K\).
For the four excitons constructed from two electron and two hole states the specific states must be labeled; a specific one particle (electron or hole) or two particle (exciton) state will be labeled with the index \(\ell\) to distinguish this label from the band indices \(i\) and \(j\). The Hamiltonian in Eq. (3) will mix the four different exciton states so a matrix Schrodinger equation is constructed in the CI calculation. The excitonic eigenfunctions \(\Phi\) are written as a linear combination of the Slater determinants in Eq. (2):
\[\Phi_{\ell}(\mathbf{R}_{1},\mathbf{\bar{r}}_{1},\mathbf{R}_{2},\mathbf{\bar{r}}_{2})=\sum_{\ell^{\prime}=1}^{4}C^{\ell}_{\ell^{\prime}}\Psi_{\ell^{\prime}}(\mathbf{R}_{1},\mathbf{\bar{r}}_{1},\mathbf{R}_{2},\mathbf{\bar{r}}_{2}). \tag{4}\]
Off diagonal matrix elements between different Slater determinants originate from exchange. The Hamiltonian matrix elements for the CI calculation are
\[\langle\Psi_{\ell}|\hat{\mathcal{H}}_{HF}|\Psi_{\ell^{\prime}}\rangle=(E_{e}-E_{h})\delta_{\ell,\ell^{\prime}}-J_{\ell,\ell^{\prime}}+K_{\ell,\ell^{\prime}}, \tag{5}\]
where \(J_{\ell,\ell^{\prime}}\) is the Hartree contribution and \(K_{\ell,\ell^{\prime}}\) is the exchange contribution. Computing these matrix elements and diagonalizing this Hamiltonian results in an upper bound for the exciton eigenenergies.
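A compact NumPy sketch of this configuration-interaction step is given below: the \(4\times 4\) matrix of Eq. (5) is assembled from the single-particle energies and precomputed Hartree and exchange integrals and then diagonalized. The array layout and the identification of the bright pair as the two highest eigenvalues follow the discussion in the text, while the function itself is our illustration.

```python
import numpy as np

def exciton_spectrum(E_e, E_h, J, K):
    """E_e, E_h: constituent energies for the 4 basis excitons; J, K: 4x4 Hermitian arrays."""
    n = 4
    H = np.zeros((n, n), dtype=complex)
    for l in range(n):
        for lp in range(n):
            H[l, lp] = -J[l, lp] + K[l, lp]          # Hartree and exchange contributions
            if l == lp:
                H[l, lp] += E_e[l] - E_h[l]          # single-particle part of Eq. (5)
    energies, states = np.linalg.eigh(H)             # CI eigenenergies and eigenvectors
    fss = energies[3] - energies[2]                  # splitting of the two bright (high-energy) excitons
    return energies, states, fss
```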
Within the theoretical framework of an envelope function model the electron-hole exchange interaction is conveniently divided into two main constituents denoted short range (\(SR\)) and long range (\(LR\)) [16; 17], indicating whether the Coulomb interaction occurs between an electron and a hole within the same unit cell or in two different unit cells. The interaction within a unit cell (\(\mathbf{R_{1}}=\mathbf{R_{2}}=\mathbf{R}\)) corresponds to the analytic part of the exchange interaction or the short-range exchange interaction[31; 32; 33; 34]. The interaction across different unit cells (\(\mathbf{R_{1}}\neq\mathbf{R_{2}}\)) is referred to as the nonanalytic part of the exchange interaction or the long-range exchange interaction[32; 33; 34]. A more detailed description of the exchange interaction is given in the supplementary material.
Terms in the long range interaction can be further subdivided into the second order dipolar interaction (\(K_{LR,DD}^{(2)}\)) and other, usually neglected, zeroth, first and second order band mixing interactions (\(K_{LR,BM}^{(0)}\), \(K_{LR,BM}^{(1)}\) and \(K_{LR,BM}^{(2)}\), respectively). This results in the following expression:
\[\begin{split} K_{LR}=& K_{LR,BM}^{(0)}(\|\Delta \mathbf{R}\|^{-1})+K_{LR,BM}^{(1)}(\|\Delta\mathbf{R}\|^{-3})+\\ & K_{LR,DD}^{(2)}(\|\Delta\mathbf{R}\|^{-5})+K_{LR,BM}^{(2)}(\| \Delta\mathbf{R}\|^{-5}),\end{split} \tag{6}\]
where \(K_{LR,BM}^{(0)}\), \(K_{LR,BM}^{(1)}\), and \(K_{LR,BM}^{(2)}\) are, respectively, the zeroth, first, and second order band mixing interactions. These interactions occur exclusively in models where the electron and hole wavefunctions have contributions from both conduction and valence bands. The term \(K_{LR,DD}^{(2)}\) is the more frequently used dipole interaction for multiband wavefunctions. Even though the zeroth and first order band mixing terms drop off more slowly than the dipole interaction, the contribution of the dipole interaction to the fine structure splitting is still larger. Nevertheless, the band mixing contributions to the wavefunctions can be large, even in spherical dots, approaching nearly 30% [35; 21]. Thus the band mixing components cannot be ignored, as will become evident below.
The FSS of quantum dots has been previously investigated using the short-range exchange interaction[36; 4], assuming the electron and hole wavefunctions are composed purely of conduction and valence band components, respectively. In this case the exchange interaction takes the form[31]
\[\hat{H}_{exch}=\Delta\,\mathbf{S}_{\Gamma_{6}}\cdot\mathbf{J}_{\Gamma_{8}} \tag{7}\]
where \(\mathbf{S}_{\Gamma_{6}}\) is the spin operator for the bottom of the conduction band, \(\mathbf{J}_{\Gamma_{8}}\) is the spin-3/2 operator for the top of the valence band, and the magnitude \(\Delta\) is fit to experiment. This form may be deduced from the fact that \(\mathbf{S}\cdot\mathbf{J}\) is the only available rotationally invariant operator.
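As a quick numerical check of this single-band form (our own sketch, not code from the paper), one can build \(\Delta\,\mathbf{S}_{\Gamma_{6}}\cdot\mathbf{J}_{\Gamma_{8}}\) on the \(2\times 4\) electron-hole product space and diagonalize it; the exchange strength \(\Delta\) is left as an arbitrary unit since it is fit to experiment. The spectrum splits into a triplet at \(-5\Delta/4\) and a quintuplet at \(+3\Delta/4\), i.e. two multiplets separated by \(2\Delta\).

```python
import numpy as np

def spin_matrices(j):
    """Spin operators (Sx, Sy, Sz) for spin quantum number j, basis m = j..-j."""
    m = np.arange(j, -j - 1, -1)
    dim = len(m)
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((dim, dim), dtype=complex)        # raising operator S+
    for k in range(1, dim):                         # <m+1|S+|m> = sqrt(j(j+1) - m(m+1))
        Sp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Sm = Sp.conj().T
    return 0.5 * (Sp + Sm), -0.5j * (Sp - Sm), Sz

delta = 1.0                                         # exchange strength (arbitrary units)
S6 = spin_matrices(0.5)                             # Gamma_6 electron spin
J8 = spin_matrices(1.5)                             # Gamma_8 hole spin-3/2
H = delta * sum(np.kron(s, j) for s, j in zip(S6, J8))
print(np.round(np.linalg.eigvalsh(H), 3))           # three states at -1.25, five at +0.75
```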
In an eight-band model, however, there are three rotationally invariant operators: \(\mathbf{S}_{\Gamma_{6}}\cdot\mathbf{J}_{\Gamma_{8}}\), \(\mathbf{S}_{\Gamma_{6}}\cdot\mathbf{S}_{\Gamma_{7}}\), and \(\mathbf{S}_{\Gamma_{7}}\cdot\mathbf{J}_{\Gamma_{8}}\), where \(\mathbf{S}_{\Gamma_{7}}\) is the spin operator for the spin-orbit band. Each of these three interactions has its own coefficient, determined by the integral in Eq. (8) of the supplemental material, which would be fit to experiment. Lacking sufficient data to perform such a fit, we adopt the conventional single-band approach.
The effect of the etch pit position on the fine structure of the bright excitons can be seen in Fig. 2, which presents the main results of this study. The figure shows the calculated fine structure splitting between the bright exciton states for three different situations: the total exchange interaction, only the band mixing interactions, and only the dipole interactions. As the etch pit is shifted further along the diagonal, the fine structure splitting gradually increases until it reaches a maximum, then decreases, passes through zero, and turns negative.
The origin of this behaviour may be explained by comparing the band mixing contributions to the FSS with the dipole contributions. Even though the contributions of both the band mixing terms and the dipole term to the fine structure splitting increase as the etch pit moves from a central location, the FSS values induced by these are opposite in sign, causing them to cancel and produce a much smaller total fine structure splitting. The attractive sign of the dipole interaction originates from the very flat geometry of the quantum dot with a large base. A more detailed description of how each contribution to the exchange interaction acts separately on the fine structure splitting can be seen in Fig. 3; however, a full discussion of all the nuances is beyond the scope of this work. Figure 3 identifies the dipole interaction as the largest contributor to the bright exciton FSS. The magnitude of the zeroth order band mixing interaction is approximately the same as that of the short range exchange interaction, and both are smaller than the magnitude of the first order band mixing interaction. Thus ignoring the band mixing contributions leads to significantly incorrect exciton fine structure splitting values. Figure 3 shows that the dark exciton fine structure splitting (\(FSS_{dark}=E_{dim}-E_{dark}\)
Figure 2: The fine structure splitting between the bright excitons for a droplet epitaxy quantum dot. The fine structure splitting is defined as \(FSS_{bright}=E_{brightest}-E_{bright}\). The black line indicates the calculated fine structure splitting for the total exchange interaction. The blue line indicates the calculated fine structure splitting when solely including the band mixing terms of the exchange interaction; these are \(K_{LR,BM}^{(0)}\), \(K_{LR,BM}^{(1)}\), and \(K_{LR,BM}^{(2)}\). The red line indicates the calculated fine structure splitting when solely including the short range (\(K_{SR}\)) and dipole interactions (\(K_{LR,DD}^{(2)}\)).
) is not affected nearly as much by the presence of an etch pit as the bright exciton fine structure splitting.
The bright versus dark excitons are identified from the oscillator strengths of each exciton, calculated for each etch pit position and shown in Fig. 4. The effect of shifting the etch pit on the exciton oscillator strengths is dramatic, especially for the dark states. For a centered etch pit, both dark states have a negligible oscillator strength, thus it is not possible to label the states "dark" and "dim". As the etch pit is shifted further away from the center, the oscillator strengths increase by approximately three orders of magnitude but remain far smaller than those of the bright states. Therefore the dark and bright states do not switch their identification for these dots. By comparing the results of Figs. 4 and 3, we see that the presence of an etch pit significantly increases the oscillator strengths of the dark exciton without affecting the dark exciton fine structure splitting. Thus such etch pits may improve dark exciton qubit performance[37; 38].
Fig. 5 shows the exciton energies with the lowest energy exciton subtracted from the other exciton energies. The bright excitons are higher in energy than the dark excitons since the electron-hole exchange interaction acts repulsively on anti-parallel spin configurations. Fig. 6 shows maps of the single particle states of a quantum dot with an etch pit at two different locations. When the etch pit is shifted off center such that the \(C_{4v}\) symmetry is broken, the electron and hole states respond to this shift by either moving towards the etch pit region (electrons) or away from it (holes).
The degeneracy of the bright excitons is a necessary condition for high fidelity between emitted photons; however, the fidelity can be degraded by other effects, such as those originating from charge and spin noise. As these band mixing terms have led to FSS cancellation in asymmetric structures, we suggest other dot geometries may also produce unexpected
Figure 4: Oscillator strengths for the bright (red) and dark (grey) excitons as a function of the etch pit position. With a centered etch pit the bright excitons have equal oscillator strengths, but as the pit shifts one (\(X_{B}\)) becomes slightly brighter than the other (\(X_{b}\)). For zero etch pit shift the dark excitons have zero oscillator strength to within the numerical accuracy of the calculations. As the pit shifts, the dark excitons become merely dim (\(X_{D}\) and \(X_{d}\)).
Figure 5: Energies of an exciton in a quantum dot which has a varying etch pit position, relative to the ground-state exciton. The top figure shows the two bright excitons (\(X_{b}\), \(X_{B}\)), which are always higher in energy. The lowest energy exciton can be either the dark exciton (\(X_{D}\)) or the dim exciton (\(X_{d}\)); the bottom figure shows the splitting between these two and indicates which is the ground state exciton.
Figure 3: The contributions of each part of the exchange interaction to the fine structure splitting of both the bright (red) and the dark (grey) excitons. It is evident from all plots that the bright states are predominantly affected by the exchange interaction whereas the dark states are not. |
2309.05265 | Ionized regions in the central arcsecond of NGC 1068. YJHK spatially
resolved spectroscopy | Context. Several bright emission line regions have been observed in the
central 100 parsecs of the active galaxy NGC 1068. Aims. We aim to determine
the properties and ionization mechanism of three regions of NGC 1068: the
nucleus (B) and two clouds located at 0.3" and 0.7" north of it (C and D).
Methods. We combined SPHERE (0.95 - 1.65 um) and SINFONI (1.5 - 2.45 um)
spectra for the three regions B, C, and D. We compared these spectra to several
CLOUDY photoionization models and to the MAPPINGS III Library of Fast Radiative
Shock Models. Results. The emission line spectra of the three regions are
almost identical to each other and contribute to most of the emission line flux
in the nuclear region. The emitting media contain multiple phases, the most
luminous of which have temperatures ranging from 104.8 K to 106 K. Central
photoionization models can reproduce some features of the spectra, but the fast
radiative shock model provides the best fit to the data. Conclusions. The
similarity between the three regions indicates that they belong to the same
class of objects. Based on our comparisons, we conclude that they are shock
regions located where the jet of the active galactic nucleus impacts massive
molecular clouds. | P. Vermot, B. Barna, S. Ehlerová, M. R. Morris, J. Palous, R. Wünsch | 2023-09-11T06:49:52Z | http://arxiv.org/abs/2309.05265v1 | # Ionized regions in the central arcsecond of NGC 1068 +
###### Abstract
Context:Several bright emission line regions have been observed in the central 100 parsecs of the active galaxy NGC 1068.
Aims:We aim to determine the properties and ionization mechanism of three regions of NGC 1068: the nucleus (B) and two clouds located at 0.3\("\) and 0.7\("\) north of it (C and D).
Methods:We combined SPHERE (0.95 - 1.65 \(\mu m\)) and SINFONI (1.5 - 2.45 \(\mu m\)) spectra for the three regions B, C, and D. We compared these spectra to several CLOUDY photoionization models and to the MAPPINGS III Library of Fast Radiative Shock Models.
Results:The emission line spectra of the three regions are almost identical to each other and contribute to most of the emission line flux in the nuclear region. The emitting media contain multiple phases, the most luminous of which have temperatures ranging from 10\({}^{4.8}\) K to 10\({}^{6}\) K. Central photoionization models can reproduce some features of the spectra, but the fast radiative shock model provides the best fit to the data.
Conclusions:The similarity between the three regions indicates that they belong to the same class of objects. Based on our comparisons, we conclude that they are shock regions located where the jet of the active galactic nucleus impacts massive molecular clouds.
## 1 Introduction
The central region of NGC 1068 provides a unique opportunity to study some of the smallest spatial scales (1 arcsecond \(\leftrightarrow\) 72 pc) in an active galactic nucleus (AGN) due to its proximity (14.4 Mpc) (Bland-Hawthorn et al., 1997). The galaxy's nearly face-on orientation offers a clear line of sight toward its central region, and its location near the celestial equator allows for observation from both hemispheres. As a result, NGC 1068 has been the subject of numerous publications since its original spectroscopic observation by Karl Seyfert in 1943 (Seyfert, 1943).
Observations of NGC 1068 at subarcsecond resolution from UV to radio wavelengths have been made possible by the use of space-based imaging, adaptive optics, and interferometry. These high-resolution observations have provided important insights into the physical processes occurring in the central region of this galaxy.
At gigahertz frequencies, a kiloparsec-scale (\(\sim 10"\)) radio jet oriented along a position angle of approximately 30 degrees is clearly visible in NGC 1068 (Gallimore et al., 1996; Muxlow et al., 1996). At smaller spatial scales (Gallimore et al., 2004), it appears that the jet is initially launched at a position angle of approximately 12 degrees from a compact radio source named S1, but it is bent approximately 0.3\("\) north of it at the position of another compact radio source named component C (see Fig. 1). This deviation of the jet is proposed to be due to an interaction with an interstellar cloud (Gallimore et al., 2004).
At longer wavelengths, the molecular content of NGC 1068 is detected through CO emission lines, notably with ALMA (Garcia-Burillo et al., 2019). The main feature of the central region is a massive circumnuclear molecular disk (CND) that is 400 pc wide and has a mass of 10\({}^{8}\) M\({}_{\odot}\). The CND has a central hole of approximately 200 pc in diameter, within which the AGN is located, but not at the center. A smaller molecular disk with a mass of 3\(\times 10^{5}\) M\({}_{\odot}\) has been observed at the position of S1, which has been identified as the mass reservoir and the obscuring structure around the AGN (Garcia-Burillo et al., 2016; Imanishi et al., 2018; Garcia-Burillo et al., 2019). A molecular clump has also been observed north of S1, roughly at the bending point of the radio jet component C. High-resolution continuum observations in the ALMA spectral range have revealed a similar geometry to the observations made at gigahertz frequencies (Garcia-Burillo et al., 2019; Imanishi et al., 2020).
In the IR, the details of NGC 1068 become accessible with adaptive optics-fed instruments. Gratadour et al. (2006) used deconvolution techniques on NAOS-CONICA observations to produce high angular resolution images of the central arcsecond in the K, L, and M bands. The nucleus S1 is the brightest source, particularly at short wavelengths. However, several compact sources with significant flux can be observed around it, particularly in the M band, where the northern sources are almost as
bright as the nucleus. The positions of the two brightest sources, that is, the nucleus and IR1B, correspond to the positions of the radio components S1 and C, respectively.
At the shortest wavelengths, the central region of NGC 1068 has been imaged in the optical and UV wavelengths with the Hubble Space Telescope Faint Object Camera (Macchetto et al., 1994). The images of the source in the optical and UV continuum, as well as with the [O III] emission line filter, are very similar to each other on a large scale, revealing an extended \(\sim\)400 pc structure oriented along a position angle of approximately 30 degrees. The [O III] image revealed the complex filamentary structure of the narrow line region (NLR), which is an outflowing ionized bicone. Several bright emission line clouds were observed in the central arcsecond (Crenshaw & Kraemer, 2000; Das et al., 2006).
In this paper, we present a detailed study of the spatially resolved near-IR (NIR) spectra of three clouds (B, C, and D) from the inner NLR of NGC 1068 (named following the original convention from Evans et al. (1991)). In Section 2.1, we present the observations, followed by a quantitative description of the spectra in Section 3. In Section 4, we present the associated modelings, while a discussion of the nature of the objects is presented in Section 5. Finally, in Section 6, we summarize our conclusions.
## 2 Observations and spectra extraction
### 2.1 Observations
This study is based on the merged analysis of two spectroscopic observations obtained with SPHERE/VLT and SINFONI/VLT. These observations are discussed individually in the following paragraphs.
#### 2.1.1 SPHERE
We used a new observation performed with SPHERE in the IRDIS long-slit spectroscopy (LSS) mode (Vigan et al., 2008; Beuzit et al., 2019) as part of the 0104.B-0242(A) ESO observing program (PI: J.-L. Beuzit). The observation was conducted under changing atmospheric conditions. The seeing was around 0.7" at the beginning of the observation (with a coherence time of the atmospheric turbulence between 5 and 6 ms) but quickly degraded to more than 1" (with a coherence time below 4 ms). As a result, the adaptive optics' performance varied a lot during the observation, providing a good resolution for the first few frames and quickly worsening. As a compromise between SNR and angular resolution, we selected the three best exposures of the object in terms of angular resolution and discarded the others.
The data reduction followed the instructions from the SPHERE manual1 and was performed as described in detail in Vermot et al. (2019). We applied detector-level corrections, including dark subtraction, flat-field division, distortion correction, linear interpolation of the bad pixels, and realignment of the various exposures. As a second step, we performed spectroscopic calibrations, correction for the atmospheric transmission, and flux calibration from the 2MASS J and H magnitudes of BD-00413, the standard star observed during the run.
Footnote 1: [https://www.eso.org/sci/facilities/paranal/instruments/sphere/doc.html](https://www.eso.org/sci/facilities/paranal/instruments/sphere/doc.html)
The final product is a flux-calibrated spectrum with a 100 mas angular resolution and an 8 nm spectral resolution. The slit covers 11" x 0.09", is centered on the maximum of continuum emission, and is oriented with \(PA=11^{\circ}\) (see Fig. 1).
#### 2.1.2 SINFONI
We used archival SINFONI integral field spectroscopy obtained as part of the observing program 076.B-0098(A). The instrumental setup provides spectrally dispersed images in the H and K bands with a spectral resolution of \(R\sim 1500\) and an angular resolution of 100 mas.
The data reduction was performed with the ESO pipeline in its standard mode, except for the flux calibration, which was performed a posteriori by matching the integrated HK band flux from the entire field of view to the one measured on 2MASS
Figure 1: Top: Schematic representation of the main components in the central region of NGC 1068: the CND, the outer edges of the outflow bicone, and the three regions (B, C, and D) analyzed in this work. The field of view of the SINFONI observation is indicated with a pink square. Bottom: [Si VI] emission line map in the central region of NGC 1068 obtained with the SINFONI observation (pixel scale 50 mas) and position of the SPHERE slit (red lines, roughly aligned with the jet orientation). The coordinates are centered on the position of the K band continuum maximum. Following the naming convention from Evans et al. (1991), the three regions studied in this paper are cloud B (in the center; at the same position as the radio component S1 and the compact torus), cloud C (0.25” north), and cloud D (0.7” north).
images of the galaxy. The image presented in Fig. 1 corresponds to the [Si VI] continuum-subtracted emission line map obtained from this observation.
### 2.2 Data interpolation and spectrum extraction
To combine the two observations, we computed a synthetic long slit on the SINFONI data cube. We used linear interpolation to rotate the cube in the spatial dimension around the maximum of continuum emission and extracted a two pixel-wide slit matching the orientation of the SPHERE one. Since this synthetic slit is 100 mas wide while the physical one from SPHERE is 90 mas, we applied a 0.9 correction factor to the SINFONI flux.
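As an illustration of this synthetic-slit step, here is a minimal Python/scipy sketch (ours, not the actual reduction code). It assumes the cube is ordered (wavelength, y, x) and that the continuum maximum has already been shifted to the array centre; the rotation uses linear interpolation, and the 0.9 factor mirrors the 90/100 mas slit-width correction described above.

```python
import numpy as np
from scipy import ndimage

def synthetic_slit(cube, pa_deg, width_pix=2, flux_scale=0.9):
    """Extract a synthetic long slit from an IFU cube shaped (wavelength, y, x)."""
    nlam, ny, nx = cube.shape
    x0 = nx // 2                                   # continuum maximum assumed centred
    slit = np.empty((nlam, ny))
    for k in range(nlam):
        # rotate each spectral plane (linear interpolation) so the requested
        # position angle becomes the vertical axis of the array
        plane = ndimage.rotate(cube[k], angle=pa_deg, reshape=False, order=1)
        slit[k] = plane[:, x0:x0 + width_pix].sum(axis=1)
    return flux_scale * slit                       # 90/100 mas slit-width correction
```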
We extracted three spectra from each slit at positions corresponding to the maxima of emission lines: 0.0" (cloud B), 0.25" (cloud C), and 0.7" (cloud D). Each spectrum was summed over a 100 mas bin. The Y and J bands are only covered by SPHERE, the K band only by SINFONI, and the H band by both observations. For the latter, we interpolated the SINFONI data to the spectral resolution of SPHERE and took the average between the two observations. The match was perfect between the two datasets for clouds C and D, but the SINFONI spectrum of cloud B required recalibration in order to increase its flux by a factor of two, which we attribute to a combination of imprecision in the synthetic slit extraction and differences between the PSFs of the two adaptive optics systems. Lastly, the continuum of emission was subtracted from the spectrum of each cloud to extract a pure emission line spectrum.
## 3 Detected emission lines
The continuum-subtracted spectra of the three clouds are presented in Fig. 2, color coded by spectral domain. The green and blue parts correspond, respectively, to the SPHERE and SINFONI exclusive spectral zones, while the red part corresponds to the overlap of the two instruments. The spectra are presented, from top to bottom, with increasing distance from the nucleus. The uncertainty on the spectrum, indicated as a gray shaded area, was estimated by measuring the standard deviation of the spectra in regions free of emission lines. As can be observed, the SNR also increases with the distance from the center despite the flux of the emission lines decreasing. This is because the photon noise from the continuum of emission is at its maximum at the center.
A significant result can already be deduced from the qualitative analysis of these spectra: They are remarkably similar to each other. They are dominated by a very strong emission line at \(\sim 1.10\ \mu m\), to the left of which are three medium emission lines. Redward, two medium emission lines are found at \(\sim 1.25\ \mu m\) and one at \(\sim 1.45\ \mu m\), and lastly, another strong emission line is found at \(\sim 1.95\ \mu m\). This is valid for the spectra of clouds C and D, which are almost indistinguishable apart from their fluxes. Part of the spectrum of cloud B is unusable, but the emission lines in the J band are also fully consistent with the description above and with clouds C and D. This indicates that the three emission line regions are excited by the same mechanism, and most probably, the properties of the interstellar medium (ISM), for example, ionizing radiation field, density, and composition, are very close in these three clouds.
Altogether, we detected a few dozen emission lines, spanning a variety of elements and ionization energies. We present in Table 1 the flux of all the emission lines detected with SNR \(>\) 3. We measured them with Gaussian fits (simple in the case of isolated lines and double in the case of overlapping lines). The uncertainties provided correspond to \(1\sigma\), as estimated from the covariance matrix of the parameters. As discussed later in this paper, the identification of the lines has been done a posteriori by analyzing the output from the best CLOUDY models. We detected permitted emission lines, notably H I and He I. The He I line is the strongest emission line of the spectrum, and we detected up to seven emission lines from different hydrogen series. Assuming Case B recombination (clouds opaque to Lyman UV photons), we measured no extinction (\(A_{K}\leq 0.15\) for all three clouds at \(3\sigma\)). We also detected forbidden low-ionization
Figure 2: Combined YJHK spectra of the clouds with increasing distance from the nucleus. From top to bottom, the panels show a distance of 0.0” (B/S1/nucleus), 0.25” (C), and 0.7” (D). The SPHERE data are displayed in green, the SINFONI data in blue, and the averaged overlap in red. The estimated uncertainty is indicated as a gray area.
energy emission lines, indicating the presence of a low-density environment. They are numerous and have relatively low fluxes. Lastly, we detected forbidden high-ionization energy emission lines, also known as "coronal emission lines." Two of them are of particular importance: [Si X] at 1430 nm, which has the strongest ionization energy among our lines (401 eV), and [Si VI] at 1960 nm, which is the second strongest line in our spectral domain.
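For concreteness, a minimal sketch of this kind of Gaussian line fit (our illustration; the function names and the error-propagation details are assumptions, not the authors' code). A double Gaussian would be used in the same way for overlapping lines.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def line_flux(wave, spec, guess):
    """Integrated flux of an isolated, continuum-subtracted emission line.

    Returns the flux and its 1-sigma uncertainty propagated from the
    covariance matrix of the fitted Gaussian parameters.
    """
    (amp, mu, sigma), pcov = curve_fit(gauss, wave, spec, p0=guess)
    flux = amp * abs(sigma) * np.sqrt(2.0 * np.pi)
    # first-order propagation for flux = amp * |sigma| * sqrt(2*pi)
    jac = np.sqrt(2.0 * np.pi) * np.array([abs(sigma), 0.0, amp * np.sign(sigma)])
    return flux, float(np.sqrt(jac @ pcov @ jac))
```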
## 4 Physical conditions and ionization mechanism
### 4.1 Goal and method
In this section, we investigate the physical conditions and ionization mechanism of the clouds. To achieve this, we compare the observations with three ionization models: (1) a simple CLOUDY c17.02 (Ferland et al. 2017) setup describing a cloud with constant density and temperature (without any ionization source; models PC1, PC2, and PC3); (2) a CLOUDY model of a cloud photoionized by a central AGN (model AGN); and (3) the MAPPINGS III shock model (which itself relies on CLOUDY; model SHK).
Instead of using a specific emission line ratio as a diagnostic between the different situations, we used the models to compute synthetic spectra with the same resolution as the observation and compared them directly to the observed spectra. This approach has several advantages over the use of emission line ratios. First, it takes into account every emission line. Second, it takes into account all the non-detections, which bring important constraints on the model. Third, it takes into account the absolute flux of the emission lines. Fourth, it makes no a priori assumptions about the identification of the lines. Moreover, overlapping close lines are naturally taken into account. Finally, by studying the best model, we can make an identification of the lines a posteriori.
We removed the [Fe II] lines from the models because they should be strong and ubiquitous according to our ionization models, but they are barely detected. The explanation for this weak [Fe II] signal is discussed in Sect. 5.2.3. Otherwise, synthetic spectra were computed and directly compared to the observations. For each model, we determined the best set of parameters by minimizing the reduced chi-square between the model and the observation.
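Schematically, this model selection amounts to evaluating a reduced chi-square over every pixel of the synthetic versus observed spectrum and keeping the grid point that minimizes it; a small sketch (ours), assuming the synthetic spectra are already resampled onto the observed wavelength grid:

```python
import numpy as np

def reduced_chi_square(obs, model, sigma, n_params):
    """Reduced chi-square computed over every pixel, so that non-detected
    lines (pixels where the model predicts flux but none is seen) also
    constrain the fit."""
    dof = obs.size - n_params
    return np.sum(((obs - model) / sigma) ** 2) / dof

# Grid search over a dict {parameter_tuple: synthetic_spectrum} (hypothetical name):
# best_params = min(model_grid,
#                   key=lambda p: reduced_chi_square(obs, model_grid[p], sigma, len(p)))
```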
### 4.2 Physical conditions: Constant density and temperature
To determine the physical conditions of the emitting regions, we first used CLOUDY c17.02 to predict the emission line spectrum of a cloud with uniform density and temperature over a large and finely sampled grid. We call this model PC1. The grid covers a wide range of temperatures (\(log(T/K)\) ranging from 3.70 to 9.00 with 0.02 steps) and densities (\(log(n/cm^{-3})\) ranging from -5 to 9 with 0.05 steps). We compared the spectra in absolute units of flux, assuming an emitting volume smaller than 125 pc\({}^{3}\).
The results for all three clouds are quite similar, with a deep minimum observed in the residual maps for temperatures ranging from \(10^{4.2}\) K to \(10^{5.2}\) K and densities of 100 \(cm^{-3}\) or greater. The corresponding 2D maps for these parameters can be found in Fig. 10, while the 1D plots are presented in Figs. 11 and 12. The lower limit on density (\(n\geq 10^{4.5}\)\(cm^{-3}\)) is imposed by the maximum volume of the emitting cloud, but otherwise this parameter has little effect on the spectrum.
The best models shown in the top plots of Figs. 12, 13 and 13 successfully reproduce the permitted emission lines (He I and hydrogen series) but do not produce any forbidden emission lines. The best-fit parameters for these models, including the corresponding emitting volumes, are provided in Table 2. The optimal temperature values for all three clouds are quite similar, ranging from \(10^{4.8}\) to \(10^{4.9}\) K.
As indicated by Figs. 10 and 12, a less likely secondary solution is also present in all three clouds, characterized by temperatures ranging from \(10^{4.5}\) to \(10^{6.5}\) K. These solutions predict strong forbidden emission lines but weak permitted emission lines.
In the next step, we attempted to find a two-temperature combination that can reproduce the observed spectrum. To this end, we fixed the density at \(n=100\) cm\({}^{-3}\) and computed the spectra for all possible temperature combinations, with flux ratios between the two components ranging from 0.05 to 0.45 in increments of 0.05. The results from this approach, which we call model PC2, are detailed in Table 3, with residual maps in Fig. 14. For all three clouds, the majority of the emission is still attributable to a medium at \(\sim 10^{4.8}\) K (with acceptable temperatures ranging from \(10^{4.5}\) to \(10^{5.2}\) K). Clouds B and C exhibit a secondary medium with temperature \(10^{6}\) K (with corresponding flux ratios of 15% and 25%), while cloud D has a secondary medium with a temperature of \(10^{5.4}\) K.
A comparison of the best PC2 models for clouds B, C, and D is presented in Figs. 12, 13, and 13. The results show that the best models for clouds B and C are successful in reproducing the permitted emission lines as well as most of the forbidden emission lines, but they fail to generate significant flux for the strong [Si VI] coronal emission line at 1960 nm. In contrast, the best model for cloud D accurately reproduces the He, H, and [Si VI] emission lines, but it fails to generate the correct flux for several other forbidden emission lines, specifically the coronal lines [S IX] and [Si X] at 1251 and 1430 nm. The best solution for clouds B and C is a good secondary solution for cloud D, and vice versa.
Each of the three clouds exhibits a multiphase ionized medium, with temperatures ranging from \(10^{4.8}\) to \(10^{6}\) K. We created a third model (PC3) with three emitting media; the density fixed at \(n=100\)\(cm^{-3}\); and the temperatures set to \(T_{1}=10^{4.8}\)\(K\), \(T_{2}=10^{5.4}\)\(K\), and \(T_{3}=10^{6}\)\(K\). We then determined the relative contribution of each medium to the observed spectrum. The best-fit parameters are presented in Table 4, and the corresponding spectra are shown in Figs. 12, 13, and 13. This model produces the best results, accurately predicting the flux of both permitted emission lines and the most important forbidden lines ([S IX], [Si X], and [Si VI]) for all three clouds. The ionized phase's mass is estimated to be approximately 1 \(M_{\odot}\), 0.75 \(M_{\odot}\), and 0.5 \(M_{\odot}\) for clouds B, C, and D, respectively. In each case, the primary phase dominates at \(T=10^{4.8}\) K (contributing to around 65% of the flux), followed by \(T\sim 10^{5.4}\) K (contributing to approximately 20% of the flux) and \(T\sim 10^{6}\) K (accounting for roughly 15% of the flux).
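The PC3 step can be pictured as a non-negative linear decomposition of the observed emission line spectrum onto the three fixed-temperature CLOUDY spectra; below is a minimal scipy sketch (ours), assuming those three synthetic spectra are available on the observed wavelength grid.

```python
import numpy as np
from scipy.optimize import nnls

def phase_decomposition(obs, spec_t48, spec_t54, spec_t60):
    """Non-negative weights of three fixed-temperature model spectra and the
    fractional contribution of each phase to the total modeled flux."""
    A = np.column_stack([spec_t48, spec_t54, spec_t60])
    weights, _ = nnls(A, obs)                    # least-squares with weights >= 0
    flux_per_phase = weights * A.sum(axis=0)     # flux contributed by each phase
    return weights, flux_per_phase / flux_per_phase.sum()
```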
### 4.3 Central photoionization
In a second model, we attempted to reproduce the observed spectra with a photoionization model of a cloud around a central radiation source. For this purpose, we used CLOUDY to simulate a cloud illuminated by an AGN. The luminosity of the source was relatively well constrained and set to \(10^{38}\) W (Bland-Hawthorn et al. 1997; Gallimore et al. 2001; Vermot et al. 2021). The cloud's thickness was set to 10 pc, and its distance from the nucleus was 30 pc for clouds C and D, whereas for cloud B, it was set to 1 pc. In our simulations, we varied three parameters: the density of the cloud, ranging from \(10^{-2}\) to \(10^{9}\)\(cm^{-3}\) in \(10^{0.2}\) logarithmic steps; the temperature of the big blue bump (BBB),
ranging from \(10^{4}\) to \(10^{6.5}\) K in logarithmic steps of \(10^{0.1}\) (we note that the luminosity of ionizing photons is always \(10^{38}\) W); the optical to X-ray spectral index \(\alpha_{OX}\), as defined in Zamorani et al. (1981) and implemented in CLOUDY with an explicit negative sign. Typical AGN have \(\alpha_{OX}=-1.4\), while \(\alpha_{OX}=0\) corresponds to the absence of X-ray emission.
With this model too, the residual maps are very similar from one cloud to another. An important degeneracy is observed between the temperature of the BBB and the \(\alpha_{OX}\) parameter: the higher the temperature of the BBB, the lower the amount of required X-rays (see Fig. 1). In all cases, the model converges toward relatively low amounts of X-rays (\(\alpha_{OX}\geq-1.2\)). For all clouds, density is reliably constrained between \(10^{2.5}\) and \(10^{6.5}\)\(cm^{-3}\). For clouds B and C, the model converges toward a density \(n\sim 10^{4}\)\(cm^{-3}\), while for cloud D, it converges toward \(n\sim 10^{5.5}\)\(cm^{-3}\) (see Table 5 for the exact values of the best parameters).
Comparisons between the model and observation spectra for the three clouds are presented in Figs. 1, 2, and 3. For cloud D, the model correctly predicts the flux of the permitted emission lines as well as the strong [Si VI] coronal line, but it fails to reproduce the other weaker forbidden lines. For clouds B and C, the model predicts the presence of most of them, but it overestimates the flux of the [Ca VIII] emission line in the K band and the hydrogen series, and it underestimates the [Si VI] line at 1960 nm.
### 4.4 Fast radiative shock
In the final modeling attempt, the observed spectrum was modeled as emerging from a region ionized by a shock using the MAPPINGS III Library of Fast Radiative Shock Models (Allen et al. 2008). The model assumes that the shock front generates a strong radiation field of extreme ultraviolet and soft X-ray photons, leading to significant photoionization in the shock itself and in the region ahead of it. The model varies three parameters: the velocity of the shock, ranging from 100 to 1000 km.s\({}^{-1}\) in steps of 25 km.s\({}^{-1}\); the preshock density, ranging from 10\({}^{-2}\) to 10\({}^{3}\) cm\({}^{-3}\) in logarithmic steps of 10\({}^{1}\); the preshock transverse magnetic field **B**, with values ranging from 10\({}^{-4}\) to 10\({}^{3}\)\(\mu\)G, in an irregular sampling covering the extremes expected in the ISM while also sampling more finely the magnetic field values near equipartition.
The library does not explore a full regular parameter space, as the sampling of the magnetic field (number of samples and selected values of _B_) differs from one value of the density to another. Consequently, 2D maps of the chi-square as a function of the parameters are sparse and difficult to read, and thus we only present 1D projections (minimum value of the chi-square for a given parameter). These are available in Figs. 10, 11, and 12. They favor fast shocks (\(v\geq 750\ km.s^{-1}\)) and exclude the lowest densities (\(n\geq 10^{-1}\ cm^{-3}\)) and magnetic fields (**B**\(\geq 10^{-1}\ \mu\)G).
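The 1D projections mentioned above can be computed by keeping, for each distinct value of one parameter, the minimum chi-square over all other parameters of the (sparse, irregular) grid; a small numpy sketch (ours, with hypothetical array names):

```python
import numpy as np

def chi2_profile(param_table, chi2, column):
    """1-D chi-square profile for one parameter of a sparse model grid.

    param_table : (n_models, n_params) array of grid parameter values
    chi2        : (n_models,) array of reduced chi-square values
    column      : index of the parameter to project onto
    """
    values = np.unique(param_table[:, column])
    profile = np.array([chi2[param_table[:, column] == v].min() for v in values])
    return values, profile
```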
As with the previous models, the optimal parameters of the three clouds are very similar to each other. The minimum value of the chi-square is obtained for the highest velocity (\(v_{s}\sim 1000\) km.s\({}^{-1}\)), low densities (\(n\sim 1-100\) cm\({}^{-3}\)), and intermediate magnetic fields (\(B\sim 1-100\)\(\mu\)G). The exact best parameters for each cloud are presented in Table 6.
For the three clouds, the best model is in excellent agreement with the observation. All the main emission lines are correctly predicted by the model (permitted and forbidden) with fluxes very close to what is observed (comparisons between the spectra are presented in the bottom plots of Figs. 11, 12, and 13). The ratio between the strong He I and [Si VI]\({}_{1960}\) emission lines is well reproduced, as well as the flux of the other coronal lines, [S IX] and [Si X]. The flux of the hydrogen emission lines is overpredicted, but the shape of the complex emission features redward of 1100 nm is remarkably well reproduced.
The best model corresponds to a shock at 1000 km.s\({}^{-1}\), which is the highest velocity of the MAPPINGS model. As a consequence, it is possible that higher shock velocities could provide an even better fit to the data.
## 5 Discussion
### 5.1 Similarities between the clouds
One of the most striking results of the analysis presented in this paper is the strong similarity between the emission line spectra of the three objects, which leads to an almost identical modeling. Not only are the ratios of the various emission lines extremely similar between them, but their absolute fluxes are also of the same magnitude. We conclude that they belong to the same class of object, with similar physical conditions created by the same ionization mechanism.
### 5.2 Ionization mechanism
#### 5.2.1 Comparison between the models
It is clear from the comparison to the models that the emitting medium is multiphase, with temperatures ranging from \(T_{1}=10^{4.8}\) K to \(T_{3}=10^{6}\) K. The detection of strong forbidden lines with ionization energies as high as 400 eV imposes the highest temperatures, but a lower energy phase must be present to explain the strong emission from non-fully ionized elements.
Photoionization models of a central source with a simple blackbody spectral energy distribution fail to reproduce the observed spectra, as they can only reproduce one of the above-mentioned phases, depending on the temperature of the central source. More complex photoionization models with a source combining UV and X-ray contributions can produce a multiphase medium when illuminating dense clouds, reproducing the observed He I flux as well as some forbidden lines. Nevertheless, this AGN central photoionization model fails to reproduce some features of the observed spectrum, such as the complex structure blueward of He I or the ratios between the many high-energy forbidden lines.
The last model presented in this paper, which assumes that the emission lines are stimulated by a fast radiative shock, provides a remarkably good visual fit to the data and can reproduce all the essential features of the spectra (see Fig. 3 and Tables 11, 12, 13). Despite significant differences between the measured emission fluxes and those predicted by our model, the reduced chi-square value we presented remains relatively low, which can appear surprising. This is because it is calculated across all pixels in the spectrum, incorporating the many non-detected emission lines in the process, which effectively lowers the value of the reduced chi-square. Furthermore, the fluxes reported in Tables 11, 12, and 13 were measured with a careful Gaussian fitting method, which yields more precise flux values than a simple sum over the corresponding pixels.
Given the presence of an AGN in the vicinity of the emitting media, it is reasonable to assume that fast shocks could be responsible for the excitation of the clouds. Therefore, we conclude that this model is the most plausible one to explain the observed emission lines. For clouds C and D, the chi-square analysis significantly favors this shock model over the central photoionization one (see Table 7). For cloud B, the two models have a similar chi-square since the K band spectra is too noisy to discriminate between the two.
Our best fit analysis reveals that the shocks must be at high velocity (close to or exceeding 1000 km.s\({}^{-1}\)); the preshock density is relatively high, in the range of \(1-10\) cm\({}^{-3}\); and the magnetic field is near equipartition. These parameters provide insights into the structure of the medium, which is composed of three regions: the shock itself, which heats the gas up to
| Cloud name | Velocity (\(km.s^{-1}\)) | Precursor \(log(n/cm^{-3})\) | **B** (\(\mu G\)) | Shock surface (\(pc^{2}\)) |
| --- | --- | --- | --- | --- |
| B | 1000 | 1 | 2.7 | 0.32 |
| C | 1000 | 1 | 5.0 | 0.25 |
| D | 975 | 2 | 20 | 0.015 |

Table 6: Best parameters for model SHK (fast radiative shock).
| Cloud | PC1 | PC2 | PC3 | AGN | SHK |
| --- | --- | --- | --- | --- | --- |
| B | 11.9 | 8.8 | 8.8 | 10.5 | 10.6 |
| C | 9.8 | 4.4 | 1.9 | 5.6 | 4.1 |
| D | 22.1 | 10.0 | 2.8 | 9.4 | 7.9 |

Table 7: Reduced chi-square for the various models.
temperatures of \(10^{7}\) K and generates strong ionizing radiation; a preshock region that is ionized by this radiation and forms an HII region; and a cooling recombination region located behind the shock. For a quantitative description of the ionization, temperature, and density profiles around a shock with similar parameters (\(v_{s}=1000\) km.s\({}^{-1}\), \(n=1\) cm\({}^{-3}\), \(\textbf{B}=3.23\)\(\mu\)G), we refer to Figs. 4 and 8 in Allen et al. (2008).
#### 5.2.2 Improved diagnostic diagram
"Baldwin, Phillips and Terlevich" (BPT) diagrams (Baldwin et al. 1981; Veilleux & Osterbrock 1987; Kewley et al. 2001) are a powerful tool using emission line ratios to distinguish between an AGN and star formation activity from optical spectra. In a series of recent publications, D'Agostino et al. (2019a,b) proposed an improved 3D version of this type of diagram that takes into account the measured velocity dispersion derived from the lines as well in order to distinguish between shocks, AGN activity, and star formation and applied it to NGC 1068. The observation, obtained with the Wide Field Spectrograph (WiFeS; Dopita et al. 2007, 2010), is at a lower spatial resolution than ours and does not allow the resolution of individual clouds in the central arcsecond to be obtained. However, their results appear consistent with ours, highlighting that the entire central region (400 pc \(<\)=\(>\) 6") is dominated by shocks (Fig. 9 in D'Agostino et al. 2019b).
In our work, the conclusion is purely based on the analysis of the fluxes of various emission lines, while in their work, the decisive argument is the measured velocity dispersion of the emission lines; when high, this is a strong indicator of shock excitation (Rich et al. 2011; Ho et al. 2014). We cannot directly apply their method to our observation since we are observing other emission lines and do not have enough SPHERE spaxels to compute a proper emission line ratio function. However, despite our lower spectral resolution, we can attempt to measure the velocity dispersion from the emission lines in the SINFONI dataset for clouds C and D.
The resolution of our observation is R \(\sim\) 1500, corresponding to an expected full width at half maximum (FWHM) of 1.35 nm at 2 \(\mu m\) for an unresolved emission line. For our strongest emission line, [Si VI] at 1965 nm, we measured an FWHM of 4 nm for cloud C and 3 nm for cloud D, corresponding to intrinsic velocity dispersions of 245 km s\({}^{-1}\) and 150 km s\({}^{-1}\), respectively. For [Ca VIII], we found similar values: 220 km s\({}^{-1}\) and 130 km s\({}^{-1}\). These values are not as high as the highest ones reported in D'Agostino et al. (2019a,b) for the central region, but they still correspond to the beginning of the shock-dominated sequence from the 3D diagram, especially if we consider that a) a higher velocity dispersion component could be blended in our line and b) spatial mixing could be present in their lower spatial resolution IFU observation. Overall, we consider that our results are fully consistent with theirs, strengthening the interpretation that the excitation of the emission lines in the central region of NGC 1068 is dominated by shocks.
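As a worked check of these numbers (ours, using the rounded values quoted above), the instrumental width is removed in quadrature and the intrinsic FWHM is converted to a velocity dispersion:

```python
import numpy as np

C_KMS = 299_792.458                                  # speed of light [km/s]

def intrinsic_sigma_kms(fwhm_obs_nm, fwhm_inst_nm, lam_nm):
    """Velocity dispersion after removing the instrumental FWHM in quadrature."""
    fwhm_int = np.sqrt(fwhm_obs_nm**2 - fwhm_inst_nm**2)
    return fwhm_int / lam_nm * C_KMS / 2.355          # FWHM -> sigma

print(intrinsic_sigma_kms(4.0, 1.35, 1965.0))   # cloud C, [Si VI]: ~244 km/s
print(intrinsic_sigma_kms(3.0, 1.35, 1965.0))   # cloud D, [Si VI]: ~174 km/s with these
                                                # rounded inputs (the text quotes 150)
```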
#### 5.2.3 [Fe II] emission lines
We noted an unexpected lack of [Fe II] line emission in our spectra. According to the photoionization and shock models used to describe the other emission lines, these lines should be prominent. This discrepancy led us to remove [Fe II] lines from our analysis. In this section, we explore why [Fe II] might be so weak.
In the ISM, UV and X-ray observations often find less iron than expected. Most of it, about 90%, is thought to be hidden in dust particles (Dwek 2016; De Cia, Annalisa 2018). One might initially guess that iron is similarly depleted in our case, thereby accounting for the weakness of [Fe II] emission. However, our best explanation for the emission lines we do see involves a fast-moving shockwave traveling at 1000 km s\({}^{-1}\). Such a shockwave
Figure 3: Cloud D: Comparison between the observed spectrum (top, black) and the best model (bottom, red). The numerical values of the fluxes from the model’s main emission lines are compared with those from the observation in Table 6 from the appendix.
would destroy dust, freeing the iron. Moreover, if iron were depleted, we should also miss signatures of other elements found in dust, such as silicon, but we observed strong silicon lines. A closer look revealed that the [Si X] and [Si VI] lines we observed come from ionization states requiring extremely high temperatures (at least \(10^{6}\) K and \(10^{5}\) K, respectively), whereas [Fe II] lines are typically found at temperatures of \(\sim 10^{4}\) K or less, as shown in Figs. 1, 2, and 3.
In the MAPPINGS shock model, there are two places where temperatures are below \(10^{4}\) K: the gas that has not been hit by the shockwave yet and the cooling zone behind the shock (see Fig. 5 in Allen et al. 2008). The model predicts that the vast majority of the [Fe II] emission should arise in the cooling zone, due to the higher \(n_{H}\) density. So we need to explain why there is no [Fe II] there. We think there are two possibilities.
One possibility is that dust forms again after the shock. If dust particles form quickly in the cooling region, they could capture a lot of the iron, explaining why we do not see [Fe II]. The same would happen to silicon, but the [Si X] and [Si VI] lines would still be visible, as they come from the hot, shocked region. Dust formation might indeed be possible here, as there are enough atomic elements to form dust (e.g., Fe, Si, C, which are released from the destroyed dust); the shock would make the medium dense; and the dust wind from the central black hole could provide "seeds" for dust formation. We note that [C I], which is a low-ionization energy state coming from an element constitutive of dust (similar to [Fe II]; see Fig. 4), is also over-predicted by the shock model (see Tables 1, 2, and 3), which supports this hypothesis. As a possible follow-up observation, the nondetection of the [Si II] line at 34.85 \(\mu m\) could confirm it.
The second possibility is that no low-temperature area exists after the shock. Due to intense radiation from the AGN central engine, it is possible that the cooling region behind the shock never cools to below \(10^{4}\) K. Additionally, the central region of NGC 1068 is known to have recently experienced star formation (Storchi-Bergmann et al. 2012; Vermot et al. 2019; Rouan et al. 2019), so hot stars in the nuclear star cluster (or directly inside the clouds) could contribute to the heating of the ISM in the post-shock region, as in the Orion HII region, which exhibits low [Fe II] emission (Walmsley et al. 2000). In that case, we would not see any [Fe II] because iron cannot emit it in those conditions.
To confidently choose between these two options, we would need more complex simulations. However, this is not within the scope of our current work.
### 5.3 Nature of the objects
#### 5.3.1 Other properties
In addition to the emission line spectra, the objects have been detected in the mid-IR (Gratadour et al. 2006), forming a structure very similar to the one observed in the [Si VI] image presented in Fig. 1. Cloud B corresponds to the nucleus, cloud C to IR-1B, and cloud D to the superposition of IR-3 and IR-4, as named in Gratadour et al. (2006). The other clouds detected in the mid-IR also have counterparts in the [Si VI] image. While the secondary clouds are much less luminous than the nucleus in the K band, this difference in flux becomes negligible at longer wavelengths, such as the M band, where most of the energy is radiated: \(M_{nucleus}=6.6\), \(M_{IR-1B}=6.7\), \(M_{IR-3+IR-4}=7.6\). Therefore, all three objects can be considered strong sources of IR radiation.
The three clouds also have molecular counterparts. In Muller Sanchez et al. (2009), the authors identified several molecular structures in SINFONI observations through an \(H_{2}\) rovibrational line. Clouds B and C correspond to the structures named southern and northern tongues, respectively, identified in the \(H_{2}\) observations. The southern tongue is on the line of sight of the IR peak, and cloud D is located within a larger CO molecular structure, which is an overdensity of the CND. The dynamical modeling of the southern and northern tongues indicates that they could be molecular streamers fueling the nucleus. The mass of these streamers ranges from a few \(10^{6}\) to several \(10^{7}\) M\({}_{\odot}\), estimated by various methods. The CO counterpart to component B is the molecular disk observed at the position of S1, with a mass estimated at \(3\times 10^{5}\) M\({}_{\odot}\). A small CO clump is observed north of it at the position of cloud C, and cloud D lies again within the larger CND. In our [Si VI] observation of the objects, a velocity gradient can be measured along clouds C and D (see Appendix G; cloud B is too noisy). The amplitude of the velocity gradient is similar between the two clouds, and if interpreted as gravitational rotation, it would correspond to masses greater than a few \(10^{6}\) M\({}_{\odot}\).
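For scale, the dynamical-mass lower limit quoted above follows from a simple Keplerian estimate, \(M\gtrsim v^{2}R/G\); a small sketch (ours, with purely illustrative velocity and radius values that are not taken from the paper):

```python
G_PC = 4.301e-3                       # gravitational constant [pc (km/s)^2 / Msun]

def keplerian_mass(v_kms, r_pc):
    """Enclosed mass implied by a rotation velocity v at radius r (M = v^2 r / G)."""
    return v_kms**2 * r_pc / G_PC

# Illustrative (assumed) values: a ~50 km/s velocity amplitude over ~10 pc already
# implies an enclosed mass of several 10^6 Msun.
print(f"{keplerian_mass(50.0, 10.0):.1e}")    # ~5.8e+06
```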
The three clouds may also correspond to the radio sources detected in Gallimore et al. (2004) between 1 and 10 GHz. However, aligning the radio image with the NIR emission is uncertain due to the high uncertainty in absolute astrometry (Capetti et al. 1997) and the dissimilarities in morphology between the two images that prevent cross-registration. Despite this, if S1 and the IR emission peak are associated with the nucleus (Gallimore et al. 2004; Gratadour et al. 2006; Gamez Rosas et al. 2022), cloud B may correspond to the radio component S1 and cloud C to the radio component C. Cloud D could be associated with component NE, but with an offset of about 0.05". At these wavelengths, the similarities between the objects are striking, particularly between S1 and C, which have comparable fluxes, spectral slopes, and geometries (see Figs. 3, 4, and 5 in Gallimore et al. 2004). This comparison also holds for component NE at 5 and 8.4 GHz, but not at 1.4 GHz, where it appears more extended and luminous.
Thus, the similarities observed in the NIR emission line spectra of the clouds extend to other wavelengths. All three clouds are strong sources of IR radiation, indicating the presence of hot dust. They also have strong molecular counterparts and are massive, likely with masses greater than or equal to \(5\times 10^{6}\) M\({}_{\odot}\). Finally, energetic phenomena occur in their vicinity, as evidenced by the strong radio continuum.
#### 5.3.2 Shocked molecular clouds
Our new results indicate that clouds B, C, and D are giant molecular clouds shocked by the AGN jet, as evidenced by the high-energy forbidden lines observed in our NIR spectra and the nearby strong radio sources. The presence of hot dust and molecular content indicates that these clouds have not been completely destroyed by the jet yet, and they could be transient structures.
If the IR peak indeed traces the position of the nucleus (GRAVITY Collaboration et al. 2020; Vermot et al. 2021; Gamez Rosas et al. 2022), then cloud B is in the line of sight of the nucleus. If it is interacting with the jet, as evidenced by this work, it means that it has reached the actual position of the nucleus. As such, cloud B could simultaneously be the mass reservoir for the current accretion episode and the dense obscurer giving the nucleus its type 2 properties. Both the hot phase of the molecular content observed with SINFONI through the \(H_{2}\) rovibrational line (Muller Sanchez et al. 2009) and the colder phase
traced by various ALMA molecular lines (Garcia-Burillo et al., 2019; Impellizzeri et al., 2019; Imanishi et al., 2020) point toward complex kinematics that are incompatible with a simple rotation profile and probably result from the superposition of rotation, counter rotation, and outflow kinematics. Thus, if cloud B is indeed currently fueling and obscuring the nucleus, it has not reached a steady orbital state around the central mass.
Cloud D is part of the CND, as are clouds E and F (not discussed in this paper), providing evidence that the shocked region of the CND is extended. The presence of strong IR emission and molecular tracers at the position of the clouds reveals that the interaction with the jet is not sufficient to destroy the densest molecular clouds of the CND.
Cloud C is located in the empty region between the CND and the nucleus, and it appears to be an extended structure as shown in the high angular resolution [O III] HST/FOC image (Macchetto et al., 1994) and the \(H_{2}\) rovibrational line image (Muller Sanchez et al., 2009). A kinematics analysis in the latter reference indicates that the structure is in a highly elliptical orbit around the nucleus and is streaming toward it. The associated radio component C is located on its eastern edge, which is the leading side of its orbit, as per Muller Sanchez et al. (2009). Our results indicating that the emission lines come from a shocked region are consistent with the scenario of cloud C being a tidally disrupted molecular cloud that is shocked and ionized as its orbit brings it into contact with the outflow.
In this scenario, clouds D, C, and B would chronologically trace the evolutionary stages of the gas reservoir for the AGN activity. The CND (cloud D) would constitute the bulk of the molecular reservoir. Its interaction with the outflow would result in a loss of angular momentum, causing some clouds to detach from it and fall into the gravitational potential (cloud C). Ultimately, these clouds would reach the nucleus (cloud B) and trigger the accretion onto the supermassive black hole (SMBH), potentially obscuring it in the process.
#### 5.3.3 Cloud C: A potential secondary AGN
Cloud B is located at the position of the IR peak, which is reliably associated with S1 (GRAVITY Collaboration et al., 2020; Gamez Rosas et al., 2022), the AGN nucleus (Gallimore et al., 2004). Given the similarities observed between clouds B and C in their NIR emission lines, radio flux and spectrum, and IR flux, it is reasonable to consider that cloud C could also host an active SMBH.
In the IR images presented in Gratadour et al. (2006), the authors discuss the challenge of explaining the high temperatures observed in IR-1B with radiation from the central source and suggest that the interaction of the jet with its environment or the presence of very small grains could account for it. However, the presence of an active SMBH in cloud C would provide a simpler explanation for this temperature excess.
While component S1 is identified as the nucleus in Gallimore et al. (2004), due to an inverted spectrum below 5 GHz and a flat spectrum above as well as the presence of bright \(H_{2}O\) masers in a rotating pattern, component C is not identified as a nucleus despite having an almost identical spectrum and flux to those of S1 and the presence of \(H_{2}O\) masers (without a rotating pattern). Component C is instead associated with the interaction between the jet and a molecular cloud, due to the non-exact alignment with optical emission lines and the apparent deviation of the jet at this position. However, recent observations by Morshima et al. (2023) have revealed a ring-like structure of \(H_{2}O\) masers around component C, which could be indicative of rotation. While this does not rule out the possibility that component C results from the interaction of the jet with a molecular cloud, the authors suggest that it could also indicate the presence of a rotating disk around a \(\sim 10^{6}\) M\({}_{\odot}\) SMBH. This mass estimate is consistent with the velocity gradient measured on the [Si VI] emission line in our SINFONI data (see Fig. 4).
At radio wavelengths, the properties of components S1 and C in NGC 1068 are comparable to those of components N1 and S in NGC 6240 in terms of flux, geometry, and (to a lesser extent) spectral slope (Gallimore & Beswick, 2004). In the case of NGC 6240, these two radio sources are identified as a double AGN. The SINFONI observation of NGC 6240 (Ilha et al., 2016) also shows the peak of the coronal emission lines at the position of the two AGN, closely but not perfectly lined up, similar to NGC 1068. The two nuclei of NGC 6240 are also strong sources of NIR to mid-IR radiation (Max et al., 2005; Mori et al., 2014).
The decisive argument to classify NGC 6240 as a double AGN was the association of the two strong IR and radio sources to two hard, luminous X-ray nuclei discovered with Chandra (Komossa et al., 2003). However, in the case of NGC 1068, clouds B and C are too close together for X-ray telescopes to resolve them. Nevertheless, the Chandra image presented in Young et al. (2001) reveals an extension of the nuclear component in the direction of cloud C, and the spectral modeling done in the same paper highlights the difficulty in fitting the X-ray spectrum with a single hot plasma model. These elements are consistent with the presence of a secondary source.
In essence, we contend that even without S1, the central region of NGC 1068 would still be classified as an AGN based on the properties of cloud C, such as its strong inverted radio continuum, its ring-like distribution of masers, and its strong IR emission. Our results corroborate this interpretation, as they reveal strong similarities between the emission line spectra of clouds B and C, with the former being identified as an AGN. However, this argument is not conclusive since the shocked atomic content detected in our study could also arise from a shock with the jet. Given that cloud C is in the outflow region, this hypothesis is favored until further evidence for AGN activity in cloud C (such as resolved X-ray emission) is found. We plan to investigate this question further in the near future.
Figure 4: Cloud C: Velocity map based on the measured Doppler shift of the [Si VI] emission line. The contours represent flux levels.
## 6 Conclusions
The combination of SPHERE and SINFONI data enabled us to produce a comprehensive YJHK spectrum for three emission line regions named B, C, and D in the central arcsecond of NGC 1068. Our analysis reveals that these regions exhibit similar emission line spectra, suggesting that they belong to the same class of objects. The various modeling techniques utilized in this study produced consistent results for all three objects.
Our investigation involving the use of CLOUDY to produce synthetic emission line spectra for several models and the comparison of these spectra with the full observations revealed the presence of multiple phases in the clouds. The temperatures of these phases range from \(10^{4.8}\) K to \(10^{6}\) K. Our findings indicate that a central AGN photoionization model is not able to accurately reproduce the variety of emission lines observed in the spectra. However, the MAPPINGS III fast radiative shock model is able to reproduce the observed emission line spectra well, with shock velocities of \(v_{s}\sim 1000\) km.s\({}^{-1}\) and near equipartition magnetic fields.
Our study suggests that the three objects are likely to be giant molecular clouds that are either orbiting or streaming toward the nucleus and are being shocked by the same radio jet as they enter the outflowing bicone. It is possible that most of the properties associated with the nucleus and its torus, such as mid-IR and radio continuum emissions as well as atomic and molecular lines, originate from cloud B, which is coincident with the nucleus and may be currently feeding the central SMBH. However, given the similarities between clouds B and C, together with their shared AGN properties, such as high IR flux, radio continuum, \(H_{2}O\) masers, and \(M_{dyn}\gg 10^{6}\)M\({}_{\odot}\), it is also possible that both objects host an AGN, with cloud C being a secondary AGN orbiting cloud B.
###### Acknowledgements.
We thank Eric Lagadec and the SPHERE consortium for carrying out this observation as part of the GTO _Other Science_ program. This work was made possible by the support of the international collaboration in astronomy (ASU mobility) with the number _CZ_02.2.69/0.0/0.0/18, 053/0016972 and the institutional project RVO-67985815. ASU mobility is co-financed by the European Union.
|
2309.08999 | Context-aware Adversarial Attack on Named Entity Recognition | In recent years, large pre-trained language models (PLMs) have achieved
remarkable performance on many natural language processing benchmarks. Despite
their success, prior studies have shown that PLMs are vulnerable to attacks
from adversarial examples. In this work, we focus on the named entity
recognition task and study context-aware adversarial attack methods to examine
the model's robustness. Specifically, we propose perturbing the most
informative words for recognizing entities to create adversarial examples and
investigate different candidate replacement methods to generate natural and
plausible adversarial examples. Experiments and analyses show that our methods
are more effective in deceiving the model into making wrong predictions than
strong baselines. | Shuguang Chen, Leonardo Neves, Thamar Solorio | 2023-09-16T14:04:23Z | http://arxiv.org/abs/2309.08999v2 | # Context-aware Adversarial Attack on Named Entity Recognition
###### Abstract
In recent years, large pre-trained language models (PLMs) have achieved remarkable performance on many natural language processing benchmarks. Despite their success, prior studies have shown that PLMs are vulnerable to attacks from adversarial examples. In this work, we focus on the named entity recognition task and study context-aware adversarial attack methods to examine the model's robustness. Specifically, we propose perturbing the most informative words for recognizing entities to create adversarial examples and investigate different candidate replacement methods to generate natural and plausible adversarial examples. Experiments and analyses show that our methods are more effective in deceiving the model into making wrong predictions than strong baselines.
## 1 Introduction
Existing methods for adversarial attacks mainly focus on text classification Liang et al. (2018); Garg and Ramakrishnan (2020), machine translation Belinkov and Bisk (2018); Cheng et al. (2019), reading comprehension Jia and Liang (2017); Wallace et al. (2019), etc. A slight perturbation to the input can deceive the model into making wrong predictions or leaking important information. Such adversarial attacks are widely used to identify potential vulnerabilities and audit the model robustness. However, in the context of named entity recognition (NER), these adversarial attack methods are inadequate since they are not customized for the labeling schemes in NER Lin et al. (2021). This is especially problematic as the generated adversarial examples can be mislabeled.
Prior studies have proposed various context-aware attacks (i.e., perturb non-entity words) and entity attack (i.e., perturb only entity words) methods to address this issue. Despite their success, most existing methods randomly select words to perturb without taking the linguistic structure into consideration, limiting their effectiveness to consistently generate natural and coherent adversarial examples. Some words in a sentence are more informative than others in guiding the model to recognize named entities. For instance, in Figure 1, the word "rackets" can provide more information than the word "tournament" to infer the entity type of "Wilson". Perturbing such words can be effective in leading to more incorrect model predictions.
In this work, we explore the correlation between model vulnerability and informative words. We aim to conduct adversarial attacks by perturbing the informative words to expose the potential vulnerabilities of NER systems. To this end, we investigate different candidate selection methods to determine which words should be perturbed, including part-of-speech (POS) tagging, dependency parsing, chunking, and gradient attribution. To demonstrate the effectiveness of our proposed methods, we adapt two commonly-used candidate replacement approaches to replace the selected candidate words: synonym replacement (i.e., replace with a synonym) and masked language model replacement (i.e., replace with a candidate generated from a masked language model). We conduct experiments on three corpora and systematically evaluate our proposed methods
Figure 1: Comparison between adversarial attack with and without perturbing informative words.
with different metrics. Experimental results and analyses show that our proposed methods can effectively corrupt NER models.
In summary, our contributions are as follows:
1. We investigate different methods to perturb the most informative words for generating adversarial examples to attack NER systems.
2. Experiments and analyses show that the proposed methods are more effective than strong baselines in attacking models, posing a new challenge to existing NER systems.
## 2 Related Work
Adversarial attacks have been receiving increasing attention in the field of NER. Prior work in this research direction can be generally classified into two categories: i) context-aware attacks and ii) entity attacks. In the context-aware attacks, only the non-entity context words are modified. To achieve this, Lin et al. (2021) proposed to perturb the original context by sampling adversarial tokens via a masked-language model. Simoncini and Spanakis (2021) presented multiple modification methods to substitute, insert, swap, or delete characters and words. Wang et al. (2021) studied to create adversarial samples by concatenating different sentences into a single data point. For entity attacks, the entity words are modified while the non-entity context words are kept unchanged. In particular, Lin et al. (2021) exploited an external dictionary from Wikidata to find replacements for entity instances. Simoncini and Spanakis (2021) studied the use of the SCPNs (Iyyer et al., 2018) to generate candidate paraphrases as adversarial samples. Reich et al. (2022) proposed leveraging expert-guided heuristics to modify the entity tokens and their surrounding contexts, thereby altering their entity types as adversarial attacks. Wang et al. (2021) performed adversarial attacks by swapping words or manipulating characters.
## 3 Context-aware Adversarial Attack
In this work, we propose different methods to generate adversarial samples for the purpose of auditing the model robustness of NER systems. In the following sections, we describe the two main stages involved in this process: 1) candidate selection, which aims to determine which candidate words should be replaced; and 2) candidate replacement, which aims to find the best way to replace candidate words. The pipeline of adversarial data generation is shown in Figure 2.
### Candidate Selection
To effectively attack the model, we consider perturbing the most informative words for recognizing entities. We investigate the following automated methods to select such words as candidates (a code sketch follows the list):
* **Random (RDM)**: select non-entity words at random from the sentence as candidate words.
* **POS tagging (PST)**: select semantic-rich non-entity words as candidate words based on their POS tags. Here, we consider selecting adjectives, nouns, adverbs, and verbs.
* **Dependency parsing (DEP)**: select the non-entity words related to entity instances as candidate words based on dependency parsing.
* **Chunking (CHK)**: select the non-entity words in the noun chunks that are close to entity instances as candidate words to preserve both semantic and syntactic coherence.
* **Gradient (GDT)**: select the non-entity words according to the integral of gradients. We use Integrated Hessians (Janizek et al., 2021) to
Figure 2: The pipeline of the proposed context-aware adversarial attack, including candidate selection to determine which words to perturb and candidate replacement for replacing candidate words.
determine the importance of non-entity words based on their feature interactions with entity instances, and select the words with higher importance scores to perturb.
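As an illustration of the selection step, the snippet below (our sketch, not the authors' released code) implements the RDM, PST, DEP, and CHK heuristics on top of spaCy; GDT is omitted since it requires gradients from the victim model. The proximity rule used for CHK, the `k` cutoff, and the helper name `select_candidates` are our own assumptions.

```python
# Minimal sketch of rule-based candidate selection with spaCy (assumes the
# "en_core_web_sm" model is installed; entity token indices come from the NER labels).
import random
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"ADJ", "NOUN", "ADV", "VERB"}  # POS classes kept by the PST strategy

def select_candidates(sentence, entity_token_ids, strategy="PST", k=5):
    doc = nlp(sentence)
    non_entity = [t for t in doc if t.i not in entity_token_ids and t.is_alpha]
    if strategy == "PST":      # semantic-rich words: adjectives, nouns, adverbs, verbs
        cands = [t for t in non_entity if t.pos_ in CONTENT_POS]
    elif strategy == "DEP":    # words linked to an entity token by a dependency arc
        cands = [t for t in non_entity
                 if t.head.i in entity_token_ids
                 or any(c.i in entity_token_ids for c in t.children)]
    elif strategy == "CHK":    # words in noun chunks adjacent to an entity mention
        cands = [t for chunk in doc.noun_chunks
                 for t in chunk
                 if t.i not in entity_token_ids
                 and any(chunk.start - 1 <= i <= chunk.end for i in entity_token_ids)]
    else:                      # "RDM": uniform sampling over non-entity words
        cands = random.sample(non_entity, min(k, len(non_entity)))
    return [t.i for t in cands][:k]

# e.g. select_candidates("Wilson rackets are used in the tournament .", {0}, "PST")
```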
### Candidate Replacement
Perturbations in text at the character-level can be easily detected and defended by spell check and correction (Pruthi et al., 2019; Li et al., 2020). Therefore, we exclusively focus on word-level perturbations in this work. We investigate the following two approaches for replacing candidate words (both are sketched in code after the list):
* **Synonym Replacement**: Using synonyms to replace candidate words as adversarial samples can guarantee the preservation of text semantics and make it hard to be perceived by human investigation. We use the WordNet (Miller, 1998) dictionary to find synonyms for candidate words, and then randomly select one of them as a replacement.
* **Masked Language Model Replacement**: The masked language model (MLM) attempts to predict the masked words in the given input sequence. In our work, we first create masks for candidate words, and then use a masked language model, RoBERTa\({}_{base}\) (Liu et al., 2019), to generate a replacement based on the context. This approach is capable of preserving both semantics and syntax in the generated adversarial samples.
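To make the two strategies concrete, the following sketch (ours; the function names and the top-k filtering are illustrative assumptions, not the paper's exact setup) pairs NLTK's WordNet interface with a Hugging Face fill-mask pipeline based on roberta-base.

```python
# Minimal sketch of the two word-level replacement strategies. Assumes the NLTK
# "wordnet" corpus has been downloaded and that transformers can fetch roberta-base.
import random
from nltk.corpus import wordnet as wn
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")   # RoBERTa uses "<mask>"

def synonym_replace(tokens, idx):
    """Replace tokens[idx] by a random WordNet synonym, if one exists."""
    word = tokens[idx]
    synonyms = {l.name().replace("_", " ")
                for s in wn.synsets(word) for l in s.lemmas()} - {word}
    if not synonyms:
        return tokens
    return tokens[:idx] + [random.choice(sorted(synonyms))] + tokens[idx + 1:]

def mlm_replace(tokens, idx, top_k=5):
    """Replace tokens[idx] by an in-context word predicted by the masked LM."""
    masked = " ".join(tokens[:idx] + [fill_mask.tokenizer.mask_token] + tokens[idx + 1:])
    for pred in fill_mask(masked, top_k=top_k):
        cand = pred["token_str"].strip()
        if cand.lower() != tokens[idx].lower():   # skip trivial "replacements"
            return tokens[:idx] + [cand] + tokens[idx + 1:]
    return tokens

# e.g. mlm_replace("Wilson rackets are used in the tournament .".split(), idx=1)
```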
## 4 Experiments
In this section, we present the experimental setup and results. We systematically conduct experiments to evaluate our proposed methods on three corpora with different metrics and provide analyses to better understand their effectiveness.
### Experiment Setup
**Datasets** We evaluate the proposed methods on three corpora for NER, including CoNLL03 (Tjong Kim Sang and De Meulder, 2003), OntoNotes5.0 (Pradhan et al., 2013), and W-NUT17 (Derczynski et al., 2017). The data statistics are summarized in Appendix A and the data processing is described in Appendix B.
**Victim Model** The victim model consists of BERT\({}_{base}\) (Devlin et al., 2019) as the base model and a linear layer as the classifier to assign NER tags. The details of hyper-parameters and fine-tuning are described in Appendix C.
**Evaluation Metrics** To examine the effectiveness of our proposed methods, we consider the following metrics for evaluation (a short code sketch follows the list):
* **Textual Similarity (Sim.)**: cosine similarity between adversarial examples and the corresponding original examples using the Universal Sentence Encoder (Giorgi et al., 2021). A higher textual similarity score indicates that more semantics are preserved.
Table 1: Comparison between different candidate selection methods using synonym replacement. RDM, PST, DEP, CHK, GDT are short for random, POS tagging, dependency parsing, chunking, and gradient candidate selection, respectively.
* **Performance Decrease (\(\Delta\)Perf.)**: the difference in F1 scores between adversarial examples and their corresponding original examples. A higher performance decrease indicates that the model makes more mistakes.
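A rough sketch of how the two quantities can be computed is given below; it substitutes a generic sentence-transformers encoder for the sentence encoder cited above and uses seqeval for entity-level F1, so both library choices are illustrative assumptions rather than the paper's exact tooling.

```python
# Sketch of the evaluation metrics: embedding cosine similarity and the drop in
# entity-level F1 between predictions on original and adversarial inputs.
import numpy as np
from sentence_transformers import SentenceTransformer
from seqeval.metrics import f1_score

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in sentence encoder

def textual_similarity(original: str, adversarial: str) -> float:
    a, b = encoder.encode([original, adversarial])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def performance_decrease(gold, pred_original, pred_adversarial) -> float:
    # each argument is a list of tag sequences, e.g. [["B-PER", "O", ...], ...]
    return f1_score(gold, pred_original) - f1_score(gold, pred_adversarial)
```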
### Main Results
We compare candidate selection and replacement methods by perturbing the same number of words in the sentences. Below we present experimental results and summarize our findings:
**Candidate Selection vs. Metrics** From the results in Table 1, we observe that the model performance decreases rapidly under adversarial attacks. When perturbing five words in the sentence, the F1 scores decrease by 10%–20%. Among these attack methods, GDT and RDM are more effective at deceiving the model into making wrong predictions. When performing attacks with RDM, however, the text similarity is sacrificed in exchange for a greater performance decrease, which can potentially make adversarial examples easier to detect. Additionally, it is worth noting that DEP is also effective for slight perturbations, although it can only result in a smaller performance decrease as we increase the number of perturbed words. In terms of textual similarity and performance decrease, PST is the least effective method in most cases.
**Candidate Replacement vs. Metrics** The comparison between different candidate replacement methods is shown in Table 2. In general, compared to masked language model replacement, synonym replacement can achieve a higher textual similarity, indicating that more semantics are preserved in adversarial examples. However, its performance decrease is quite limited. At a slightly lower textual similarity, masked language model replacement leads to a much larger performance decrease. Besides, both replacement methods are relatively less effective on the W-NUT17 corpus. Compared to the text from CoNLL03 and OntoNotes5.0, which is long and formal, the text from W-NUT17 is short and noisy, as it contains many misspellings and grammar errors. For this reason, the model cannot rely too heavily on context when making predictions, limiting the effectiveness of adversarial attacks on this corpus.
## 5 Conclusion
In this work, we study adversarial attacks to examine the model robustness using adversarial examples. We focus on the NER task and propose context-aware adversarial attack methods to perturb the most informative words for recognizing entities. Moreover, we investigate different candidate replacement methods for generating adversarial examples. We undertake experiments on three corpora and show that the proposed methods are more effective in attacking models than strong baselines.
Table 2: Comparison between different candidate replacement methods when perturbing five words in each sentence. RDM, PST, DEP, CHK, GDT are short for random, POS tagging, dependency parsing, chunking, and gradient candidate selection, respectively.
## Limitations
The proposed methods require linguistic knowledge (e.g., part-of-speech tags and dependency parsing) to process the text. Most existing tools can automate this process for English. However, these tools may need to be extended to support other languages, especially minority languages. Additionally, the proposed methods may not be applicable with low computational resources or in real-time scenarios.
|
2308.00034 | Insights on the current semi-leptonic $B$-decay discrepancies -- and how
$B_s \to μ^+ μ^- γ$ can help | $B_s \to \mu^+ \mu^- \gamma$, measured at high $q^2$ as a partially
reconstructed decay, can probe the origin of the existing discrepancies in
semi-leptonic $b \to s$ and $b \to c$ decays. We perform a complete study of
this possibility. We start by reassessing the alleged discrepancies, with a
focus on a unified EFT description. Using the SMEFT, we find that the tauonic
Wilson coefficient required by $R(D^{(*)})$ implies a universal muonic Wilson
coefficient of precisely the size required by semi-muonic BR data and,
separately, by semi-muonic angular analyses. We thus identify reference
scenarios. Importantly, $B_s \to \mu^+ \mu^- \gamma$ offers a strategy to
access them without being affected by the long-distance issues that hamper the
prediction of semi-leptonic $B$ decays at low $q^2$. After quantifying to the
best of our knowledge the $B_s \to \mu^+ \mu^- \gamma$ experimental uncertainties over the
long haul, we infer the $B_s \to \mu^+ \mu^- \gamma$ sensitivity to the
couplings relevant to the anomalies. In the example of the real-$\delta
C_{9,10}$ scenario, we find significances below 3$\sigma$. Such figure is to be
compared with other single-observable sensitivities that one can expect from
e.g. BR and angular data, whether at low or high $q^2$, and not affected by
long-distance issues such as narrow resonances or intermediate charmed di-meson
rescattering. | Diego Guadagnoli, Camille Normand, Silvano Simula, Ludovico Vittorio | 2023-07-31T18:00:04Z | http://arxiv.org/abs/2308.00034v2 | Insights on the current semi-leptonic \(B\)-decay discrepancies - and how \(B_{s}\to\mu^{+}\mu^{-}\gamma\) can help
###### Abstract
\(B_{s}\to\mu^{+}\mu^{-}\gamma\), measured at high \(q^{2}\) as a partially reconstructed decay, can probe the origin of the existing discrepancies in semi-leptonic \(b\to s\) and \(b\to c\) decays. We perform a complete study of this possibility. We start by reassessing the alleged discrepancies, with a focus on a unified EFT description. Using the SMEFT, we find that the tauonic Wilson coefficient required by \(R(D^{(*)})\) implies a universal muonic Wilson coefficient of precisely the size required by semi-muonic BR data and, separately, by semi-muonic angular analyses. We thus identify reference scenarios. Importantly, \(B_{s}\to\mu^{+}\mu^{-}\gamma\) offers a strategy to access them without being affected by the long-distance issues that hamper the prediction of semi-leptonic \(B\) decays at low \(q^{2}\). After quantifying to the best of our knowledge the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) experimental uncertainties over the long haul, we infer the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) sensitivity to the couplings relevant to the anomalies. In the example of the real-\(\delta C_{9,10}\) scenario, we find significances below \(3\sigma\). Such a figure is to be compared with other _single_-observable sensitivities that one can expect from e.g. BR and angular data, whether at low or high \(q^{2}\), and not affected by long-distance issues such as narrow resonances or intermediate charmed dimeson rescattering.
LAPTH-039/23
## 1 Introduction
An ensemble of \(b\to s\mu^{+}\mu^{-}\) data including branching-ratio measurements as well as angular ones displays a less than perfect agreement with the SM prediction. Within LHCb, the concerned observables include \({\cal B}(B\to M\mu^{+}\mu^{-})\) for \(M=K^{0,+,*+}\)[1], \(K^{*}(892)\)[2], \(K^{*0}\)[3]; \({\cal B}(B_{s}^{0}\to\phi\mu^{+}\mu^{-})\)[4, 5]; the baryonic channels \({\cal B}(\Lambda_{b}\to\{\Lambda,\Lambda(1520)\}\mu^{+}\mu^{-})\)[6, 7, 8]; two angular analyses for \(M=K^{*+,*0}\)[9, 3]. The trend suggested by these datasets is generally, although not uniformly, confirmed by measurements collected by Atlas, BaBar, Belle, CMS [10, 11, 12, 13, 14, 15, 16]. Finally, past experimental indications of lepton-universality
violation have disappeared [17]. Hence the above tensions, if confirmed, are not expected to concern dimuon channels only.
Importantly, this disagreement is seen only for low di-muon invariant mass squared \(q^{2}\), and is consistent across the whole ensemble, in the sense that the same Wilson-coefficient shift accommodates all discrepancies. However, interpreting this shift as due to new physics (NP) relies on a strong assumption about the possible size of long-distance contributions to the different observables concerned. These contributions are, on the one side, difficult to estimate, and on the other side largely _equivalent_ to (i.e. parametrically interchangeable with) the NP shifts that one wants to probe.
A complete estimation of such long-distance effects is, as well-known, an issue as important as it is challenging. The existing calculations in Refs. [18; 19] (building on Refs. [20; 21; 22]) focus on the "charm-loop"-to-\(\gamma^{*}(q^{2})\) amplitude, whose long-distance effects correspond to poles and cuts in the \(q^{2}\) variable. On the other hand, Ref. [23] (see also Refs. [24; 25; 26; 27; 28]) emphasizes the importance of including contributions from \(B\) to di-meson rescatterings, that correspond to cuts in the full decay variable \((q+k)^{2}\), where \(k\) is the momentum of the final-state \(K^{(*)}\).1 Ideally, one should start from an amplitude that is function of \(q^{2}\) and of \((q+k)^{2}\), such that cuts in the second variable reproduce \(B\to D\bar{D}^{*}\) decay rates, and only then set \((q+k)^{2}=m_{B}^{2}\). In other words, inclusion of \((q+k)^{2}\) as a variable should allow to take into account the so-called _anomalous cut_2 in addition to the "usual" cut associated to the \(c\bar{c}\) threshold. The state-of-the-art calculations in Refs. [18; 19] allow for complex-valued helicity amplitudes (see e.g. discussion in Ref. [22]), which are nevertheless functions of \(q^{2}\) only, and are denoted as \(\mathcal{H}_{\lambda}(q^{2})\). We are lacking a proof that the theoretical and experimental input used to constrain complex \(\mathcal{H}_{\lambda}(q^{2})\) fully encapsulates the structure due to the cuts in \((q+k)^{2}\). It is clear that going beyond the calculations in Refs. [18; 19]--that constitute a benchmark for all that is calculable with regards to this issue--is a daunting task, primarily because there is no known EFT that would allow a quantitative estimate of the anomalous-cut contributions.
Footnote 1: The only existing phenomenological estimate [29], considering light di-meson intermediate states (i.e. not \(D\bar{D}^{*}\) ones), suggests that such contributions may be sizeable.
Footnote 2: See e.g. discussion in Ref. [30].
There are two clear avenues towards resolving the above quandary: on the theory side, an estimate of the mentioned missing long-distance effects, dispelling any doubt that they may be mimicking NP; on the experimental side, the consideration of alternative observables--in particular, in alternative \(q^{2}\) regions--that are sensitive to the very same short-distance physics without being affected by the long-distance effects in question.
The branching ratio for \(B_{s}\to\mu^{+}\mu^{-}\gamma\), measured at high \(q^{2}\) as a partially reconstructed decay [31] (see Ref. [32; 33] for a first application at LHCb) offers an example of such an alternative observable. It actually provides a litmus test between the two mentioned explanations: NP versus long-distance effects mimicking it. In fact, at high \(q^{2}\) this observable's long-distance contributions are dominated by two form factors that should be accessible by first-principle methods in the short term--whereas the helicity amplitudes that are input
of the presently discrepant \(b\to s\mu^{+}\mu^{-}\) data do not enter at all.
A clear test then emerges: one may predict \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) assuming NP shifts as large as suggested by the currently discrepant low-\(q^{2}\)\(b\to s\mu^{+}\mu^{-}\) observables if the incalculable long-distance contributions to these observables are negligible. A \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) measurement that confirms our prediction would circumstantially support NP shifts of that size. This paper aims at discussing the relation between NP sensitivity and amount of data required for such test. To this end, the paper is structured as follows. In Sec. 2, we re-assess the \(b\to s\mu^{+}\mu^{-}\) discrepancies by performing global likelihood fits of all relevant observables and their up-to-date measurements, among the others \(R_{K^{(*)}}\) and \(B_{s}\to\mu^{+}\mu^{-}\). From such fits we extract NP shifts to the semi-leptonic Wilson coefficients known as \(C_{9}^{bs\mu\mu}\) and \(C_{10}^{bs\mu\mu}\), to be used as reference for the rest. We consider separately the cases of real and complex shifts to these Wilson coefficients, and we examine to what extent data (including \(b\to c\ell^{-}\bar{\nu}\)) obey a coherent effective-theory picture. In Sec. 3, we attempt an extrapolation of the experimental and theoretical uncertainties associated to \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at high \(q^{2}\) and we use such extrapolation to project the NP sensitivity of the observable, using as NP benchmarks the couplings discussed in Sec. 2. We draw our conclusions in Sec. 4. Finally, the Appendix collects additional information and plots related to the global fits in Sec. 2.
## 2 NP benchmarks
### Real Wilson-coefficient shifts
One can consider the list of measurements3 in Table 1 and perform a global fit, using the python software package flavio4[64]. As a first step, we focus on shifts to semi-leptonic Wilson coefficients involving dimuons, defining (\(k=\,9,\,10\))
Footnote 3: The \(\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}\) as well as \(\Lambda_{b}\to\Lambda(1520)\mu^{+}\mu^{-}\) measurements in Refs. [6, 7] and in Ref. [8], respectively, may be tested against the pioneering SM calculations in Ref. [60] and in Refs. [61, 62], respectively. In this work, however, we do not include \(b\to s\) data from semileptonic decays of baryons following the guidelines of the latest FLAG Review [63].
\[\begin{array}{lclclcl}\text{NP shift}&&\ell\text{-specific}+\ell\text{-univ. parts}&&\text{full WC (NP + SM)}&\\ \hline\delta C_{k}^{bsee}&\equiv&\delta C_{k}^{(e)}+\delta C_{k}^{u(e,\mu)}&=&C_{k}^{bsee}-C_{k}^{bs\ell\ell,\text{SM}}\,,&C_{k}^{bsee}\equiv C_{k}^{(e)}\,,\\ \delta C_{k}^{bs\mu\mu}&\equiv&\delta C_{k}^{(\mu)}+\delta C_{k}^{u(e,\mu)}&=&C_{k}^{bs\mu\mu}-C_{k}^{bs\ell\ell,\text{SM}}\,,&C_{k}^{bs\mu\mu}\equiv C_{k}^{(\mu)}\,,\\ \delta C_{k}^{bs\tau\tau}&\equiv&\delta C_{k}^{(\tau)}&=&C_{k}^{bs\tau\tau}-C_{k}^{bs\ell\ell,\text{SM}}\,,&C_{k}^{bs\tau\tau}\equiv C_{k}^{(\tau)}\,,\end{array} \tag{1}\]
with the SM part also abbreviated through \(C_{k}^{bs\ell\ell,\text{SM}}\equiv C_{k}^{(\ell),\text{SM}}\). As highlighted in eq. (1), NP shifts are identified from a leading \(\delta\) (at variance with much of the literature). In addition, the full NP shift generally consists of a lepton-specific component, labeled as \((\epsilon),(\mu),(\tau)\) plus a lepton-universal one. In most of our discussion we will actually focus on
universal components only concerning the light leptons, i.e. on _light-lepton_-universal shifts, that are labeled with \({}^{u(e,\mu)}\).
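As a concrete, purely illustrative example of how such shifts are injected in practice, a minimal flavio snippet along the following lines can be used; the numerical values are placeholders, the observable and coefficient names follow flavio's conventions, and none of this reproduces the global fit of this section.

```python
# Sketch: place muon-specific shifts delta C9^(mu), delta C10^(mu) at the b scale and
# compare SM vs NP predictions with flavio. The -0.9 / +0.3 values are placeholders.
import flavio

wc = flavio.WilsonCoefficients()
wc.set_initial({"C9_bsmumu": -0.9, "C10_bsmumu": +0.3}, scale=4.8)

print("BR(Bs->mumu)  SM:", flavio.sm_prediction("BR(Bs->mumu)"))
print("BR(Bs->mumu)  NP:", flavio.np_prediction("BR(Bs->mumu)", wc))

obs = "<dBR/dq2>(B0->K*mumu)"          # q^2-binned, C9-sensitive observable
print(obs, "SM:", flavio.sm_prediction(obs, 1.1, 6.0))
print(obs, "NP:", flavio.np_prediction(obs, wc, 1.1, 6.0))
```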
With eq. (1) setting the notation, we show the case \(\delta C_{9}^{(\mu)}\) vs. \(\delta C_{10}^{(\mu)}\) in the top-left panel of Fig. 1. The fit suggests that the updated \(R_{K^{(*)}}\) measurement and the anomalies observed in the \(b\to s\mu^{+}\mu^{-}\) sector--branching ratios (BRs) and angular observables--do not yield a coherent picture in a scenario where one only shifts \(C_{9}^{(\mu)}\) and \(C_{10}^{(\mu)}\), i.e. where their counterparts for all other flavours are assumed SM-like.5 Specifically, the combination of
\begin{table}
\begin{tabular}{|l|c|c|} \hline & Refs. measurements & Refs. prediction \\ \hline \(\boldsymbol{b\to s\mu^{+}\mu^{-}}\) BR obs. & & \\ \hline \(\left\langle\frac{d\mathcal{B}}{dq^{2}}\right\rangle(B^{+}\to K^{(*)}\mu\mu)\) & [1, 34] & [35, 36, 37] \\ \(\left\langle\frac{d\mathcal{B}}{dq^{2}}\right\rangle(B_{0}\to K\mu\mu)\) & [34, 38, 1] & [37] \\ \(\left\langle\frac{d\mathcal{B}}{dq^{2}}\right\rangle(B_{s}\to\phi\mu\mu)\) & [4, 34, 5] & [35] \\ \(\left\langle\frac{d\mathcal{B}}{dq^{2}}\right\rangle(B_{0}\to K^{*}\mu\mu)\) & [38, 2] & \\ \(\left\langle\mathcal{B}\right\rangle(B\to X_{s}\mu\mu)\) & [39] & [40] \\ \hline \(\boldsymbol{b\to s\mu^{+}\mu^{-}}\) angular and CPV obs. & & \\ \(\left\langle F_{L},P_{1},P_{4,5}^{\prime},A_{\rm FB}\right\rangle(B_{0}\to K^{* }\mu^{+}\mu^{-})\) & [34, 15, 3, 41] & \\ \(\left\langle F_{L},P_{1,2},P_{4,5}^{\prime}\right\rangle(B^{+}\to K^{*+}\mu^ {+}\mu^{-})\) & [9] & [35] \\ \(\left\langle F_{L},S_{3,4,7}\right\rangle(B_{s}\to\phi\mu\mu)\) & [42] & \\ \(A_{3-9}(B_{0}\to K^{*}\mu^{+}\mu^{-})\) & [43] & \\ \hline \(\boldsymbol{R_{K/K^{*}}}\) & [44, 45, 17, 46] & [35, 36, 37] \\ \hline \(\boldsymbol{\mathcal{B(B_{d,s}\to\mu\mu)}}\) & [47, 32, 33, 48] & [49] \\ \hline \(\boldsymbol{b\to s\gamma}\) obs. & & \\ \(\left\langle\mathcal{B},A_{CP}\right\rangle(B\to X_{s}\gamma)\) & [50] & [51, 52] \\ \(\mathcal{B}(B^{0}\to K^{*0}\gamma)/\mathcal{B}(B_{s}^{0}\to\phi\gamma)\) & [53] & [35, 36] \\ \(\mathcal{B}(B\to K^{*}\gamma)\) & [54] & \\ \(\mathcal{B}(B_{s}^{0}\to\phi\gamma)\) & [55] & [35] \\ \(A_{\Delta\Gamma},S(B_{s}^{0}\to\phi\gamma)\) & [56] & \\ \(S_{K^{*0}\gamma}\) & [54] & \\ \hline \end{tabular}
\end{table}
Table 1: List of the most constraining observables and their measurements implemented in the flavio v2.5.4 Python package at the date of publication. For the exclusive channels, the predictions refer to the particular set of form factors of the relevant transition. Helicity amplitudes are detailed in Refs. [57, 58, 59].
the recent \(B_{s}\to\mu^{+}\mu^{-}\) measurements by both LHCb and CMS collaborations provides a strong constraint on \(\delta C_{10}^{(\mu)}\), which is now consistent with zero. Besides, the \(R_{K^{(*)}}\) measure
ments on the one side and the discrepant \(b\to s\mu^{+}\mu^{-}\) observables on the other side, each constrain the \(\delta C_{9}^{(\mu)}\) vs. \(\delta C_{10}^{(\mu)}\) plane obliquely, but the two respective regions overlap at no better than \(2\sigma\).
As a second case of interest, we consider a scenario that is especially appealing in the light of a UV interpretation, where NP fulfils the constraint \(\delta C_{9}^{(\ell)}=-\delta C_{10}^{(\ell)}\equiv\delta C_{LL}^{(\ell)}/2\),6 in both the muonic and electronic channels. This is shown in the top-right panel of Fig. 1, again displaying that the region favoured by \(B_{s}\to\mu^{+}\mu^{-}\) is in mild tension with the regions preferred by \(b\to s\mu^{+}\mu^{-}\) BR and angular data--that instead are mutually consistent.
Footnote 6: This identification follows trivially from writing the effective Hamiltonian either in the “traditional basis”, \(\propto C_{9}{\cal O}_{9}+C_{10}{\cal O}_{10}\) with \({\cal O}_{9}\propto(\bar{s}\gamma_{L}^{\mu}b)\,(\bar{\ell}\gamma_{\mu}\ell)\) and \({\cal O}_{10}\propto(\bar{s}\gamma_{L}^{\mu}b)\,(\bar{\ell}\gamma_{\mu} \gamma^{5}\ell)\), or in the chiral basis, \(\propto C_{LL}{\cal O}_{LL}\), where \({\cal O}_{LL}\propto(\bar{s}\gamma_{L}^{\mu}b)\,(\bar{\ell}\gamma_{\mu L}\ell)\). Here we omit flavour indices for simplicity, as well as proportionality factors immaterial to the identification.
The very SM-like \(R_{K^{(*)}}\) measurements can be reconciled with the \(b\to s\mu^{+}\mu^{-}\)-sector BR and angular observables by allowing NP in the electron channels. Hence a first natural scenario to account for discrepant BR and angular data, and for _no_ LFUV in \(b\to s\ell^{+}\ell^{-}\) (\(\ell=e,\mu\)) is the case \(\delta C_{9}^{u(e,\mu)}\) vs. \(\delta C_{10}^{u(e,\mu)}\) (we refer again to eq. (1) and the surrounding text for the meaning of \(u(e,\mu)\)). This case is displayed in the bottom-left panel of Fig. 1. In this panel the \(R_{K^{(*)}}\) constraint is trivially satisfied in the whole plane and thus not displayed. What we deem significant in this scenario is the fact that BR (in blue) and angular data (in green) constrain this WC plane in two independent directions, and the preferred regions of either set neatly overlap at a non-zero value of \(\delta C_{9}^{u(e,\mu)}\)--while staying consistent with a SM-like, i.e. null within errors, value of \(\delta C_{10}^{u(e,\mu)}\). As a result, this scenario is consistent with all observables at the \(1\sigma\) level. We note that the hint at \(\delta C_{10}^{(\mu)}\) consistent with zero holds irrespective of its relation to \(\delta C_{10}^{(e)}\). In the last two panels of Fig. 1 we assume \(\delta C_{10}^{(e)}=\delta C_{10}^{(\mu)}\) and, respectively, \(\delta C_{10}^{(e)}=0\). The main difference between these two fits is in the significance of the null solution for the WC on the \(y\) axis, slightly above \(1\sigma\) and respectively within \(1\sigma\) in the bottom-left and bottom-right panels. The difference is clearly due to the \(R_{K^{(*)}}\) constraint, which as already mentioned is ineffectual in the bottom-left panel.
As a whole, Fig. 1 delivers _two_ clear-cut messages from data:
* the \(B_{s}\to\mu^{+}\mu^{-}\) update disfavors shifts to muonic \(\delta C_{10}\) and the \(R_{K^{(*)}}\) measurement favors a solution where the muonic vs. electronic WC shifts are equal. Jointly, these pieces of information would tend to disfavor the \(\delta C_{LL}^{(\ell)}\) scenario (top-right panel of Fig. 1). We will see, however, that this is true only if the shift is weak-phase-aligned with the SM--the relevant discussion is in Sec. 2.3;
* BR & angular data _separately_ point towards a non-zero, but _light-lepton_-universal shift \(\delta C_{9}^{u(e,\mu)}\).
We deem the last finding non-trivial. In the next section we will see that it can be
understood within a SMEFT picture, that relates \(b\to s\) and \(b\to c\) discrepancies and that turns out to quantitatively account for both.
### The SMEFT-induced connection between \(b\to s\) and \(b\to c\) anomalies: an update
While the \(\delta C_{9}^{u(e,\mu)}\) shift just discussed has to be corroborated by further data (e.g. updates in the channels of Refs. [1, 2]), it leaves a testable imprint already. In fact, quite generally a universal \(\delta C_{9}\)7 tends to correlate with effects in _charged-current_ semi-leptonic transitions, in particular \(b\to c\ell^{-}\bar{\nu}\). This mechanism is to be expected as consequence of the inescapable RG running between the scale of the new dynamics and the \(b\) scale, to the extent that the new dynamics is sizeably above the EW symmetry-breaking threshold.
Footnote 7: We do not use \(\delta C_{9}^{u(e,\mu)}\) to denote such shift, because the considerations in these paragraphs generally apply to a \(\delta C_{9}\) shift common to all three leptonic generations. As a rule, when “universal” is used textually rather than as \({}^{u(e,\mu)}\), we mean that it applies to all three generations.
As a matter of fact, the possibility of a low-scale universal \(C_{9}\) shift arising from high-scale interactions has been pointed out within the WET [66], the SMEFT [67] and in the context of renormalizable UV models [68]. A particularly plausible example is that of semi-tauonic, \(SU(2)_{L}\)-symmetric SMEFT operators, that will leave a direct imprint on \(R(D^{(*)})\). The corresponding SMEFT WCs are denoted as \([C_{lq}^{(1),(3)}]_{ijmn}\)8 with \(ij\) leptonic, \(mn\) quark indices, and \({}^{(1),(3)}\) labeling the \(SU(2)_{L}\)-singlet or triplet instances. The latter are customarily assumed to be equal in order to automatically fulfil \(B\to K^{(*)}\nu\bar{\nu}\) constraints [71]. We adhere to such assumption and drop the \({}^{(1),(3)}\) index, so that the SMEFT WCs will be simply denoted as \(C_{ii23}\). It is then interesting to study the implications that different assumptions on \(ii=11\) vs. \(22\) vs. \(33\) have on the global consistency between \(b\to s\) and \(b\to c\) discrepancies. For example, \(C_{ii23}\) generates a matching contribution to \(\delta C_{9}^{(\ell_{i})}=-\delta C_{10}^{(\ell_{i})}\); in turn \(C_{3323}\) can _also_ contribute a sizeable lepton-universal \(\delta C_{9}\) shift. (In principle, every \(ii\) contributes such RG-induced lepton-universal \(\delta C_{9}\) shift. However, \(ii=33\) is the least constrained in the light of data and can thus afford to provide the dominant contribution.)
Footnote 8: We adopt the “Warsaw basis” [69] and the convention that \(\ell\) and \(l\) refer respectively to charged leptons below the EW scale and to the lepton doublets above the EW scale, see e.g. [70, 67].
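The matching-plus-running pattern just described can be checked numerically with the wilson package; the sketch below is ours, with a placeholder value for the semi-tauonic coefficient and the WCxf Warsaw-basis names lq1_3323 / lq3_3323 assumed for the singlet and triplet operators.

```python
# Sketch: a tauonic, singlet = triplet SMEFT coefficient set at 2 TeV, matched and run
# down to the b scale in the flavio WET basis. The induced lepton-universal piece shows
# up in C9_bsmumu / C9_bsee, the matching piece in the tauonic entries.
from wilson import Wilson

c = -1.0e-7   # GeV^-2, placeholder (roughly an O(1) coupling over a few-TeV scale squared)
w = Wilson({"lq1_3323": c, "lq3_3323": c}, scale=2000, eft="SMEFT", basis="Warsaw")

wet = w.match_run(scale=4.8, eft="WET", basis="flavio")
print("C9_bsmumu   :", wet.dict.get("C9_bsmumu"))     # RG-induced, lepton-universal
print("C9_bstautau :", wet.dict.get("C9_bstautau"))   # matching, lepton-specific
```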
The above scenario can thus be realized in different declinations, that we explore in Fig. 2. We switch on \(C_{2223}\) and \(C_{3323}\) independently. Through RG running, _each_ of them would induce a contribution to \(\delta C_{9}^{u(e,\mu)}\). However, \(C_{2223}\) also generates a matching contribution to \(\delta C_{LL}^{(\mu)}\), which is now constrained to a SM-like value. On the other hand, the \(\delta C_{9}^{u(e,\mu)}\) shift induced by \(C_{3323}\) is much less constrained and can be numerically dominant. Quite remarkably, fixing \(C_{3323}\) to account for \(R(D^{(*)})\) yields a \(\delta C_{9}^{u(e,\mu)}\) shift of precisely the correct size to account for, _separately_, \(b\to s\mu^{+}\mu^{-}\) BR measurements and \(b\to s\mu^{+}\mu^{-}\) angular analyses. These facts are shown in the top-left panel of Fig. 2. In this panel, we
Figure 2: Semileptonic \(b\to s\)_and_\(b\to c\) constraints in the plane \(\delta C_{9}^{u(e,\mu)}\) vs. \(\delta C_{LL}^{(\mu)}\) obtained from SMEFT WCs under the assumptions \(|C_{3323}|\gg|C_{2223}|>0\), and \(C_{1123}=0\) (top-left panel) or \(|C_{3323}|\gg|C_{2223}=C_{1123}|>0\) (top-right panel). The latter scenario is also shown in the plane of total \(\delta C_{9}\), equal for _light_ leptons, vs. \(\delta C_{LL}^{(\tau)}\) (bottom-left panel) and with the \(R(D^{(*)})\) constraint inferred from the DM method rather than from the HFLAV average (bottom-right panel). We refer to eq. (1) for WET WC notation, and to the text (Sec. 2.2, 2nd paragraph) for the meaning of \(C_{ii23}\).
assume \(C_{1123}=0\). In fact, a non-zero \(C_{1123}\) would generate a matching contribution to \(\delta C_{LL}^{(e)}\), that (in order to fulfil the \(R_{K^{(*)}}\) constraint) one would want to be of similar size as \(\delta C_{LL}^{(\mu)}\), that in turn is constrained to be nearly zero because of the \(B_{s}\to\mu^{+}\mu^{-}\) constraint. In short, the above scenario corresponds to \(|C_{3323}|\gg|C_{2223}|>0\), and \(C_{1123}=0\), which can be justified on grounds of hierarchical NP.
Data, however, are also compatible with the alternative scenario \(C_{2223}=C_{1123}\), with \(|C_{3323}|\) again hierarchically larger in magnitude. This scenario can be justified on grounds of light-lepton universality, and is displayed in the top-right panel of Fig. 2, in the same WC plane as the top-left panel. We see that the main difference is the impact of the \(R_{K^{(*)}}\) constraint, which becomes ineffectual in the top-right panel. This situation thus parallels that of the bottom-right vs. bottom-left panels of Fig. 1, and was already commented upon before the final items on page 6.
In the bottom-left panel of Fig. 2 we show the implications of the same scenario (\(|C_{3323}|\gg|C_{2223}=C_{1123}|\)) in a different plane of WET WC combinations, that we consider more appropriate to capture the most promising effects in the light of present data. The \(y\) axis corresponds to the left-handed WC shift relevant for \(R(D^{(*)})\), namely \(\delta C_{LL}^{(\tau)}\). In turn, the \(x\) axis is the _total_ (i.e. lepton-specific, matching-induced plus lepton-universal, RG-induced) \(C_{9}\) shift, \(\delta C_{9}^{u(e,\mu)}+\delta C_{9}^{(\mu)}=\delta C_{9}^{u(e,\mu)}+\delta C _{9}^{(e)}\), which is identical for both light leptons in view of the assumptions in the underlying SMEFT scenario.
The top-right vs. bottom-left panels are very correlated with one another--the latter is basically a rotation plus a reflection of the former. Jointly, however, they show that while the _muonic_\(\delta C_{LL}\) direction has to a large extent been "trivialized" by \(B_{s}\to\mu^{+}\mu^{-}\), the _tauonic_\(\delta C_{LL}\) direction is the now-promising one, in the light of a SMEFT interpretation of the \(R(D^{(*)})\) discrepancy.
We emphasize that the SMEFT scenarios behind the top-left panel on one side and the top-right & bottom-left panels on the other side are distinct. The fact that they yield mutually consistent results is non-trivial, and is due to the lack of constraining power in electronic-channel BR and angular data. Basically, the strongest constraint on \(b\to se^{+}e^{-}\) at present is the one inferred indirectly from \(R_{K^{(*)}}\). The day channel-specific data, i.e. \(b\to se^{+}e^{-}\) BR and angular ones, will start to be constraining, they will either provide a striking confirmation of the above coherent picture, or expose an inconsistency. This inconsistency will signal that _(a)_ the anomalous \(R(D^{(*)})\) measurements, _(b)_ the anomalous \(b\to s\mu^{+}\mu^{-}\) BR ones, _(c)_ the anomalous \(b\to s\mu^{+}\mu^{-}\) angular ones, and _(d)_ the electronic counterparts (whether also anomalous or not) of sets _(b) + (c)_ are not consistent with a SMEFT picture. For reference, the counterparts of the top-left and top-right panels, but in the plane of the underlying SMEFT coefficients, are reported in the Appendix.
The above findings show once again that the wealth of semileptonic \(b\)-quark decay data offer a unique probe into possible heavy beyond-SM dynamics, and that this dynamics fulfils at the moment the basic constraints imposed by the SMEFT. We are lucky enough that this picture will be corroborated or falsified soon by data. We meanwhile reiterate
that a cautious approach to the above findings is in order. In this spirit, we also show in the bottom-right panel the counterpart of the bottom-left-panel plot, but for the fact that the \(R(D^{(*)})\) best-fit region is inferred from the Dispersive-Matrix (DM) method [72, 73, 74], rather than from the HFLAV average, whereby the latter represents the default choice in the rest of our numerical study. The DM method suggests a much milder \(R(D^{(*)})\) anomaly than it emerges from the HFLAV average. The bottom-right panel of Fig. 2 thus allows to address quantitatively the question to what extent a non-zero tauonic \(\delta C_{LL}\) gets closer to zero should the \(R(D^{(*)})\) discrepancy fade to the below-\(2\sigma\) figure suggested by the DM method.
For the sake of the discussion to follow, we focus on the scenario with a NP shift that is LFU in _both_ the \(C_{9}\) and \(C_{10}\) directions, shown in the bottom-left panel of Fig. 1. Because LFU is imposed on both axes the constraining power of \(R_{K^{(*)}}\) vanishes, and the discrepant data shows a very clear departure from the SM expectation, with a substantial improvement over the SM hypothesis, and an agreement between the different sets of data at the \(1\sigma\) level. The best-fit intervals for this scenario are reported in Table 2 (see first entry), and suggest a \(20\%\) NP effect in \(C_{9}^{u(e,\mu)}\), whereas shifts in \(C_{10}^{u(e,\mu)}\) stay consistent with zero. We take this case as one of our NP benchmarks, referred to as the "real \(\delta C_{9,10}\)" scenario in the following. The rest of the NP benchmarks in Table 2 concern complex Wilson coefficients, to which we turn next.
### Complex Wilson-coefficient shifts
It has been known for some time that the constraint on the magnitude of Wilson-coefficient shifts is loosened if their phase is not aligned with the phase of the SM contribution [75].9 We quantify this possibility, i.e. consider scenarios where Wilson-coefficient shifts are complex, with generic phases. We do so by alternatively including or excluding CPV
\begin{table}
\begin{tabular}{c c c c} \hline \hline Scenario & Best-fit point & \(1\sigma\) Interval & \(\chi^{2}/\chi^{2,\rm SM}\) \\ \hline \((\delta C_{9}^{u(e,\mu)},\delta C_{10}^{u(e,\mu)})\in\mathbb{R}\) & \((-0.88,+0.30)\) & \(([-1.08,-0.56],[0.15,0.46])\) & \(5.5\) \\ \(\delta C_{LL}^{u(e,\mu)}/2\in\mathbb{C}\) & \(-0.70-1.36i\) & \([-1.00,-0.54]+i[-1.77,-0.54]\) & \(5.8\) \\ \(\delta C_{9}^{u(e,\mu)}\in\mathbb{C}\) & \(-1.08+0.10i\) & \([-1.31,-0.85]+i[-0.70,+0.85]\) & \(6.4\) \\ \(\delta C_{10}^{u(e,\mu)}\in\mathbb{C}\) & \(+0.68+1.40i\) & \([+0.38,+1.00]+i[+0.69,+1.92]\) & \(3.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reference scenarios for two-real or one-complex WC combinations.
observables in the \(b\to s\mu^{+}\mu^{-}\) sector, in order to study their possible role in favoring a particular sign for the imaginary part of the Wilson coefficients. Concretely, we include the e.m.-dipole Wilson coefficient
\[C_{7}^{bs}-C_{7}^{bs,\rm SM}\equiv\delta C_{7}^{bs}\equiv\delta C_{7}\,,\qquad C_{7}^{bs\{,\rm SM\}}\equiv C_{7}^{\{\rm SM\}}\,, \tag{2}\]
and explore four different scenarios, with a NP shift in one single Wilson coefficient or a combination, namely \(\delta C_{7}\), or \(\delta C_{9}^{u(e,\mu)}\), or \(\delta C_{10}^{u(e,\mu)}\), or the previously defined \(\delta C_{LL}^{u(e,\mu)}/2=\delta C_{9}^{u(e,\mu)}=-\delta C_{10}^{u(e,\mu)}\). In compliance with the \(R_{K^{(*)}}\) measurement, every scenario enforces LFU on both the real and imaginary parts of the Wilson coefficients. The results are presented in Fig. 3, and the \(1\sigma\) intervals for the scenarios that are most interesting for our purposes are also collected in Table 2.
We find that the current data are compatible with a sizeable (\(\simeq\) few \(10\%\times\) the size of the SM contribution, see footnote 5) imaginary component in any of the \(\delta C_{9}^{u(e,\mu)}\), \(\delta C_{10}^{u(e,\mu)}\) and \(\delta C_{LL}^{u(e,\mu)}\) scenarios in Table 2. Also, none of the scenarios hints at a non-zero imaginary part, in all cases compatible with zero below the \(2\sigma\) level. These conclusions, to be qualitatively expected, follow from the fact that CP-odd observables are still underconstraining, as already noted. The figure also shows that the only scenario where all considered subsets of data are in mutual agreement (i.e. all the colored subregions overlap somewhere in the plane) is the first scenario, where one fits to \({\rm Re}\,\delta C_{9}^{u(e,\mu)}\) vs. \({\rm Im}\,\delta C_{9}^{u(e,\mu)}\). This suggests, again, that \(\delta C_{9}^{u(e,\mu)}\) is a necessary ingredient, and \(\delta C_{10}^{(\mu)}=0\) a preferred requirement, to achieve mutual agreement among all data.
The first three of the scenarios collected in Table 2 perform comparably well, in particular \(\delta C_{9}^{u(e,\mu)}\) or \(\delta C_{LL}^{u(e,\mu)}\) represent the best-performing complex scenarios. Interestingly, the complex-\(\delta C_{LL}^{u(e,\mu)}\) case shows that a sizeable _real_ NP contribution--of \({\cal O}(20\%)\) the SM value--is still allowed by the current \(B_{s}\to\mu^{+}\mu^{-}\) measurement. This is because \({\cal B}(B_{s}\to\mu^{+}\mu^{-})\propto|C_{10}^{(\mu)}|^{2}\), leading to an approximately circular shape in the \({\rm Re}\ \delta C_{LL}^{u(e,\mu)}\) vs. \({\rm Im}\,\delta C_{LL}^{u(e,\mu)}\) plane. Also interestingly, \(B_{s}\to\mu^{+}\mu^{-}\) is likewise compatible with a sizeable imaginary part--that is entirely plausible, see footnote 9, and that existing data barely probe. As regards \(\delta C_{7}\), the strongest constraint comes from the wealth of \(b\to s\gamma\) data available, plus CPV observables in the \(b\to s\mu^{+}\mu^{-}\) sector. Taking also into account that \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at high \(q^{2}\) is mostly sensitive to \(C_{9,10}\) (see discussion in Sec. 3.2), we do not consider a \(\delta C_{7}\) scenario in the rest of this work.
As a final remark for Sec. 2 we emphasize again, as we did in the Introduction, that a complete calculation of non-local contributions entering the currently discrepant \(b\to s\mu^{+}\mu^{-}\) data may, _or may not_, show that the shifts collected in Table 2, in particular those involving the muonic \(C_{9}\), are actually due to SM long-distance dynamics. In these circumstances, it is meaningful to pursue observables that bear sensitivity to the very same Wilson-coefficient shifts one wants to probe, but are _not_ affected by the same long-distance physics. The \(B_{s}\to\mu^{+}\mu^{-}\gamma\) branching ratio at high \(q^{2}\) is one such observable.
In the following we will discuss the significance one may expect for a shift as large as the real-\(\delta C_{9,10}\) scenario (first entry of Table 2) or the complex-\(\delta C_{LL}\) one (second entry of the same table) through a high-\(q^{2}\) analysis of \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\), as a function of the data accumulated.
## 3 \(B_{s}\to\mu^{+}\mu^{-}\gamma\) Uncertainties and NP Reach
### Experimental uncertainties
In great synthesis, measuring \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at LHC using the partially reconstructed method [31] relies on a fit to the \(q^{2}\)-differential distribution in an appropriate high-\(q^{2}\) region, using as external constraints the known BRs of the purely leptonic modes \(B_{d,s}\to\mu^{+}\mu^{-}\) as well as the shapes and normalizations for certain well-defined backgrounds. It is difficult to assess the sensitivity of such procedure to \(B_{s}\to\mu^{+}\mu^{-}\gamma\) in the absence of a search optimized for \(B_{s}\to\mu^{+}\mu^{-}\gamma\), rather than for the leptonic modes, as is the case for existing analyses.
With this important caveat in mind, the aim of the present section is to assess to the best of our knowledge the LHC prospects of measuring the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) observable defined above. To this end, we need to make certain assumptions spelled out next. The first is that the efficiency of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) in the considered integrated high-\(q^{2}\) region is equal to \(B_{s}\to\mu^{+}\mu^{-}\)'s. This assumption relies on two opposing effects. On the one side, because \(B_{s}\to\mu^{+}\mu^{-}\gamma\) is not reconstructed as a peak but as a shoulder within a broad10 dimuon invariant mass window, the efficiency will have a certain \(q^{2}\) dependence. Typically the efficiency will decrease moving from the \(B_{s}^{0}\) mass to lower \(q^{2}\) values. In fact, lowering \(q^{2}\) means a lower transverse momentum in the laboratory frame thus lowering the reconstruction and triggering efficiencies. It also means more abundant backgrounds, from more channels, to be rejected with tighter requirements. On the other side, this effect could be mitigated by developing an analysis optimised for the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) decay, which Refs. [32, 33] are not. More precisely, a dedicated \(B_{s}\to\mu^{+}\mu^{-}\gamma\) analysis could mitigate the \(q^{2}\) dependence of the efficiency by deploying techniques, specific to partially-reconstructed decays, that were not necessary for the aims of Refs. [32, 33]. Explicitly, two dominant sources of background, in addition to the combinatorial one, are \(B\to h\mu\nu\) decays, with \(h\) being a hadron mis-identified as a muon, and \(B\to\pi^{0}\mu^{+}\mu^{-}\), with a \(\pi^{0}\) not reconstructed. Both decay sources need to be constrained in terms of the dimuon mass distribution as well as of the yield, in order not to limit the sensitivity to the signal. The improvements required are in the calibration of the trigger, and of the reconstruction and selection efficiencies as well as in particle (mis)-identification for muons (hadrons). These improvements are impossible to quantify reliably without a dedicated experimental study [77].
Footnote 10: As compared with the \(B_{s}\to\mu^{+}\mu^{-}\)\(q^{2}\) “window” namely the region where most of the \(B_{s}\to\mu^{+}\mu^{-}\) signal candidates are.
Clearly, since the background increases for smaller dimuon masses, the lower bound, \(q_{\rm min}^{2}\), of \(q^{2}\) integration, has to be chosen as a compromise between larger statistics and larger background. In addition below a certain \(q_{\rm min}^{2}\) value around \((4~{}{\rm GeV})^{2}\)\(c\bar{c}\) resonances start to play an important role, lessening experimental and theoretical sensitivities alike. Within a given \(q_{\rm min}^{2}\) choice, the actual sensitivity will rely on controlling the background within an associated error which is better than the expected signal yield.
The expected number of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) events in the analysis of Ref. [33] was \(N_{\rm exp}(B_{s}\to\mu^{+}\mu^{-}\gamma)=1.7\) with the full Run-2 dataset, for \(q_{\rm min}^{2}=4.9\) GeV\({}^{2}\) (table 7 of Ref. [33]) and a multivariate analysis BDT score \(>0.25\), meaning that the efficiency corresponding to the BDT requirement alone is 75%.11 This \(N_{\rm exp}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) figure may be compared with the _accuracy_ to which the leading background is controlled. From table 9 of Ref. [33], this accuracy is 5 events with the same BDT requirement. This comparison suggests that background calibration clearly has to improve, but it leaves hope, because the two numbers (1.7 events of signal on a background uncertainty of 5) are not far from each other even within an analysis far from optimized for our signal of interest. It is important to note that the latter argument considers the expected yields and their uncertainties integrated over the _entire mass window_, while a fit to the dimuon mass distribution gives a superior sensitivity provided the shapes of the background contributions are known.
Footnote 11: Note that, to obtain this result, the BDT is trained with \(B_{s}\to\mu^{+}\mu^{-}\) as signal, i.e. is _not_ optimized for \(B_{s}\to\mu^{+}\mu^{-}\gamma\) instead: from e.g. a BDT score of 1, meaning a maximal \(B_{s}\to\mu^{+}\mu^{-}\) purity, one cannot unambiguously infer the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) purity.
In the light of the above considerations, we will henceforth assume that all the backgrounds are under control, i.e. that their uncertainties will eventually fall safely below the signal yield. Under this "no-background" hypothesis the \(B_{s}\to\mu^{+}\mu^{-}\gamma\)-signal uncertainty is dominated by the sheer amount of data collected. We accordingly assess the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) sensitivity to the NP scenarios in Table 2 as a function of the data size. Finally, while what follows assumes the LHCb experiment as reference, the same techniques can be used also in other setups where we can expect copious amounts of \(B_{s}\) mesons to be produced, e.g. at ATLAS and CMS. The level of the combinatorial background will however depend on the interactions pile-up and the control of the misidentified background will rely on details of the particle identification of each experiment.
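As a back-of-the-envelope orientation (and emphatically not the fit to the dimuon mass distribution described above), one can scale the Run-2 expectation quoted earlier with integrated luminosity and form a naive Poisson significance for an NP-shifted branching ratio; all numbers below other than the 1.7 Run-2 events are placeholders, including the assumed Run-2 luminosity, the 20% BR shift, the 5% theory error, and the luminosity milestones.

```python
# Toy no-background estimate: expected yield scales linearly with integrated luminosity;
# significance = (N_NP - N_SM) / sqrt(N_SM + (rel. theory error * N_SM)^2).
import numpy as np

N_RUN2, L_RUN2 = 1.7, 6.0   # SM-like expected events and ~fb^-1 assumed for Run 2

def toy_significance(lumi_fb, br_ratio_np_over_sm=1.20, sigma_th_rel=0.05):
    n_sm = N_RUN2 * lumi_fb / L_RUN2
    n_np = n_sm * br_ratio_np_over_sm
    sigma = np.sqrt(n_sm + (sigma_th_rel * n_sm) ** 2)
    return (n_np - n_sm) / sigma

for lumi in (6, 23, 50, 300):   # fb^-1, indicative milestones only
    print(f"{lumi:4d} fb^-1 -> {toy_significance(lumi):.2f} sigma")
```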
### Theory uncertainties
A reappraisal of the theoretical uncertainty in \({\cal B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) for high \(q^{2}\) above narrow charmonium was presented in Ref. [78]. It was shown that the main theory uncertainty arises from the vector and axial form factors \(V_{\perp,\parallel}(q^{2})\) of the \(B_{s}\to\gamma\) transition; that tensor form factors \(T_{\perp,\parallel}(q^{2})\) play, in comparison with vector and axial ones, a subdominant role irrespective of the parameterization used; that broad charmonium plays an entirely negligible role for \(\sqrt{q^{2}}\gtrsim 4.2\) GeV. With the existing estimations of the vector and axial form factors--none of which is based exclusively on a first-principle calculation--the uncertainty in \({\cal B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) integrated in \(\sqrt{q^{2}}\in[4.2,5.0]\) GeV is currently on the order of 50%. To resolve \(\delta C_{9}^{(\mu)}/C_{9}^{(\mu),{\rm SM}}\simeq 15\%\), one has to control the vector and axial form factors multiplying this Wilson coefficient to an accuracy better than \(\delta C_{9}^{(\mu)}/C_{9}^{(\mu),{\rm SM}}\), because the BR has the _same_, dominantly quadratic dependence on _both_\(C_{9}^{(\mu)}\) and the vector and axial form factors. Controlling form factors with such accuracy is _already_ within reach of LQCD
calculations. Specifically, Ref. [79] calculates \(D_{s}\to\gamma\) vector and axial form factors with a quoted error around 10% or even below. An even higher precision has been achieved in the very recent Ref. [80], where some lattice data reach a relative error of a few percent. Besides, these form factors are computed _directly_ in the very same high-\(q^{2}\) region of interest for the indirect \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) measurement, i.e. no kinematic extrapolation is required.12 In short, a theory error as small as needed is well within reach, provided that the calculation in Refs. [79, 80] is extended to the \(B_{s}\) case. In summary, it is realistic to assume that mature LQCD calculations of \(B_{s}\to\gamma\) vector and axial form factors in the very kinematic region of interest to us will come with typical errors not larger than 5%, which is negligible in comparison with the nominal NP shift mentioned above and also with the experimental error to be expected in the foreseeable future.
Footnote 12: Given the decay \(P(p)\to\mu\mu(q)+\gamma(k)\), with \(P\) the decaying meson, and introducing the variable \(x_{\gamma}\equiv 2pk/m_{P}^{2}=1-q^{2}/m_{P}^{2}\), Ref. [79] calculates \(D_{s}\to\gamma\) FFs for \(x_{\gamma}\in[0.05,0.4]\). Our fiducial \(q^{2}\) range for \(B_{s}\to\mu^{+}\mu^{-}\gamma\), \(\sqrt{q^{2}}\in[4.2,5.0]\,\text{GeV}\), corresponds to \(x_{\gamma}\in[0.39,0.13]\).
For the purpose of the present study we thus mostly need realistic central values for the vector and axial \(B_{s}\to\gamma\) form factors, and we adopt the recent parametrization proposed in Ref. [78]. In that work, the contributions from tensor and axial-tensor form factors were found to be negligible within the still large uncertainty on the vector and axial-vector form factors obtained in the same work. However, in the present case the vector and axial form-factor uncertainty is assumed to be below 5%, as already discussed, and the choice of tensor form factors does have an impact, as can be seen in Fig. 4. For example, one may compare the parameterization in Ref. [81] with setting \(T_{\perp,\parallel}=0\), which means assuming a 100% uncertainty. The difference between the two choices induces a difference in the BR prediction of the _same order_ as the NP shift one wants to access.
In short, a first-principle evaluation of the tensor form factors will also be required. But importantly, a limited accuracy, of the order 20%, will be sufficient in this case, given the subdominant nature of these contributions. In fact, a 20% error on tensor form factors, which appear in contributions whose overall nominal size does not exceed about 25% of those induced by vector or axial form factors, is sufficient to control tensor contributions with an overall theory uncertainty of about 5%, which is already below the size of the expected NP shift and the assumed uncertainty on the (axial-)vector form factors. The final uncertainties considered for the vector and tensor form factors are shown in Fig. 5.
As concerns the short-distance side of the tensor contributions, we vary \(\delta C_{7}\) in the \(1\sigma\) interval identified from the complex fit of Fig. 3. We do not pursue a sensitivity study to possible NP contributions to \(C_{7}\), in particular because \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at high \(q^{2}\) is vastly more sensitive to \(\delta C_{9,10}^{(\mu)}\). Sensitivity to \(C_{7}\), whose SM value is much smaller than the \(C_{9,10}^{(\mu)}\) counterparts, requires being close to the _lower_ \(q^{2}\) endpoint. Also, sensitivity to \(C_{7}\) occurs through terms \(\propto\text{Re}(C_{7}\,C_{9,10}^{(\mu)*})\) and is thus only linear. Indeed, the complex NP shift to \(C_{7}\) shown in Fig. 3 is too small to be resolved.
### Outlook on \(B_{s}\to\mu^{+}\mu^{-}\gamma\) as a probe to NP
In this section, we put to work the considerations made in Secs. 3.1-3.2 and use them to infer the NP sensitivity of a measurement of \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) at high \(q^{2}\) with the partially reconstructed method [31]. For clarity, the main conclusions of Secs. 3.1-3.2 are that: _(i)_ we
Figure 4: (left) Differential branching fraction in \(q^{2}\) and (right) integrated branching ratio of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) in the high-\(q^{2}\) region as a function of the lower bound of integration \(\sqrt{q_{\rm min}^{2}}\), for various tensor form factor parameterizations and the NP scenarios under consideration. A 6% uncertainty is assumed for the (axial-)vector form factors, a 20% uncertainty is attached to the tensor form factors of Ref. [81], and \(C_{7}\) is varied in the \(1\sigma\) region authorized by the complex fit of Fig. 3.
Figure 5: Set of \(B_{s}\to\gamma\) transition form factors, with a 6% uncertainty assumed on the (axial-)vector form factors \(V_{\perp(\parallel)}(q^{2})\) with central values from Ref. [78], and 20% on the (axial-)tensor ones \(T_{\perp(\parallel)}(q^{2})\) with central values from Ref. [81].
may assume a theory uncertainty on \({\cal B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) around 6%, dominated by FFs. This uncertainty relies on \(V_{\perp,\parallel}\) in our \(q^{2}\) range of interest being determined with an accuracy of 5% or less, and \(T_{\perp,\parallel}\) with no more than a 20% accuracy; _(ii)_ the theory error is actually negligible with respect to the experimental error. The latter is difficult to infer with any confidence. Hereafter, we consider the case where it is dominated by sheer statistics, which may be a safe and not unrealistic assumption (see Sec. 3.1 for details).
As regards the assumed NP shifts we focus on two scenarios, referred to as "real \(\delta C_{9,10}\)" and "complex \(\delta C_{LL}\)", and corresponding to the first two entries of Table 2. In either of the two cases, the shift is assumed to be LFU,13 hence we drop the flavour index hereafter. Note that we do not consider other logically possible scenarios, to which the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) sensitivity is either too small--e.g. scenarios involving \(C_{7}\)--or is similar to the real-\(\delta C_{9,10}\) and complex-\(\delta C_{LL}\) scenarios we focus on.
Footnote 13: In principle, the minimal assumption is light-lepton universality, as highlighted by the \({}^{u(e,\mu)}\) index in Table 2. However, the case of full lepton universality, denoted as LFU without further qualifications, is not currently distinguishable from the former.
Using the best-fit values in Table 2, and the theoretical uncertainties discussed in Sec. 3.2 and summarized above, we extract the integrated BR for \(B_{s}\to\mu^{+}\mu^{-}\gamma\) in eq. (3). A qualification is in order about the chosen \(q^{2}\) range. The photon in \(B_{s}\to\mu^{+}\mu^{-}\gamma\) is to be understood as "Initial State Radiation". Then, as discussed in Refs. [31, 78], \({\cal B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) is well-defined for \(\sqrt{q^{2}}<5\) GeV, because above this threshold "Final State Radiation" (FSR), or bremsstrahlung from the muons, is no longer negligible and eventually becomes dominant. However, experimentally \(B_{s}\to\mu^{+}\mu^{-}\gamma\) and \(B_{s}\to\mu^{+}\mu^{-}\) are two components of the same fit; di-muon candidates in this fit are bremsstrahlung recovered, i.e. FSR is in practice subtracted by Monte Carlo [82]; finally, choosing \([4.2,5.0]\) GeV or \([4.2~{}{\rm GeV},m_{B_{s}^{0}}]\) leads to a difference in the prediction of barely 2%. Hence, following Ref. [78], the predictions in eq. (3) refer to \(\sqrt{q^{2}}\in[4.2~{}{\rm GeV},m_{B_{s}^{0}}]\).
\[{\rm SM} : (1.22\pm 0.12_{(V_{\perp},V_{\parallel})}\pm 0.06_{(T_{\perp},T_{ \parallel})}\pm 0.04_{\rm other})\times 10^{-10}~{}, \tag{3}\] \[{\rm real}~{}\delta C_{9,10} : (0.90\pm 0.09_{(V_{\perp},V_{\parallel})}\pm 0.05_{(T_{\perp},T_{ \parallel})}\pm 0.03_{\rm other})\times 10^{-10}~{},\] \[{\rm complex}~{}\delta C_{LL} : (0.99\pm 0.10_{(V_{\perp},V_{\parallel})}\pm 0.04_{(T_{\perp},T_{ \parallel})}\pm 0.05_{\rm other})\times 10^{-10}~{}.\]
We emphasize that the quoted uncertainty is theoretical only, and takes into account all other known sources, including \(V_{cb}\) and the broad-charmonium resonances14 and also marginalizing over \(\delta C_{7}\) in the NP scenarios (lines 2 and 3 of eq. (3)). The central values in the second and third lines represent 20% shifts compared to the SM expectation (first line). The latter matches the estimate from Ref. [78], where central values for tensor FFs are taken from Ref. [81].
Footnote 14: We refer the reader to Ref. [78] for details about this aspect.
The sensitivity of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) in the high-\(q^{2}\) region to the two NP scenarios is displayed in Fig. 6. The leftmost panel shows that \({\cal B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) provides stronger constraints on the real part of \(\delta C_{LL}\), as expected from a CP-even observable. Using this NP benchmark
(i.e. a \(\delta C_{LL}\) leading to the last of eqs. (3)) as the central-value prediction for the integrated branching ratio, one can compute the relative error on the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) measurement. The pull of this observable to the SM is shown in Fig. 7. We note that, for each luminosity value in the figure, the range in the pull corresponds to the \(1\sigma\) range in the Wilson-coefficient shift. In turn, the sensitivity of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) to the real-\(\delta C_{9,10}\) scenario is shown in the rightmost panel of Fig. 6. We see that sensitivity to this scenario is enhanced compared to the complex-\(\delta C_{LL}\) one. Even if the presence of NP _lowers_ the integrated BR--and thus the event yield--w.r.t. the complex-\(\delta C_{LL}\) case, we find that the pull to the SM in the real-\(\delta C_{9,10}\) scenario is larger than the pull for the complex-\(\delta C_{LL}\) one as a function of the acquired data and reaches the \(2\sigma\) level at the border of the \(1\sigma\) region for the Wilson-coefficient shift. We finally note that, although other scenarios in Table 2 have a higher \(\chi^{2}/\chi^{2,\rm SM}\) value, the BR for \(B_{s}\to\mu^{+}\mu^{-}\gamma\) has stronger constraining power in scenarios that involve the real parts of \(C_{9}\)_and_\(C_{10}\), if possible independently. Scenarios featuring imaginary shifts to Wilson coefficients perform better over the SM precisely because constraints from BRs are weaker. For such scenarios, one would have to resort to CP-sensitive observables, e.g. \(A_{\Delta\Gamma}\)[76] in the case of our decay.
## 4 Conclusions
Purpose of this paper is a quantitative study of the potential of \(B_{s}\to\mu^{+}\mu^{-}\gamma\), measured at high \(q^{2}\) as a partially reconstructed decay [31], as a probe of the origin of the discrepancies in semi-leptonic \(b\to s\) and \(b\to c\) decays.
Figure 7: Pull to the SM (top) in the complex \(\delta C_{LL}\) scenario and (bottom) in the real \(\delta C_{9,10}\) scenario. The colored areas on the leftmost panels represent the \(1\sigma\) region authorized by the respective NP scenarios. The rightmost panels represent the pull for 300 fb\({}^{-1}\) of collected data.
As an initial step towards this end, we reassess these discrepancies, first in the weak effective theory, then in the SMEFT. Our global fit displays quantitative consistency across \(b\to s\) and \(b\to c\) semi-leptonic data--even after the \(R_{K^{(*)}}\) and \(B_{s}\to\mu^{+}\mu^{-}\) updates. Specifically, \(B_{s}\to\mu^{+}\mu^{-}\) makes the _muonic_\(\delta C_{9}=-\delta C_{10}\) direction less appealing than it used to be before the updates, because it prefers \(C_{10}\) to be SM-like within errors. Instead, data provide circumstantial support to a _tauonic_\(\delta C_{9}=-\delta C_{10}\equiv C_{LL}/2\) shift generated at a high scale, and leaving as imprints effects in \(R(D^{(*)})\) on the one side, _and_ a lepton-universal \(\delta C_{9}\) on the other side. SMEFT allows to make this connection between \(b\to s\) and \(b\to c\) effects quantitative. We find that the tauonic \(C_{LL}\) required by \(R(D^{(*)})\) implies a universal \(\delta C_{9}\) of precisely the size required, _separately_, by the anomalous \(b\to s\mu^{+}\mu^{-}\) BR data, and by the anomalous angular \(b\to s\mu^{+}\mu^{-}\) analyses. Needless to say, this conclusion is to be taken with great caution, because both imprints are far from established: first, angular, and especially BR data demand updates; second, the significance of the \(R(D^{(*)})\) anomalies is in the eye of the global-fit beholder and ranges from \(\sim 2\sigma\)[72, 73, 74] to higher significances [83], depending on the analysis strategy and/or on the theoretical inputs used.
The above-mentioned global fits allow us to identify reference scenarios, that we use as benchmarks for our stated aim of exploring the potential of \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at high \(q^{2}\) as a probe of the flavour anomalies. For this purpose, we first discuss the outlook on the total error of this observable, from both theory and experimental standpoints. Importantly, the theory accuracy of \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-}\gamma)\) is not limited by the long-distance effects that inherently hinder predictions for semi-leptonic \(b\to s\) modes at low-\(q^{2}\), for example effects due to \(B\to D\bar{D}^{*}\) rescattering. In this respect, \(B_{s}\to\mu^{+}\mu^{-}\gamma\) offers a neat strategy to probe the very same short-distance physics possibly responsible for the anomalies, but in a _different_ kinematic region. The \(B_{s}\to\mu^{+}\mu^{-}\gamma\) theory error at high \(q^{2}\) is dominated by the form-factor component. While still large, this component is scalable, because for one thing high \(q^{2}\) is the preferred kinematic region for lattice-QCD calculations.
We find that, over the long haul, the total error for \(B_{s}\to\mu^{+}\mu^{-}\gamma\) at high \(q^{2}\) is dominated by the experimental component. Absent an analysis optimized for the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) search, we estimate this error from the sheer statistical component. We then infer the \(B_{s}\to\mu^{+}\mu^{-}\gamma\) sensitivity as the distance of the SM prediction vs. the prediction within the aforementioned NP benchmarks, in units of the total error determined as described. In the example of the real-\(\delta C_{9,10}\) scenario, we find that the pull as a function of the acquired data reaches the \(2\sigma\) level at the border of the \(1\sigma\) region for the Wilson-coefficient shift.
In case such sensitivity may look underwhelming, we make the following final remarks. First, the above sensitivity is to be compared with other _single_-observable sensitivities that one can expect from e.g. BRs and angular analyses. For observables at low \(q^{2}\), sensitivities that are quoted in the literature tacitly rely on a breakthrough in the understanding of long-distance effects. As already emphasized, this issue is absent at _high_\(q^{2}\) in the case of \(B_{s}\to\mu^{+}\mu^{-}\gamma\). Yet, we are not aware of other detailed studies of high-\(q^{2}\) exclusive observables to compare our study against. An interesting (semi-)inclusive example is the recent Ref. [84].
Second, in all likelihood any NP in semi-leptonic \(B\) decays will first be established _collectively_, i.e. through many modes showing a coherent trend--and a persistent one with increasing statistics. Only later will such trend be consolidated by single observables getting to the canonical \(5\sigma\) departures required. If this is the case, then it is of the highest importance to find as many measurables as possible that allow to confirm the trend--e.g. experimental branching ratios below the theory prediction--in observables devoid of the long-distance issues that at present plague low \(q^{2}\). This study goes in this direction.
## Acknowledgments
We warmly thank Martino Borsato for useful feedback, and Alexandre Carvunis for help on various aspects of flavio. We are grateful to Francesco Dettori for several discussions and for input related to Sec. 3.1. This work is supported by ANR under contract n. 202650 (ANR-19-CE31-0016, GammaRare).
## Appendix: Global fits in SMEFT
In Fig. 8, we collect the counterparts of the top-left and top-right panels of Fig. 2, but in the plane of the underlying SMEFT Wilson coefficients. For the sake of self-consistency, we do not use here the abbreviations introduced in Sec. 2.2 and instead simply stick to the Warsaw-basis notation [69].
Figure 8: Counterparts of the top-left and top-right panels of Fig. 2, respectively, in the plane of the underlying SMEFT Wilson coefficients. |
2309.17107 | Three-pion scattering: From the chiral Lagrangian to the lattice | In recent years, detailed studies of three-pion systems have become possible
in lattice QCD. This has in turn led to interest in 3-to-3 scattering of pions
in the chiral perturbation theory framework. In addition to being an
interesting study of multi-meson dynamics in its own right, it provides a
valuable handle on finite-volume effects and the pion mass dependence, thus
complementing the lattice results. I present our derivation of the
next-to-leading order amplitude for this process, as well as its conversion
into the three-particle K-matrix, which enables direct comparison to the
lattice. Our results significantly improve the agreement between theory and
lattice, which was poor when only leading-order effects were taken into
account. | Jorge Baeza-Ballesteros, Johan Bijnens, Tomáš Husek, Fernando Romero-López, Stephen R. Sharpe, Mattias Sjö | 2023-09-29T10:07:10Z | http://arxiv.org/abs/2309.17107v1 | # Three-pion scattering: From the chiral Lagrangian to the lattice1
###### Abstract
In recent years, detailed studies of three-pion systems have become possible in lattice QCD. This has in turn led to interest in 3-to-3 scattering of pions in the chiral perturbation theory framework. In addition to being an interesting study of multi-meson dynamics in its own right, it provides a valuable handle on finite-volume effects and the pion mass dependence, thus complementing the lattice results. I present our derivation of the next-to-leading order amplitude for this process, as well as its conversion into the three-particle K-matrix, which enables direct comparison to the lattice. Our results significantly improve the agreement between theory and lattice, which was poor when only leading-order effects were taken into account.
keywords: Chiral Lagrangian, Hadronic Spectroscopy, Structure and Interactions, Lattice QCD +
Footnote †: journal: Nuclear Physics Letters B
## 1 Introduction
Lattice QCD and chiral perturbation theory (ChPT) are two widely used tools for low-energy QCD. While the lattice provides a general first-principles numerical approach, ChPT allows perturbative calculation of many processes and can be used to control certain systematics of the lattice, such as finite-volume effects and pion mass dependence. Thus, recent advances in lattice QCD (see references in Ref. [1], particularly Refs. [2; 3]) have sparked interest in the study of 3-to-3 scattering using ChPT, a process that formerly had only been studied at leading order [4; 5]; the higher-order tree-level counterterms were also recently computed [6; 7].
These proceedings are partly based on Ref. [8], where we performed the first one-loop determination of the 3-to-3 pion scattering amplitude, and its follow-up Ref. [9], where we generalized that result to cases including more than 2 quark flavors; however, results for which \(m_{s}\neq m_{u,d}\) are still unavailable. It is also based on Ref. [1], in which this amplitude is converted into the 3-particle K-matrix, a scheme-dependent object that allows the finite-volume energy spectrum to be determined. The K-matrix can also be calculated on the lattice (as of this writing, Ref. [10] has the most precise values), allowing for comparison to ChPT. Previous results using leading-order ChPT [11] disagree strongly with the lattice results.
## 2 The \(3\pi\to 3\pi\) amplitude in ChPT
ChPT [12; 13] is the effective field theory that arises from the breaking of chiral symmetry down to its diagonal subgroup, giving the breaking pattern \(\mathrm{SU}(n)\times\mathrm{SU}(n)/\,\mathrm{SU}(n)\) for \(n\) quark flavors, as used in Ref. [9]. For \(n=2\), an equivalent formulation is O(4)/ O(3), which is used in Ref. [8].
In the O(4)/ O(3) formulation, the Lagrangian is
\[\begin{split}\mathcal{L}&=\tfrac{F^{2}}{2}\partial_{\mu} \Phi^{\mathsf{T}}\partial^{\mu}\Phi+F^{2}\chi^{\mathsf{T}}\Phi\\ &+\ell_{1}(\partial_{\mu}\Phi^{\mathsf{T}}\partial^{\mu}\Phi)( \partial_{\nu}\Phi^{\mathsf{T}}\partial^{\nu}\Phi)\\ &+\ell_{2}(\partial_{\mu}\Phi^{\mathsf{T}}\partial_{\nu}\Phi)( \partial^{\mu}\Phi^{\mathsf{T}}\partial^{\nu}\Phi)\\ &+\ell_{3}(\chi^{\mathsf{T}}\Phi)^{2}+\ell_{4}\partial_{\mu}\chi ^{\mathsf{T}}\partial^{\mu}\Phi+\ldots\,,\end{split} \tag{1}\]
where the first line is LO, and orders above NLO have been omitted. Here, \(\ell_{i}\) are (bare) coupling constants, \(\chi^{\mathsf{T}}=(M^{2},\mathbf{0})\) with \(M\) the (bare) pion mass, and \(\Phi\) is a 4-component vector that can be parametrized in terms of the pion fields in multiple ways; the one used in Ref. [1] was introduced by Weinberg [14] and is
\[\Phi=\frac{1}{1+\mathbf{\phi}^{\mathsf{T}}\mathbf{\phi}/4F^{2}}\left(1-\mathbf{\phi}^{\mathsf{T}}\mathbf{\phi}/4F^{2},\tfrac{1}{F}\mathbf{\phi}^{\mathsf{T}}\right)^{\mathsf{T}}\,, \tag{2}\]
where \(\mathbf{\phi}^{\mathsf{T}}=(\phi_{1},\phi_{2},\pi^{0})\), \(\pi^{\pm}=\frac{1}{\sqrt{2}}(\phi_{1}\mp i\phi_{2})\), and \(F\) is the (bare) pion decay constant. See Ref. [8] for a more extensive description of the parametrizations, and Ref. [9] for a novel derivation of the most general parametrization for \(n\geq 2\).
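As a quick cross-check of eq. (2) (ours, not taken from Refs. [8, 9]), the sketch below verifies symbolically that the Weinberg parametrization satisfies the O(4) constraint \(\Phi^{\mathsf{T}}\Phi=1\); the \(1/F\) normalization of the pion triplet is read off eq. (2).

```python
import sympy as sp

phi1, phi2, pi0 = sp.symbols("phi1 phi2 pi0", real=True)   # pion fields
F = sp.symbols("F", positive=True)                          # pion decay constant

phi = sp.Matrix([phi1, phi2, pi0])
s = (phi.T * phi)[0] / (4 * F**2)                           # phi^T phi / (4 F^2)

# Weinberg parametrization of the O(4) vector, eq. (2)
Phi = sp.Matrix([1 - s, phi1 / F, phi2 / F, pi0 / F]) / (1 + s)

# The chiral constraint Phi^T Phi = 1 must hold identically
print(sp.simplify((Phi.T * Phi)[0] - 1))   # prints 0
```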
The couplings \(\ell_{i}\) are renormalized to \(\ell_{i}^{\mathsf{r}}\) using
\[\ell_{i}=-\kappa\frac{\gamma_{i}}{2}\left[\tfrac{1}{\epsilon}-\log\tfrac{\mu^ {2}}{4\pi}+1\right]+\ell_{i}^{\mathsf{r}} \tag{3}\]
in \(4-2\epsilon\) dimensions at \(\mu=770\ \mathrm{MeV}\approx M_{\rho}\), with
\[\gamma_{1}=\tfrac{1}{3}\,,\quad\gamma_{2}=\tfrac{2}{3}\,,\quad\gamma_{3}= \tfrac{1}{3}\,,\quad\gamma_{4}=2 \tag{4}\]
and \(\kappa\equiv 1/(16\pi^{2})\). \(M,F\) are renormalized to \(M_{\pi},F_{\pi}\).
Figure 1 shows the Feynman diagrams relevant for the six-point (i.e., 3-to-3) amplitude, \(\mathcal{M}_{3}\). The amplitude partly factorizes over the propagator pole:
\[\mathcal{M}_{3}=-\sum\frac{\mathcal{M}_{2}^{(L)}\mathcal{M}_{2}^{(R)}}{b^{2}- M_{\pi}^{2}}+\mathcal{M}_{3}^{\text{(non-pole)}}\,, \tag{5}\]
where \(\mathcal{M}_{2}^{(L,R)}\) are 4-point amplitudes with one leg off-shell, corresponding to the left and right vertices of a diagram like fig. 1(b); \(\pm b\) is the momentum of those off-shell legs, and of the propagator that joins them; the sum is over the ways of distributing the 6 on-shell external legs between \(\mathcal{M}_{2}^{(L,R)}\); and \(\mathcal{M}_{3}^{\text{(non-pole)}}\) is whatever remains of the full amplitude. Such a pole is present in figs. 1(b), 1(d), 1(e), 1(h) and 1(j), but the correspondence is neither exact nor unique: The contribution of each diagram depends on the parametrization, and the structure of eq. (5) depends on the convention used for off-shell amplitudes.
Most of the non-factorizing part of the calculation follows as a simple extension of the 4-point amplitude derived in Ref. [15], since figs. 1(a), 1(c) and 1(i) are essentially 4-point diagrams with extra legs on some vertices. Only the triangle loop diagram, fig. 1(k), presents a problem, since conventional Passarino-Veltman reduction results in extremely long expressions. We found it necessary to devise a redundant basis of more symmetric triangle loop functions, labelled \(C\), \(C_{11}\), \(C_{21}\) and \(C_{3}\), which are listed in appendix A of Ref. [8]. In terms of these, \(\mathcal{M}_{3}^{\text{(non-pole)}}\) is listed in appendix B of Ref. [8] or, in a different and more general form, appendix D of Ref. [9].
## 3 The 3\(\pi\) K-matrix
Following the formalism of Ref. [16], the K-matrix, \(\mathcal{K}_{\text{df,3}}\), determines the finite-volume energy spectrum \(\{E_{n}\}\) of three pions in a box of size \(L\) through the quantization condition
\[\det\left[F_{3}^{-1}(E,\mathbf{P},L)+\mathcal{K}_{\text{df,3}}(E^{*})\right]=0 \quad\text{at }E=E_{n}\,. \tag{6}\]
Here, \((E,\mathbf{P})\equiv P\) is the total 4-momentum, and \(E^{*}\equiv\sqrt{P^{2}}\) is the corresponding center-of-momentum energy. \(F_{3}\) is described in Ref. [16]. Equation (6) generalizes Lüscher's 2-particle quantization condition [17; 18].
In this formalism, \(\mathcal{K}_{\text{df,3}}\) is Lorentz-invariant and constructed entirely from on-shell quantities; it does, however, contain a scheme-dependent cutoff function. The
Figure 1: LO (top row) and NLO Feynman topologies for the six-point amplitude. Black squares indicate NLO vertices; remaining vertices are LO. The number of distinct diagrams obtainable through crossing from each topology is indicated below it.
subscript "df" indicates that it is divergence-free: Unlike \(\mathcal{M}_{3}\), it has neither poles nor cuts. In general, the relation between \(\mathcal{K}_{\text{df},3}\) and \(\mathcal{M}_{3}\) is a complicated integral equation, but at NLO in the chiral expansion, it reduces to an algebraic subtraction. Importantly, \(\mathcal{K}_{\text{df},3}\) is purely real, so the subtraction of cuts can be wholly circumvented by dropping the imaginary parts. We only give a qualitative description of the subtraction here; the derivation and detailed form can be found in Ref. [1].
Although \(\mathcal{K}_{\text{df},3}\) is a 3-particle quantity, a large part of it is determined by 2-particle processes. In light of this, we subdivide the three initial (or final) particles into a scattering pair and a non-scattering spectator; see fig. 2(a). The pair kinematics are decomposed into partial waves, allowing the two-particle amplitude to be expressed as
\[\mathcal{M}_{2}(\mathbf{p})_{\ell^{\prime}m^{\prime};\ell m}\,, \tag{7}\]
where \(\ell m\) (\(\ell^{\prime}m^{\prime}\)) are the initial-(final-)state partial wave indices, and \(\mathbf{p}\) is the spectator momentum; everything is on-shell, so it is sufficient to use 3-momenta. Supplemented with the total momentum \(P\) (left implicit), this completely describes the kinematics.
We adopt the same description of 3-particle processes, and retain the notion of a spectator even though it is involved in the scattering; see fig. 2(b). (The particle exchange symmetries of the initial and final states can, in fact, be rephrased in terms of the freedom to choose a spectator.) With \(\mathbf{p}\) (\(\mathbf{p}^{\prime}\)) as the initial-(final-)state spectator momentum, the 3-particle amplitude is
\[\mathcal{M}_{3}(\mathbf{p}^{\prime},\mathbf{p})_{\ell^{\prime}m^{\prime};\ell m}\,. \tag{8}\]
\(\mathcal{K}_{\text{df},3}\) is expressed in the same form.
The last ingredient in \(\mathcal{K}_{\text{df},3}\) is \(G^{\infty}\), which is diagrammatically described in fig. 2(c). It serves to swap spectators between subsequent processes, and provides the mechanism for cancelling divergences. Its general form is
\[G^{\infty}(\mathbf{p}^{\prime},\mathbf{p})\sim\frac{H(x_{p^{\prime}})H(x_{p})}{b_{p^{ \prime}p}^{2}-M_{\pi}^{2}+i\epsilon}\,, \tag{9}\]
where \(b_{p^{\prime}p}\equiv P-p^{\prime}-p\), and \(H(x)\) is a cutoff function described below. (We have omitted barrier factors that ensure smoothness and correct partial-wave behavior.) Note how \(G^{\infty}\) is similar in form to a propagator carrying momentum \(b_{p^{\prime}p}\); at the pole of the propagator, they match exactly and cancel. However, \(G^{\infty}\) only connects on-shell quantities, and differs significantly from a propagator away from the pole.
The cutoff function \(H(x)\), with \(x_{p}\equiv(P-p)^{2}/(4M_{\pi}^{2})\), may be any smooth function such that \(H(x)=0\) for \(x<0\) and \(H(x)=1\) for \(x>1\); necessarily, such a function will be non-analytic. It ensures that subtraction terms containing \(G^{\infty}\) vanish far from the divergences they cancel, so that the subtraction does not introduce new UV-behavior on top of the already UV-finite amplitude. The choice of \(H\) constitutes the scheme-dependence of \(\mathcal{K}_{\text{df},3}\); the standard choice is
\[H(x)=\exp\Big{[}-\tfrac{1}{x}\exp\Big{(}-\tfrac{1}{1-x}\Big{)}\Big{]}\,,\quad 0 <x<1\,. \tag{10}\]
Other choices and their effects on \(\mathcal{K}_{\text{df},3}\) are studied in appendix A of Ref. [1].
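For illustration (this snippet is ours, not from Refs. [1, 16]), the standard cutoff of eq. (10) can be implemented and tabulated as follows; it rises smoothly from 0 to 1, with all derivatives vanishing at the endpoints.

```python
import math

def H(x):
    """Standard cutoff of eq. (10): 0 for x <= 0, 1 for x >= 1, smooth in between."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return math.exp(-(1.0 / x) * math.exp(-1.0 / (1.0 - x)))

for x in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"H({x:4.2f}) = {H(x):.6f}")
```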
## 4 The calculation of \(\mathcal{K}_{\text{df},3}^{\text{NLO}}\)
We work in the maximum-isospin channel (e.g., the process \(3\pi^{+}\to 3\pi^{+}\)), since it is structurally the simplest and the only one for which lattice data are currently available. We also work in the threshold expansion, where the kinematics are expressed in terms of
\[\begin{split}\Delta\equiv\frac{P^{2}-9M_{\pi}^{2}}{9M_{\pi}^{2} }\,,\qquad\Delta_{i}^{(\prime)}\equiv\frac{\left(P-p_{i}^{(\prime)}\right)^{2 }-4M_{\pi}^{2}}{9M_{\pi}^{2}}\,,\\ \tilde{t}_{ij}\equiv\frac{(p_{i}^{\prime}-p_{j})^{2}}{9M_{\pi}^{ 2}}\,,\end{split} \tag{11}\]
which all vanish as \(\mathcal{O}(\Delta)\) at the 3-pion threshold. The maximum-isospin channel expands as [19]
\[M_{\pi}^{2}\mathcal{K}_{\text{df},3}=\mathcal{K}_{0}+\mathcal{K}_{1}\Delta+ \mathcal{K}_{2}\Delta^{2}+\mathcal{K}_{\text{A}}\Delta_{\text{A}}+\mathcal{K}_ {\text{B}}\Delta_{\text{B}}+\mathcal{O}(\Delta^{3})\,, \tag{12}\]
where
\[\Delta_{\text{A}}\equiv\sum_{i}\left(\Delta_{i}^{2}+\Delta_{i}^{\prime 2} \right)-\Delta^{2}\,,\qquad\Delta_{\text{B}}\equiv\sum_{i,j}\tilde{t}_{ij}^{2} -\Delta^{2}\,. \tag{13}\]
It thus remains to compute the five coefficients \(\mathcal{K}_{X}\) with \(X=0,1,2,\text{A},\text{B}\), of which only the last two are sensitive to the angular distribution of the particles.
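For orientation, the following sketch (an illustration of eqs. (12)-(13), not code from Ref. [1]) shows how \(M_{\pi}^{2}\mathcal{K}_{\text{df},3}\) is assembled once the five coefficients and the kinematic invariants are known.

```python
def kdf3_threshold_expansion(K, Delta, Delta_i, Delta_i_prime, t_tilde):
    """Assemble M_pi^2 * K_df,3 from the expansion of eqs. (12)-(13).

    K:             dict with the five coefficients 'K0', 'K1', 'K2', 'KA', 'KB'.
    Delta:         (P^2 - 9 M_pi^2) / (9 M_pi^2).
    Delta_i:       the three initial-state invariants Delta_i.
    Delta_i_prime: the three final-state invariants Delta_i'.
    t_tilde:       3x3 nested list of the invariants t~_ij.
    Terms of O(Delta^3) are dropped.
    """
    delta_A = sum(d**2 for d in Delta_i) + sum(d**2 for d in Delta_i_prime) - Delta**2
    delta_B = sum(t**2 for row in t_tilde for t in row) - Delta**2
    return (K["K0"] + K["K1"] * Delta + K["K2"] * Delta**2
            + K["KA"] * delta_A + K["KB"] * delta_B)
```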
Figure 2: Schematic representation of the components of \(\mathcal{K}_{\text{df},3}\). Bold lines represent spectators, and thinner lines represent interacting pairs.
At LO, the only divergence is the OPE (one-particle exchange) pole, exemplified by fig. 3(a), which is removed in \(\mathcal{K}_{\mathrm{df},3}\) by subtracting the term schematically given in fig. 3(b). That is to say,
\[\mathcal{K}_{\mathrm{df},3}^{\mathrm{LO}}(\mathbf{p}^{\prime},\mathbf{p})=\mathcal{M}_{3 }^{\mathrm{LO}}-\mathcal{M}_{2}^{\mathrm{LO}}(\mathbf{p}^{\prime})G^{\infty}(\mathbf{p }^{\prime},\mathbf{p})\mathcal{M}_{2}^{\mathrm{LO}}(\mathbf{p})\,, \tag{14}\]
leaving the indices implicit. \(\mathcal{K}_{\mathrm{df},3}^{\mathrm{LO}}\) is not scheme-dependent, since the cutoff functions are identically 1 in the OPE subtraction. The LO calculation was done already in Ref. [11] and is reproduced in Ref. [1].
At NLO, the OPE subtraction must be augmented by promoting either 2-particle amplitude in eq. (14) to NLO. This introduces many more terms and partial waves, and requires great care in performing the calculations. The subtraction is matched to the factorizing part of \(\mathcal{M}_{3}\), given in eq. (5), using the off-shell convention of Ref. [8]. See sec. 4.4 of Ref. [1] for more details.
NLO also introduces a cut due to the triangle diagram, fig. 1(k) or 4(a), which is subtracted by what we call the "bull's head" subtraction, fig. 4(b), equal to
\[-\int_{r}\mathcal{M}_{2}^{\mathrm{LO}}(\mathbf{p}^{\prime})G^{\infty}(\mathbf{p}^{ \prime},\mathbf{r})\mathcal{M}_{2}^{\mathrm{LO}}(\mathbf{r})G^{\infty}(\mathbf{r},\mathbf{p}) \mathcal{M}_{2}^{\mathrm{LO}}(\mathbf{p})\,, \tag{15}\]
where the integral is over all on-shell momenta \(\mathbf{r}\). Since we are only considering the finite real part, this can be treated separately from the triangle loop.
The cutoff functions involving the on-shell loop momentum \(\mathbf{r}\) are not identically 1, and this non-analyticity makes the integral rather challenging. The most fruitful approach is to threshold-expand the integrand before integration, after which the angular part of the integral can be easily performed. This involves expanding \(H(x_{r})\) in the vicinity of its essential singularities at \(x_{r}=0\) and 1, but this can be shown to be valid, since the derivatives of \(H(x_{r})\) vanish exponentially in this region.
These operations leave integrals of the type
\[H_{m,n}\equiv\frac{1}{\pi^{2}}\int_{0}^{1/\sqrt{3}}\mathrm{d}z\ \frac{\sqrt{1+z^{2}}}{z^{m}}\frac{\mathrm{d}^{n}}{\mathrm{d}x_{r}^{n}}[H^{2}(x _{r})]\,, \tag{16}\]
where \(x_{r}=1-3z^{2}\). These possess endpoint singularities, which are intractable to the usual principal-value approach but can be regularized with Hadamard finite-part integration; for sufficiently smooth integrands, it works also when these singularities are essential [20].
The regularized integrals can be remarkably closely approximated by setting \(H=1\) in eq. (16), giving easy-to-evaluate integrals plus small remainders to compute numerically. These remainders encapsulate all the scheme-dependence in \(\mathcal{K}_{\mathrm{df},3}\). See sec. 4.3 of Ref. [1] for more details, including several complementary methods.
The last part of the calculation is to threshold-expand \(\mathcal{M}_{3}^{\mathrm{non-pole}}\). Its real part can be extracted using Cauchy principal values, as described in sec. 4.2 of Ref. [1].
\(\mathcal{K}_{\mathrm{df},3}\) is the sum of the threshold-expanded amplitude, the OPE subtraction, and, at NLO, the bull's head subtraction. We have performed several individual derivations and cross-checks of each part, using Wolfram Mathematica or FORM [21] for analytic calculations, and Mathematica or C++ with CHIRON [22], _LoopTools_[23] and GSL for the numerics.
## 5 Results and comparison to the lattice
The LO contributions to \(\mathcal{K}_{\mathrm{df},3}\) are
\[\mathcal{K}_{0}\supset 18\left(\frac{M_{\pi}}{F_{\pi}}\right)^{\!4}\,,\qquad \mathcal{K}_{1}\supset 27\left(\frac{M_{\pi}}{F_{\pi}}\right)^{\!4}. \tag{17}\]
The quadratic-order terms in the threshold expansion vanish at LO. The NLO contributions are [suppressing an overall factor of \((M_{\pi}/F_{\pi})^{6}\)]
\[\mathcal{K}_{0}\supset\left[-3\kappa(35+12\log 3)-\mathcal{D}_{0}+111L+\ell_{(0)}^{\rm r}\right], \tag{18}\] \[\mathcal{K}_{1}\supset\left[-\frac{\kappa}{20}(1999+1920\log 3)-\mathcal{D}_{1}+384L+\ell_{(1)}^{\rm r}\right],\] \[\mathcal{K}_{2}\supset\left[\frac{207\kappa}{1400}(2923-420\log 3)-\mathcal{D}_{2}+360L+\ell_{(2)}^{\rm r}\right],\] \[\mathcal{K}_{\mathrm{A}}\supset\left[\frac{9\kappa}{560}(21809-1050\log 3)-\mathcal{D}_{\mathrm{A}}-9L+\ell_{(\rm A)}^{\rm r}\right],\] \[\mathcal{K}_{\mathrm{B}}\supset\left[\frac{27\kappa}{1400}(6698-245\log 3)-\mathcal{D}_{\mathrm{B}}+54L+\ell_{(\rm B)}^{\rm r}\right],\]
Figure 4: The “bull’s head” triangle loop and its subtraction, drawn schematically as in fig. 2. The version of (a) with its “horns” crossed, so that each vertex connects to one initial- and one final-state particle, is finite and lacks a corresponding subtraction term.
Figure 3: The OPE (one-particle exchange) pole and its subtraction, drawn schematically as in fig. 2. The \(s\)-channel version of (a) is not present at maximum isospin.
where \(L\equiv\kappa\log(M_{\pi}^{2}/\mu^{2})\) with \(\mu\) and \(\kappa\) described below eq. (3), and the \(\ell_{(X)}^{\rm r}\) contain the couplings, namely
\[\ell_{(0)}^{\rm r}=-288\ell_{1}^{\rm r}-432\ell_{2}^{\rm r}-36\ell_ {3}^{\rm r}+72\ell_{4}^{\rm r}\,,\] \[\ell_{(1)}^{\rm r}=-612\ell_{1}^{\rm r}-1170\ell_{2}^{\rm r}+108 \ell_{4}^{\rm r}\,,\quad\ell_{(2)}^{\rm r}=-432\ell_{1}^{\rm r}-864\ell_{2}^{ \rm r}\,,\] \[\ell_{(\rm A)}^{\rm r}=27\ell_{1}^{\rm r}+\tfrac{27}{2}\ell_{2}^{ \rm r}\,,\quad\ell_{(\rm B)}^{\rm r}=-162\ell_{1}^{\rm r}-81\ell_{2}^{\rm r}\,, \tag{19}\]
where we use the phenomenological values [24; 25]
\[\tilde{\ell}_{1}=-0.4(6)\,,\qquad\tilde{\ell}_{2}=4.3(1)\,, \tag{20}\] \[\tilde{\ell}_{3}=3.07(64)\,,\qquad\tilde{\ell}_{4}=4.02(45)\]
throughout. Lastly, \(\mathcal{D}_{X}\) are the small cutoff-dependent remainders from the bull's head subtraction, whose values using the standard cutoff choice, eq. (10), are
\[\mathcal{D}_{0}\approx-0.0563\,,\qquad\mathcal{D}_{1}\approx 0.130 \,,\qquad\mathcal{D}_{2}\approx 0.432\,, \tag{21}\] \[\mathcal{D}_{\rm A}\approx 9.07\cdot 10^{-4}\,,\qquad\mathcal{D}_{ \rm B}\approx 1.62\cdot 10^{-4}\,.\]
Their relative sizes compared to \(\mathcal{K}_{X}\), evaluated at the physical pion mass, range from \(\sim 10\%\) for \(X=2\) to less than \(0.1\%\) for \(X={\rm A}\). The \(\mathcal{D}_{X}\) stay consistently small for a wide selection of cutoff functions, as long as they are not too sharp.
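These relative sizes can be reproduced with a short numerical estimate. The sketch below is our own: it assumes the standard relation \(\ell_{i}^{\rm r}(\mu)=\gamma_{i}\big[\tilde{\ell}_{i}+\log(M_{\pi}^{2}/\mu^{2})\big]/(32\pi^{2})\) between the renormalized couplings and the \(\tilde{\ell}_{i}\) of eq. (20), takes \(M_{\pi}=139.57\) MeV and \(\mu=770\) MeV as inputs, and evaluates only \(\mathcal{K}_{2}\) and \(\mathcal{K}_{\rm A}\), which involve \(\ell_{1}^{\rm r}\) and \(\ell_{2}^{\rm r}\) alone.

```python
import math

# Assumed inputs (not fixed by the text above): physical pion mass and ChPT scale.
M_PI, MU = 0.13957, 0.770                     # GeV
KAPPA = 1.0 / (16.0 * math.pi**2)
LOG_M = math.log(M_PI**2 / MU**2)
L = KAPPA * LOG_M                             # L = kappa * log(M_pi^2 / mu^2)

# Assumed standard conversion to renormalized couplings at mu = 770 MeV.
def l_r(gamma, ltilde):
    return gamma * (ltilde + LOG_M) / (32.0 * math.pi**2)

l1r = l_r(1.0 / 3.0, -0.4)                    # ltilde_1 from eq. (20)
l2r = l_r(2.0 / 3.0, 4.3)                     # ltilde_2 from eq. (20)

# Bull's-head remainders from eq. (21) and NLO coefficients from eqs. (18)-(19);
# the overall (M_pi/F_pi)^6 factor cancels in the ratios printed below.
D = {"K2": 0.432, "KA": 9.07e-4}
K2 = (207 * KAPPA / 1400 * (2923 - 420 * math.log(3))
      - D["K2"] + 360 * L - 432 * l1r - 864 * l2r)
KA = (9 * KAPPA / 560 * (21809 - 1050 * math.log(3))
      - D["KA"] - 9 * L + 27 * l1r + 13.5 * l2r)

for name, value in (("K2", K2), ("KA", KA)):
    print(f"{name}: {value:+.2f},  |D/K| = {abs(D[name] / value):.2%}")
```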
As shown in fig. 5, the threshold expansion is in good agreement with the exact result up to the 5-pion threshold, where the \(\mathcal{K}_{\rm df,3}\) formalism breaks down.
Figure 6 shows the comparison between our results and the lattice data [10] at leading (\(\mathcal{K}_{0}\)) and subleading (\(\mathcal{K}_{1}\)) order in the threshold expansion. Unlike the LO results, the NLO results are in decent agreement with the data, especially for \(\mathcal{K}_{0}\).
Data at quadratic order in the threshold expansion is limited to \(\mathcal{K}_{\rm B}\), also shown in fig. 6. It agrees poorly with our results; however, this is a very suppressed coefficient, and since it enters first at NLO, it is possible that there are large NNLO corrections. Overall, we do not have a satisfactory understanding of why NLO corrections are so large, except
Figure 5: Comparison between numerical results and the threshold expansion for \(\mathcal{K}_{\rm df,3}\), evaluated using a fixed kinematic configuration described in Ref. [1], as a function of \(\Delta\) defined in eq.11. The comparison is presented for \(M_{\pi}=M_{\rm phys}\) and \(M_{\pi}=340\) MeV, the latter corresponding to the heaviest pion mass used in Ref. [10]. The dashed vertical line indicates the inelastic threshold, which occurs at \(E^{*}=5M_{\pi}\).
Figure 6: LO (dashed black line) and NLO (grey line and band) ChPT predictions for \(\mathcal{K}_{0}\) (top), \(\mathcal{K}_{1}\) (middle) and \(\mathcal{K}_{\rm B}\) (bottom) as functions of \(M_{\pi}/F_{\pi}\), using couplings from eq. (20). These are compared to lattice results from Ref. [10] (orange points) and the best fit to the lattice data (dotted orange line and orange band, not given for \(\mathcal{K}_{\rm B}\)). For reference, the physical point is at \((M_{\rm phys}/F_{\rm phys})^{4}\approx 5.25\), \((M_{\rm phys}/F_{\rm phys})^{6}\approx 12.0\).
that qualitatively new contributions enter, such as the \(\ell_{i}^{\rm r}\)-dependence and the bull's head. Calculations at NNLO are expected to be very difficult.
## 6 Summary and upcoming results
Our results advance the state of the art in more-than-4-point ChPT scattering, mostly resolve the discrepancy between ChPT and the lattice for \(\mathcal{K}_{\rm df,3}\) at maximum isospin, and pave the way for similar studies in other systems. We also find that the threshold expansion converges well and that the cutoff dependence is small, adding confidence in the validity of the approach.
Work is ongoing to extend our results to general isospin, where the main difficulty is the greater number of channels and the now nontrivial particle exchange (i.e., spectator choice) symmetry, which replaces eq. (12) by more complicated threshold expansions. It is challenging to extract \(\mathcal{K}_{\rm df,3}\) from lattice data for non-maximal isospin, and no results are currently available, but we anticipate that they will appear eventually.
Preliminary studies are also ongoing regarding the introduction of kaons and other heavier particles, for which there are some recent results [26] featuring tension with LO ChPT similar to the one we have just resolved. This would require an enhancement of the existing scattering amplitude, which currently supports 3-flavor ChPT [9] but not multiple mass scales.
|
2309.10668 | Language Modeling Is Compression | It has long been established that predictive models can be transformed into
lossless compressors and vice versa. Incidentally, in recent years, the machine
learning community has focused on training increasingly large and powerful
self-supervised (language) models. Since these large language models exhibit
impressive predictive capabilities, they are well-positioned to be strong
compressors. In this work, we advocate for viewing the prediction problem
through the lens of compression and evaluate the compression capabilities of
large (foundation) models. We show that large language models are powerful
general-purpose predictors and that the compression viewpoint provides novel
insights into scaling laws, tokenization, and in-context learning. For example,
Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to
43.4% and LibriSpeech samples to 16.4% of their raw size, beating
domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively.
Finally, we show that the prediction-compression equivalence allows us to use
any compressor (like gzip) to build a conditional generative model. | Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, Joel Veness | 2023-09-19T14:50:38Z | http://arxiv.org/abs/2309.10668v2 | # Language Modeling Is Compression
###### Abstract
It has long been established that predictive models can be transformed into lossless compressors and vice versa. Incidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models. Since these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models. We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a conditional generative model.
## 1 Introduction
Information theory and machine learning are inextricably linked and have even been referred to as "two sides of the same coin" (MacKay, 2003). One particularly elegant connection is the essential equivalence between probabilistic models of data and lossless compression. The source coding theorem (Shannon, 1948) is the fundamental theorem describing this idea, i.e., the expected message length in bits of an optimal entropy encoder is equal to the negative log\({}_{2}\)-likelihood of the statistical model. In other words, maximizing the log\({}_{2}\)-likelihood (of the data) is equivalent to minimizing the number of bits required per message. Indeed, lossless compression with a probabilistic model can be achieved in a variety of different ways, including Huffman coding (Huffman, 1952), arithmetic coding (Pasco, 1977; Rissanen, 1976), and asymmetric numeral systems (Duda, 2009).
Arithmetic coding, in particular, is known to be optimal in terms of coding length, meaning that the overall compression performance depends on the capabilities of the probabilistic model (Fig. 1). Incidentally, in recent years, large pre-trained Transformers (Vaswani et al., 2017), so-called _foundation models_(Bommasani et al., 2021), have proven to be highly successful across a wide range of predictive tasks (Bubeck et al., 2023; Rae et al., 2021) and are thus promising candidates for use with arithmetic coding. Indeed, Transformer-based compression with arithmetic coding has produced state-of-the-art results both in the online (Bellard, 2021; Mao et al., 2022) and offline settings (Valmeekam et al., 2023). In the online setting, a pseudo-randomly initialized model is directly trained on the stream of data that is to be compressed, while the offline setting, which we consider in our work, trains the model on an external dataset before employing it to compress a (potentially different) data stream. Consequently, offline compression is performed _in-context_, with a fixed set of model parameters. Transformers have demonstrated impressive in-context learning abilities (Brown et al., 2020; Genewein et al., 2023; Laskin et al., 2023; Wei et al., 2022), which renders them ideally suited for offline compression. However, as we will discuss in this work, Transformers are actually trained to compress well, and therefore _must_ have good in-context learning abilities.
The context length is a key limiting factor in offline compression, as it dictates the maximum number of bytes a model can compress at a time. Transformers can only compress a few kilobytes (each "token" being coded with 2 or 3 bytes), while requiring a lot of compute. Correspondingly, many challenging predictive tasks (e.g., algorithmic reasoning or long-term memory) require long contexts (Deletang et al., 2023), and thus extending these models' context lengths is a key challenge which is gaining increased attention (Bulatov et al., 2023; Guo et al., 2022; Zaheer et al., 2020). The in-context compression view provides insights into the failure modes of current foundation models.
This WorkWe advocate for using (lossless) compression to study foundation models. To that end, we conduct an extensive empirical investigation of the offline (in-context) compression capabilities of large language models, with the rationale that they have recently become readily available (Hoffmann et al., 2022; Touvron et al., 2023) and can thus be used for compression without the training overhead. We empirically demonstrate that these models, while (meta-)trained primarily on text, also achieve state-of-the-art compression rates across different data modalities, using their context to condition a general-purpose compressor to excel at a particular task. Moreover, we shed new light on scaling laws (Kaplan et al., 2020), showing that they also hold true for compression but that measuring the compression rates instead of the log loss adds a twist: Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output. Finally, we advocate for framing (self-supervised) prediction through the lens of compression as it encompasses generalization: a model that compresses well generalizes well (Hutter, 2006).
Figure 1: Arithmetic encoding of the sequence ‘AXI’ with a probabilistic (language) model \(P\) (both in blue) resulting in the binary code ‘0101001’ (in green). Arithmetic coding compresses data by assigning unique intervals to symbols based on the probabilities assigned by \(P\). It progressively refines these intervals to output compressed bits, which represent the original message. To decode, arithmetic coding initializes an interval based on the received compressed bits. It iteratively matches intervals with symbols using the probabilities given by \(P\) to reconstruct the original message.
ContributionsWe make the following contributions:
* We empirically investigate the lossless compression capabilities of foundation models. To that end, we review how to compress with predictive models via arithmetic coding and call attention to the connection between current language modeling research and compression.
* We show that foundation models, trained primarily on text, are general-purpose compressors due to their in-context learning abilities. For example, Chinchilla 70B achieves compression rates of 43.4% on ImageNet patches and 16.4% on LibriSpeech samples, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively.
* We provide a novel view on scaling laws, showing that the dataset size provides a hard limit on model size in terms of compression performance and that scaling is not a silver bullet.
* We leverage the compression-prediction equivalence to employ compressors as generative models and visually illustrate the performance of the underlying compressor.
* We demonstrate that tokenization, which can be viewed as a pre-compression, does, in general, not improve compression performance, but allows models to increase the information content in their context and is thus generally employed to improve prediction performance.
## 2 Background
In this section, we review the necessary background on information theory and its relation to likelihood maximization. To that end, we consider streams of data \(x_{1:n}:=x_{1}x_{2}\ldots x_{n}\in\mathcal{X}^{n}\) of length \(n\) from a finite set of symbols \(\mathcal{X}\). We write \(x_{\leq j}=x_{<j+1}:=x_{1:j}\) for \(j\leq n\) and denote the empty string as \(\epsilon\). Finally, we denote the concatenation of two strings \(s\) and \(r\) by \(sr\).
Coding DistributionsA coding distribution \(\rho\) is a sequence of probability mass functions \(\rho_{n}:\mathcal{X}^{n}\mapsto(0,1]\), which for all \(n\in\mathbb{N}\) satisfy the constraint that \(\rho_{n}(x_{1:n})=\sum_{y\in\mathcal{X}}\rho_{n+1}(x_{1:n}y)\) for all \(x_{1:n}\in\mathcal{X}^{n}\), with the base case \(\rho_{0}(\epsilon):=1\). From here on out, whenever the meaning is clear from the argument to \(\rho\), we drop the subscript on \(\rho\). Under this definition, the conditional probability of a symbol \(x_{n}\) given previous data \(x_{<n}\) is defined as \(\rho(x_{n}\mid x_{<n}):=\rho(x_{1:n})/\rho(x_{<n})\), with the familiar chain rules \(\rho(x_{1:n})=\prod_{i=1}^{n}\rho(x_{i}\mid x_{<i})\) and \(\rho(x_{j:k}\mid x_{<j})=\prod_{i=j}^{k}\rho(x_{i}\mid x_{<i})\) following.
Lossless CompressionThe goal of lossless compression is to encode a stream of symbols \(x_{1:n}\) sampled from a coding distribution \(\rho\) into a bitstream of minimal (expected) length, while ensuring that the original data sequence is recoverable from the bitstream. To that end, we use a binary source code \(c:\mathcal{X}^{*}\mapsto\{0,1\}^{*}\), which assigns to each possible data sequence \(x_{1:n}\) a binary code word \(c(x_{1:n})\) of length \(\ell_{c}(x_{1:n})\) (in bits). Thus, the aim is to minimize the expected bits per sequence \(L:=E_{x\sim\rho}[\ell_{c}(x)]\), i.e., encoding rare sequences with more bits and frequent sequences with fewer bits. Shannon's source coding theorem establishes the limit on possible data compression as \(L\geq H(\rho)\) for any possible code, where \(H(\rho):=\mathbb{E}_{x\sim\rho}[-\log_{2}\rho(x)]\) is the Shannon entropy (Shannon, 1948).
Arithmetic CodingGiven a coding distribution \(\rho\) and a sequence \(x_{1:n}\), arithmetic coding (Pasco, 1977; Rissanen, 1976) constructs a code with almost optimal length. It directly connects coding and compression with prediction and modeling: compressing well means modeling well in a log-loss sense and vice-versa. Assuming infinite precision for the arithmetic operations involved, the arithmetic code has length \(-\lceil\log\rho(x_{1:n})\rceil+1\) bits, whereas the optimal code length is \(-\log\rho(x_{1:n})\) bits. A practical implementation that is subject to \(B\) bit precision adds further \(O(n2^{-B})\) bits (Howard
& Vitter, 1991), which is negligible for 32- or 64-bit arithmetic. In the following we consider infinite precision arithmetic coders and refer to Witten et al. (1987) for the finite-precision implementation.
Arithmetic EncoderThe arithmetic code of a sequence \(x_{1:n}\) is the binary representation of a number \(\lambda\in[0,1)\). We identify \(\lambda\) by narrowing down an interval that encloses \(\lambda\) step by step (maintaining a growing prefix of the binary representation of \(\lambda\) throughout the process). Initially, this interval is \(I_{0}=[0,1)\). In step \(k>0\) (i.e., encoding \(x_{k}\)), we first partition the previous interval \(I_{k-1}=[l_{k-1},u_{k-1})\) into \(N\) sub-intervals \(\tilde{I}_{k}(x_{1}),\tilde{I}_{k}(x_{2}),\dots\), one for each letter from \(\mathcal{X}=\{x_{1},x_{2},\dots,x_{N}\}\). The size of sub-interval \(\tilde{I}_{k}(y)\) that represents letter \(y\) is \((u_{k-1}-l_{k-1})\cdot\rho(y\mid x_{<k})\). Formally, we define
\[\tilde{I}_{k}(x):=\left[l_{k-1}+(u_{k-1}-l_{k-1})\cdot\sum_{y<x}\rho(y\mid x_ {<k}),\quad l_{k-1}+(u_{k-1}-l_{k-1})\cdot\sum_{y\leq x}\rho(y\mid x_{<k}) \right), \tag{1}\]
assuming a strict order on \(\mathcal{X}\). To encode \(x_{k}\) we proceed with its corresponding interval, i.e., \(I_{k}=\tilde{I}_{k}(x_{k})\). Finally, we choose \(\lambda\in I_{n}\) with the shortest binary representation in the terminating interval \(I_{n}\) and use that binary representation to encode \(x_{1:n}\). Fig. 1 illustrates this process.
Arithmetic DecoderGiven \(\lambda\) and \(\rho\), decoding the \(k\)-th letter is easy: Starting with \(I_{0}=[0,1)\), find \(y\) such that \(\lambda\in\tilde{I}_{k}(y)\) to decode \(x_{k}=y\), then set \(I_{k}=\tilde{I}_{k}(x_{k})\) and proceed with the \((k+1)\)-st letter.
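The procedure above translates directly into code. The sketch below (ours, not from the paper) implements the encoder and decoder with exact rational arithmetic via Python Fractions, sidestepping the finite-precision subtleties handled by Witten et al. (1987); the toy model and alphabet are made up for illustration.

```python
import math
from fractions import Fraction

def encode(model, sequence, alphabet):
    """Arithmetic encoder with exact interval arithmetic.

    model(prefix) returns the conditional probabilities (Fractions summing to 1)
    of the next symbol given the already-processed prefix.
    """
    low, high = Fraction(0), Fraction(1)
    for k, symbol in enumerate(sequence):
        probs = model(sequence[:k])
        width, cum = high - low, Fraction(0)
        for letter in alphabet:                  # fixed (strict) order on the alphabet
            if letter == symbol:
                low, high = low + width * cum, low + width * (cum + probs[letter])
                break
            cum += probs[letter]
    # Shortest binary fraction lambda = m / 2^k lying inside the final interval.
    k = 1
    while True:
        m = math.ceil(low * 2**k)
        if Fraction(m, 2**k) < high:
            return format(m, f"0{k}b")
        k += 1

def decode(model, bits, n, alphabet):
    """Decode n symbols by locating lambda within the nested sub-intervals."""
    lam = Fraction(int(bits, 2), 2 ** len(bits))
    low, high, out = Fraction(0), Fraction(1), ""
    for _ in range(n):
        probs, width, cum = model(out), high - low, Fraction(0)
        for letter in alphabet:
            if low + width * cum <= lam < low + width * (cum + probs[letter]):
                out += letter
                low, high = low + width * cum, low + width * (cum + probs[letter])
                break
            cum += probs[letter]
    return out

# Toy context-independent model over the alphabet {A, I, X}.
def toy_model(prefix):
    return {"A": Fraction(1, 2), "I": Fraction(1, 4), "X": Fraction(1, 4)}

code = encode(toy_model, "AXI", "AIX")
assert decode(toy_model, code, 3, "AIX") == "AXI"
print(code)
```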
Likelihood MaximizationIn practice, the source distribution \(\rho\) is usually unknown and is instead estimated with a parametric probabilistic model \(\hat{\rho}\). Thus, instead of achieving code length \(-\sum_{i=1}^{n}\log_{2}\rho(x_{i}\mid x_{<i})\) for the sequence \(x_{1:n}\), we obtain the suboptimal length \(-\sum_{i=1}^{n}\log_{2}\hat{\rho}(x_{i}\mid x_{<i})\). As a result, the expected (suboptimal) number of bits is the _cross-entropy_:
\[H(\rho,\hat{\rho}):=\mathbb{E}_{x\sim\rho}\left[\sum_{i=1}^{n}-\log_{2}\hat{ \rho}(x_{i}\mid x_{<i})\right]. \tag{2}\]
Thus, we can minimize the expected length of the encoded data stream with symbols distributed according to \(\rho\) by minimizing the cross-entropy with respect to some \(\hat{\rho}\), which is equivalent to likelihood maximization (MacKay, 2003). However, Eq. (2) is exactly the same objective used to train current foundation models, i.e., the log-loss. Thus, minimizing the log-loss is equivalent to minimizing the compression rate of that model used as a lossless compressor with arithmetic coding, i.e., current language model training protocols use a maximum-compression objective.
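To make this equivalence concrete, the following small helper (ours; the numbers are purely illustrative, not measurements from this work) converts a per-token log-loss into the compression rate that an arithmetic coder driven by the model would achieve, ignoring coding overhead and the cost of storing the model parameters.

```python
import math

def compression_rate_from_log_loss(nats_per_token: float, bytes_per_token: float) -> float:
    """Compressed-to-raw ratio implied by a model's log-loss when the model
    drives an arithmetic coder (coder overhead and model size ignored)."""
    bits_per_token = nats_per_token / math.log(2)
    return bits_per_token / (8.0 * bytes_per_token)

# Purely illustrative numbers: 1.5 nats/token at an average of 4 bytes/token.
print(f"{compression_rate_from_log_loss(1.5, 4.0):.3f}")   # fraction of the raw size
```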
Compression-Based Sequence PredictionAnalogous to how a predictive distribution can be used for lossless compression via arithmetic coding (described above), any compressor can be employed for sequence prediction (Frank et al., 2000). The main idea is to define \(\rho(x_{1:n})\) as the coding distribution \(2^{-\ell_{c}(\cdot)}\), where \(\ell_{c}(x_{1:n})\) is the length of sequence \(x_{1:n}\) when encoded with compressor \(c\) (e.g., gzip). We thus recover the conditional distribution \(\rho(x_{i}\mid x_{<i})\) by computing \(2^{\ell_{c}(x_{<i})-\ell_{c}(x_{<i}x_{i})}\), for all \(x_{i}\).
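As a concrete illustration of this construction (our sketch, not code released with the paper), any off-the-shelf compressor such as gzip yields a next-byte predictor; since gzip code lengths come in whole bytes, the resulting distribution is coarse for short contexts.

```python
import gzip

def code_length_bits(data: bytes) -> int:
    """l_c(x) in bits when c = gzip (whole-byte granularity, header included)."""
    return 8 * len(gzip.compress(data))

def next_byte_distribution(context: bytes, candidates) -> dict:
    """rho(x_i | x_<i) proportional to 2^(l_c(x_<i) - l_c(x_<i x_i)), normalized
    over the supplied candidate bytes."""
    base = code_length_bits(context)
    weights = {b: 2.0 ** (base - code_length_bits(context + bytes([b]))) for b in candidates}
    norm = sum(weights.values())
    return {b: w / norm for b, w in weights.items()}

# Toy usage: continuation probabilities after a repetitive context.
context = b"abcabcabcabcabcabcab"
dist = next_byte_distribution(context, candidates=[ord(c) for c in "abcd"])
print({chr(b): round(p, 3) for b, p in dist.items()})
```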
Universal CodingAbove we discussed optimal (arithmetic) coding with respect to data sampled from a fixed distribution \(\rho\). In contrast, universal (optimal) source coding with respect to all computable sampling distributions can, in theory, be achieved by choosing \(\ell_{c}(x_{1:n})\) as the Kolmogorov complexity of \(x_{1:n}\)(Kolmogorov, 1998; Li & Vitanyi, 2019). For this choice, the conditional distribution described above is universally optimal over \(x_{<i}\), recovering the Solomonoff predictor (Rathmanner & Hutter, 2011; Solomonoff, 1964a,b). The Solomonoff predictor is a Bayesian mixture of _all_ predictors that can
be programmed in a chosen Turing-complete programming language. More precisely, for a predictor \(q\) of program-length \(\ell_{c}(q)\) bits, the Solomonoff predictor assigns a prior weight of \(2^{-\ell_{c}(q)}\) to predictor \(q\). That is, if \(\mathcal{Q}\) is the set of all predictors that can be programmed and computed, the Solomonoff predictor assigns probability \(S(x_{1:n})=\sum_{q\in\mathcal{Q}}2^{-\ell_{c}(q)}q(x_{1:n})\) to a sequence \(x_{1:n}\), if every predictor \(q\) assigns that sequence probability \(q(x_{1:n})\). Therefore, \(S(x_{1:n})\geq 2^{-\ell_{c}(q)}q(x_{1:n})\) for all \(q\in\mathcal{Q}\), and thus \(-\log_{2}S(x_{1:n})\leq-\log_{2}q(x_{1:n})+\ell_{c}(q)\). Observe that \(\ell_{c}(q)\) is a constant of \(q\) that is independent of the sequence length. Therefore, compressing optimally is equivalent to predicting optimally and vice versa (Hutter, 2005).
## 3 Experimental Evaluation
We now present our evaluation of the (in-context) compression capabilities of foundation models.
CompressorsWe compare our arithmetic coding-based language model compressors to two competitive general-purpose lossless compressors: gzip (Deutsch, 1996) and its improvement LZMA2 (Pavlov, 2019), used by the 7zip software. Both are based on Huffman coding (Huffman, 1952) and the Lempel-Ziv-Welch algorithm (Welch, 1984). We also consider specialized lossless compressors for image and audio data, i.e., PNG (Boutell, 1997) and FLAC (Coalson, 2008), respectively. Finally, we evaluate two types of language models (of different sizes) with arithmetic coding: vanilla decoder-only Transformers (Vaswani et al., 2017), which we pretrain on the enwik8 dataset, and pretrained Chinchilla-like foundation models (Hoffmann et al., 2022).
### Datasets
We consider datasets of three different modalities, text, image, and audio, which have (a priori) very different biases for compression and thus provide a good testbed for evaluating a compressor's general capabilities. To render the results comparable across modalities, all our datasets are 1GB.
A key question is how to reconcile the different context lengths \(C\) of the compressors we consider. Transformers are restricted to short contexts (\(C=2048\) bytes, i.e., \(2048\) tokens of \(8\) bits that represent the ASCII characters, for our trained models and roughly \(10\) kilobytes for Chinchilla models), while gzip uses a maximum context of \(32\) kilobytes, and LZMA2 has a virtually "infinite" context length. Having a longer context allows a compressor to exploit more sequential dependencies to achieve a better compression rate. For compressors with finite contexts, there are two approaches to compress sequences that are longer than the context length: (i) slide the compressor byte by byte, thus always processing a history of the previous \(C-1\) bytes when compressing a new byte, and (ii) chunk the data stream into \(S\) sequences of \(C\) bytes and evaluate the in-context compression (without any history) averaged across batches. For Transformers, we consider the latter approach since sliding would increase their (already very long) running time by a factor of \(S\). Therefore, we chunk all datasets into sequences of \(2048\) bytes and feed them to the compressors one-by-one. However, since classical compressors usually include a header in their compressed output, which can be larger than the compressed data in some cases, we only count it once for all batches, yielding a compression rate of (header + \(\sum\) (\(l_{c}\)(batch) - header))/num_batches. Moreover, since chunking deteriorates the performance of classical compressors, which have context lengths \(C\gg 2048\), we also report their compression rates on the unchunked datasets. We consider the following datasets:
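The chunked evaluation and the header accounting described above can be sketched as follows (our illustration, assuming gzip as the classical compressor and estimating its constant header/footer cost by compressing an empty input).

```python
import gzip

CHUNK_SIZE = 2048  # bytes per chunk, matching the Transformer context length

def chunked_compressed_size(data: bytes) -> int:
    """Sum of per-chunk compressed sizes, with the constant header counted only once."""
    header = len(gzip.compress(b""))  # rough estimate of gzip's fixed header/footer overhead
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return header + sum(len(gzip.compress(chunk)) - header for chunk in chunks)

def compression_rate(data: bytes) -> float:
    """Compressed size divided by raw size (lower is better)."""
    return chunked_compressed_size(data) / len(data)

print(compression_rate(b"abcabcabc" * 10_000))
```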
**enwik9.** The enwik9 dataset (Hutter, 2006) consists of the first \(1\,000\,000\,000\) (\(1\) billion) bytes of the English Wikipedia XML dump on March 3rd, 2006 and is typically used to measure a model's
ability to compress data. It is an extension of the enwik8 dataset that only contains the first 100 million bytes. We train our vanilla Transformer models on enwik8, but evaluate on both enwik8 and enwik9 (to evaluate the out-of-distribution compression performance). While enwik8 is included in enwik9, it only represents the first 10% and thus still constitutes a significant distribution shift.
**ImageNet.** The ImageNet dataset (Russakovsky et al., 2015) contains 14 197 122 annotated images from the WordNet hierarchy. Since 2010, the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. We extract contiguous patches of size \(32\times 64\) from all images, flatten them, and convert them to grayscale (so that each byte represents exactly one pixel) to obtain samples of 2048 bytes. We then concatenate 488 821 of these patches, following the original dataset order, to create a dataset of 1 GB.
**LibriSpeech.** LibriSpeech (Panayotov et al., 2015) is a corpus of approximately 1000 hours of 16kHz English speech. The data is derived from audiobooks from the LibriVox project and has been carefully segmented and aligned. We chunk the samples into batches of 2048 bytes and gather 488 821 such chunks into a dataset of size 1 GB.
### Comparing Compression Rates
Table 1 shows the compression rates for all compressors and datasets. We show both the raw compression rate, which does not take the model size (in bytes) into account, as well as the adjusted rate, which does. The size of the Python program for classical compressors is very small (a few kilobytes at most) and thus barely affects the compression rate. In contrast, language models suffer a
huge loss in compression rate due to their large size, which cannot be offset when compressing only 1GB of data. We encode each neural network parameter with 2 bytes, using a float16 representation since quantizing weights to this level does not significantly affect performance (Tao et al., 2022) and is standard for model inference. Note that further compressing the float16 parameters using classical compressors does not significantly reduce their size (we obtained rates of 92.2% and 89.1% on a 38M parameter Transformer with gzip and LZMA2, respectively). Also, recall that we only consider the offline setting, which computes the adjusted compression rate using a two-part code (i.e., it adds the model size to the log-loss of the data). In contrast, prequential (online) coding would provide an alternative view on adjusted compression by computing the adjusted compression rate as the log-loss plus the size of the training script (not the model parameters). According to prior work, prequential coding leads to better compression with overparametrized neural networks (Blier and Ollivier, 2018), however, it requires training the model online (which reduces performance and cannot be performed with foundation models) both during encoding and decoding (which is very costly for our models).
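As a simple sketch of the two-part (offline) accounting described above (our illustration; the parameter counts and log-losses below are placeholders), the adjusted rate just adds the float16 model size to the size of the compressed output.

```python
def raw_rate(log_loss_bits: float, raw_size_bytes: int) -> float:
    """Raw compression rate: compressed output size / raw size."""
    return (log_loss_bits / 8) / raw_size_bytes

def adjusted_rate(log_loss_bits: float, raw_size_bytes: int, num_params: int) -> float:
    """Adjusted rate: two-part code that also pays for the float16 parameters (2 bytes each)."""
    model_size_bytes = 2 * num_params
    return (log_loss_bits / 8 + model_size_bytes) / raw_size_bytes

# Placeholder example: a 38M-parameter model adds 76 MB, i.e. ~7.6 percentage points on 1 GB of data.
print(adjusted_rate(log_loss_bits=8e8, raw_size_bytes=10**9, num_params=38_000_000)
      - raw_rate(log_loss_bits=8e8, raw_size_bytes=10**9))
```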
Foundation Models Are General-Purpose CompressorsA lossless compressor induces an injective function over bit sequences, meaning that we cannot compress all sequences equally well (by the pigeonhole principle). Consequently, in practice, compressors are often tailored to a particular setting, e.g., FLAC for audio or PNG for images, and thus fail to compress other data modalities well (see Table 1). In contrast, general-purpose compressors, such as gzip, offer good performance on a wide range of data sources. Surprisingly, Chinchilla models, while trained primarily on text, also appear to be general-purpose compressors, as they outperform all other compressors, even on image and audio data (see Table 1). Note that Chinchilla models have not been trained on this kind of data according to Appendix A. of Hoffmann et al. (2022), which states that the training dataset consists of a mix of internet text data (Wikipedia, websites, github) and books. However, it is still possible (but unlikely) that some images or audio samples were encoded into text on some websites. Thus, Chinchilla models achieve their impressive compression performance by conditioning a (meta-)trained model to a particular task at hand via in-context learning (Genewein et al., 2023). In contrast, smaller
Figure 2: Adjusted compression rates (compressed size / raw size) for Transformers of different sizes, trained on enwik8 and evaluated on the enwik datasets (both axes are logarithmic). Here, the compressed size does not only consider the size of the compressed output (roughly equal to the log-loss) but also the model size, which causes all curves to increase at some point. Every dataset gives rise to an optimal model size, with a good trade-off between performance (the size of the compressed data) and cost of the model (the number of parameters). The larger the dataset, the more parameters we can afford.
Transformers, trained manually on enwik8, only achieve good compression rates on similar Wikipedia data, i.e., enwik9. However, larger models' stronger in-context compression (or in-context learning) comes at a price: the number of parameters, which has to be offset with increasingly large data sources when computing the adjusted compression rate (see Section 3.3). Finally, note that since Chinchilla has been trained on Wikipedia, the enwik9 results are in-distribution.
### Optimal Model-Dataset Size Tradeoff
As shown in Table 1, foundation models incur a huge cost in compression rates when accounting for their size, which is in the order of hundreds of GBs for billions of parameters. In theory, if the dataset is infinite, we can ignore the model's size since it is insignificant compared to the size of the dataset. However, in practice, a foundation model can only achieve non-trivial (adjusted) compression rates when evaluated on datasets in the order of TBs (or more). Since this is infeasible under reasonable hardware constraints, we instead investigate the optimal model size with smaller Transformers that we train on enwik8. Recall that the model size (in bytes) is twice the number of (float16) parameters.
Fig. 2 visualizes the adjusted compression rate for vanilla Transformers of different model sizes for the enwik datasets. We observe that larger models achieve better compression rates on larger datasets, thus justifying recent trends in model scaling (Kaplan et al., 2020). However, they achieve worse rates on smaller datasets, indicating that scaling laws are, in fact, dependent on the size of the test set. That is, for each dataset, the model sizes reach a critical point, after which the adjusted compression rate starts to increase again since the number of parameters is too big compared to the size of the dataset. Note that we evaluate offline compression, i.e., we do not necessarily compress the data the model was trained on, meaning that the results on enwik7 and enwik8 are in-distribution, while the enwik9 results are out-of-distribution. Nevertheless, larger models still achieve better compression rates on enwik9 than enwik8, illustrating the benefits of scaling.
### Compressors as Generative Models
In Section 2, we discussed how any compressor can be employed as a sequence prediction model. Concretely, for compressor \(c\), we sample the next byte according to the distribution \(\hat{\rho}(x_{i}\mid x_{<i})\sim 2^{\ell_{c}(x_{<i})-\ell_{c}(x_{<i}x_{i})}\), i.e., we compute the length \(\ell_{c}\) of the compressed sequence \(c(x_{<i}b)\) for all possible \(b\in\mathcal{X}\). Thus, if a byte \(b\) leads to a particularly short compressed sequence (when concatenated with \(x_{<i}\)), it will have a higher probability of being sampled next. Note that any constant in the length function (e.g., the header for classical compressors) disappears when we normalize the distribution.
Since generic compressors have a low intrinsic bias, sampling data without conditioning does not yield interesting results as it looks random. Thus, we condition the compressors on part of an existing sequence (1948 bytes for enwik9, half of the sample for ImageNet and LibriSpeech) and generate the remaining bytes with the compression-based generative model. We compare the generative performance of gzip and Chinchilla 70B across all three data modalities in Figs. 3 to 5 for text, image, and audio data, respectively. In general, generative models can be evaluated using one of two ways: sampling the next byte \(\hat{\rho}(x_{i}\mid x_{<i})\) (i) using teacher forcing, i.e., conditioning on the true subsequence \(x_{<i}\), or (ii) via autoregressive sampling, i.e., conditioning on the model's previous outputs. The latter induces a distribution shift, and with it undesired side effects (Ortega et al., 2021), but is standard and thus what we choose to visualize.
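A minimal sketch of such autoregressive sampling from a compressor-induced distribution (our illustration with gzip; it mirrors the prediction sketch in Section 2, and the function names are ours) is given below.

```python
import gzip
import numpy as np

def gzip_next_byte_probs(history: bytes) -> np.ndarray:
    """Distribution over the next byte induced by gzip code lengths."""
    base = 8 * len(gzip.compress(history))
    log2_w = np.array([base - 8 * len(gzip.compress(history + bytes([b]))) for b in range(256)], float)
    w = np.exp2(log2_w - log2_w.max())
    return w / w.sum()

def generate(context: bytes, num_bytes: int, seed: int = 0) -> bytes:
    rng = np.random.default_rng(seed)
    out = bytearray(context)
    for _ in range(num_bytes):
        p = gzip_next_byte_probs(bytes(out))   # condition on the model's own previous outputs
        out.append(int(rng.choice(256, p=p)))  # autoregressive sampling (no teacher forcing)
    return bytes(out[len(context):])

print(generate(b"Lords of Appeal in Ordinary ", num_bytes=20))
```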
#### Context Text (1948 Bytes)
ction Act 1876)]. They are selected by the Prime Minister, but are formally appointed by the Sovereign. A Lord of Appeal in Ordinary must retire at the age of 70, or, if his or her term is extended by the Government, at the age of 75; after reaching such an age, the Law Lord cannot hear any further legal cases. The number of Lords of Appeal in Ordinary (excluding those who are no longer able to hear cases due to age restrictions) is limited to twelve, but may be changed by [[statutory instrument]]. Lords of Appeal in Ordinary traditionally do not participate in political debates, so as to maintain judicial independence. Lords of Appeal in Ordinary hold seats the House of Lords for life, remaining members even after reaching the retirement age of 70 or 75. Former Lord Chancellors and holders of other high judicial office may also sit as Law Lords under the Appellate Jurisdiction Act, although in practice this right is infrequently exercised. After the coming into force of the Constitutional Reform Act 2005, the Lords of Appeal in Ordinary will become judges of the Supreme Court of the United Kingdom and will be barred from sitting or voting until they retire as judges. \(\backslash\)n\(\backslash\)nThe largest group of Lords Temporal, and indeed of the whole House, are [[Life peer] life peers]]. Life peers with seats in the House of Lords rank only as barons or baronses, and are created under the [[Life Peerages Act 1958]]. Like all other peers, life peers are created by the Sovereign, who acts on the advice of the Prime Minister. By convention, however, the Prime Minister allows leaders of other parties to select some life peers so as to maintain a political balance in the House of Lords. Moreover, some non-party life peers (the number being determined by the Prime Minister) are nominated by an independent House of Lords Appointments Commission. If an hereditary peer also holds a life peerage, he or
#### Ground Truth (100 Bytes)
* she remains a member of the House of Lords without a need for an election. In [[2000]], the governm
#### gzip Samples (100 Bytes)
* (0k5Ezatme,isbebmvscouL(mxschife peu7vewt parriswfommeeaa are nombban hm, c,on., pncmm.serg uam
* Suasa8g thiformpoluof Lo e7vkoasakea w8viuiuoumb,xbepe,deto.,5mdfSu tteeep,reggs5,be.dcdyh2vL.nary
* CxOsic,*auEfOlnknkm ) eaaoplutfpa(afcnuChann,areovervr LoventiL_myheim;nrnynwosa7oseg Apo,arejyehm;.
#### Chinchilla 70B Samples (100 bytes)
* she may use either title, but the hereditary peerage is considered to be superior. Lords Temporal c
* she may choose which title to use, though the title of the life peerage is normally used. The Sover
* she may elect to sit in the House as a life peer, rather than as a hereditary peer. Life peers are
Figure 3: Compression-based generation for text data. We condition gzip and Chinchilla on a context text of size 1948 bytes (from enwik9) and then sample 100 bytes (\(N\) tokens) autoregressively. Since Chinchilla employs a tokenizer, the sampled sequences will contain \(N\) tokens, which do not necessarily decode to 100 bytes. Chinchilla's predictions are significantly more coherent than gzip's.
Figure 4: Compression-based generation for audio data. We condition gzip and Chinchilla on the first 1024 bytes of the base sequence (from LibriSpeech) and then sample the remaining 1024 bytes autoregressively. Chinchilla predictions exhibit a typical “loop” pattern of autoregressive generation.
### Sequential Evolution of In-Context Compression
Language models take a very different "approach" to compression compared to classical compressors. Classical compressors have a small program size and optimize for a large context length to exploit sequential dependencies in the data. In contrast, foundation models consist of billions of parameters, which enable rapid adaptation in their (relatively) short context window (Genewein et al., 2023). Thus, arithmetic coding-based compressors rely heavily on the predictive models' in-context learning capabilities to achieve competitive compression performance. We investigate this phenomenon in Fig. 6, which visualizes the compression rate across sequence lengths for gzip, Chinchilla 1B and a Transformer pretrained on enwik8. Intuitively, the longer the sequence, the more data the model can process in its context, and therefore, the better the compression. As expected, most compression rates decrease quickly with increasing sequence length, indicating that the models learn some data statistics in-context, without any gradient-based training. As in Table 1, the Chinchilla model achieves the best compression rates across all three data modalities and sequence lengths.
### Tokenization Is Compression
Transformers are generally not trained on raw input data but on tokenized versions thereof, both for efficiency and performance reasons. As a consequence, Transformers are trained on compressed data, with tokenizers acting as the compressor. Since tokenization is known to have an impact on the generalization performance (Radford et al., 2019), we investigate its impact on the compression rate in Table 2. Concretely, we train Transformers on enwik8 using different tokenizers: ASCII, i.e., an alphabet of size 256 (no tokenization), and byte-pair encoding trained on enwik8, with various
Figure 5: Compression-based generation for image data. We condition gzip and Chinchilla on the first half of every row of the ImageNet image and then sample the remaining half autoregressively. Both models produce incoherent samples, but Chinchilla looks much less noisy than gzip.
Figure 6: In-context compression rate over sequence length. For every dataset, we compute the compression rate for all subsequences of 2048 bytes, averaged over 100 sequences.
vocabulary sizes (1K, 2K, 5K, 10K, and 20K tokens). Note that the tokenizations are lossless.
Increasing the number of tokens (i.e., the "alphabet size") reduces the length of the sequence and thus increases the amount of information in a model's context. However, decreasing the sequence length comes at a price: the number of tokens is larger, which makes the prediction task more challenging since reducing the entropy of the conditional distribution \(\rho(x_{i}\mid x_{<i})\) is increasingly difficult for larger alphabet sizes. In theory, as the tokenization is a lossless compression, the two effects should compensate. In practice, we observe that if the model is small, increasing the number of possible tokens boosts the compression performance. In contrast, for bigger models, it seems that the converse happens: having a larger token vocabulary harms the final compression rate of the model. Nevertheless, short sequence lengths also help Transformers since their time complexity scales quadratically with context length, and it has been shown they do not generalize well to long contexts (Deletang et al., 2023; Ruoss et al., 2023). This explains why most practical Transformer implementations still use some form of tokenization, e.g., SentencePiece (Kudo and Richardson, 2018).
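The sense in which tokenization acts as a lossless pre-compressor can be illustrated with a minimal byte-pair-encoding sketch (ours; production systems use tools such as SentencePiece, and the training text below is a stand-in for enwik8): each merge shortens the sequence while enlarging the alphabet.

```python
from collections import Counter

def bpe_train(seq, num_merges):
    """Greedy BPE: repeatedly merge the most frequent adjacent pair into a new token id."""
    seq, next_id, merges = list(seq), 256, {}
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges[(a, b)] = next_id
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(next_id)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq, next_id = merged, next_id + 1
    return seq, merges

data = b"the quick brown fox jumps over the lazy dog. " * 200
tokens, merges = bpe_train(data, num_merges=100)
print(f"{len(data)} bytes -> {len(tokens)} tokens, vocabulary size {256 + len(merges)}")
```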
## 4 Related work
Prediction vs. CompressionLeveraging Shannon's source coding theorem (Shannon, 1948), a plethora of approaches exploit the connection between prediction and compression. For example, context-tree weighting (CTW) (Willems et al., 1995) mixes the predictions of many underlying Markov models to achieve lossless compression via arithmetic coding (Pasco, 1977; Rissanen, 1976). Similarly, prediction by partial matching (PPM) (Cleary and Witten, 1984) also leverages arithmetic coding, but uses a contiguous context matching method to create probability distributions based on the history of characters in a sequence. Likewise, PAQ8 (Knoll and de Freitas, 2012) uses a weighted combination of predictions from a large number of models (most of them based on context matching, but unlike PPM also noncontiguous context matches). In a different setting, Veness et al. (2015) demonstrated how to employ compression to obtain value estimates of a policy in an environment. Frank et al. (2000) and later Teahan and Harper (2003) introduced the idea of classification with compressors. Recently, Jiang et al. (2023) applied this technique with NLP tasks, paired with a k-nearest-neighbour algorithm. The results are surprisingly good for simple general purpose compressors like gzip. Jiang et al. (2022) exploit the same idea but train the compressor on a vast amount of unlabeled data first. Finally, van den Oord and Schrauwen (2014) apply arithmetic coding to image compression using Student distribution mixtures and Gaussian processes as predictors.
| **Tokenization** | **200K** | **6.4M** | **38M** |
| --- | --- | --- | --- |
| ASCII | 22.9 | **13.6** | **6.4** |
| BPE 1000 | 25.4 | 14.8 | 6.9 |
| BPE 2000 | 25.6 | 15.7 | 7.4 |
| BPE 5000 | 23.1 | 17.1 | 8.7 |
| BPE 10000 | 21.3 | 17.0 | 8.9 |
| BPE 20000 | **19.3** | 16.4 | 9.0 |

Table 2: Raw compression rates (compressed size / raw size, in %) on enwik9 for Transformers trained on enwik8 with different tokenizers, ASCII and byte-pair encoding (BPE), with various vocabulary sizes; columns give the number of Transformer parameters (200K, 6.4M, 38M). Transformers compress better with simpler tokenizers. However, larger vocabulary sizes reduce the length of the sequence more, meaning more information can be packed into the context.
Compression With Neural NetworksPrior work demonstrated that neural predictive distributions can be employed to perform lossless compression via arithmetic coding (Cox, 2016; Goyal et al., 2019; Knoll, 2014; Liu et al., 2019; Mahoney, 2000; Mentzer et al., 2019, 2020; Mikolov, 2012; Rhee et al., 2022; Schiopu and Munteanu, 2020; Schiopu et al., 2018; Schmidhuber and Heil, 1996). Similarly, neural networks were also shown to achieve strong lossless compression rates when replacing arithmetic coding with asymmetric numeral systems (Barzen et al., 2022; Hoogeboom et al., 2019; Kingma et al., 2019; Townsend et al., 2019). While these approaches assume the existence of a separate training set, a different line of work investigated arithmetic coding-based neural compression in a purely online fashion, i.e., training the model only on the data stream that is to be compressed (Bellard, 2019, 2021; Goyal et al., 2020; Mao et al., 2022). Finally, concurrent work (Valmeekam et al., 2023) also investigated lossless offline compression with foundation models, using arithmetic coding with LLaMA-7B (Touvron et al., 2023).
**Compression Biases: Tokenization, Model Size, etc.** Much effort has been devoted to understanding the inductive biases of neural networks. Here, we are mostly interested in the biases of Natural Language Processing (NLP) and Transformers. Kudo and Richardson (2018) defined a tokenizer for NLP-related research, an improvement of well-known techniques like byte-pair encoding (BPE) (Sennrich et al., 2016), BPE dropout (Provilkov et al., 2020), and subword regularization (Kudo, 2018). In this paper, we show how these tokenization techniques act as pre-compressors for the data, and can significantly affect the final compression rates when paired with a neural model. More general studies have been performed on generalization (Neyshabur et al., 2017), which, we argue, is equivalent to the model's compressive power when accounting for the parameters' code-length. Finally, some work has been done on compressing the neural models' parameters themselves (Cheng et al., 2017).
## 5 Conclusion
In this paper we investigated how and why compression and prediction are equivalent. Arithmetic coding transforms a prediction model into a compressor, and, conversely, a compressor can be transformed into a predictor by using the coding lengths to construct probability distributions following Shannon's entropy principle. We evaluated large pretrained models used as compressors against various standard compressors, and showed they are competitive not only on text but also on modalities they have never been trained on (images, audio data). We showed that the compression viewpoint provides novel insights on scaling laws since it takes the model size into account, unlike the log-loss objective, which is standard in current language modeling research. Consequently, we showed that the optimal model size is inextricably linked to the dataset size and cannot be scaled without limit.
#### Acknowledgments
We thank Jorg Bornschein, Nando de Freitas, Slav Petrov, and Zhengdong Wang for their helpful feedback and insightful discussions.
|
2306.17782 | Efficient Federated Low Rank Matrix Recovery via Alternating GD and
Minimization: A Simple Proof | This note provides a significantly simpler and shorter proof of our sample
complexity guarantee for solving the low rank column-wise sensing problem using
the Alternating Gradient Descent (GD) and Minimization (AltGDmin) algorithm.
AltGDmin was developed and analyzed for solving this problem in our recent
work. We also provide an improved guarantee. | Namrata Vaswani | 2023-06-30T16:37:25Z | http://arxiv.org/abs/2306.17782v2 | # A Simple Proof for
###### Abstract
This note provides a significantly simpler and shorter proof of our sample complexity guarantee for solving the low rank column-wise compressive sensing (LRCS) problem using the Alternating Gradient Descent (GD) and Minimization (AltGDmin) algorithm. AltGDmin was developed and analyzed for solving LRCS in our recent work. We also provide an improved guarantee.
## I Introduction
We study the low rank column-wise compressive sensing (LRCS) problem which involves recovering a low rank (LR) matrix from independent compressive measurements of each of its columns. The alternating gradient descent (GD) and minimization (altGDmin) algorithm for solving it in a fast, memory/communication-efficient and private fashion was developed and analyzed in our recent work [1]. This brief note provides a significantly simpler and shorter proof of our sample complexity guarantee for AltGDmin. In fact, it also improves our guarantee for the altGDmin iterations.
## II Problem statement, notation, and algorithm
### _Problem statement and assumption_
The goal is to recover an \(n\times q\) rank-\(r\) matrix \(\mathbf{X}^{\star}=[\mathbf{x}_{1}^{\star},\mathbf{x}_{2}^{\star},\dots,\mathbf{x}_{q}^{\star}]\) from \(m\) linear projections (sketches) of each of its \(q\) columns, i.e. from
\[\mathbf{y}_{k}:=\mathbf{A}_{k}\mathbf{x}_{k}^{\star},\ k\in[q] \tag{1}\]
where each \(\mathbf{y}_{k}\) is an \(m\)-length vector, \([q]:=\{1,2,\dots,q\}\), and the measurement/sketching matrices \(\mathbf{A}_{k}\) are mutually independent and known. The setting of interest is low-rank (LR), \(r\ll\min(n,q)\), and undersampled measurements, \(m<n\). Our guarantees assume that each \(\mathbf{A}_{k}\) is random-Gaussian: each entry of it is independent and identically distributed (i.i.d.) standard Gaussian. Let \(\mathbf{X}^{\star}\stackrel{{\mathrm{SVD}}}{{\longrightarrow}}\mathbf{U} ^{\star}\mathbf{\Sigma}^{\star}\mathbf{V}^{\star}:=\mathbf{U}^{\star}\mathbf{B}^{\star}\) denote its reduced (rank \(r\)) SVD, and \(\kappa:=\sigma^{\star}_{\max}/\sigma^{\star}_{\min}\) the condition number of \(\mathbf{\Sigma}^{\star}\). We let \(\mathbf{B}^{\star}:=\mathbf{\Sigma}^{\star}\mathbf{V}^{\star}\).
Since no measurement \(\mathbf{y}_{ki}\) is a global function of the entire matrix, \(\mathbf{X}^{\star}\), we need the following assumption, borrowed from LR matrix completion literature, to make our problem well-posed (allow for correct interpolation across columns).
**Assumption II.1** (Incoherence of right singular vectors).: _Assume that \(\|\mathbf{b}_{k}^{\star}\|^{2}\leq\mu^{2}r\sigma_{\max}^{\star}{}^{2}/q\) for a numerical constant \(\mu\)._
The communication complexity and privacy discussion assumes a vertically federated setting, i.e., one in which a different subset of \(\mathbf{y}_{k}\)s is measured/sketched at each node.
### _Notation_
\(\|.\|_{F}\) denotes the Frobenius norm, \(\|.\|\) without a subscript denotes the (induced) \(l_{2}\) norm, \({}^{\top}\) denotes matrix or vector transpose, \(\mathbf{e}_{k}\) is used to denote the \(k\)-th canonical basis vector (\(k\)-th column of \(\mathbf{I}\)), and \(\mathbf{M}^{\dagger}:=(\mathbf{M}^{\top}\mathbf{M})^{-1}\mathbf{M}^{\top}\). For two \(n\times r\) matrices \(\mathbf{U}_{1},\mathbf{U}_{2}\) that have orthonormal columns, we use
\[\mathrm{SD}_{2}(\mathbf{U}_{1},\mathbf{U}_{2}):=\|(\mathbf{I}-\mathbf{U}_{1}\mathbf{U}_{1}^{\top} )\mathbf{U}_{2}\|\]
as the Subspace Distance (SD) measure. In our previous work, we used the Frobenius norm SD,
\[\mathrm{SD}_{F}(\mathbf{U}_{1},\mathbf{U}_{2}):=\|(\mathbf{I}-\mathbf{U}_{1}\mathbf{U}_{1}^{\top} )\mathbf{U}_{2}\|_{F}.\]
Clearly, \(\mathrm{SD}_{F}(\mathbf{U}_{1},\mathbf{U}_{2})\leq\sqrt{r}\mathrm{SD}_{2}(\mathbf{U}_{1}, \mathbf{U}_{2})\). We reuse the letters \(c,C\) to denote different numerical constants in each use with the convention that \(c<1\) and \(C\geq 1\). We use \(\sum_{k}\) as a shortcut for the summation over \(k=1\) to \(q\) and \(\sum_{ki}\) for that for the summation over \(i=1\) to \(m\) and \(k=1\) to \(q\).
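For concreteness, the two subspace-distance measures can be computed as in the following NumPy sketch (ours), assuming \(\mathbf{U}_{1},\mathbf{U}_{2}\) have orthonormal columns.

```python
import numpy as np

def sd_2(U1: np.ndarray, U2: np.ndarray) -> float:
    """SD_2(U1, U2) = || (I - U1 U1^T) U2 ||  (induced 2-norm)."""
    P = np.eye(U1.shape[0]) - U1 @ U1.T
    return np.linalg.norm(P @ U2, 2)

def sd_F(U1: np.ndarray, U2: np.ndarray) -> float:
    """SD_F(U1, U2) = || (I - U1 U1^T) U2 ||_F  (Frobenius norm)."""
    P = np.eye(U1.shape[0]) - U1 @ U1.T
    return np.linalg.norm(P @ U2, "fro")
```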
### _Review of AltGDmin algorithm_
AltGDmin, summarized in Algorithm 1, imposes the LR constraint by factorizing the unknown matrix \(\mathbf{X}\) as \(\mathbf{X}=\mathbf{U}\mathbf{B}\) with \(\mathbf{U}\) being an \(n\times r\) matrix and \(\mathbf{B}\) an \(r\times q\) matrix. It minimizes \(f(\mathbf{U},\mathbf{B}):=\sum_{k=1}^{q}\|\mathbf{y}_{k}-\mathbf{U}\mathbf{b}_{k}\|^{2}\) as follows:
1. _Truncated spectral init_: Initialize \(\mathbf{U}\) (explained below).
2. At each iteration, update \(\mathbf{B}\) and \(\mathbf{U}\) as follows:
    1. _Min for_ \(\mathbf{B}\): keeping \(\mathbf{U}\) fixed, update \(\mathbf{B}\) by solving \(\min_{\mathbf{B}}f(\mathbf{U},\mathbf{B})\). Due to the form of the LRCS measurement model, this minimization decouples across columns, making it a cheap least squares problem of recovering \(q\) different \(r\)-length vectors. It is solved as \(\mathbf{b}_{k}=(\mathbf{A}_{k}\mathbf{U})^{\dagger}\mathbf{y}_{k}\) for each \(k\in[q]\).
    2. _Projected-GD for_ \(\mathbf{U}\): keeping \(\mathbf{B}\) fixed, update \(\mathbf{U}\) by a GD step, followed by orthonormalizing its columns: \(\mathbf{U}^{+}=QR(\mathbf{U}-\eta\nabla_{\mathbf{U}}f(\mathbf{U},\mathbf{B}))\). Here \(QR(.)\) orthonormalizes the columns of its input.
We initialize \(\mathbf{U}\) by computing the top \(r\) singular vectors of
\[\mathbf{X}_{0}:=\sum_{k}\mathbf{A}_{k}^{\top}\mathbf{y}_{k,true}\mathbf{e}_{k}^{\top},\ \mathbf{y}_{k,true}:=\mathrm{trunc}(\mathbf{y}_{k},\alpha)\]
Here \(\alpha:=\tilde{C}\sum_{k}\|\mathbf{y}_{k}\|^{2}/mq\) with \(\tilde{C}:=9\kappa^{2}\mu^{2}\). The function \(\mathrm{trunc}\) truncates (zeroes out) all entries of \(\mathbf{y}_{k}\) with magnitude greater than \(\sqrt{\alpha}\), i.e., for all \(j\in[n]\), \(\operatorname{trunc}(\mathbf{y},\alpha)_{j}=(\mathbf{y})_{j}\mathbf{1}_{|\mathbf{y}_{j}|\leq\sqrt{\alpha}}\), with \(\mathbf{1}\) being the indicator function.
Sample-splitting is assumed, i.e., each new update of \(\mathbf{U}\) and \(\mathbf{B}\) uses a new independent set of measurements and measurement matrices, \(\mathbf{y}_{k},\mathbf{A}_{k}\).
The use of minimization to update \(\mathbf{B}\) at each iteration is what helps ensure that we can show exponential error decay with a constant step size. At the same time, due to the decoupling, its time complexity is only as much as that of computing one gradient w.r.t. \(\mathbf{U}\). Both steps need time of order \(mqnr\) (see Footnote 1). This is only \(r\) times more than "linear time" (time needed to read the algorithm inputs, here \(\mathbf{y}_{k},\mathbf{A}_{k}\)'s). To our knowledge, \(r\)-times linear-time is the best known time complexity for any algorithm for any LR matrix recovery problem. Moreover, due to the use of the \(\mathbf{X}=\mathbf{U}\mathbf{B}\) factorization, AltGDmin is also communication-efficient. Each node needs to only send \(nr\) scalars (gradients w.r.t \(\mathbf{U}\)) at each iteration.
Footnote 1: The LS step time is \(\max(q\cdot mnr,q\cdot mr^{2})=mqnr\) (maximum of the time needed for computing \(\mathbf{A}_{k}\mathbf{U}\) for all \(k\), and that for obtaining \(\mathbf{b}_{k}\) for all \(k\)) while the GD step time is \(\max(q\cdot mnr,nr^{2})=mqnr\) (maximum of the time needed for computing the gradient w.r.t. \(\mathbf{U}\), and time for the QR step).
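A compact NumPy sketch of Algorithm 1 (our illustration: centralized, without sample splitting or federation, and with the unknown \({\sigma^{\star}_{\max}}^{2}\) in the step size replaced by the computable proxy \(\|\mathbf{B}\|^{2}\); all function and parameter names are ours) is given below.

```python
import numpy as np

def altgdmin(y, A, r, T, eta_scale=0.4, C_tilde=9.0):
    """y: list of q measurement vectors (length m); A: list of q (m x n) sketching matrices."""
    q, (m, n) = len(y), A[0].shape
    # Truncated spectral initialization.
    alpha = C_tilde * sum(float(yk @ yk) for yk in y) / (m * q)
    X0 = np.zeros((n, q))
    for k in range(q):
        yk_trunc = y[k] * (np.abs(y[k]) <= np.sqrt(alpha))   # zero out too-large entries
        X0[:, k] = A[k].T @ yk_trunc
    U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]      # top-r left singular vectors
    # GDmin iterations.
    B = np.zeros((r, q))
    for _ in range(T):
        for k in range(q):                                    # min over B: decoupled least squares
            B[:, k] = np.linalg.lstsq(A[k] @ U, y[k], rcond=None)[0]
        grad = sum(np.outer(A[k].T @ (A[k] @ (U @ B[:, k]) - y[k]), B[:, k]) for k in range(q))
        eta = eta_scale / (m * np.linalg.norm(B, 2) ** 2)     # proxy for 0.4 / (m * sigma_max^*2)
        U = np.linalg.qr(U - eta * grad)[0]                   # projected GD: gradient step + QR
    return U, B
```

In the vertically federated setting of Section II, each node would compute the partial gradient sum over its own \(\mathbf{y}_{k}\)'s and communicate only the resulting \(n\times r\) matrix, consistent with the communication cost quoted above.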
## III New Guarantee
Let \(m_{0}\) denote the number of samples per column needed for initialization and let \(m_{1}\) denote this number for each GDmin iteration. Then, the total sample complexity per column is \(m=m_{0}+m_{1}T\).
**Theorem 3.1**.: _Assume that Assumption 2.1 holds. Set \(\eta=0.4/m{\sigma_{\max}^{*}}^{2}\) and \(T=C\kappa^{2}\log(1/\epsilon)\). If_
\[mq\geq C\kappa^{6}\mu^{2}(n+q)r(\kappa^{2}r+\log(1/\epsilon))\]
_and \(m\geq C\max(\log n,\log q,r)\log(1/\epsilon)\), then, with probability (w.p.) at least \(1-n^{-10}\),_
\[\operatorname{SD}_{2}(\mathbf{U},\mathbf{U}^{*})\leq\epsilon\text{ and }\|\mathbf{x}_{k}-\mathbf{x }_{k}^{*}\|\leq\epsilon\|\mathbf{x}_{k}^{*}\|\text{ for all }k\in[q].\]
_The time complexity is \(mqnr\cdot T=C\kappa^{2}mqnr\log(1/\epsilon)\). The communication complexity is \(nr\) per node per iteration._
Proof.: We prove the three results needed for proving this in Sections IV-B, IV-D, and V below. We use these to prove the above result in Sec. VI.
### _Discussion_
We use \(a\gtrsim b\) to mean that \(a\geq C_{\kappa,\mu}b\) where \(C_{\kappa,\mu}\) includes terms dependent on \(\kappa,\mu\). As done in past works, our discussion treats \(\kappa,\mu\) as numerical constants (that do not grow with \(n,q,r\)). Also, whp means w.p. at least \(1-n^{-10}\).
Our result from [1] needed
\[mq\geq C\kappa^{6}\mu^{2}(n+q)r^{2}\log(1/\epsilon)\]
Our new result improves the dependence on \(r,\epsilon\) from order \(r^{2}\log(1/\epsilon)\) to order \(r\cdot\max(r,\log(1/\epsilon))\). Let \(\max=\max(r,\log(1/\epsilon))\); then the old one [1] needs \(mq\gtrsim n(\max)^{3}\) while the new one needs \(mq\gtrsim n(\max)^{2}\). This improvement is obtained because our new guarantee for the GD step (Theorem 4.4) only needs \(m_{1}q\gtrsim nr\) at each iteration. On the other hand, the older result needed \(m_{1}q\gtrsim nr^{2}\). Both guarantees need \(m_{0}q\gtrsim nr^{2}\) for initialization, see Theorem 5.7.
We get this improvement because we now use a simpler proof technique that works for the LRCS problem, but not for its LR phase retrieval (LRPR) generalization. LRPR involves recovering \(\mathbf{X}^{*}\) from \(\mathbf{z}_{k}:=|\mathbf{y}_{k}|,k\in[q]\). In [1], we were attempting to solve both problems. There are two differences in our new proof compared with that of [1]: (i) we use the 2-norm subspace distance \(\operatorname{SD}_{2}(\mathbf{U},\mathbf{U}^{*})\) instead of \(\operatorname{SD}_{F}(\mathbf{U},\mathbf{U}^{*})\), and (ii) we do not use the fundamental theorem of calculus [2, 3] for analyzing the GD step, but instead use a much simpler direct approach2. In [1], we used the Frobenius norm SD because it helped obtain the desired \(nr^{3}\) guarantee for LRPR3. Also, in hindsight, the use of the fundamental theorem of calculus was unnecessary. It has been used in earlier work [3] for analyzing a GD based algorithm for standard PR and for LR matrix completion and that originally motivated us to adapt the same approach for LRCS and LRPR.
## IV New proof: GDmin iterations
### _Definitions and preliminaries_
Let \(\mathbf{U}\) be the estimate at the \(t\)-th iteration. Define
\[\mathbf{g}_{k} :=\mathbf{U}^{\top}\mathbf{x}_{k}^{\star},k\in[q],\text{ and }\mathbf{G}:=\mathbf{U}^{ \top}\mathbf{X}^{\star},\] \[\mathbf{P} :=\mathbf{I}-\mathbf{U}^{\star}{\mathbf{U}^{\star}}^{\top},\] \[\mathrm{Grad} :=\nabla_{\mathbf{U}}f(\mathbf{U},\mathbf{B})=\sum_{k}\mathbf{A}_{k}^{\top}(\mathbf{ A}_{k}\mathbf{U}\mathbf{b}_{k}-\mathbf{y}_{k})\mathbf{b}_{k}^{\top}\] \[=\sum_{ki}(\mathbf{y}_{ki}-\mathbf{a}_{ki}{}^{\top}\mathbf{U}\mathbf{b}_{k})\mathbf{a }_{ki}\mathbf{b}_{k}^{\top}\]
For an \(n_{1}\times n_{2}\) matrix, \(\mathbf{Z}\), \(\sigma_{\min}(\mathbf{Z})=\sigma_{\min}(\mathbf{Z}^{\top})=\sigma_{\min(n_{1},n_{2})}( \mathbf{Z})\). Thus, if \(\mathbf{A}\) is tall, then \(\sigma_{\min}(\mathbf{A})=\sqrt{\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})}\). Using this, it follows that, if \(\mathbf{A}=\mathbf{B}\mathbf{C}\) and \(\mathbf{A}\) and \(\mathbf{B}\) are tall (or square), then \(\sigma_{\min}(\mathbf{A})\geq\sigma_{\min}(\mathbf{B})\sigma_{\min}(\mathbf{C})\).
### _Minimization step_
Assume \(\mathrm{SD}_{2}(\mathbf{U},\mathbf{U}^{\star})\leq\delta_{t}\) with \(\delta_{t}<0.02\).
We use the following lemma.
**Lemma 4.1** ([1]).: _Let \(\mathbf{g}_{k}:=\mathbf{U}^{\top}\mathbf{x}_{k}^{\star}\). Then, w.p. at least \(1-\exp(\log q+r-cm)\),_
\[\|\mathbf{g}_{k}-\mathbf{b}_{k}\|\leq 0.4\|\left(\mathbf{I}_{n}-\mathbf{U}\mathbf{U}^{\top} \right)\mathbf{U}^{\star}\mathbf{b}_{k}^{\star}\|\]
By Lemma 4.1, if \(m\gtrsim\max(\log q,\log n,r)\), then, whp
\[\|\mathbf{b}_{k}-\mathbf{g}_{k}\|\leq 0.4\|(\mathbf{I}-\mathbf{U}\mathbf{U}^{\top})\mathbf{U}^{\star} \mathbf{b}_{k}^{\star}\|\]
This directly implies
1. \(\|\mathbf{b}_{k}-\mathbf{g}_{k}\|\leq 0.4\delta_{t}\|\mathbf{b}_{k}^{\star}\|\)
2. \(\|\mathbf{b}_{k}\|\leq\|\mathbf{g}_{k}\|+0.4\cdot 0.02\|\mathbf{b}_{k}^{\star}\|\leq 1.1\|\mathbf{b }_{k}^{\star}\|\)
3. \(\|\mathbf{x}_{k}-\mathbf{x}_{k}^{\star}\|\leq 1.4\delta_{t}\|\mathbf{b}_{k}^{\star}\|\)
Using above,
\[\|\mathbf{B}-\mathbf{G}\|_{F}\leq 0.4\delta_{t}\sqrt{\sum_{k}\|\mathbf{b}_{k}^{\star} \|^{2}}=0.4\delta_{t}\sqrt{r}\sigma_{\max}^{\star}\]
and similarly for \(\|\mathbf{X}-\mathbf{X}^{\star}\|_{F}\). Thus,
1. \(\|\mathbf{B}-\mathbf{G}\|_{F}\leq 0.4\delta_{t}\|\mathbf{B}^{\star}\|_{F}\leq 0.4\sqrt{r} \delta_{t}\sigma_{\max}^{\star}\)
2. \(\|\mathbf{X}-\mathbf{X}^{\star}\|_{F}\leq 1.4\sqrt{r}\delta_{t}\sigma_{\max}^{\star}\)
Furthermore,
\[\sigma_{\min}(\mathbf{B})\geq\sigma_{\min}(\mathbf{G})-\|\mathbf{B}-\mathbf{G}\|\geq\sigma_{\min}(\mathbf{G})-\|\mathbf{B}-\mathbf{G}\|_{F}\]
We have
\[\sigma_{\min}(\mathbf{G})=\sigma_{\min}(\mathbf{G}^{\top})\geq\sigma_{\min}^{\star} \sigma_{\min}(\mathbf{U}^{\star}{}^{\top}\mathbf{U})\]
and
\[\sigma_{\min}(\mathbf{U}^{\star}{}^{\top}\mathbf{U})=\sqrt{1-\|\mathbf{P}\mathbf{U}\|^{2}} \geq\sqrt{1-\delta_{t}^{2}}\]
This follows using \(\sigma_{\min}^{2}(\mathbf{U}^{\star\top}\mathbf{U})=\lambda_{\min}(\mathbf{U}^{\top}\mathbf{U}^{\star}\mathbf{U}^{\star\top}\mathbf{U})=\lambda_{\min}(\mathbf{U}^{\top}(\mathbf{I}-\mathbf{P})\mathbf{U})=\lambda_{\min}(\mathbf{I}-\mathbf{U}^{\top}\mathbf{P}\mathbf{U})=\lambda_{\min}(\mathbf{I}-\mathbf{U}^{\top}\mathbf{P}^{2}\mathbf{U})=1-\lambda_{\max}(\mathbf{U}^{\top}\mathbf{P}^{2}\mathbf{U})=1-\|\mathbf{P}\mathbf{U}\|^{2}\).
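This identity is easy to verify numerically; the following quick NumPy check (ours) does so for random orthonormal \(\mathbf{U},\mathbf{U}^{\star}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 5
U = np.linalg.qr(rng.standard_normal((n, r)))[0]        # orthonormal columns
Ustar = np.linalg.qr(rng.standard_normal((n, r)))[0]
P = np.eye(n) - Ustar @ Ustar.T

lhs = np.linalg.svd(Ustar.T @ U, compute_uv=False).min()  # sigma_min(U*^T U)
rhs = np.sqrt(1 - np.linalg.norm(P @ U, 2) ** 2)          # sqrt(1 - ||P U||^2)
print(np.isclose(lhs, rhs))  # True
```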
Combining the above three bounds, if \(\delta_{t}<0.02/\sqrt{r}\kappa\), then
\[\sigma_{\min}(\mathbf{B})\geq\sqrt{1-\delta_{t}^{2}}\sigma_{\min}^{\star}-0.4\sqrt {r}\delta_{t}\sigma_{\max}^{\star}\geq 0.9\sigma_{\min}^{\star}\]
and
\[\sigma_{\max}(\mathbf{B})\leq\|\mathbf{G}\|+0.4\sqrt{r}\delta_{t}\sigma_{\max}^{\star} \leq 1.1\sigma_{\max}^{\star}\]
since \(\|\mathbf{G}\|\leq\|\mathbf{B}^{\star}\|=\sigma_{\max}^{\star}\).
Thus, we have proved the following claim:
**Theorem 4.2**.: _Assume that \(\mathrm{SD}_{2}(\mathbf{U},\mathbf{U}^{\star})\leq\delta_{t}\). If \(\delta_{t}\leq 0.02/\sqrt{r}\kappa\), and if \(m\gtrsim\max(\log q,\log n,r)\), then whp,_
1. \(\|\mathbf{b}_{k}-\mathbf{g}_{k}\|\leq 0.4\delta_{t}\|\mathbf{b}_{k}^{\star}\|\)__
2. \(\|\mathbf{b}_{k}\|\leq\|\mathbf{g}_{k}\|+0.4\cdot 0.02\|\mathbf{b}_{k}^{\star}\|\leq 1.1\|\mathbf{b}_{k}^{\star}\|\)__
3. \(\|\mathbf{B}-\mathbf{G}\|_{F}\leq 0.4\delta_{t}\|\mathbf{B}^{\star}\|_{F}\leq 0.4\sqrt{r} \delta_{t}\sigma_{\max}^{\star}\)__
4. \(\|\mathbf{x}_{k}-\mathbf{x}_{k}^{\star}\|\leq 1.4\delta_{t}\|\mathbf{b}_{k}^{\star}\|\)__
5. \(\|\mathbf{X}-\mathbf{X}^{\star}\|_{F}\leq 1.4\sqrt{r}\delta_{t}\sigma_{\max}^{\star}\)__
6. \(\sigma_{\min}(\mathbf{B})\geq 0.9\sigma_{\min}^{\star}\)__
7. \(\sigma_{\max}(\mathbf{B})\leq 1.1\sigma_{\max}^{\star}\)__
_(only the last two bounds require the upper bound on \(\delta_{t}\))._
### _New bounds on the expected gradient and deviation from it_
Using independence of \(\mathbf{A}_{k}\) and \(\{\mathbf{U},\mathbf{b}_{k}\}\) (due to sample splitting),
\[\mathbb{E}[\mathrm{GradU}]=\sum_{k}m(\mathbf{x}_{k}-\mathbf{x}_{k}^{\star})\mathbf{b}_{k}{}^{\top}\]
Using bounds on \(\|\mathbf{B}\|\) and \(\|\mathbf{X}^{\star}-\mathbf{X}\|_{F}\) from Theorem 4.2, if \(\delta_{t}<\frac{c}{\sqrt{r}\kappa}\),
\[\|\mathbb{E}[\mathrm{GradU}]\| =\|\sum_{k}m(\mathbf{x}_{k}-\mathbf{x}_{k}^{\star})\mathbf{b}_{k}{}^{\top}\|=m \|(\mathbf{X}-\mathbf{X}^{\star})\mathbf{B}^{\top}\|\] \[\leq m\|\mathbf{X}-\mathbf{X}^{\star}\|\cdot\|\mathbf{B}\|\] \[\leq m\|\mathbf{X}-\mathbf{X}^{\star}\|_{F}\cdot\|\mathbf{B}\|\leq 1.1m\delta_{t} \sqrt{r}\sigma_{\max}^{\star}{}^{2}.\]
w.p. \(1-\exp(\log q+r-cm)\).
Next, we bound \(\|\mathrm{GradU}-\mathbb{E}[\mathrm{GradU}]\|=\max_{\|\mathbf{w}\|=1,\|\mathbf{z}\|=1} \mathbf{w}^{\top}(\sum_{k}\sum_{i}\mathbf{a}_{ki}\mathbf{a}_{ki}^{\top}(\mathbf{x}_{k}-\mathbf{x}_{k}^{\star})\mathbf{b}_{k}^{\top}-\mathbb{E}[\cdot])\mathbf{z}\). This also uses the independence of \(\mathbf{A}_{k}\) and \(\{\mathbf{U},\mathbf{b}_{k}\}\) that is guaranteed by sample splitting.
In the above, we used (i) \(\sum_{k}(\mathbf{z}^{\top}\mathbf{b}_{k})^{2}=\|\mathbf{z}^{\top}\mathbf{B}\|^{2}\leq\|\mathbf{B}\|^{2}\) since \(\mathbf{z}\) is unit norm, (ii) Theorem IV.2 to bound \(\|\mathbf{B}\|\leq 1.1\sigma_{\max}^{\star}\), and (iii) Theorem IV.2 followed by Assumption II.1 (right incoherence) to bound \(\|\mathbf{x}_{k}-\mathbf{x}_{k}^{\star}\|\leq\delta_{t}\cdot\mu\sigma_{\max}^{\star} \sqrt{r/q}\) and \(|\mathbf{z}^{\top}\mathbf{b}_{k}|\leq\|\mathbf{b}_{k}\|\leq 1.1\|\mathbf{b}_{k}^{\star}\|\leq 1.1 \mu\sigma_{\max}^{\star}\sqrt{r/q}\).
For \(\epsilon_{1}<1\), the first term above is smaller (since \(1/\kappa^{4}\leq 1/\kappa^{2}\)), i.e., \(\min(\frac{\epsilon^{2}}{\sum_{k_{i}}K_{k_{i}}^{2}},\frac{\epsilon}{\max_{k_{ i}}\kappa_{ki}})=c\frac{\epsilon_{1}^{2}mq}{\kappa^{4}\mu^{2}r}\). Thus, by sub-exponential Bernstein, w.p. at least \(1-\exp(-c\frac{\epsilon_{1}^{2}mq}{\kappa^{4}\mu^{2}r})-\exp(\log q+r-cm)\), for a given \(\mathbf{w},\mathbf{z}\),
\[\mathbf{w}^{\top}(\mathrm{GradU}-\mathbb{E}[\mathrm{GradU}])\mathbf{z}\leq\epsilon_{ 1}\delta_{t}m{\sigma_{\min}^{\star}}^{2}\]
Using a standard epsilon-net argument to bound the maximum of the above over all unit norm \(\mathbf{w},\mathbf{z}\), e.g., using [1, Proposition IV.7], we can conclude that
\[\|\mathrm{GradU}-\mathbb{E}[\mathrm{GradU}]\|\leq 1.1\epsilon_{1}\delta_{t}m{ \sigma_{\min}^{\star}}^{2}\]
w.p. at least \(1-\exp(C(n+r)-c\frac{\epsilon_{1}^{2}mq}{\kappa^{4}\mu^{2}r})-\exp(\log q+r-cm)\). The factor of \(\exp(C(n+r))\) is due to the epsilon-net over \(\mathbf{w}\) and that over \(\mathbf{z}\): \(\mathbf{w}\) is an \(n\)-length unit norm vector while \(\mathbf{z}\) is an \(r\)-length unit norm vector. The smallest epsilon net covering the hyper-sphere of all \(\mathbf{w}\)s is of size \((1+2/\epsilon_{net})^{n}=C^{n}\) with \(\epsilon_{net}=c\) while that for \(\mathbf{z}\) is of size \(C^{r}\). Union bounding over both thus gives a factor of \(C^{n+r}\). By replacing \(\epsilon_{1}\) by \(\epsilon_{1}/1.1\), our bound becomes simpler (and \(1/1.1^{2}\) gets incorporated into the factor \(c\)). We have thus proved the following.
**Lemma IV.3**.: _Assume that \(\mathrm{SD}_{2}(\mathbf{U},\mathbf{U}^{\star})\leq\delta_{t}\). The following hold:_
1. \(\mathbb{E}[\mathrm{GradU}]=m(\mathbf{X}-\mathbf{X}^{\star})\mathbf{B}^{\top}=m(\mathbf{U}\mathbf{B} \mathbf{B}^{\top}-\mathbf{X}^{\star}\mathbf{B}^{\top})\)__
2. \(\|\mathbb{E}[\mathrm{GradU}]\|\leq 1.1m\delta_{t}\sqrt{r}{\sigma_{\max}^{\star}}^{2}\)__
3. _If_ \(\delta_{t}<\frac{c}{\sqrt{r\kappa}}\)_, then, w.p. at least_ \(1-\exp(C(n+r)-c\frac{\epsilon_{1}^{2}mq}{\kappa^{4}\mu^{2}r})-\exp(\log q+r-cm)\)_,_ \[\|\mathrm{GradU}-\mathbb{E}[\mathrm{GradU}]\|\leq\epsilon_{1}\delta_{t}m{ \sigma_{\min}^{\star}}^{2}\]
The above lemma is an improvement over the bounds given in [1] because \(\delta_{t}\) is now the bound on the 2-norm SD, and still it only needs \(mq\gtrsim nr/\epsilon_{1}^{2}\).
### _GD step_
Assume that \(\mathrm{SD}_{2}(\mathbf{U},\mathbf{U}^{\star})\leq\delta_{t}\) with \(\delta_{t}<0.02\).
Recall the Projected GD step for \(\mathbf{U}\):
\[\mathbf{\tilde{U}}^{+} =\mathbf{U}-\eta\mathrm{GradU}\text{ and }\mathbf{\tilde{U}}^{+} \stackrel{{\mathrm{QR}}}{{=}}\mathbf{U}^{+}\mathbf{R}^{+}\]
Since \(\mathbf{U}^{+}=\mathbf{\tilde{U}}^{+}(\mathbf{R}^{+})^{-1}\) and since \(\|(\mathbf{R}^{+})^{-1}\|=1/\sigma_{\min}(\mathbf{R}^{+})=1/\sigma_{\min}(\mathbf{\tilde{U} }^{+})\), thus, \(\mathrm{SD}_{2}(\mathbf{U}^{+},\mathbf{U}^{\star})=\|\mathbf{P}\mathbf{U}^{+}\|\) can be bounded as
\[\mathrm{SD}_{2}(\mathbf{U}^{+},\mathbf{U}^{\star}) \leq\frac{\|\mathbf{P}\mathbf{\tilde{U}}^{+}\|}{\sigma_{\min}(\mathbf{\tilde {U}}^{+})}\leq\frac{\|\mathbf{P}\mathbf{\tilde{U}}^{+}\|}{\sigma_{\min}(\mathbf{U})-\eta \|\mathrm{GradU}\|} \tag{2}\]
Consider the numerator. Adding/subtracting \(\mathbb{E}[\mathrm{GradU}]\), left multiplying both sides by \(\mathbf{P}\), and using Lemma IV.3 (first part),
\[\mathbf{\tilde{U}}^{+} =\mathbf{U}-\eta\mathbb{E}[\mathrm{GradU}]+\eta(\mathbb{E}[\mathrm{GradU}]-\mathrm{GradU}),\text{ thus, }\] \[\mathbf{P}\mathbf{\tilde{U}}^{+} =\mathbf{P}\mathbf{U}-\eta m\mathbf{P}\mathbf{U}\mathbf{B}\mathbf{B}^{\top}+\eta\mathbf{P}(\mathbb{E}[\mathrm{GradU}]-\mathrm{GradU})\]
The last row used \(\mathbf{P}\mathbf{X}^{\star}=0\). Thus,
\[\|\mathbf{P}\mathbf{\tilde{U}}^{+}\|\leq\|\mathbf{P}\mathbf{U}\|\|\mathbf{I}-\eta m\mathbf{B}\mathbf{B}^{ \top}\|+\eta\|\mathbb{E}[\mathrm{GradU}]-\mathrm{GradU}\| \tag{3}\]
Using Theorem IV.2, we get
\[\lambda_{\min}(\mathbf{I}-\eta m\mathbf{B}\mathbf{B}^{\top})=1-\eta m\|\mathbf{B}\|^{2}\geq 1-1.2\eta m{\sigma_{\max}^{\star}}^{2}\]
Thus, if \(\eta<0.5/m{\sigma_{\max}^{\star}}^{2}\), then the above matrix is p.s.d. This along with Theorem IV.2 then implies that
\[\|\mathbf{I}-\eta m\mathbf{B}\mathbf{B}^{\top}\|=\lambda_{\max}(\mathbf{I}-\eta m\mathbf{B}\mathbf{B}^{ \top})\leq 1-0.9\eta m{\sigma_{\min}^{\star}}^{2}\]
Using the above, (3), and the bound on \(\|\mathbb{E}[\mathrm{GradU}]-\mathrm{GradU}\|\) from Lemma IV.3, we conclude the following: If \(\eta\leq 0.5/m{\sigma_{\max}^{\star}}^{2}\), and \(\delta_{t}\leq c/\sqrt{r}\kappa\), then
\[\|\mathbf{P}\mathbf{\tilde{U}}^{+}\| \leq\|\mathbf{P}\mathbf{U}\|\|\mathbf{I}-\eta m\mathbf{B}\mathbf{B}^{\top}\|+\eta\| \mathbb{E}[\mathrm{GradU}]-\mathrm{GradU}\|\] \[\leq\delta_{t}(1-0.9\ \eta m{\sigma_{\min}^{\star}}^{2})+\eta m \epsilon_{1}\delta_{t}{\sigma_{\min}^{\star}}^{2} \tag{4}\]
w.p. at least \(1-\exp(C(n+r)-c\frac{\epsilon_{1}^{2}mq}{\kappa^{4}\mu^{2}r})-\exp(\log q+r-cm)\). This probability is at least \(1-n^{-10}\) if \(mq\gtrsim\kappa^{4}\mu^{2}nr/\epsilon_{1}^{2}\) and \(m\gtrsim\max(\log n,\log q,r)\).
Next we use (4) with \(\epsilon_{1}=0.1\) and Lemma IV.3 in (2). Set \(\eta=c_{\eta}/m{\sigma_{\max}^{\star}}^{2}\). If \(c_{\eta}\leq 0.5\), if \(\delta_{t}\leq c/\sqrt{r}\kappa^{2}\), and lower bounds on \(m\) from above hold, (2) implies that, whp,
\[\mathrm{SD}_{2}(\mathbf{U}^{+},\mathbf{U}^{\star})\] \[\leq\frac{\|P\mathbf{\tilde{U}}^{+}\|}{\sigma_{\min}(\mathbf{U})-\eta\| \mathrm{GradU}\|}\] \[\leq\frac{\delta_{t}(1-\eta m{\sigma_{\min}^{\star}}^{2}(0.9-0.1))}{ \sigma_{\min}(\mathbf{U})-\eta\|\mathbb{E}[\mathrm{GradU}]\|-\eta\|\mathrm{ GradU}-\mathbb{E}[\mathrm{GradU}]\|}\] \[\leq\frac{\delta_{t}(1-0.8\eta m{\sigma_{\min}^{\star}}^{2})}{1- \eta m\delta_{t}\sqrt{r}{\sigma_{\max}^{
### _Results taken from [1]_
Recall from Algorithm 1 that \(\alpha\) uses a different set of measurements that is independent of those used for \(\mathbf{X}_{0}\). We use the following four results from [1].
**Lemma 5.1** ([1]).: _Conditioned on \(\alpha\), we have the following conclusions. Let \(\zeta\) be a scalar standard Gaussian r.v. Define_
\[\beta_{k}(\alpha):=\mathbb{E}\left[\zeta^{2}\,\mathbb{1}_{\{\|\mathbf{x}_{k}^{\star}\|^{2}\zeta^{2}\leq\alpha\}}\right].\]
_Then,_
\[\mathbb{E}[\mathbf{X}_{0}|\alpha]=\mathbf{X}^{\star}\mathbf{D}(\alpha),\] \[\text{where }\mathbf{D}(\alpha):=diagonal(\beta_{k}(\alpha),k \in[q])\]
_i.e. \(\mathbf{D}(\alpha)\) is a diagonal matrix of size \(q\times q\) with diagonal entries \(\beta_{k}\) defined above._
**Fact 5.2** ([1]).: _Let_
\[\mathcal{E}:=\left\{\tilde{C}(1-\epsilon_{1})\frac{\|\mathbf{X}^{\star}\|_{F}^{ 2}}{q}\leq\alpha\leq\tilde{C}(1+\epsilon_{1})\frac{\|\mathbf{X}^{\star}\|_{F}^{2}} {q}\right\}.\]
\(\Pr(\alpha\in\mathcal{E})\geq 1-\exp(-\tilde{c}mq\epsilon_{1}^{2})\)_. Here \(\tilde{c}=c/\tilde{C}=c/\kappa^{2}\mu^{2}\)._
**Fact 5.3** ([1]).: _For any \(\epsilon_{1}\leq 0.1\), \(\min_{k}\mathbb{E}\left[\zeta^{2}\mathbb{1}_{\{|\zeta|\leq\tilde{C}\sqrt{ \frac{\sqrt{-1}}{\sqrt{q}}\|\mathbf{x}_{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{\mathbf{ \mathbf{\bm |
2309.09339 | Pulsational pair-instability supernovae in gravitational-wave and
electromagnetic transients | Current observations of binary black-hole ({BBH}) merger events show support
for a feature in the primary BH-mass distribution at
$\sim\,35\,\mathrm{M}_{\odot}$, previously interpreted as a signature of
pulsational pair-instability (PPISN) supernovae. Such supernovae are expected
to map a wide range of pre-supernova carbon-oxygen (CO) core masses to a narrow
range of BH masses, producing a peak in the BH mass distribution. However,
recent numerical simulations place the mass location of this peak above
$50\,\mathrm{M}_{\odot}$. Motivated by uncertainties in the progenitor's
evolution and explosion mechanism, we explore how modifying the distribution of
BH masses resulting from PPISN affects the populations of gravitational-wave
(GW) and electromagnetic (EM) transients. To this end, we simulate populations
of isolated {BBH} systems and combine them with cosmic star-formation rates.
Our results are the first cosmological BBH-merger predictions made using the
\textsc{binary\_c} rapid population synthesis framework. We find that our
fiducial model does not match the observed GW peak. We can only explain the
$35\,\mathrm{M}_{\odot}$ peak with PPISNe by shifting the expected CO core-mass
range for PPISN downwards by $\sim{}15\,\mathrm{M}_{\odot}$. Apart from being
in tension with state-of-the art stellar models, we also find that this is
likely in tension with the observed rate of hydrogen-less super-luminous
supernovae. Conversely, shifting the mass range upward, based on recent stellar
models, leads to a predicted third peak in the BH mass function at
$\sim{}64\,\mathrm{M}_{\odot}$. Thus we conclude that the
$\sim{}35\,\mathrm{M}_{\odot}$ feature is unlikely to be related to PPISNe. | D. D. Hendriks, L. A. C. van Son, M. Renzo, R. G. Izzard, R. Farmer | 2023-09-17T18:05:01Z | http://arxiv.org/abs/2309.09339v1 | # Pulsational pair-instability supernovae in gravitational-wave and electromagnetic transients
###### Abstract
Current observations of binary black-hole (BBH) merger events show support for a feature in the primary BH-mass distribution at \(\sim 35\,\mathrm{M}_{\odot}\), previously interpreted as a signature of pulsational pair-instability (PPISN) supernovae. Such supernovae are expected to map a wide range of pre-supernova carbon-oxygen (CO) core masses to a narrow range of BH masses, producing a peak in the BH mass distribution. However, recent numerical simulations place the mass location of this peak above \(50\,\mathrm{M}_{\odot}\). Motivated by uncertainties in the progenitor's evolution and explosion mechanism, we explore how modifying the distribution of BH masses resulting from PPISN affects the populations of gravitational-wave (GW) and electromagnetic (EM) transients. To this end, we simulate populations of isolated BBH systems and combine them with cosmic star-formation rates. Our results are the first cosmological BBH-merger predictions made using the binary_c rapid population synthesis framework. We find that our fiducial model does not match the observed GW peak. We can only explain the \(35\,\mathrm{M}_{\odot}\) peak with PPISNe by shifting the expected CO core-mass range for PPISN downwards by \(\sim 15\,\mathrm{M}_{\odot}\). Apart from being in tension with state-of-the art stellar models, we also find that this is likely in tension with the observed rate of hydrogen-less super-luminous supernovae. Conversely, shifting the mass range upward, based on recent stellar models, leads to a predicted third peak in the BH mass function at \(\sim 64\,\mathrm{M}_{\odot}\). Thus we conclude that the \(\sim 35\,\mathrm{M}_{\odot}\) feature is unlikely to be related to PPISN.
keywords: gravitational waves - stars: black holes - (stars:) supernovae: general - (transients:) black hole mergers - transients: supernovae
## 1 Introduction
The gravitational-wave (GW) observatories operated by the LIGO VIRGO KAGRA (LVK) collaboration have started to measure signals from GW mergers (LIGO Scientific Collaboration and Virgo Collaboration et al., 2019; Abbott et al., 2021), and with the recent release of the GWTC-3 there are now \(\sim 90\) confirmed compact object merger observations (Abbott et al., 2023; The LIGO Scientific Collaboration et al., 2023), the majority of which are binary black hole (BBH) mergers. These observations show structure in the distribution of primary masses, \(M_{\mathrm{primary}}\), i.e., the most massive object in the binary at the time of BBH merger. Parametric models of the observations (e.g., Abbott et al., 2021, 2023; Farah et al., 2023), as well as non-parametric models (e.g., Sadiq et al., 2022; Callister and Farr, 2023), consistently infer a feature, e.g., a change in power-law slope, or the presence of a Gaussian peak, between 32 and \(38\,\mathrm{M}_{\odot}\), suggesting that this feature is robust. The exact nature and origin of this feature are unclear, but the often-proposed explanation is that it originates from a pile-up of BH masses due to PPISNe (Talbot and Thrane, 2018; Stevenson et al., 2019; Belczynski et al., 2020; Karathanasis et al., 2023). However, several alternative explanations for such a feature have also been proposed (e.g., Li et al., 2022; Antonini et al., 2023; Briel et al., 2023).
PPISNe occur when a very massive star becomes dynamically unstable due to runaway electron-positron pair formation in its core, which removes high-energy photons. This leads to a decrease in the radiation pressure and an increase of the mass-density, and causes a softening of the equation of state, i.e., a decrease of the adiabatic index. The process results in an initial collapse, the explosive ignition of oxygen in the core, and a subsequent pulsating behaviour through which the star loses mass, or a single violent explosion that leaves behind no remnant (Barkat et al., 1967; Rakavy and Shaviv, 1967; Woosley et al., 2007; Woosley et al., 2017; Marchant et al., 2019; Renzo et al., 2020, 2020; Farmer et al., 2020; Farag et al., 2022). PPISNe cause a wide range of pre-supernova core masses to form BHs in a narrow range of remnant masses, which leads to an over-density and a subsequent mass-gap at higher BH masses. The magnitude of this over-density, or pile-up, depends on the width of the pre-supernova core-mass
range that undergoes PPISN and consequently on how sensitive the PPISN mass-loss is to the pre-supernova core mass. A broader pre-supernova core-mass range, i.e., a shallower PPISN remnant mass curve, leads to a larger pile-up.
Detailed models of single stars allow estimates of the mass lost during the pulsations at a given helium (He) or CO core mass (Renzo et al., 2022). These are used to calculate an upper limit on the BH remnant mass that can form after subsequent core-collapse (CC) supernovae (Farmer et al., 2019, from hereon F19; Farag et al., 2022) and the location of a feature in the primary-mass distribution due to PPISNe which is consistently predicted at masses \(\gtrsim 40-45\,\mathrm{M}_{\odot}\)(Talbot and Thrane, 2018; Stevenson et al., 2019; Belczynski et al., 2020). This mass is significantly (\(\gtrsim 5\,\mathrm{M}_{\odot}\)) greater than the \(\sim 35\,\mathrm{M}_{\odot}\) location of the feature inferred from GW data. Moreover, it is remarkably robust against the most common uncertainties in massive stellar evolution, such as metallicity, mixing, and neutrino physics (Farmer et al., 2019).
However, there are uncertainties that lead to larger variations in the mass range and remnant mass of stars that undergo PPISN. Several processes have been suggested that lead to shifts in the CO-core masses that undergo PPISN. These include uncertainties in the nuclear burning rates that affect the carbon-to-oxygen (C/O) ratio in the core (deBoer et al., 2017; Farmer et al., 2019, 2020; Costa et al., 2021; Woosley and Heger, 2021; Mehta et al., 2022; Farag et al., 2022; Shen et al., 2023), rotation, which provides both more massive cores and enhanced dynamical stability (Glatzel et al., 1985; Maeder and Meynet, 2000; Chatzopoulos and Wheeler, 2012; Marchant and Morris, 2020), beyond-Standard-Model physics which can either affect the C/O ratio (Croon et al., 2020), or lead to reduced dynamical stability (Croon et al., 2020; Sakstein et al., 2022; Mori et al., 2023) at lower masses, and lastly dark-matter annihilation which acts like an additional heating source (Ziegler and Freese, 2021, 2022). These processes could lower the CO core masses that undergo PPISN by up to \(-20\,\mathrm{M}_{\odot}\) (axion instability) or increase them by up to \(+10\,\mathrm{M}_{\odot}\) (reaction rates, rotation). Moreover, some theoretical studies predict additional mass loss, either in the post-PPI CC due to changes in core structure of PPI stars affecting the propagation of the core-bounce shock (Marchant et al., 2019; Renzo et al., 2020; Powell et al., 2021; Rahman et al., 2022), or by how convection transports energy during the PPI (Renzo et al., 2020). Furthermore, recent SN observations are well modelled by post-PPI mass loss (Ben-Ami et al., 2014; Kuncarayakati et al., 2023; Lin et al., 2023). Both theoretical studies and observational estimates find, at most, \(10\,\mathrm{M}_{\odot}\) in additional mass loss.
The location of the PISN mass-gap has broad implications. If the high-mass feature at \(\sim 35\,\mathrm{M}_{\odot}\) is indeed caused by PPISN, it observationally constrains the maximum BH mass that stars can form below the full disruption by PISN, and thus the lower bound of the so-called PISN mass-gap (Woosley et al., 2002; Renzo et al., 2020; Woosley and Heger, 2021). Only stars with He-core masses \(\gtrsim 120\,\mathrm{M}_{\odot}\) at the onset of collapse, which experience photo-dissociation during the pair-instability, can directly collapse into more-massive BHs (Bond et al., 1984; Renzo et al., 2020; Siegel et al., 2022). The PISN mass-gap further helps determine the fractional contribution of different gravitational-wave source channels to the overall population of BBH mergers, because systems with masses in the gap originate from channels other than isolated-binary evolution (Arca Sedda et al., 2020; Baibhav et al., 2020; Safarzadeh, 2020; Wong et al., 2021). The location of the mass gap also constrains stellar physics, like the aforementioned uncertain nuclear reaction rates \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) and \(3\alpha\)(Farmer et al., 2019; Mehta et al., 2022; Farag et al., 2022; Shen et al., 2023). Lastly, the location of the mass-gap and the pile-up may be redshift independent sign-posts for cosmological applications (Farr et al., 2019, and references therein).
One way to constrain the physics of PPISNe is to compare our simulations directly to the observed rate of electromagnetic transients. Unfortunately, unambiguous transient observations of PPISN are currently not available. Theoretical modelling of PISNe light curves shows that they generally rise slowly and that some are very luminous at peak (Kozyreva et al., 2014), while those caused by PPISNe and the subsequent interaction of their ejecta with the circumstellar medium or previously ejected mass-shells (Moriya and Langer, 2015; Woosley, 2017; Renzo et al., 2020) have shorter rise-times but equally-high peak luminosities (Woosley et al., 2007). Some super-luminous supernovae (SLSNe) could be powered by either PISNe or PPISNe, and indeed some of these are suggested observations of PISNe or PPISNe (e.g., Lunnan et al., 2018; Lin et al., 2023; Schulze et al., 2023; Aamer et al., 2023), although as of yet none of them have been confirmed to be caused by either PISNe or PPISNe. Moreover, there is growing evidence from light curves, spectra and rates that not all SLSNe are powered by PISNe or PPISNe (Nicholl et al., 2013; Kozyreva and Blinnikov, 2015; Perley et al., 2016; Gilmer et al., 2017). Estimates of (P)PISN event-rate densities are useful to study stellar evolution (e.g. du Buisson et al., 2020; Briel et al., 2022; Tanikawa et al., 2023), and could be compared to SLSN rates to determine whether these rates are in tension (Nicholl et al., 2013). Although uncontroversial detections are lacking, there are debated candidates for both PISNe and PPISNe, e.g., _SN 1961V_(Woosley and Smith, 2022), _SN 1000+0216_(Cooke et al., 2012), _SN 2010mb_(Ben-Ami et al., 2014), _PTF10mm_(Kozyreva et al., 2014), _iPTF14hls_(Wang et al., 2022), _iPTF16eh_(Lunnan et al., 2018), _SN 2016iet_(Gomez et al., 2019), _SN 2017egm_(Lin et al., 2023), _SN 2018ibb_(Schulze et al., 2023) and _SN 2019szu_(Aamer et al., 2023). However, their interpretation is sufficiently uncertain that estimating a rate from these observations is still a challenge (however, see also Nicholl et al., 2013).
In this study we explore how the remnants of PPISNe affect the distribution of \(M_{\mathrm{primary}}\) for the BBH systems merging at redshift \(z\sim 0.2\), and compare this primary-mass distribution to the current observations. We focus our results on redshift \(z\sim 0.2\) because this is where current observations provide the strongest constraints (Abbott et al., 2023). We evolve isolated binary systems and convolve the resulting BBH systems with recent star-formation rate prescriptions (van Son et al., 2022), combined with a new PPISNe remnant-mass prescription (Renzo et al., 2022). We introduce variations in this prescription to capture the effects of uncertain or new physics. Moreover, we estimate the rates of PPISNe and PISNe and compare them to the observed SLSNe to constrain our variations. We aim to evaluate whether the peak at \(35\,\mathrm{M}_{\odot}\) is explained by BHs formed through PPISNe.
The layout of this paper is as follows. In Section 2 we explain our method to simulate populations of BBHs through population synthesis (Section 2.1 and 2.2) and describe our approach to convolving our binary populations with star-formation rates (Section 2.3). In Section 3 we explain our variations of the PPISN mechanism. In Section 4 we show the primary-mass distributions at \(z=0.2\), and BBH merger and EM transient event-rate densities as a function of redshift in our fiducial populations and the populations with variations on the PPISNe mechanism. We discuss our findings and conclude in Sections 5 and 6.
## 2 Method
We simulate populations of binary-star systems using binary_c, a binary population-synthesis framework based on the stellar-evolution algorithm of Hurley et al. (2000, 2002), which makes use of the single-star models of Pols et al. (1998) and provides analytical fits to their evolution as in Tout et al. (1997) with updates in Izzard et al. (2004, 2006, 2009); Claeys et al. (2014); Izzard et al. (2018); Izzard and Jermyn (2022); Hendriks and Izzard (2023).
We combine the results of these populations with cosmological star-formation rates, similar to, e.g., Dominik et al. (2013, 2015); Belczynski et al. (2016); Mandel and de Mink (2016); Chruslinska et al. (2019); Neijssel et al. (2019); van Son et al. (2022a); Tanaka et al. (2023), to estimate the rate and mass distribution of merging BBH systems as a function of redshift.
### Population synthesis and input physics
For an in-depth review of the relevant physical processes in binary stellar physics, see Langer (2012); Postnov and Yungelson (2014); De Marco and Izzard (2017) and Petrovic (2020). We highlight our choices of physics prescriptions for the processes relevant to this study in the following sections.
#### 2.1.1 Mass transfer, stability, and common-envelope evolution
During their evolution, stars in binary systems interact with their companion by expanding and overflowing their Roche-Lobe (RL), resulting in mass flowing from the donor star to its companion. We take the mass-transfer rate of the donor from Claeys et al. (2014). When the accretor has a radiative envelope, we limit the mass-accretion rate to 10 times its thermal limit, \(\dot{M}_{\rm acc\ thermal\ limit}=10\,\dot{M}_{\rm KH,\ acc}\), where \(\dot{M}_{\rm KH,\ acc}=M_{\rm acc}/\tau_{\rm KH}\,\rm M_{\odot}\,yr^{-1}\) with \(\tau_{\rm KH}\) the global Kelvin-Helmholtz timescale of the accretor and the factor of 10 roughly accounts for the fact that initially only the outer envelope, which has a shorter timescale than the global \(\tau_{\rm KH}\), responds to mass accretion. We do not similarly limit the accretion rate of giant-type stars with convective envelopes because we assume that they shrink in response to mass accretion (Hurley et al., 2002). We do not expect this assumption to have a dominant impact because, over all redshifts, only up to 10 per cent of the merger rate of our BBH mergers consists of systems that undergo any episode of mass transfer onto a giant-like star. We further limit the accretion rate onto compact objects by the Eddington accretion rate limit. We assume any mass transfer exceeding the accretion rate limits is lost from the system. Moreover, we assume that that mass carries a specific angular momentum equal to the specific orbital angular momentum of the accretor, the so-called isotropic re-emission mass loss (Soberman et al., 1997). We calculate the stability of mass transfer based on the critical mass ratio, \(q_{\rm crit}=M_{\rm accretor}/M_{\rm donor}\), at the onset of mass transfer. For stars on the main sequence, Hertzsprung gap, giant branch, early AGB and thermally pulsing AGB we use the \(q_{\rm crit}\) of Ge et al. (2015, 2020). For the remaining stellar types we use the \(q_{\rm crit}\) of Claeys et al. (2014).
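For illustration, the accretion-rate cap described above can be sketched as follows. This is a simplified example and not the actual binary_c implementation: the Eddington-limited rate is taken as a given input, and the classification of the accretor is reduced to a single label.

```python
import math

def accretion_rate_limit(m_acc, tau_kh_acc, accretor_type, mdot_edd):
    """Upper limit on the mass-accretion rate [Msun/yr] (illustrative sketch).

    m_acc         : accretor mass [Msun]
    tau_kh_acc    : global Kelvin-Helmholtz timescale of the accretor [yr]
    accretor_type : 'radiative', 'convective_giant', or 'compact'
    mdot_edd      : Eddington-limited accretion rate [Msun/yr], supplied externally
    """
    if accretor_type == 'compact':
        return mdot_edd                      # compact objects: Eddington limit
    if accretor_type == 'convective_giant':
        return math.inf                      # giants shrink on accretion: no cap applied
    return 10.0 * m_acc / tau_kh_acc         # radiative envelope: 10 x M_acc / tau_KH
```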
Recent studies suggest that the rate of BBH mergers that experience and survive common-envelope (CE) evolution might be overestimated (Marchant et al., 2016; Klencki et al., 2021; Gallegos-Garcia et al., 2021; Olejak et al., 2021), and they argue that mass transfer should either generally be more stable or the ejection of the envelope much more difficult and hence the stars merge (see, however, Renzo et al., 2023). Independently, both van Son et al. (2022b) and Briel et al. (2023) showed that the CE channel is not necessary to explain the rate of BBH mergers, while the converse is true for binary neutron-star mergers (Chruslinska et al., 2018; Tanaka et al., 2023). More importantly, van Son et al. (2022a) shows that high-mass BBH systems are almost exclusively formed through the stable mass-transfer channel, and that the CE channel is inefficient for the formation of systems with \(M_{\rm primary}>25\,\rm M_{\odot}\). Other population synthesis studies like Belczynski et al. (2022, 23, 23, 23), Mappelli et al. (2019, 2022, 2023) and Briel et al. (2023, 23), come to the same conclusion. In this work we test this with binary_c and find the same results (Section 4.1 and Appendix B). Therefore, we focus on the stable mass-transfer channel and generally exclude merging systems that survive a CE (indicated with '_excluding CE_') from our primary mass distribution results and our merger rate densities unless explicitly indicated with '_including CE_' (see also Fig. 6).
#### 2.1.2 Wind mass loss
We follow Schneider et al. (2021) in our choice of wind mass loss prescriptions, with the exception of their LBV-wind prescription. For hot-star (\(T_{\rm eff}>1.1\,\times\,10^{4}\rm K\)) winds we use the prescriptions from Vink et al. (2000, 2001). For Wolf-Rayet star wind mass loss we use the prescription of Yoon (2017). For low-temperature (\(T_{\rm eff}<10^{4}\rm K\)) stellar winds we use Reimers (1975) mass loss on the first giant branch, with \(\eta=0.4\), and Vassiliadis and Wood (1993) on the asymptotic giant branch (AGB). At intermediate temperatures we linearly interpolate. Beyond the Humphreys-Davidson limit (Humphreys and Davidson, 1994) we use the prescription for LBV-winds as described in Hurley et al. (2000). We do not include the effects of rotationally-enhanced mass loss.
#### 2.1.3 Neutrino loss during compact object formation
For stars that only experience a CCSN we calculate the baryonic remnant mass, \(M_{\rm rem,\ bary}\), using the delayed prescription of Fryer et al. (2012). We calculate the gravitational remnant mass, \(M_{\rm rem,\ grav}\), of BHs formed through PPISNe and CCSNe from
\[M_{\rm rem,\ grav}=M_{\rm rem,\ bary}-\min\left(0.5\,\rm M_{\odot},0.1\,\times \,M_{\rm rem,\ bary}\right) \tag{1}\]
(Zevin et al., 2020). Equation 1 reduces the compact-object mass because of loss of neutrinos during the collapse of the star. Because even in extremely massive stars the CC releases a few \(10^{53}-10^{54}\) erg in neutrinos, we limit this correction to \(0.5\,\rm M_{\odot}\simeq 10^{54}\,\rm erg/c^{2}\)(Aksenov and Chechetkin, 2016; Zevin et al., 2020; Rahman et al., 2022).
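As a minimal sketch of equation 1 (masses in \(\rm M_{\odot}\)):

```python
def gravitational_remnant_mass(m_rem_baryonic):
    """Gravitational remnant mass from the baryonic remnant mass, equation (1).

    The neutrino-loss correction is capped at 0.5 Msun (roughly 1e54 erg/c^2).
    """
    return m_rem_baryonic - min(0.5, 0.1 * m_rem_baryonic)
```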
#### 2.1.4 Envelope ejection following neutrino losses
During CC, rapid changes in core mass because of neutrino emission change the potential energy of a star, and lead to a pressure wave travelling outward. This pressure wave, in some cases, evolves into a shock wave. In stars with low envelope binding energy (\(>-10^{48}\) erg), like red super giants, this leads to a loss of (part of) the outer envelope (Nadezhn, 1980; Lovegrove and Woosley, 2013; Piro, 2013). Because the expected mass loss depends on the structure of the core and the binding energy of the envelope, most mass is lost from red (super) giants. Stars with compact envelopes, such as blue and yellow super giants or Wolf-Rayet stars, are not expected to lose much mass (Fernandez et al., 2018; Ivanov and Fernandez, 2021). We thus apply this effect only to red (super) giants when the explosion is expected to fail (i.e. \(f_{\rm fallback}=1\)), and assume that everything outside the He core is lost,
\[\Delta M_{\rm\nu,\ env}=M_{\rm tot}-M_{\rm He}, \tag{2}\]
where \(\Delta M_{\rm\nu,\ env}\) is the ejected mass due to neutrino loss, \(M_{\rm tot}\) is the total mass of the star and \(M_{\rm He}\) is the mass of its He core. We assume this mass is ejected symmetrically and does not introduce a natal kick to the star, other than a 'Blaauw' kick (Blaauw, 1961), due to the change in centre of mass. We do not apply this mass loss term to blue and yellow supergiants and Wolf-Rayet progenitors. In cases where the explosion is successful, the matter that may be ejected because of the neutrino losses would anyway be easily removed by the SN shock (as accounted for by the delayed prescription), therefore we do not need to apply Eq. 2 when \(f_{\rm fallback}<1\).
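A schematic implementation of this criterion, in which the stellar-type check and the fallback fraction are passed in as inputs, could look like the following (masses in \(\rm M_{\odot}\)):

```python
def neutrino_driven_envelope_loss(m_total, m_he_core, f_fallback, is_red_giant_like):
    """Ejected envelope mass from equation (2) [Msun] (schematic sketch).

    Applied only to red (super)giants with a failed explosion (f_fallback == 1);
    for successful explosions the SN shock removes this material anyway.
    """
    if is_red_giant_like and f_fallback == 1.0:
        return m_total - m_he_core
    return 0.0
```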
#### 2.1.5 Supernova natal kick
Stars that undergo CC may receive a natal momentum kick due to asymmetries in the resulting explosion (Shklovskii, 1970; Fryer, 2004; Janka, 2013a; Grefenstette et al., 2016; Holland-Ashford et al., 2017; Katsuda et al., 2018). We calculate the supernova kick by sampling a kick speed, \(V_{\rm sampled\,kick}\) from a Maxwellian distribution with dispersion of \(\sigma_{\rm kick}=265\,{\rm km~{}s^{-1}}\) and sampling a direction isotropically on a sphere (Hobbs et al., 2005). We scale the natal kick speed with the fallback fraction, \(f_{\rm fallback}=M_{\rm fallback}/M_{\rm SNejecta}\), where \(M_{\rm fallback}\) is the total mass that falls back onto the remnant and \(M_{\rm SNejecta}\) is the initial total supernova ejecta, as
\[V_{\rm scaled\,kick}=V_{\rm sampled\,kick}(1-f_{\rm fallback}). \tag{3}\]
We calculate this fraction through the delayed CCSN prescription of Fryer et al. (2012). In Appendix C we discuss a different scaling. Moreover, even if the supernova ejecta do not impart a natal kick, as long as there is any mass ejected, the system still experiences a Blaauw kick. In PPISNe we assume spherically-symmetric ejecta, and no natal kick other than the Blaauw kick (Chen et al., 2014, 2020, 2022).
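The kick sampling and fallback scaling of equation 3 amount to the following sketch. Sampling a Maxwellian speed and an isotropic direction is realised here as the magnitude and direction of an isotropic three-dimensional Gaussian with one-dimensional dispersion \(\sigma_{\rm kick}\); this is an illustrative example rather than the binary_c code itself.

```python
import numpy as np

SIGMA_KICK = 265.0  # km/s, Hobbs et al. (2005)

def natal_kick(f_fallback, rng=None):
    """Sample a natal-kick velocity vector [km/s] and scale it by fallback, equation (3)."""
    rng = np.random.default_rng() if rng is None else rng
    v_sampled = rng.normal(0.0, SIGMA_KICK, size=3)  # isotropic 3D Gaussian => Maxwellian speed
    return (1.0 - f_fallback) * v_sampled            # V_scaled,kick = (1 - f_fallback) * V_sampled,kick
```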
### Simulated populations
Binary-star systems are characterised by their initial primary mass, \(M_{1}\), secondary mass, \(M_{2}\), orbital period, \(P\), eccentricity, \(e\), and metallicity, \(Z\). To evolve a population of binary systems, we vary each of these initial properties by sampling from their probability distributions. In this study we assume all the probability distributions are separable and can be calculated independently.
For the initially more massive star (primary mass, \(M_{1}\)) we assume an initial mass function (IMF) of Kroupa (2001). We sample \(N_{M1}\) stars between 7.5 and 300 \({\rm M_{\odot}}\). Stars of an initially lower mass do not form BHs and we do not include these in our populations. We sample the initially less-massive star from a flat distribution in \(q=M_{2}/M_{1}\)(Sana et al., 2012) between \(0.08/M_{1}\) and 1, with a resolution \(N_{q}\). We sample the orbital period \(P_{\rm orb}\) of the binary systems from a logarithmically spaced distribution between 0.15 and 5.5 \(\log_{10}\left(P_{\rm orb}/{\rm d}\right)\), with the distribution function from Kobulnicky & Fryer (2007) for systems with a primary mass below 15 \({\rm M_{\odot}}\) and the power-law in \(\log_{10}\left(P_{\rm orb}/{\rm d}\right)\) distribution function with exponent 0.55 from Sana et al. (2012) for systems with \(M_{1}>15\,{\rm M_{\odot}}\). We neglect the possibility of initially eccentric binaries because with the tidal circularisation model of Hurley et al. (2002) that we employ, they all circularise before interaction (de Mink & Belczynski, 2015).
We assign a probability, \(p_{i}\), to each system, \(i\), which is a product of the probability density functions of each variable and the step size in phase space, see Izzard & Halabi (2018) for a detailed explanation of the method. Appendix D shows how we use the probabilities \(p_{i}\) and the binary fraction \(f_{\rm bin}\) in our merger-rate calculation. Throughout this study we assume a constant binary fraction \(f_{\rm bin}=0.7\).
We use a resolution of \(N_{M1}=750\) for our single-star system parameter distributions. We use a resolution of \(N_{M1}=75\), \(N_{q}=75\), \(N_{P}=75\) for our binary-system parameter distributions. We simulate \(N_{Z}=12\) populations of single and binary systems, with metallicity equally spaced in \(\log_{10}(Z)\) between \(\log_{10}(Z)=-4\) (corresponding to very metal-poor stars with negligible wind mass-loss) and \(\log_{10}(Z)=-1.6\) (corresponding to super-solar stars with strong wind mass loss). At each supernova we sample the natal kick direction and magnitude (Section 2.1.5) \(N_{\rm SN\,kick}=4\) times, and divide the probability fraction of the system as \(p_{i}/N_{\rm SN\,kick}\). This amounts to an initial total of \(\sim 4\times 10^{5}\) binaries at each metallicity, of which a subset splits due to multiple kick samples.
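For concreteness, the separable grid weighting can be sketched as below. The distribution slopes, the fixed lower bound in \(q\), and the single period power law are simplifying assumptions for this example; they do not reproduce the full binary_c sampling (e.g. the mass-dependent period distribution, the metallicity grid, or the per-system kick resampling).

```python
import numpy as np

def grid_weights(n_m1=75, n_q=75, n_p=75, kroupa_slope=-2.3, logp_slope=-0.55):
    """Per-system probabilities p_i on a separable (M1, q, logP) grid (sketch)."""
    m1 = np.logspace(np.log10(7.5), np.log10(300.0), n_m1)   # primary mass [Msun]
    q = np.linspace(0.01, 1.0, n_q)                          # mass ratio M2/M1
    logp = np.linspace(0.15, 5.5, n_p)                       # log10(P_orb / d)

    dm1, dq, dlogp = np.gradient(m1), np.gradient(q), np.gradient(logp)

    def normalised(pdf, dx):
        return pdf / np.sum(pdf * dx)

    w_m1 = normalised(m1 ** kroupa_slope, dm1) * dm1         # power-law IMF (slope assumed)
    w_q = normalised(np.ones_like(q), dq) * dq                # flat in q
    w_logp = normalised(logp ** logp_slope, dlogp) * dlogp    # power law in log10(P/d)

    # p_i = product of the separable probability weights; sums to ~1 over the grid
    p = w_m1[:, None, None] * w_q[None, :, None] * w_logp[None, None, :]
    return m1, q, logp, p
```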
### Cosmological star formation history
We calculate the intrinsic redshift-dependent merger-rate density of BBH systems, \(\mathcal{R}_{\rm BBH}(z_{\rm merge},~{}\zeta)\), merging at redshift \(z_{\rm merge}\), or the corresponding merging lookback time, \(t_{\rm merge}\), with a set of system properties, \(\zeta\), (e.g., orbital period, primary mass, metallicity) similarly to the compas code (Neijssel et al., 2019; Broekgaarden et al., 2021; van Son et al., 2022; Riley et al., 2022). We define the intrinsic redshift-dependent merger-rate density as,
\[\mathcal{R}_{\rm BBH}(z_{\rm merge},~{}\zeta)=\int_{Z_{\rm min}}^{Z_{\rm max}}{\rm d}Z\int_{0}^{t_{\rm first\,SFR}^{*}-t_{\rm merge}^{*}}{\rm d}t_{\rm delay}\,\mathcal{N}_{\rm form}(Z,~{}t_{\rm delay},~{}\zeta)\times{\rm SFR}(Z,~{}z_{\rm birth}). \tag{4}\]
The integrand consists of the number of BBH systems per formed solar mass, \(\mathcal{N}_{\rm form}(Z,~{}t_{\rm delay},~{}\zeta)\), as a function of metallicity \(Z\), delay time, \(t_{\rm delay}\), and system properties, \(\zeta\), and the star-formation rate density, \({\rm SFR}(Z,z_{\rm birth})\), as a function of \(Z\) and the birth redshift, \(z_{\rm birth}\). The delay time, \(t_{\rm delay}\), is the sum of the time it takes from the system's birth to the moment the second BH forms (DCO formation), \(t_{\rm form}\), and the time it takes the DCO to inspiral and merge due to emission of gravitational wave radiation, \(t_{\rm inspiral}\). The inspiral time \(t_{\rm inspiral}\) of the BBH system is computed from Peters (1964). The birth redshift corresponds to the birth lookback time of the system, \(t_{\rm birth}^{*}=t_{\rm merge}^{*}+t_{\rm delay}\). Generally, times with a * superscript are lookback times and those without are durations. We integrate this over metallicity between the metallicity bounds \(Z_{\rm min}\) and \(Z_{\rm max}\) (Section 2.2), and over the delay time \(t_{\rm delay}\) between 0 and \(t_{\rm first\,SFR}^{*}-t_{\rm merge}^{*}\) to avoid integrating beyond \(t_{\rm first\,SFR}^{*}\), which is the lookback time of first star-formation and has the corresponding first star-formation redshift \(z_{\rm first\,SFR}\).
We determine \(\mathcal{N}_{\rm form}(Z,~{}t_{\rm delay},~{}\zeta)\) by simulating populations of binary stars (Section 2.2 and Appendix D) with primary stars between 7.5 and 300 \({\rm M_{\odot}}\), and we convolve BBH systems with the star formation rate, \({\rm SFR}(z,Z)\), of van Son et al. (2022), with redshifts between 0 and \(z_{\rm first\,SFR}=10\) and a step size of \(dz=0.025\) through the discretized version of equation 4. We use the _PLANCK13_ (Ade et al., 2014) cosmology to calculate redshift as a function of the age of the Universe and the volume spanned by the redshift shells.
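Schematically, the discretized convolution of equation 4 corresponds to the sketch below. The array layout, units, and the star-formation-rate callback are illustrative placeholders rather than the actual binary_c or COMPAS data structures, and the adopted first star-formation lookback time is an assumed value.

```python
import numpy as np

def merger_rate_density(t_merge_lookback, Z_grid, t_delay, n_form, sfrd,
                        t_first_sfr=13000.0):
    """Discretized sketch of equation (4).

    t_merge_lookback[iz] : lookback times of the merger-redshift bins [Myr]
    Z_grid[iZ]           : metallicity bins
    t_delay[j]           : delay-time bins [Myr]
    n_form[iZ, j]        : BBH systems per Msun of star formation in bin (iZ, j)
    sfrd(Z, t)           : star-formation rate density [Msun Myr^-1 Gpc^-3]
                           in metallicity bin Z at birth lookback time t
    t_first_sfr          : lookback time of first star formation [Myr] (assumed value)

    Returns R[iz], the BBH merger-rate density [Gpc^-3 Myr^-1].
    """
    R = np.zeros_like(t_merge_lookback, dtype=float)
    for iz, t_merge in enumerate(t_merge_lookback):
        for iZ, Z in enumerate(Z_grid):
            for j, td in enumerate(t_delay):
                t_birth = t_merge + td          # birth lookback time t*_birth
                if t_birth > t_first_sfr:       # no star formation before z_first SFR
                    continue
                R[iz] += n_form[iZ, j] * sfrd(Z, t_birth)
    return R
```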
To calculate the total merger-rate density at a given redshift we integrate equation 4 over all system properties \(\zeta\),
\[\mathcal{R}_{\rm BBH}(z_{\rm merge})=\int\mathcal{R}_{\rm BBH}(z_{\rm merge},~{} \zeta)~{}d\zeta. \tag{5}\]
While the total merger rate is degenerate in both the adopted cosmology/star-formation rate prescription and the adopted stellar physics (e.g. Broekgaarden et al., 2022), the locations of the features in the mass distribution of merging BBHs are robust under the uncertainties of the star-formation rates (van Son et al., 2023). In this study we therefore fix the star-formation rate prescription and only vary the prescription for PPISNe.
## 3 The PPISN Remnant Mass Prescription and Its Variations
We model the mass loss of (P)PISNe with the prescription of Renzo et al. (2022), which is based on the detailed models of F19. This prescription takes a 'top-down' approach, that is, it prescribes the total mass lost for a given CO core mass rather than directly prescribing a remnant mass. This allows us to incorporate all possible mass-loss mechanisms when compact objects with masses above and below the PPISN regime form, without introducing artificial jumps in the remnant mass function. We show an example pre-SN core-mass to remnant-mass relation for our fiducial model at metallicity \(Z=0.001\) in Fig. 1. F19 also provide a remnant-mass prescription based on their detailed models, which we include in our study. We call this the F19 model.
We assume that stars with a minimum CO core mass \(M_{\rm CO,~{}Min,~{}PPISN}=38\,{\rm M}_{\odot}\) after carbon burning undergo PPISNe. If pulsations lead to a remnant mass below \(10\,{\rm M}_{\odot}\) we regard the supernova as a PISN which leaves no remnant behind (Marchant et al., 2019). For CO core masses greater than \(114\,{\rm M}_{\odot}\) we assume direct collapse to a BH following the photodisintegration instability (Bond et al., 1984; Renzo et al., 2020). If, at the onset of pulsations, the star still has a hydrogen-envelope, we assume this is always expelled, and the hydrogen envelope mass is added to the prescribed mass loss due to pulsations (appendix B, Renzo et al., 2020).
In Section 1 we mention several processes that introduce a large uncertainty in the CO core masses that undergo PPI compared to our fiducial model. This motivates us to consider introducing a parameter to shift the CO core-mass range that undergoes PPI in our prescription. Moreover, the observational (Ben-Ami et al., 2014; Kuncarayakti et al., 2023; Lin et al., 2023) and theoretical (Powell et al., 2021; Rahman et al., 2022) indications of additional post-PPI mass-loss motivate us to consider this in our prescription as well.
We capture these effects by modifying our prescription from Renzo et al. (2022) to allow such variations, and hence our predicted PPISN mass loss is,
\[\begin{split}\Delta M_{\rm PPI}&=(0.0006\log_{10}Z+0.0054)\times\\ &(M_{\rm CO}-\Delta M_{\rm PPI,\,CO\,shift}-34.8)^{3}\\ &-0.0013\times(M_{\rm CO}-\Delta M_{\rm PPI,\,CO\,shift}-34.8)^{2}\\ &+\Delta M_{\rm PPI,\,extra\,ML},\end{split} \tag{6}\]
where \(\Delta M_{\rm PPI,\,CO\,shift}\) is the mass by which we shift the CO core-mass requirement for PPISNe. Negative \(\Delta M_{\rm PPI,\,CO\,shift}\) shifts the core-mass range to lower masses, and vice-versa. \(\Delta M_{\rm PPI,\,extra\,ML}\) represents additional, post-pulsation, mass loss, and \(Z\) is the metallicity of the star.
We vary \(\Delta M_{\rm PPI,\,CO\,shift}\) between \(-20\,{\rm M}_{\odot}\) and \(+10\,{\rm M}_{\odot}\), and \(\Delta M_{\rm PPI,\,extra\,ML}\) between \(0\,{\rm M}_{\odot}\) and \(+20\,{\rm M}_{\odot}\), in steps of \(5\,{\rm M}_{\odot}\). We note that \(\Delta M_{\rm PPI,\,CO\,shift}=+10\,{\rm M}_{\odot}\) qualitatively behaves like Farag et al. (2022) and \(\Delta M_{\rm PPI,\,CO\,shift}=-15\,{\rm M}_{\odot}\) behaves qualitatively like the \(f=0.5\) model of Mori et al. (2023), corresponding to an axion mass of half the electron mass.
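Putting equation 6 together with the fate boundaries described above, a simplified sketch of the prescription is given below. The mapping of the pulsational mass loss onto a final remnant mass is schematic (hydrogen-envelope stripping and neutrino losses are omitted), and keeping the \(114\,{\rm M}_{\odot}\) photodisintegration boundary fixed while shifting the lower boundary is an assumption of this example, not part of the published prescription.

```python
import numpy as np

def ppisn_fate(m_co, m_pre_sn, Z, dm_co_shift=0.0, dm_extra_ml=0.0):
    """Simplified fate and remnant mass from the (P)PISN prescription [Msun].

    m_co      : pre-SN CO core mass
    m_pre_sn  : total pre-SN mass (assumed hydrogen-free here)
    Z         : metallicity
    """
    if m_co < 38.0 + dm_co_shift:
        return "CCSN", None                      # below the PPI regime
    if m_co > 114.0:                             # photodisintegration: direct collapse
        return "direct collapse", m_pre_sn

    x = m_co - dm_co_shift - 34.8
    dm_ppi = ((0.0006 * np.log10(Z) + 0.0054) * x**3
              - 0.0013 * x**2
              + dm_extra_ml)                     # equation (6)
    m_rem = m_pre_sn - dm_ppi
    if m_rem < 10.0:
        return "PISN", 0.0                       # fully disrupted, no remnant
    return "PPISN + CC", m_rem
```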
Varying \(\Delta M_{\rm PPI,\,CO\,shift}\) and \(\Delta M_{\rm PPI,\,extra\,ML}\) allows us to determine how a shift of the PPISN CO core-mass range or additional mass loss from PPISNe affects the remnant-mass distribution, and specifically whether these changes lead to a feature in the primary-mass distribution at \(\sim 35\,{\rm M}_{\odot}\).
Fig. 1 shows an example of the remnant-mass distribution as a function of the pre-SN CO core mass at \(Z=0.001\) with a CO core mass shift of \(\Delta M_{\rm PPI,~{}Coshift}=\pm 5\,{\rm M}_{\odot}\). The shift of the range of CO core masses that undergo PPISN both affects the (P)PISNe as well as CCSNe. By shifting the range to lower or higher masses, the CO core masses that undergo CC decrease or increase respectively. It is important to note that a translation in the CO core mass that undergoes PPISN does not translate directly to the same shift in ZAMS masses. This difference is caused by a non-linear relation between the ZAMS mass and the pre-SN CO core mass (e.g., Limongi and Chieffi, 2018). In Appendix A we show examples of the dependence of the remnant mass as a function of ZAMS mass and metallicity.
We further note that additional mass loss (\(\Delta M_{\rm PPI,\,extra\,ML}\)) only affects stars that already undergo PPISNe. Hence it does not affect the rate of CCSNe but only the rate of, and ratio between, PPISNe and PISNe, because too much additional mass loss turns a PPISN into a PISN. Another effect of our implementation is that at high additional mass loss (\(\Delta M_{\rm PPI,\,extra\,ML}>10\,{\rm M}_{\odot}\)), the most massive BH formed through single-star evolution is from direct CC, not through PPISN+CC. We show how \(\Delta M_{\rm PPI,\,extra\,ML}=10\,{\rm M}_{\odot}\) affects the initial-final-mass relation in a grid of masses and metallicities in Appendix A.
## 4 Results
In this section we present the results of our simulations. We present the primary BH-mass distributions of our models with varying PPISNe mechanism properties in Section 4.1 and the event-rate densities of EM-transient events as well as GW-merger events in Section 4.2. We emphasise that in Section 4.1 we exclude systems that undergo CE evolution. See Section 2.1.1 for our motivation and Appendix B for results that include CE evolution.
### Primary-mass distributions
We show the primary-mass distribution of merging BBHs for our CO core-mass shift models, \(\Delta M_{\rm PPI,~{}Coshift}\) and our additional mass loss models, \(\Delta M_{\rm PPI,~{}extraML}\), in Figures 2 and 3. Panels 2a and 3a show the merger-rate density of BBH systems as a function of primary-BH mass. Panels 2 (b) and 3 (b) show the fraction of primary-mass BHs that are formed through PPISNe.
Our fiducial primary-mass distribution at redshift \(z=0.2\) (Fig. 2a, orange line) peaks at about \(10\,{\rm M}_{\odot}\) in good agreement with the LVK observations (Abbott et al., 2023). Moreover, in the intermediate range of \(12<M_{\rm primary}/{\rm M}_{\odot}<25\) we predict more mergers than observed, which is seen in several BSE-based rapid population-synthesis codes (e.g., Mapelli et al., 2022; van Son et al., 2023), though the origin of this over-production is unknown. This region contains systems that undergo at least one stable mass-transfer episode and we do find indications that the mass-transfer stability prescriptions affect the width and height of this over-density. Additionally, we find that among systems with \(M_{\rm primary}>12\,{\rm M}_{\odot}\) some merge without undergoing any mass transfer but are able to merge because they form with very high eccentricity (\(e>0.9\)) upon DCO formation. The fraction of systems that merge through this channel is low (\(<10\) per cent) at low (\(12\,{\rm M}_{\odot}\)) primary mass but slowly increases with primary mass to about \(30\) per cent above \(45\,{\rm M}_{\odot}\). We note that we find that at a primary mass of \(15\,{\rm M}_{\odot}\), \(50\) per cent of the merging systems undergo CE, down to \(10\) per cent at \(25\,{\rm M}_{\odot}\) and \(0\) per cent at \(30\,{\rm M}_{\odot}\) (Appendix B). This justifies our exclusion of systems that go through CE and our choice to focus on the high-mass end of the primary-mass distribution.
Related to PPISNe, we find the following results. First, in our fiducial model, we find a PPISN pile-up between \(50-52\,{\rm M}_{\odot}\) and the rate in this pile-up is double the rate of systems with primary masses just below the peak (\(M_{\rm primary}=48-50\,{\rm M}_{\odot}\)). PPISNe also lead to a maximum primary mass \(M_{\rm PPISN,cutoff}\sim 55\,{\rm M}_{\odot}\), which
sets the lower edge of the PISN-mass gap. Secondly, within the 51-55 M\({}_{\odot}\) region associated with the pile-up, 100 per cent of the BHs form through PPISNe. We find an extended region between 49-57 M\({}_{\odot}\) where, for a given primary mass, at least 25 per cent of BHs are formed through PPISN (Fig. 2 (b), orange line). Thirdly, we find that systems in the mass range where the primary BHs are predominantly formed through PPISNe show very high (\(\geq 0.9\)) eccentricity upon DCO formation. Systems in this region do not gain much eccentricity due to the low mass ejecta of the PPISNe. We find, however, that the eccentricity is mostly a result of the supernova of the initially lower mass-companions. More generally, we note that from 30 M\({}_{\odot}\) upward, our merging systems almost exclusively have high (\(\geq 0.9\)) eccentricity at DCO formation, i.e. after the second SN. This indicates that they merge primarily because of their eccentricity which strongly reduces their inspiral time (Peters, 1964). Without this eccentricity, the majority of these systems are too wide to merge in a Hubble time. We find that the systems that undergo no mass transfer also form with high eccentricity and merge because their inspiral time is reduced because of this. Especially in the mass range where the primary BHs are predominantly created through PPISNe (\(50<M_{\rm primary}/{\rm M}_{\odot}<55\), many (40 per cent) never undergo mass transfer, but are formed with large (\(>0.9\)) eccentricities upon DCO formation.
The distribution of primary masses in our F19 model shows similar behaviour to our fiducial model for masses below \(M_{\rm primary}<36\) M\({}_{\odot}\), but differs at the high-mass end (\(M_{\rm primary}\geq 36\) M\({}_{\odot}\)). The most massive primary mass is \(M_{\rm PPISN\,cutoff}=49\) M\({}_{\odot}\) and there is a slight over-density at 42 M\({}_{\odot}\). Moreover, around the over-dense region (\(36<M_{\rm primary}/{\rm M}_{\odot}<46\)), the fraction of primary masses that undergo PPISN is at most \(\sim 0.5\), meaning that a large fraction of systems in that over-density have primary BHs that undergo no PPISN, but are rather formed directly through a CCSN (Fig. 1 and Section 3).
#### 4.1.1 Shift in CO core mass for pair-instability
To reflect uncertainties in the CO core masses that undergo PPISNe, we vary \(\Delta M_{\rm PPI,\,CO\,shift}\). We show our primary-mass distributions from our \(\Delta M_{\rm PPI,\,CO\,shift}\) models in Fig. 2. While our fiducial model is based on the detailed stellar models of F19, the \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\) variation behaves like the more-recent results of Mehta et al. (2022) and Farag et al. (2022) with more densely sampled \({}^{12}\)C\((\alpha,\gamma)^{16}\)O reaction rates, and improved spatial and temporal resolution. Below 20 M\({}_{\odot}\), the distribution of primary BH masses is not strongly affected by these variations. Reducing the CO core-mass threshold for PPISNe decreases the most massive BH mass, and shifts the location of pile-up from PPISN downwards. We have to shift the range of CO core masses that undergo PPISN down by \(10-15\) M\({}_{\odot}\) to move the PPISN pile-up near the observed feature at 35 M\({}_{\odot}\). Our upward-shift variation models show an increase in maximum BH mass, and generally a less pronounced, but not absent, pile-up of BHs formed through PPISN. All primary-mass distributions in our CO core-mass shift models show that the mass-range around the pile-up is entirely populated by primary BHs that are formed through PPISNe.
In summary, we find that varying \(\Delta M_{\rm PPI,\,CO\,shift}\) shifts the location of the PPISN pile-up. To have the PPISN feature appear near the observed \(32-38\) M\({}_{\odot}\) peak, we need a shift of \(\Delta M_{\rm PPI,\,CO\,shift}\simeq-15\) M\({}_{\odot}\). The variations motivated by the models of Farag et al. (2022), i.e. an upward shift of \(\Delta M_{\rm PPI,\,CO\,shift}\simeq 10\) M\({}_{\odot}\), create a shallow over-density at \(\simeq 64\) M\({}_{\odot}\). Current observations show no structure in this region, but the current (O4) and planned (O5) observing runs will
Figure 1: Remnant mass vs. pre-SN CO core mass in single stars at metallicity \(Z=0.001\). The grey, dotted line shows the pre-SN mass, the black line shows the remnant mass with our fiducial implementation (Renzo et al., 2022), and the purple dash-dotted line shows the remnant mass as given by the prescription of F19. The dashed-coloured lines indicate example variations on the PPISNe prescription. The orange-dashed line indicates a downward CO core-mass shift of \(\Delta M_{\rm PPI,\,CO\,shift}=-5\) M\({}_{\odot}\), the red long-dashed line indicates an additional mass loss of \(\Delta M_{\rm PPI,\,extra\,ML}=+5\) M\({}_{\odot}\) and the blue loosely-dashed line indicates an upward CO core-mass shift of \(\Delta M_{\rm PPI,\,CO\,shift}=+5\) M\({}_{\odot}\). The corresponding circles indicate the CO core mass of the PPISN onset for each variation. The green shaded region indicates the range of PPISNe onset CO core masses spanned by the example variations.
help unveil any existing structure in the primary BH mass distribution in this mass range.
#### 4.1.2 Extra mass loss during, or after, pulsational pair-instability
Both theory and observations suggest that some amount of additional mass loss occurs post-PPI, which we model with \(\Delta M_{\rm{PPI,extra\,ML}}\). We show our \(\Delta M_{\rm{PPI,extra\,ML}}\) variation simulations in Fig. 3. Our results show that introducing additional mass loss to the PPISNe affects the distribution of primary-BH masses in several ways. First, removing extra mass lowers \(M_{\rm{PPISNe\,cutoff}}\), and affects the location and magnitude of the pile-up. Our additional mass loss models, \(\Delta M_{\rm{PPI,extra\,ML}}=5\,{\rm M_{\odot}}\) and \(\Delta M_{\rm{PPI,extra\,ML}}=10\,{\rm M_{\odot}}\), shift \(M_{\rm{PPISNe\,cutoff}}\) down by up to \(10\,{\rm M_{\odot}}\). This is associated with an increased magnitude of a pile-up of up to an order of magnitude. The F19 model peaks at the same mass as our \(\Delta M_{\rm{PPI,extra\,ML}}=10\,{\rm M_{\odot}}\) models, and shows a similar fraction of primaries that are formed through PPISNe in the region of their pile-up. The \(\Delta M_{\rm{PPI,extra\,ML}}=10\,{\rm M_{\odot}}\) model, however, shows a pile-up with double the magnitude of the F19 model. Though some of our \(\Delta M_{\rm{PPI,extra\,ML}}\) models increase the magnitude of the pile up feature, these features are no longer exclusively populated by systems that undergo PPISN. This is for similar reasons as the feature in the F19 model: the additional mass loss introduces a jump in the remnant-mass function such that the most massive BH comes from a CCSN (Fig. 1). Removing more than \(10\,{\rm M_{\odot}}\) does not affect \(M_{\rm{PPISNe\,cutoff}}\) because any BH formed by a PPISN is of lower mass than the most massive BH formed through CC. Moreover, the distribution of primary BHs less massive than \(\sim 30\,{\rm M_{\odot}}\) is not affected by our \(\Delta M_{\rm{PPI,extra\,ML}}\) models.
In summary, we find that additional mass loss, \(0\leq\Delta M_{\rm{PPI,extra\,ML}}\leq 10\,{\rm M_{\odot}}\), lowers the location of the peak by up to \(10\,{\rm M_{\odot}}\). Moreover, the rate in the pile-up increases by almost an order of magnitude. This mechanism does not allow us to match the observed \(32-38\,{\rm M_{\odot}}\) peak. After applying \(\Delta M_{\rm{PPI,extra\,ML}}\geq 10\,{\rm M_{\odot}}\) the PPISNe are sub-dominant across the entire mass range (\(\mathcal{F}_{\rm{PPISN,primary}}<0.1\)), and stop affecting the primary-mass distribution.
Figure 2: Panel (a): Merger rate density as a function of primary mass for BBH mergers at \(z=0.2\), for our fiducial model (orange solid), the F19 model (grey dotted), and our CO core-mass shift variations \(\Delta M_{\rm{PPI,CO\,shift}}=+10\,{\rm M_{\odot}}\) (dark-purple dashed), \(\Delta M_{\rm{PPI,CO\,shift}}=+5\,{\rm M_{\odot}}\) (blue dashed-dotted), \(\Delta M_{\rm{PPI,CO\,shift}}=-5\,{\rm M_{\odot}}\) (light-blue dotted), \(\Delta M_{\rm{PPI,CO\,shift}}=-10\,{\rm M_{\odot}}\) (dark-green long-dashed) \(\Delta M_{\rm{PPI,CO\,shift}}=-15\,{\rm M_{\odot}}\) (green dashed), \(\Delta M_{\rm{PPI,CO\,shift}}=-20\,{\rm M_{\odot}}\) (yellow long dash-dotted). The lines connect the centres of the bins (stepped) of width \(2.5\,{\rm M_{\odot}}\). The translucent bands around the lines are the regions of 90 per cent confidence-intervals obtained by 50 bootstrap samples of the DCO populations. The dark-grey line indicates the mean of the power law + peak model of Abbott et al. (2023) at \(z=0.2\) and the grey shaded region indicates the 90 per cent confidence interval. These models indicate that the fiducial model does not peak at the observed location, and we need to introduce a shift of as much as \(\Delta M_{\rm{PPI,CO\,shift}}=-15\,{\rm M_{\odot}}\) to make the distribution peak at the observed mass. The upward shift of \(M_{\rm{PPI,CO\,shift}}=+10\,{\rm M_{\odot}}\) that matches Farag et al. (2022) forms a slight over-density at \(\sim 64\,{\rm M_{\odot}}\). Panel (b): fraction of systems in that bin where the primary BH formed through PPISN.
### Event-rate densities as a function of redshift
In the following section we present our supernova-event rate density, i.e. the event rate in a given volume of space, and BBH-merger rate densities as a function of redshift in our fiducial model as well as our core-mass shift models \(\Delta M_{\rm PPI,\,CO\,shift}=-15\,\rm M_{\odot}\) and \(\Delta M_{\rm PPI,\,CO\,shift}=+10\,\rm M_{\odot}\). We choose to show only these two models because the former leads to a match of our modelled PPISN pile-up location with the observed peak, while the latter fits with the latest estimates of stellar evolution and nuclear-reaction rates (Farag et al., 2022). We calculate the intrinsic supernova rate density similar to the merger rate, except that we use the time that the star took from birth to supernova, \(t_{\rm SN}\), as the timescale in the convolution (equation 4), instead of the delay time, \(t_{\rm delay}\).
Our CCSNe include type Ibc and type II supernovae, but exclude _failed_ supernovae, i.e. CC supernovae where the shock fails to unbind any mass according to the delayed prescription of Fryer et al. (2012). We note that all our PPISNe and PISNe are hydrogen-poor, i.e., PPISNe-I and PISNe-I, and we do not find any hydrogen-rich PPISNe or PISNe in our simulations. In our models we find that our stars (self-)strip and lose their hydrogen envelope before they undergo (P)PISN, which may be caused by overestimated wind mass loss (Beasor & Smith, 2022).
In Fig. 4 we show the event-rate density, \(\mathcal{R}_{\rm event}\), which is the number of events, \(N_{\rm event}\), per unit time, \({\rm d}t\), per unit comoving volume, \({\rm d}V_{\rm c}\), in units of number per yr per Gpc\({}^{3}\), of the supernova and BBH-merger events, both including and excluding systems that undergo CE evolution from our fiducial, \(\Delta M_{\rm PPI,\,CO\,shift}=-15\,\rm M_{\odot}\) and \(\Delta M_{\rm PPI,\,CO\,shift}=+10\,\rm M_{\odot}\) models. The rates are intrinsic, i.e., are not weighted by detectability in any particular survey.
In Fig. 4 we also show the volumetric rates at \(z=0.028\) based on bias-corrected ZTF observations (Frohmaier et al., 2021). These include the combined hydrogen-rich and hydrogen-poor stripped-envelope CCSNe, at a rate of \(1.151^{+0.15}_{-0.13}\times 10^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\), and SLSNe-I, at a rate of \(3.5^{+2.5}_{-1.3}\times 10^{1}\) Gpc\({}^{-3}\) yr\({}^{-1}\). We compare these to our predicted CCSN and (P)PISN-I rates. Moreover, we indicate estimates of CCSNe and SLSNe at higher redshifts from other sources that are tabulated in Briel et al. (2022).
We show the SLSN-I rate because PPISNe-I and PISNe-I may be associated with a subset of SLSNe-I, but we stress that not all PPISNe and PISNe necessarily display SLSNe-like transients. We summarise the SN event-rate results at \(z=0.028\) from our selected models and compare them to the observations from Frohmaier et al. (2021) in Table 1.
Our fiducial model shows a CCSN transient-rate density of \(\sim 10^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\) at \(z=0\), increasing to \(\sim 7\times 10^{5}\) Gpc\({}^{-3}\) yr\({}^{-1}\) by redshift \(z\sim 3\), then decreasing to \(\sim 6\times 10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\) at \(z=8\).
Our fiducial CCSN rate, as well as the rates of either variations, match closely the rates of Frohmaier et al. (2021). This indicates that overall we reproduce the observed CCSN-rate density and also that variations in the PPISN mechanism do not affect this rate strongly. This is because the IMF disfavours stars massive enough to undergo PPISN relative to all CCSN progenitors. Overall we find a reasonable match with the other sources for CCSNe that are tabulated in Briel et al. (2022), often matching the lower-bound estimate of the rate.
We find a BBH merger rate of \(\sim 10\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z\sim 0\), excluding systems that undergo CE, which increases to \(\sim 40\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z\sim 2.5\) then decreases to \(\sim 4\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z=8\). These rates are not significantly affected by changes in the CO core-mass range of PPISNe, because the merger rate is dominated by systems with primary masses around \(\sim 10\,\mathrm{M}_{\odot}\)(Fig. 2, Li et al., 2021; Veske et al., 2021; Edelman et al., 2022; Tiwari, 2022; Abbott et al., 2023). The rate of BBH mergers, if we include those that undergo a CE event and survive, is about a factor of 3 larger than those that exclude CE events over all redshifts. At \(z=0.2\) the rate including CE systems matches well with GW observations. Section 4.1 shows that our fiducial BBH mergers excluding CE systems match the overall shape of the observed primary-mass distribution well, but here we find that including CE systems is needed to match the observed rate integrated over all BH masses.
Our fiducial PPISN-I transient-rate density at \(z=0\) is \(\sim 10^{2}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\), which increases to \(\sim 3\times 10^{3}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z\sim 3\), and then it decreases to \(\sim 7\times 10^{2}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z=8\). Our fiducial PISN-I transient-rate density at \(z=0\) is \(\sim 10\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\), which increases to \(\sim 8\times 10^{2}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z\sim 3\), and then it decreases to \(\sim 1.5\times 10^{2}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z=8\). Both our PPISN and PISN rates evolve with redshift but deviate from the shape of the total SFR-density. Both peak at \(z\sim 3\), coinciding with the cosmic star-formation rate density peak (Fig. 8), but at low redshift (\(z=0\)) their event-rate density is lower, by at least a factor of 5, than at high redshift (\(z=8\)). This is because (PPISNe occur in very massive stars only, and thus their formation strongly depends on their metallicity. Even if the star formation rate density at \(z=0\) (\(\sim 2\times 10^{7}\,\mathrm{M}_{\odot}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\)) exceeds that at \(z=8\) (\(\sim 8\times 10^{6}\,\mathrm{M}_{\odot}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\)), the metallicity distribution at high \(z\) trends towards lower metallicities, compensating for their lower star-formation rates, because stars at lower metallicity lose less mass and remain massive enough to undergo (P)PISN.
In Table 1 we compare our PPISN-I rate density estimate to the inferred SLSN-I rate density of Frohmaier et al. (2021), expressed as the ratio between our predicted and their observed rates. The inferred rate density of SLSNe-I at \(z=0.028\), \(3.5^{+2.5}_{-1.3}\,\times\,10^{1}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\), falls between our predicted PISNe-I and PPISNe-I rates in our fiducial model. With our predicted PPISN-I rate density (at \(z=0.028\)), \(1.06\times 10^{2}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\), we find a ratio to the CCSN rate of \(9.21^{+1.02}_{-1.25}\,\times\,10^{-4}\). With our predicted PISNe-I rate density, \(1.07\times 10^{1}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\), we find a ratio to SLSNe-I of \(0.31^{+0.11}_{-0.22}\) and a ratio to CCSNe of \(9.30^{+1.03}_{-1.26}\,\times\,10^{-5}\). This implies that in our fiducial model (P)PISNe can contribute a
Figure 4: Intrinsic event-rate density, \(\mathcal{R}_{\mathrm{event}}\), evolution in our fiducial simulations (all solid), the \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=-15\,\mathrm{M}_{\odot}\) variation (all dashed) and the \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=+10\,\mathrm{M}_{\odot}\) variation (all dash-dotted). We show our CCSNe transient-event rates (blue with downward triangle), PPISNe (green with forward slash) and PISNe (orange with vertical line), as well as the BBH-merger rates when we exclude CE systems (solid circles; the line styles match those of the transient event-rate variations) and those where we include CE systems (crosses). The translucent green and orange regions indicate the rate ranges spanned by the two variations for PPISNe and PISNe respectively. The other \(\Delta M_{\mathrm{PPI,\,CO\,shift}}\) variations have rates that fall within these regions, except for \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=-20\,\mathrm{M}_{\odot}\). We indicate the observed transient event-rate densities of Frohmaier et al. (2021) for CCSNe (red) and SLSNe (pink) as well as the expected GW merger event-rate densities of Abbott et al. (2023) for BBH mergers (blue-gray bar). Moreover, we indicate the event-rate density estimates from other sources for CCSNe and SLSNe, both tabulated in Briel et al. (2022), by pink errorbar symbols and pink capped-errorbar symbols with diamond centers. The (P)PISN-I rates increase in our \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=-15\,\mathrm{M}_{\odot}\) model and decrease in our \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=+10\,\mathrm{M}_{\odot}\) model, spanning about an order of magnitude in rate densities for both PPISN-I and PISN-I between the models. CCSNe transient- and BBH-merger rates are not affected in the two models compared to our fiducial model.
significant fraction to the SLSN rate. However, it is important to note that PISNe do not necessarily lead to SLSN-like transients (Gilmer et al., 2017), and the same is likely true for PPISNe (Woosley, 2017). We should thus be cautious about drawing conclusions directly from these results. Comparison to the other sources that are tabulated in Briel et al. (2022) gives similar ratios at higher redshifts.
While shifting the CO core-mass range for PPISNe does not significantly affect the CCSN transient-rate density or any of the BBH merger-rate densities, the PPISN-I and PISN-I rate densities are strongly affected.
With our \(\Delta M_{\rm PPI,\,CO\,shift}=-15\) M\({}_{\odot}\) model, both PPISN-I and PISN-I supernova rates increase by about a factor of 5, to 5.60 \(\times\) 10\({}^{2}\) Gpc\({}^{-3}\) yr\({}^{-1}\) and 6.05 \(\times\) 10\({}^{1}\) Gpc\({}^{-3}\) yr\({}^{-1}\), respectively. This is because, in this model, lower-mass stars explode as PPISNe-I, and because the IMF favours lower mass stars, this rate is higher. With this model both the PPISN-I and PISN-I transient event-rate densities at \(z=0.028\) are either approximately equal to, or higher than, the inferred SLSN-I rate density. With our predicted PPISN-I rate density we find a ratio to the SLSN-I rate of 16.00\({}^{+5.94}_{-11.43}\) and a ratio to the CCSN rate of 4.87\({}^{+0.54}_{-0.66}\)\(\times\) 10\({}^{-3}\). With our predicted PISN-I rate density we find a ratio to the SLSN-I rate of 1.73\({}^{+0.64}_{-1.23}\) and a ratio to the CCSN rate of 5.26\({}^{+0.58}_{-0.71}\)\(\times\) 10\({}^{-4}\). The PISN-I rate density is only slightly higher than the mean SLSN rate and falls within its error bars, but the PPISN-I rate density is higher by more than an order of magnitude.
With \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\), both PPISN-I and PISN-I supernova rates are decreased by about a factor of 3 relative to our fiducial model. PPISNe-I decrease to 4.10 \(\times\) 10\({}^{1}\) Gpc\({}^{-3}\) yr\({}^{-1}\) and PISNe-I decrease to 3.66 Gpc\({}^{-3}\) yr\({}^{-1}\). We now see the effect of the IMF disfavouring increasingly massive stars, decreasing the rate of both phenomena. In this model both the PPISN-I and PISN-I transient event-rate densities are either approximately equal to, or lower than, the inferred SLSN-I rate density. With our predicted PPISN-I rate density we find a ratio to the SLSN-I rate of 1.17\({}^{+0.44}_{-0.84}\) and a ratio to the CCSN rate of 3.56\({}^{+0.39}_{-0.48}\)\(\times\) 10\({}^{-4}\). With our predicted PISN-I rate density we find a ratio to the SLSN-I rate of 0.10\({}^{+0.04}_{-0.07}\) and a ratio to the CCSN rate of 3.18\({}^{+0.35}_{-0.44}\)\(\times\) 10\({}^{-5}\). The PPISN-I rate density is approximately equal to the mean inferred SLSN-I rate density, but the PISN-I rate density is lower by more than an order of magnitude.
To summarise, we find that varying the CO core-mass range of (P)PISNe strongly affects the transient event-rate density of these supernovae, with little effect on the overall rates of transients associated with CCSNe. In our fiducial model, both the PPISNe and PISNe could contribute to the SLSN rate. Our \(\Delta M_{\rm PPI,\,CO\,shift}=-15\) M\({}_{\odot}\) model increases both rates such that the rate of PISNe falls within the upper bound of the error on the observed SLSN rate, and the PPISN rate is about a factor of 16 higher than the mean SLSN rate. Our \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\) model has lower (P)PISN transient rates compared to our fiducial model: the PISN rate is about an order of magnitude lower than the SLSN rate and the PPISN rate approximately matches the SLSN rate. We discuss the implications of these variations, and whether they are in tension with the observed SLSN rate, in Section 5.2.
## 5 Discussion
In the following section we discuss the implications of our results in Section 4, some choices in our modelling approach, and whether, based on our results, the observed peak in the primary-BH mass distribution at 35 M\({}_{\odot}\) originates from PPISNe.
### PPISN mechanism and the primary-mass distribution
Our modifications to the PPISN prescription of Renzo et al. (2022) encompass both a shift in CO core masses that undergo PPISNe and an additional PPISN or post-PPISN mass loss (equation 6). This parametric approach allows us to explore several physical effects proposed in the literature. We discuss the results of this exploration in this subsection.
Motivated by processes that affect the CO core mass (Section 1) we calculate merging BBH populations with \(\Delta M_{\rm PPI,\,CO\,shift}=-20\) M\({}_{\odot}\) to \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\). Fig. 2 shows that the CO core-mass shift strongly affects the location of the PPISN pile-up in the primary-mass distribution. Moreover, the (relative) magnitude of this pile-up varies with different CO core-mass shifts.
The location and the magnitude of the over-density in our \(\Delta M_{\rm PPI,\,CO\,shift}=-15\) M\({}_{\odot}\) model match the observed peak at \(\sim 35\,\)M\({}_{\odot}\). However, many of the processes mentioned in Section 1 cannot explain such a large downward shift. The beyond-Standard-Model process of axion formation could lead to an effective downward shift of as much as 15 M\({}_{\odot}\) in the model of Mori et al. (2023), where the axion mass is about half the electron mass. They find that supernovae from these axion-induced instabilities are similar to standard pair-formation induced supernovae, i.e. the nickel ejecta distribution has the same overall extent and shape. Their light curves, however, have a shorter rise-to-peak time due to the lower total mass of the star, which could differentiate between models, but also means that they do not display the standard long rise-to-peak characteristics used to identify PISNe-I, making it harder to identify them as PISNe-I in SLSN-I observations.
Several of the processes in Section 1 lead to an upward shift of the CO core-mass range of stars that undergo (P)PISNe, but we specifically highlight the more accurate and up-to-date reaction rates and stellar models of Farag et al. (2022), and model this with our \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\) model. This model results in an over-density in the primary-BH mass distribution at \(\sim 64\) M\({}_{\odot}\), suggesting that a third peak in the primary-mass distribution exists. We find that the magnitude of the peak is less pronounced, being only slightly higher (\(0.2\times 10^{-2}\)Gpc\({}^{-3}\) yr\({}^{-1}\)M\({}_{\odot}^{-1}\)) than the merger rate at primary masses slightly lower than the location of the peak (58 - 62 M\({}_{\odot}\)).
\begin{table}
\begin{tabular}{l l c c c} \hline Model & SN type & Rate density & Ratio to observed CCSN & Ratio to observed SLSN-I \\ & & [Gpc\({}^{-3}\) yr\({}^{-1}\)] & rate density & rate density \\ \hline Fiducial & CCSN & \(1.33\times 10^{5}\) & - & - \\ & PPISN-I & \(1.06\times 10^{2}\) & \(9.21^{+1.02}_{-1.25}\times 10^{-4}\) & \(3.03^{+1.12}_{-0.52}\) \\ & PISN-I & \(1.07\times 10^{1}\) & \(9.30^{+1.03}_{-1.26}\times 10^{-5}\) & \(0.31^{+0.11}_{-0.22}\) \\ \hline \(\Delta M_{\rm PPI,\,CO\,shift}=-15\,{\rm M}_{\odot}\) & CCSN & \(1.33\times 10^{5}\) & - & - \\ & PPISN-I & \(5.60\times 10^{2}\) & \(4.87^{+0.54}_{-0.66}\times 10^{-3}\) & \(16.00^{+5.94}_{-11.43}\) \\ & PISN-I & \(6.05\times 10^{1}\) & \(5.26^{+0.58}_{-0.71}\times 10^{-4}\) & \(1.73^{+0.64}_{-1.23}\) \\ \hline \(\Delta M_{\rm PPI,\,CO\,shift}=+10\,{\rm M}_{\odot}\) & CCSN & \(1.33\times 10^{5}\) & - & - \\ & PPISN-I & \(4.10\times 10^{1}\) & \(3.56^{+0.39}_{-0.48}\times 10^{-4}\) & \(1.17^{+0.44}_{-0.84}\) \\ & PISN-I & \(3.66\) & \(3.18^{+0.35}_{-0.44}\times 10^{-5}\) & \(0.10^{+0.04}_{-0.07}\) \\ \hline \end{tabular}
\end{table}
Table 1: Supernova transient event-rate densities at \(z=0.028\) in our fiducial model and in the \(\Delta M_{\rm PPI,\,CO\,shift}=-15\,{\rm M}_{\odot}\) and \(\Delta M_{\rm PPI,\,CO\,shift}=+10\,{\rm M}_{\odot}\) variations, and their ratios to the observed CCSN and SLSN-I rate densities of Frohmaier et al. (2021).
We expect that the magnitude of this peak relative to the rate at masses slightly lower than the peak, at least in part, depends on the maximum mass of stars that we take into account in our simulations. While our upward CO core-mass shift leads to a larger region of pre-SN core masses that undergo PPISN, if the initial masses of our stars are insufficiently massive to populate the entire range of pre-SN CO core masses, it lowers the rate of BH formation with masses in the expected PPISNe-remnant mass range. In the case that no star is massive enough to undergo PPISN, no pile-up or over-density is formed at all. The range of CO core-masses that undergo PPISN is also a factor that determines the magnitude of the peak. If the range is narrow, fewer stars are in that CO core-mass range, which effectively lowers the rate of stars that undergo PPISN and form BHs in the PPISN-remnant mass range. The narrower CO core-mass range is the result of a higher sensitivity of the PPISN mass loss to the CO core-mass. Examples of this are the \(\sigma\) [\({}^{12}\)C(\(a\), \(\gamma\))\({}^{16}\)O] = \(-3\) models of Farag et al. (2022) or the strongly coupled (high \(\epsilon\)) hidden-photon models of Croon et al. (2020).
We leave the exploration of the sensitivity of the peak at \(\sim 64\,\mathrm{M}_{\odot}\) to the maximum considered initial primary mass and the sensitivity of the PPISN mass loss to the CO core-mass for a future study. The O4 observation run of the LVK-collaboration (Abbott et al., 2020) probes a five times larger space than O3 and is expected to uncover more structure in the high mass range. Thus, this peak may already be observed in O4. The exact location and magnitude of this new peak may inform us about the PPISNe mechanism and how massive stars that undergo PPISNe are.
Additionally, we calculate merging BBH populations varying the additional mass loss from \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+5\,\mathrm{M}_{\odot}\) to \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+20\,\mathrm{M}_{\odot}\). Fig. 3 shows that additional mass loss lowers the merger rate and moves both \(M_{\mathrm{PPISN,\,cutoff}}\) and the over-density caused by PPISNe to lower masses, but only up to \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+10\,\mathrm{M}_{\odot}\). This is because removing more mass results in primary masses that are created by CCSNe instead of PPISNe, and the BHs that are formed through PPISN lose so much mass that they end up as the secondary BH. The F19 model is similar to our \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+5\,\mathrm{M}_{\odot}\) and \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+10\,\mathrm{M}_{\odot}\) models, although it does not have the peak in the primary-mass distribution we find at \(\sim 42\,\mathrm{M}_{\odot}\) in the \(\Delta M_{\mathrm{PPI,\,extra\,ML}}=+10\,\mathrm{M}_{\odot}\) model. This indicates that a PPISN mass loss prescription that has an artificial discontinuity at the CC-PPISN interface has the same qualitative effect as extra mass removal. While there are studies, both theoretical (Powell et al., 2021; Rahman et al., 2022) and observational (Ben-Ami et al., 2014; Kuncarayakti et al., 2023; Lin et al., 2023), that indicate additional post-PPISN mass loss, removing more than \(10\,\mathrm{M}_{\odot}\) over the entire range of PPISNe seems hard to justify, and it makes no difference to the primary-BH mass distribution.
### Transient rates
Our fiducial model agrees well with the observed CCSNe rate from Frohmaier et al. (2021). We produce roughly one PISN-I per 10000 CCSNe and one PPISN-I per 1000 CCSNe when \(z\leq 1\). While currently there are no unambiguous rate estimates from direct observations of PPISNe or PISNe, there are estimates based on the non-detection of these supernovae, e.g., Nicholl et al. (2013). Specifically, from light-curve analysis and SLSN rates, taking into account that not all SLSNe match PISN light-curves and that not all PISNe are SLSNe, Nicholl et al. (2013) conclude that the rate of PISNe cannot exceed a fraction \(6\times 10^{-6}\) of the CCSN rate. This upper limit contradicts our fiducial results, because we find a ratio of PISNe-I to CCSNe of \(9.30^{+1.03}_{-1.26}\times 10^{-5}\) (Table 1).
We find that at \(z=0.028\) our predicted PPISN-I rate is approximately equal to the SLSN-I rate of Frohmaier et al. (2021), and that our PISN-I rate is approximately an order of magnitude lower. We caution, however, that our PPISN-I and PISN-I rates are not directly comparable to observed SLSN-I rates. It is clear from SLSN-I observations (Nicholl et al., 2013; De Cia et al., 2018; Gal-Yam, 2019) that only a small fraction of SLSN-I display characteristics that fit with PISNe, and from detailed models of PISNe (Kasen et al., 2011; Nicholl et al., 2013; Gilmer et al., 2017) it is understood that not all PISNe are super luminous or necessarily show the characteristics that make them stand out as PISNe in the SLSNe-I sample. Instead, some PISNe may hide in a population of transients that fall between normal SNe and SLSNe, called Luminous Supernovae (Gomez et al., 2022). The situation with PPISNe-I is likely similar. Events like _SN2017egm_ (Lin et al., 2023), _iPTF16eh_ (Lunnan et al., 2018), _PTF12dam_ (Tolstov et al., 2017) and _SN 2019szu_ (Aamer et al., 2023) are strong candidates for SLSNe-I caused by PPISNe-I. However, based on detailed models, not all PPISNe are super-luminous (e.g., Woosley, 2017), and the fact that _SN 1961V_ (Woosley and Smith, 2022) and _iPTF14hls_ (Wang et al., 2022) possibly have PPISN-like light-curve morphologies but are not super-luminous supports this. Because of the theoretical and observational uncertainties, we refrain here from making quantitative estimates of the fraction of (P)PISNe that appear as super-luminous, and encourage further studies of both PPISN and PISN light curves building on the pioneering work of Woosley (2017, 2019).
Our fiducial intrinsic transient-rate density predictions at \(z\sim 0\) for CCSNe and PPISNe-I agree well with the rates of Stevenson et al. (2019, using the compas population-synthesis code). We predict about an order of magnitude more PISNe-I, possibly due to considering a higher maximum stellar mass. The estimates from Briel et al. (2022, using the bpass population-synthesis code) for CCSNe agree well with ours. They estimate a PISN-I rate density of \(1-6\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) at \(z\sim 0\), which is \(\sim 5\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}\) lower than our fiducial rate density. They do not provide rate estimates of PPISNe-I. Previous results from Eldridge et al. (2019, bpass) show a similar agreement for CCSNe, and match well for the PISNe-I. Thus, we produce similar CCSNe and PISNe rates to other studies.
We note that with the variations we introduce in this study, specifically our \(\Delta M_{\mathrm{PPI,\,CO}}\) shift models, the fractions of (P)PISNe that are SLSNe-I, and vice versa, are not necessarily the same as in our fiducial models. SLSNe from PISNe are characterised by long rise-to-peak times due to large masses, and long decay time-scales. Mori et al. (2023) find that axion instability supernovae, which we model with a downward CO core-mass shift, behave qualitatively similarly to normal PISNe. The light curves of their PPISNe have a slightly shorter rise-to-peak time, but the nickel-mass ejecta and peak luminosities span the same ranges and are comparable to PISNe without an additional CO core-mass shift. If the light curves and peak luminosities behave similarly, the fraction of PISN-I that are SLSN-I does not change much, and thus, like our fiducial model, our \(\Delta M_{\mathrm{PPI,\,CO\,shift}}=-15\,\mathrm{M}_{\odot}\) model is likely in tension with the observations. It is unclear how PPISNe, specifically the fraction that display SLSNe-like features, are affected by either an upward or a downward CO core-mass shift, and studies similar to Mori et al. (2023) are necessary to obtain further insight.
To provide a more quantitative exclusion or confirmation of our models, we need observational data from current and upcoming telescopes like _JWST_(Hummel et al., 2012), _LSST_(Villar et al., 2018), _EUCLID_(Tanikawa et al., 2023) and _ROMAN_(Moriya et al., 2022) for better estimates of the rate densities of PPISNe and PISNe, as well
as more systematic modelling of PPISNe and PISNe light curves, including variations on stellar evolution and the PPISN mechanism, to determine the fractions of these transients that are super luminous.
### Modelling approach
We use population synthesis to evolve populations with initial primary masses up to 300 M\({}_{\odot}\) with the binary_c framework. These masses go well beyond the maximum mass of the detailed models on which binary_c is based, and are thus an extrapolation of the fitting formulae from Hurley et al. (2000, 2002), which are themselves based on models of stars with initial masses \(\leq 50\,M_{\odot}\) from Pols et al. (1998). Most of the stars in our simulation that undergo PPISN require initial masses in excess of 100 M\({}_{\odot}\), and are affected by systematics in the extrapolation. Our results are affected by the maximum initial primary mass we consider in that the fraction of stars that remain massive enough to undergo PPISN/PISN is changed. The presence of a pile-up in primary BH mass caused by PPISNe, and probably the magnitude of this pile-up, depend on our considered maximum primary mass. The magnitude of the shallow 'peak' of primary BH masses in our \(\Delta M_{\textrm{PPI, CO\,shift}}=+10\) M\({}_{\odot}\) model could increase by considering a larger maximum mass. This, in turn, also increases the transient event-rate density of PISNe (Tanikawa et al., 2023).
We choose to use a binary fraction \(f_{\textrm{bin}}=0.7\). Several studies show that the binary fraction depends on initial primary mass (e.g., Moe & Di Stefano, 2017; Offner et al., 2022), and in solar-mass stars it is anti-correlated with metallicity (Moe et al., 2019; Thiele et al., 2023). Because we are interested in objects formed in massive-star systems only, we assume that choosing a mass-dependent binary fraction is currently unnecessary, as most (if not all) massive stars come in binaries (Sana et al., 2012) or higher-order multiple systems (Offner et al., 2022). Moreover, we assume that the distributions of birth parameters of our binary systems are separable and independent. Moe & Di Stefano (2017) and Offner et al. (2022) show that this is not the case. Klencki et al. (2018) find, however, that this assumption does not strongly affect the rate estimates, although it does skew the birth mass-ratio distribution of merging BBHs to lower mass ratios.
In this study we use the prescription for PPISN mass loss of Renzo et al. (2022), which is based on the detailed stellar models of F19. Unlike most other existing prescriptions for PPISN mass loss, this provides the mass lost in pulses due to the PPISN, rather than a remnant mass, for a given CO core mass, which allows for a natural transition at the CCSNe/PPISNe boundary. Whether there really is no discontinuity at the interface is unclear (Renzo et al., 2020), but a prescription that artificially introduces discontinuities should be avoided. Taking the top-down approach from Renzo et al. (2022) makes the final remnant-mass prediction sensitive to the mass of the He layer that lies above the CO core. The pre-SN evolution of the star, specifically the evolution of the mass of the He layer for a given final CO core mass, affects the final remnant mass. Several processes influence the ratio of the He to CO core mass, like convective overshooting (Tanikawa et al., 2021; Vink et al., 2021), or wind mass loss (Renzo et al., 2017; Woosley, 2019), or binary interactions (Laplace et al., 2021). We find a near-constant ratio, \(M_{\textrm{He~{}core}}/M_{\textrm{CO~{}core}}=1.3\), in all our stars that undergo PPISNe.
### Can the peak in the primary-BH mass distribution at 35 M\({}_{\odot}\) be explained by PPISNe?
We make use of observations of both GW mergers and EM transient events to constrain our models and to answer whether the peak in the primary-BH mass distribution at 35 M\({}_{\odot}\) can be explained by PPISNe. We find that the CO core-mass range for stars to undergo PPISNe must shift down by more than 10 M\({}_{\odot}\) to line up the feature from our PPISNe to the observed feature in the primary-mass distribution at 35 M\({}_{\odot}\). This downward shift contradicts recent results (Farag et al., 2022) which suggest an upward shift of about 10 M\({}_{\odot}\). Given that the PISNe rate in our fiducial model is already too high according to Nicholl et al. (2013) and that Mori et al. (2023) indicates that the light curves of our \(\Delta M_{\textrm{PPI, CO\,shift}}=-15\) M\({}_{\odot}\) model behave similarly to PPISNe without an additional CO core-mass range shift, we find it likely that the downward shift variation that is required to match the GW observations is in tension with the observed (rate of) SLSNe-I.
Our PPISN-prescription variations that behave qualitatively like more recent detailed models of (P)PISNe (\(\Delta M_{\textrm{PPI, CO\,shift}}=+10\) M\({}_{\odot}\)) predict a peak between 58 and 64 M\({}_{\odot}\). The transient rates associated with this variation relieve some of the tension with observations, given that only some SLSN-I are PISN-I, although we still overproduce PISN-I compared to Nicholl et al. (2013). Our models therefore suggest that the 58 - 64 M\({}_{\odot}\) region is a promising mass range in which to search for a new over-density of primary BH masses, and this over-density may well be observable in the next observation runs of the LVK collaboration.
We regard a combination of a downward \(\Delta M_{\textrm{PPI, CO\,shift}}\) variation and additional mass loss \(\Delta M_{\textrm{PPI, extra\,ML}}\) as an unlikely explanation for the peak at 35 M\({}_{\odot}\). While additional mass loss does shift the peak to a lower mass, and a further \(\Delta M_{\textrm{PPI, CO\,shift}}\) of e.g. \(\sim 5\) M\({}_{\odot}\) may create an over-density at 35 M\({}_{\odot}\), it would still be in tension with the SLSNe rate according to Nicholl et al. (2013), because our fiducial model is already in tension with that rate and any CO core-mass shift would increase this tension.
Current and upcoming transient surveys like _EUCLID_, _JWST_, _LSST_ and _ROMAN_ will detect increasing numbers of SLSNe, PISNe and PPISNe over a large range of redshifts. While we cannot definitively rule out the downward variation of \(\Delta M_{\textrm{PPI, CO\,shift}}=-15\) M\({}_{\odot}\) based on the current observations, these surveys will provide the observational data to confirm or reject our transient-rate estimates, and will statistically constrain the fraction of SLSNe-I that is associated with (P)PISNe-I.
Thus, given the results of our study, and the fact that transient event-rate observations indicate a likely tension with the rates of our models that produce a matching peak, we find it unlikely that the observed peak is due to PPISNe.
### If the peak at 35 M\({}_{\odot}\) is not from PPISNe, then what causes it?
Broadly speaking, features in the primary-mass distribution are expected either (i) to mainly reflect the remnant-mass distribution, or (ii) to mainly reflect (binary) evolutionary selection effects caused by their formation channel. If the feature is caused by PPISNe, then it would fall under the first category (e.g. Schneider et al., 2023; Disberg & Nelemans, 2023, for the lower-mass analogue). However, it is equally likely for such a feature to arise from evolutionary effects.
A handful of studies have tried to explain the 35 M\({}_{\odot}\) peak through causes other than PPISNe and the remnant mass distribution. For example, Antonini et al. (2023) suggests that the 35 M\({}_{\odot}\) peak can be explained by cluster dynamics. They find that dynamical interactions in globular clusters lead to features in the primary-mass distribution around \(\sim 35\) M\({}_{\odot}\), as long as massive clusters form with a half-mass density \(>10^{4}\)M\({}_{\odot}\) pc\({}^{-3}\). This feature is not populated by hierarchical mergers, but does depend on the dynamical pairing of black holes. Alternatively, Briel et al. (2023) suggests that isolated binary interactions are the cause of the 35 M\({}_{\odot}\) peak. They find that the peak is not caused by pair-instability remnants, but rather by systems that undergo only stable mass transfer, possibly multiple times. They find that a combination of mass-transfer stability, which limits the lower end of the mass range of primary-mass BHs at 35 M\({}_{\odot}\), and quasi-homogeneous evolution, which limits the upper end, leads to an over-density at \(\sim 35\) M\({}_{\odot}\).
No alternative explanation for the peak at 35 M\({}_{\odot}\) has yet been adopted as the solution, and further research is needed to determine the correct channel. It may not be enough to just find an over-density at 35 M\({}_{\odot}\), and matching other properties of systems around this mass, like mass-ratio (e.g. Li et al., 2022) and spin-orbit alignment, may be critical in finding the actual cause of the observed peak.
## 6 Conclusions
We implement a top-down pulsational pair-instability supernova mass-loss algorithm in the binary population-synthesis code binary_c and use this to predict the merger rate and mass distribution of BBHs merging at redshift zero. We explore several physically motivated variations to our PPISN prescription, and study how each variation affects the distribution of primary masses of merging BBHs, with a focus on the location of a peak at high BH masses. We combine our GW- and EM-transient predictions to study PPISNe and PISNe phenomena, and we compare these to recent observations to constrain our model variations.
Below we list our most notable results.
1. Our fiducial model has no peak in the primary-mass distribution that matches the observed feature at 35 M\({}_{\odot}\).
2. Our CO core-mass shift variations strongly affect the location of the PPISN pile-up, such that shifting the CO core-mass range with \(\Delta M_{\rm PPI,\,CO\,shift}=-15\) M\({}_{\odot}\) does match the location of the observed over-density in the primary-mass distribution. It is hard to explain this with conventional physics like rotation or variations in nuclear reaction rates. The upward shift of \(\Delta M_{\rm PPI,\,CO\,shift}=+10\) M\({}_{\odot}\), which is based on detailed models of PPISNe (Farag et al., 2022), moves the over-density upward in primary BH mass by about \(8-14\) M\({}_{\odot}\), predicting a (slight) over-density at \(58-64\) M\({}_{\odot}\). The current LVK O4 observation run will detect BBH systems more efficiently than before and could shed light on whether this third peak exists.
3. Our additional mass-loss variations shift the location of the over-density of BHs in the primary-mass distribution by about \(5-10\) M\({}_{\odot}\). Removing even more mass, however, does not lead to an over-density at lower masses, because at those masses the majority of primary BHs are created through CCSNe.
4. The CCSN transient-rate estimate of our fiducial model matches well with the inferred rate of Frohmaier et al. (2021). Their rate for SLSN-I falls between our predicted PPISN-I and PISN-I rates: we predict a PPISN-I rate \(\sim 3\) times higher, and a PISN-I rate \(\sim 3\) times lower. Our ratio of PISNe-I to CCSNe exceeds the estimate of Nicholl et al. (2013), however, indicating that our fiducial model disagrees with the SLSN-I rate.
5. With the PPISN-prescription variation that does produce a peak at the correct location (\(\Delta M_{\rm PPI,\,CO\,shift}=-15\) M\({}_{\odot}\)), we find that the PPISN-I rate exceeds the SLSN-I rate by a factor of 16, and the PISN-I rate is almost double the SLSN-I rate. Even taking into account that not all (P)PISNe produce SLSNe, and not all SLSNe can be explained by (P)PISNe, these rates are likely in tension with the observed SLSN rates as well.
In summary, because the large downward shift in CO core mass required to fit the observed GW peak is difficult to explain without exotic physics beyond the Standard Model, because new reaction-rate studies even suggest an upward shift that would move the predicted over-density to \(58-64\) M\({}_{\odot}\), and because the transient event-rates of PPISNe and PISNe for this variation are likely in tension with the observed SLSNe rate, we conclude that PPISNe are unlikely to be responsible for the peak feature observed at 32-38 M\({}_{\odot}\).
## Acknowledgements
DDH wants to thank Arman Aryaeipour, Max Briel, Payel Das, Will Farr, Giovanni Mirouh, Bob Nichol, Natalie Rees, Karel Temmink, Rob Yates for the useful discussions, Paula Gherghinescu and Madison Wulder for their artistic advice, and the UKRI/UoS for the funding grant H120341A. LVS acknowledges partial financial support from the National Science Foundation under Grant No. (NSF grant number 2009131), the Netherlands Organisation for Scientific Research (NWO) as part of the Vidi research program BinWaves with project number 639.042.728 and the European Union's Horizon 2020 research and innovation program from the European Research Council (ERC, Grant agreement No. 715063). RGI thanks the STFC for the funding grants ST/R0000603/1 and ST/L003910/2, and the BRIDGCE consortium. The authors thank Selma de Mink for providing a platform for collaboration and communication, and for long term scientific guidance. Moreover, we thank the anonymous reviewer for the useful feedback on the manuscript.
In this research we make use of the GWTC-3 data release provided by the LIGO, VIRGO and KAGRA collaborations (LIGO Scientific Collaboration et al., 2021). Moreover, we make use of the following software to enable this study: the cosmology module of Astropy (The Astropy Collaboration et al., 2022), asymmetric_uncertainty (Gobat, 2022), the star-formation rate prescriptions of compas (Riley et al., 2022), h5py (Collette, 2013), IPython/Jupyter (Perez & Granger, 2007; Kluyver et al., 2016), Matplotlib (Hunter, 2007), Numpy (Harris et al., 2020), numpy-indexed (Hoogendoorn, 2023), pandas (Wes McKinney, 2010; The pandas development team, 2020), PyCBC (Nitz et al., 2023), PyPDF2 (Fenniak et al., 2022), PyTables (PyTables Developers Team, 2002), Python (Van Rossum & Drake, 2009) and Scipy (Virtanen et al., 2020).
## Data availability
We will make the DCO and EM transient data used in this study available on 10.5281/zenodo.8083112 upon publication, along with routines to generate these data and the figures presented in this paper. The data is generated with a modified version of binary_c v2.2.2 and a modified version of binary_c-python v0.9.5/2.2.
|
2309.11576 | Examining the Limitations of Computational Rumor Detection Models
Trained on Static Datasets | A crucial aspect of a rumor detection model is its ability to generalize,
particularly its ability to detect emerging, previously unknown rumors. Past
research has indicated that content-based (i.e., using solely source posts as
input) rumor detection models tend to perform less effectively on unseen
rumors. At the same time, the potential of context-based models remains largely
untapped. The main contribution of this paper is in the in-depth evaluation of
the performance gap between content and context-based models specifically on
detecting new, unseen rumors. Our empirical findings demonstrate that
context-based models are still overly dependent on the information derived from
the rumors' source post and tend to overlook the significant role that
contextual information can play. We also study the effect of data split
strategies on classifier performance. Based on our experimental results, the
paper also offers practical suggestions on how to minimize the effects of
temporal concept drift in static datasets during the training of rumor
detection methods. | Yida Mu, Xingyi Song, Kalina Bontcheva, Nikolaos Aletras | 2023-09-20T18:27:19Z | http://arxiv.org/abs/2309.11576v2 | # Examining the Limitations of Computational Rumor Detection Models
###### Abstract
A crucial aspect of a rumor detection model is its ability to generalize, particularly its ability to detect emerging, previously unknown rumors. Past research has indicated that content-based (i.e., using solely source posts as input) rumor detection models tend to perform less effectively on unseen rumors. At the same time, the potential of context-based models remains largely untapped. The main contribution of this paper is in the in-depth evaluation of the performance gap between content and context-based models specifically on detecting new, unseen rumors. Our empirical findings demonstrate that context-based models are still overly dependent on the information derived from the rumors' source post and tend to overlook the significant role that contextual information can play. We also study the effect of data split strategies on classifier performance. Based on our experimental results, the paper also offers practical suggestions on how to minimize the effects of temporal concept drift in static datasets during the training of rumor detection methods.
## 1 Introduction
False rumors are claims or stories that are intended to deceive or mislead the public and can spread faster through social media, causing harm and confusion Lazer et al. (2018); Zubiaga et al. (2018); Vosoughi et al. (2018). Due to their large volume and high velocity of spread, computational approaches (e.g., supervised rumor detection models) are typically employed to detect and analyze false rumors at an early stage1Bian et al. (2020); Lin et al. (2022); Tian et al. (2022).
Footnote 1: Note that the task of rumor detection typically distinguishes the detection of check-worthy unverified claims (i.e., rumors) from other kinds of posts in social media (non-rumors) Zubiaga et al. (2018). On the other hand, rumor verification is typically the task of classifying a rumor as _True, False, Unverified, or Non-Rumor_Kochkina et al. (2023). In this work, for brevity, we refer to both tasks as rumor detection.
Current computational rumor detection systems typically follow a two-step approach: (i) features are extracted from the text content of the rumor (e.g., source post) alongside contextual information2 and then (ii) models are trained and evaluated on static datasets through random data splits Ma et al. (2016, 2017).
Footnote 2: In this work, we use the term ‘contextual information’ to refer to different forms of information associated with a rumor in social media, e.g., comments, images and user profile attributes. The term ‘content-based methods’ refers to the use of only source posts as the model input.
As demonstrated by Mu et al. (2023); Hu et al. (2023), the evaluation of rumor detection systems performed on static datasets using random splits might not provide an accurate picture of the generalizability of such models to unseen rumors. Note that the evaluation conducted by Mu et al. (2023); Hu et al. (2023) focuses solely on standard text classifiers (such as logistic regression) using only features derived from source posts.
However, rumors in social media also come with a rich amount of contextual information, including comments, user profile features and images, which complement the text of the source posts. For example, Figure 1 shows two Weibo users who post the same rumor about the death of a famous Chinese actor. Despite the source posts being identical, the remaining contextual information (e.g., comments and user profile attributes) is completely different. Note that the development of the majority of current rumor detection models relies on context-based features and utilizes random data splits Bian et al. (2020); Rao et al. (2021).
The question that emerges is whether rumor detection models trained with contextual information using random data splits may also exhibit a tendency towards overestimation. Therefore, this paper primarily centers on a systematic evaluation of the actual generalization capabilities (i.e., detecting rumors that are not previously known) of context-based rumor detection models, which is a hitherto
unstudied research question.
The four contributions of this work are:
* Empirical proof (§ 4.1 & 5) that, despite having additional contextual information, rumor detection models still struggle to detect unseen rumors appearing at a future date, with some models performing even worse than random baselines (see Table 3).
* An ablation study (§ 5.3) that removes source posts from the inputs, revealing that current rumor detection approaches rely excessively on information from the source post while neglecting the contextual information.
* A follow-up similarity analysis (§ 5.4) on content and context-based features, which elucidates the impact of training/test split strategies on model performance.
* Finally, we focus on the issue of effectively utilizing static datasets for rumor detection by providing practical recommendations (§ 6), such as implementing additional cleaning measures for the static dataset and enhancing the current evaluation metrics.
## 2 Related Work
### Computational Rumor Detection Approaches
The increased consumption of news and information on social platforms has necessitated large-scale automated detection of unreliable content (Shu et al., 2017; Shearer and Gottfried, 2017), which led to the development of new rumor detection approaches based on state-of-the-art NLP techniques.
Early studies typically relied on handcrafted features extracted from source posts and user profile attributes using traditional machine learning models such as SVM and Random Forest (Qazvinian et al., 2011; Takahashi and Igata, 2012; Yang et al., 2012; Ma et al., 2015). With the emergence of neural-based NLP models (Mikolov et al., 2013), rumors started to be modelled with word embeddings such as GloVe (Pennington et al., 2014) and contextual representations such as ELMo (Peters et al., 2018). In addition, graph-based neural models have been employed to learn relationships from the propagation network of rumors, which includes retweet and comment chains (Bian et al., 2020; Lin et al., 2021; Yang et al., 2021). Other methods adopted multimodal approaches to go beyond text and capture information from images (Wang et al., 2020; Sun et al., 2021; Zhou et al., 2022).
Figure 1: Two rumor spreaders (in the green box) posted an identical rumor and received different stances of comments (in the gray box), i.e., denial (on the left) and support (on the right), respectively. ‘[Crying_Face]’ denotes the Loudly Crying Face emoji.
Recent hybrid models began including contextual information to improve rumor prediction performance [14, 15, 16]. The top-performing rumor detection systems (e.g., DUCK [12]) rely on both contextual information and user-level attributes, reporting F1-measures as high as 98 on widely used datasets such as Weibo 16 [16] and CoAID [17].
Most of these rumor detection approaches, however, have a major weakness: they are trained using random data splits, which ignore a key temporal dimension of rumors and thus tend to overestimate model performance on future unseen rumors [15, 14].
### The Effect of Temporal Concept Drift in NLP Downstream Tasks
Previous work on legal, COVID-19, and biomedical classification tasks [13, 14, 15] has investigated the sensitivity of classifiers to temporal concept drift (i.e., the deterioration of their performance due to temporal/topic variation) when evaluated on chronological data splits. However, temporal concept drift mainly affects the rumor text (i.e. new unseen topics), as rumors on the same topic posted by different users have different contextual information. Mu et al. [15] explore the impact of temporal concept drift on rumor detection using standard text classifiers such as logistic regression. In contrast, this paper performs an extensive empirical evaluation of the effect of temporal concept drift on neural rumor detection models which combine textual and contextual information.
## 3 Experimental Setup
### Data
For comprehensiveness and reliability, our experiments are carried out on five datasets (see Table 1 for details), which have been widely used in prior rumor detection research [1, 16, 17, 18, 19, 20]:
* **Twitter 15 & Twitter 16**[16] are two English datasets that include tweets categorized into one of four categories: _True Rumor (T), False Rumor (F), Non-rumor (NR) and Unverified Rumor (U)_.
* **Weibo 16**[16] consists of 4,664 Weibo posts in Chinese. It comprises 2,313 _false rumors_ debunked by the official Weibo Fact-checking Platform and 2,351 _non-rumors_ sourced from mainstream news sources.
* **Weibo 20**[16] is a Chinese rumor detection dataset similar to Weibo 16. It provides 3,034 _non-rumors_ and 3,034 _false rumors_ from the same Weibo fact-checking platform as Weibo 16.
* **Sun-MM**[17] comprises 2,374 annotated tweets (i.e., _rumor or non-rumor_) that cover both textual (i.e., source post) and visual (i.e., image) information. It is typically used for multi-modal rumor detection.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Statistic** & **Twitter 15** & **Twitter 16** & **Weibo 16** & **Weibo 20** & **Sun-MM** \\ \hline _# of source posts_ & 1,490 & 818 & 4,664 & 6,068 & 2374 \\ \hline _# of True rumors_ & 374 & 205 & 2,351 & 3,034 & 1,688 \\ \hline _# of False rumors_ & 370 & 205 & 2,313 & 3,034 & 686 \\ \hline _# of Unverified rumors_ & 374 & 203 & - & - & - \\ \hline _# of Non-rumors_ & 372 & 205 & - & - & - \\ \hline _Average length of posts_ & 19 & 19 & 105 & 88 & - \\ \hline _Average # of comments_ & 22 & 16 & 804 & 62 & - \\ \hline _Average length of comments_ & 242 & 202 & 8,484 & 13,592 & - \\ \hline \multicolumn{2}{|l|}{**Contextual Information**} & & & & \\ \hline _Source Posts_ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline _Comments_ & G & G & G+S & S & - \\ \hline _User Profile Attributes_ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline _Images_ & - & - & - & - & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Dataset statistics. ‘G’ and ‘S’ denote comment propagation network (Graph) and comment sequence (S) respectively. We also present contextual-based features obtained from each dataset.
It should be noted that most prior rumor detection models are evaluated on only two or three datasets, typically from a single language.
### Models
Following Kochkina et al. (2023), we evaluate a number of top-performing rumor detection models.3 Each dataset is used to train at least three models, based on the information it provides (see Table 2 for details).
Footnote 3: Here, we only consider reproducible models with publicly available code and full implementation details. Note that these models have been extensively employed as baselines in prior research Rao et al. (2021); Tian et al. (2022)
Weak BaselineFor reference, we provide a weak baseline by randomly generating predictions compared to the ground truth labels of the test set.
SVM-HF (Source Post + User Profile)Similar to Yang et al. (2012); Ma et al. (2015), we use a linear SVM model using source posts represented with TF-IDF and various handcrafted features extracted from user profile attributes e.g., number of followers, account status (i.e., whether a verified account or not), number of historical posts, etc.
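For illustration, the snippet below is a minimal sketch of how such a hybrid feature set could be assembled with scikit-learn. The numeric user-profile attributes (e.g., follower counts, verified flag) are assumed to have already been extracted into an array; the feature dimensionality and other settings are placeholders, not the exact configuration used in this paper.

```python
# Minimal sketch of an SVM-HF-style baseline: TF-IDF over source posts
# concatenated with numeric user-profile attributes. Column contents of
# `user_attrs` (followers, verified, number of posts, ...) are illustrative.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def build_features(posts, user_attrs, vectorizer=None, scaler=None, fit=False):
    """posts: list[str]; user_attrs: 2D array-like of numeric profile features."""
    if fit:
        vectorizer = TfidfVectorizer(max_features=20000)
        scaler = StandardScaler()
        text_feats = vectorizer.fit_transform(posts)
        user_feats = scaler.fit_transform(np.asarray(user_attrs, dtype=float))
    else:
        text_feats = vectorizer.transform(posts)
        user_feats = scaler.transform(np.asarray(user_attrs, dtype=float))
    return hstack([text_feats, csr_matrix(user_feats)]), vectorizer, scaler

# Usage (illustrative):
# X_train, vec, sc = build_features(train_posts, train_user_attrs, fit=True)
# clf = LinearSVC().fit(X_train, train_labels)
# X_test, _, _ = build_features(test_posts, test_user_attrs, vec, sc)
# preds = clf.predict(X_test)
```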
BERT (Source Post)In line with previous work Rao et al. (2021); Tian et al. (2022), we use solely source posts as input to fine-tune the Bert-base model4Devlin et al. (2019) by adding a linear layer on top of the 12-layer transformer architecture with a softmax activation. We consider the special token '[CLS]' as the post-level representation.
Footnote 4: We use bert-base-uncased and bert-base-chinese models from Huggingface Wolf et al. (2020) for English and Chinese datasets respectively.
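A minimal sketch of this fine-tuning setup with the Hugging Face transformers API is shown below; the maximum sequence length and other hyperparameters are placeholders rather than the exact settings used here, and the training loop is reduced to a single forward/backward pass.

```python
# Minimal sketch: fine-tuning a BERT-base encoder with a linear classification
# head on source posts only, using the '[CLS]' token as the post representation.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class PostClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # '[CLS]' token representation
        return self.classifier(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PostClassifier()
batch = tokenizer(["example claim text ..."], padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))
loss.backward()   # followed by an optimizer step in a full training loop
```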
Bi-GCN (Comment Network) To model the network of comment propagation, we use Bi-Directional Graph Convolutional Networks (BiGCN) Bian et al. (2020). We employ two separate GCNs: (i) one with a top-down directed graph representing rumor spread, to learn the patterns of rumor propagation; and (ii) another GCN with the opposite directed graph, representing rumor diffusion.
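A simplified sketch of the bi-directional idea is shown below using PyTorch Geometric: one GCN runs over the top-down propagation graph, a second one over the reversed graph, and their pooled representations are concatenated for classification. This is an illustration only, not the original Bi-GCN implementation, which includes further components such as root-feature enhancement.

```python
# Minimal sketch of a bi-directional GCN over a comment propagation graph.
# edge_index is the usual [2, num_edges] tensor of directed edges.
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class BiGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_labels):
        super().__init__()
        self.td_conv = GCNConv(in_dim, hidden_dim)   # top-down (propagation)
        self.bu_conv = GCNConv(in_dim, hidden_dim)   # bottom-up (dispersion)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, x, edge_index, batch):
        td = torch.relu(self.td_conv(x, edge_index))
        bu = torch.relu(self.bu_conv(x, edge_index.flip(0)))  # reversed edges
        pooled = torch.cat([global_mean_pool(td, batch),
                            global_mean_pool(bu, batch)], dim=-1)
        return self.classifier(pooled)
```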
Hierarchical Transformers (Source Post + Comment Sequence)Similar to prior work Rao et al. (2021); Tian et al. (2022), we use a hierarchical transformer-based network to encode separately the source post and its sequence of comments.5 We then add a self-attention and a linear projection layer with softmax activation to combine the hidden representation of posts and comments.
Footnote 5: Given that the total number of tokens of the source post and all comments exceeds the maximum input length (i.e., 512 tokens) of most Bert-style models.
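The sketch below illustrates the hierarchical set-up for a single example: the source post and each comment are encoded separately with BERT, and a lightweight self-attention layer then combines the resulting sentence vectors. It is a simplification of the cited architectures rather than their exact implementation.

```python
# Minimal sketch of a hierarchical transformer: sentence-level encoding of the
# source post and its comments, followed by self-attention over those vectors.
import torch
from torch import nn
from transformers import AutoModel

class HierTransformer(nn.Module):
    def __init__(self, model_name="bert-base-chinese", num_labels=2, heads=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        dim = self.encoder.config.hidden_size
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_labels)

    def encode(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state[:, 0]           # one vector per text

    def forward(self, post_inputs, comment_inputs):
        # post_inputs: tokenized source post (batch of 1)
        # comment_inputs: tokenized comments of that post (batch of n_comments)
        post_vec = self.encode(post_inputs["input_ids"],
                               post_inputs["attention_mask"])      # (1, dim)
        com_vecs = self.encode(comment_inputs["input_ids"],
                               comment_inputs["attention_mask"])   # (n, dim)
        seq = torch.cat([post_vec, com_vecs], dim=0).unsqueeze(0)  # (1, n+1, dim)
        attended, _ = self.attn(seq, seq, seq)
        return self.classifier(attended[:, 0])   # read out at the post position
```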
Hybrid Vision-and-Language Representation (Source Post + Image)We use visual transformer (ViT) Dosovitskiy et al. (2020) and BERT Devlin et al. (2019) to represent images and source posts of rumors for the Sun-MM dataset. We then combine the two hidden representations by adding a fully connected layer with softmax activation for rumor classification.
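A minimal sketch of this text-image combination is given below; the ViT checkpoint name is an assumption made for illustration, and the fusion is a plain concatenation of the two pooled representations followed by a classification layer.

```python
# Minimal sketch of the hybrid model: ViT encodes the attached image, BERT
# encodes the source post, and the two pooled vectors are concatenated.
import torch
from torch import nn
from transformers import AutoModel, ViTModel

class HybridClassifier(nn.Module):
    def __init__(self, text_model="bert-base-uncased",
                 image_model="google/vit-base-patch16-224-in21k", num_labels=2):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model)
        self.image_encoder = ViTModel.from_pretrained(image_model)
        dim = (self.text_encoder.config.hidden_size
               + self.image_encoder.config.hidden_size)
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        text_cls = self.text_encoder(
            input_ids=input_ids,
            attention_mask=attention_mask).last_hidden_state[:, 0]
        image_cls = self.image_encoder(
            pixel_values=pixel_values).last_hidden_state[:, 0]
        return self.classifier(torch.cat([text_cls, image_cls], dim=-1))
```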
### Data Pre-processing
We begin by processing all the source posts and comments, replacing @mentions and links with special tokens such as '@USR' and 'URL' respectively. For the English datasets, we also convert all tweets to lowercase before feeding them to the bert-base-uncased model.
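A minimal sketch of this pre-processing step is shown below; the regular expressions are illustrative rather than the exact patterns used in our pipeline.

```python
# Minimal sketch: replace links and @mentions with special tokens, then
# (for English data) lowercase the text before tokenization.
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")

def preprocess(text: str, lowercase: bool = True) -> str:
    text = URL_RE.sub("URL", text)
    text = MENTION_RE.sub("@USR", text)
    if lowercase:                      # applied to the English datasets only
        text = text.lower()
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("@alice Check this out https://t.co/xyz !"))
# -> "@usr check this out url !"
```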
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c|}{**Contextual Information**} & \multicolumn{4}{c|}{**Datasets**} \\ & **Post** & **Comment** & **User** & **Image** & **Twitter 15** & **Twitter 16** & **Weibo 16** & **Weibo 20** & **Sun-MM** \\ \hline _SVM-HF_ & ✓ & - & ✓ & - & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline _BERT_ & ✓ & - & - & - & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline _H-Trans_ & ✓ & ✓ & - & - & - & - & ✓ & ✓ & - \\ \hline _Bi-GCN_ & ✓ & ✓ & - & - & ✓ & ✓ & ✓ & - & - \\ \hline _Hybrid_ & ✓ & - & - & ✓ & - & - & - & - & ✓ \\ \hline \end{tabular}
\end{table}
Table 2: Model details.
Figure 2: An example of using forward and backward chronological data splits on Weibo 20 dataset (including rumors from 2016 to 2020). There is no overlap among the three subsets.
### Evaluation Metrics
We run each model three times with different random seeds. Implementation details (e.g., hyperparameters) are provided in Appendix A.
In accordance with the original settings [16, 17, 18], we report the average macro precision, recall, F1-score, and accuracy for all binary datasets, i.e., Weibo 16, Weibo 20, and Sun-MM. Since the Twitter datasets [17, 15] have multi-class labels, we report the average accuracy and F1-score for each class.
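The reported scores can be computed with scikit-learn as in the minimal sketch below; the helper names are illustrative.

```python
# Minimal sketch of the evaluation metrics: macro precision/recall/F1 plus
# accuracy for the binary datasets, and accuracy plus per-class F1 for the
# four-class Twitter datasets.
from sklearn.metrics import accuracy_score, f1_score, precision_recall_fscore_support

def binary_report(y_true, y_pred):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
    return {"acc": accuracy_score(y_true, y_pred), "P": p, "R": r, "F1": f1}

def multiclass_report(y_true, y_pred, labels):
    per_class_f1 = f1_score(y_true, y_pred, labels=labels, average=None)
    return {"acc": accuracy_score(y_true, y_pred),
            **{f"F1_{label}": score for label, score in zip(labels, per_class_f1)}}
```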
## 4 Evaluation Strategies
### Data Splits
To examine the effect of data splitting strategies on the models' predictive performance, we compare three strategies: the widely used random data split against two types of chronological data splits (see Figure 2).
Forward Chronological SplitsFor each dataset, we initially sort all rumors chronologically, from the oldest to the newest. We then divide them into three subsets: a training set (containing 70% of the oldest rumors), a development set (10% of the rumors that were posted after those in the training set but before those in the test set), and a test set (containing the 20% most recent rumors). This data split strategy allows the model to be trained and fine-tuned on _older rumors_ and then be evaluated on the most _recent ones_.
Backward Chronological SplitsIn contrast, here all rumors are sorted starting from the most recent ones to the oldest ones, and then are split in the same way as the forward chronological splits. This allows the model to be trained on the _newest rumors_ and evaluated on the _oldest ones_.
These two different temporal split strategies enable the evaluation of temporal concept drift effects on model performance.
Random SplitsThis is the most commonly adopted data split strategy in prior work. All datasets are divided into three subsets using a stratified random split approach6.
Footnote 6: We use a data split tool from sklearn: [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
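The sketch below illustrates the three split strategies used in this work, assuming each rumor carries a timestamp field (an illustrative name); the 70/10/20 proportions follow the description above.

```python
# Minimal sketch of the forward/backward chronological splits and the
# stratified random split.
from sklearn.model_selection import train_test_split

def chronological_split(rumors, backward=False):
    ordered = sorted(rumors, key=lambda r: r["timestamp"], reverse=backward)
    n = len(ordered)
    train = ordered[: int(0.7 * n)]
    dev = ordered[int(0.7 * n): int(0.8 * n)]
    test = ordered[int(0.8 * n):]
    return train, dev, test

def random_split(rumors, labels, seed=42):
    train, rest, y_train, y_rest = train_test_split(
        rumors, labels, test_size=0.3, stratify=labels, random_state=seed)
    dev, test, y_dev, y_test = train_test_split(
        rest, y_rest, test_size=2 / 3, stratify=y_rest, random_state=seed)
    return (train, y_train), (dev, y_dev), (test, y_test)
```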
Some prior rumor detection research has used a leave-a-rumor-out strategy [16, 17], where each dataset is divided into \(N\) folds, where \(N\) denotes the number of unique rumor events in the given dataset. In this case, rumor detection models are evaluated using N-fold cross validation, i.e., using \(N-1\) unique rumors and all associated posts as the training set and the posts about the last remaining rumor as the test set. In this way, it is possible to evaluate model performance on _new unseen rumors_. However, it has not been possible to experiment with this data split protocol as none of the datasets used in this paper cluster posts into individual events which give rise to a unique rumor, with associated multiple social media posts about it.
## 5 Results and Discussion
### Model Performance on Random Splits
The experimental results for all rumor detection approaches and data split strategies are shown in Tables 3 and 4. We can observe that training on random splits always leads to significant overestimation (t-test, \(p\) < 0.01) of model accuracy as compared to training on both forward and backward chronological splits.
Taking the best performing Bi-GCN model on Twitter 15 as an example, we observe a decrease in model accuracy of at least 39.4% when comparing test results on random splits against the two chronological splits. Furthermore, we find that some models (e.g., SVM-HF and BiGCN on Twitter 15) perform even worse than a weak baseline (e.g., the F1-measure results for the false rumor category (\(F\)) across two chronological splits in comparison with the weak baseline) that uses random predictions. As expected, our empirical findings align with previous studies of temporal impact in other downstream NLP tasks [16, 17, 15].
The results indicate that models learn to accurately classify rumor posts in the test set only when they are highly similar to posts in the training data, even though the remaining contextual information (such as user profile attributes, comments, and sometimes images) is different. To further investigate the impact of this semantic overlap, we conduct an ablation study (Section 5.3) and a similarity analysis (Section 5.4).
### Forward v.s. Backward Chronological Splits
Our experimental results show that models trained using backward chronological splits achieve higher
accuracy on all datasets (except Weibo 16) as compared to those on forward chronological splits. This suggests that the models have the tendency to learn recurrent rumors. This observation is consistent across datasets. For instance, the accuracy of all models on the Twitter 16 dataset is higher when random splits are used for training as compared to forward splits, but lower when compared to backward splits. This may be attributed to similarities between the training and test sets. This is investigated further in Section 5.4.
### Ablation Study
In order to evaluate the impact of the source post's text on rumor detection performance, we perform a source post removal ablation study7. Our hypothesis is that after removing the source posts, there will be no significant difference in the performance of the rumor detection models trained according to the different data split strategies. We conduct experiments using (i) SVM-HF on all datasets, (ii) the Hier-Transformer model on Weibo 16 and Weibo 20, and (iii) visual transformer (ViT) on Sun-MM dataset.
Footnote 7: Previous ablation studies have focused primarily on removing new features rather than source posts (Sun et al., 2021; Tian et al., 2022)
The results of the ablation study are reported in Table 6 and Table 5. We demonstrate that when the source posts are removed from the input, all models except for ViT model (see Section 5.4 for further analysis) no longer exhibit consistent superiority over forward and backward chronological splits as compared to using random splits. As we have shown, two identical rumors can have different
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{**Splits**} & \multicolumn{4}{c|}{_Weibo 16_} & \multicolumn{4}{c|}{_Weibo 20_} & \multicolumn{4}{c|}{_Sun-MM_} \\ \cline{3-14} & & _Acc._ & \(P\) & \(R\) & _FI_ & _Acc._ & \(P\) & \(R\) & _FI_ & _Acc._ & \(P\) & \(R\) & _FI_ \\ \hline \multirow{2}{*}{**Weak Baseline**} & \multirow{2}{*}{0.493} & \multirow{2}{*}{0.493} & \multirow{2}{*}{0.492} & \multirow{2}{*}{0.493} & \multirow{2}{*}{0.501} & \multirow{2}{*}{0.501} & \multirow{2}{*}{0.501} & \multirow{2}{*}{0.501} & \multirow{2}{*}{0.501} & \multirow{2}{*}{0.514} & \multirow{2}{*}{0.512} & \multirow{2}{*}{0.514} & \multirow{2}{*}{0.512} \\ \cline{2-2} \cline{6-14} & & & _Acc._ & \(P\) & \(R\) & _FI_ & _Acc._ & \(P\) & \(R\) & _PR_ & _PR_ & _PR_ & _PR_ & _PR_ & _PR_ \\ \hline \multirow{2}{*}{**SVM-HF**} & \multirow{2}{*}{_Random_} & **0.906** & **0.907** & **0.906** & **0.906** & **0.870** & **0.870** & **0.868** & **0.870** & **0.783** & **0.742** & **0.758** & **0.749** \\ \cline{2-14} & & _Forward_ & 0.823 & 0.855 & 0.822 & 0.819 & 0.680 & 0.691 & 0.680 & 0.676 & 0.689 & 0.636 & 0.635 & 0.630 & 0.635 \\ \cline{2-14} & & _Backward_ & 0.752 & 0.757 & 0.752 & 0.752 & 0.801 & 0.802 & 0.801 & 0.801 & 0.771 & 0.740 & 0.676 & 0.692 \\ \hline \multirow{2}{*}{**BERT**} & \multirow{2}{*}{_Random_} & **0.918** & **0.918** & **0.917** & **0.918** & **0.920** & **0.921** & **0.920** & **0.920** & **0.839** & **0.807** & **0.806** & **0.806** \\ \cline{2-14} & & _Forward_ & 0.889 & 0.892 & 0.888 & 0.888 & 0.738 & 0.756 & 0.738 & 0.732 & 0.708 & 0.682 & 0.708 & 0.680 \\ \cline{2-14} & & _Backward_ & 0.809 & 0.812 & 0.809 & 0.808 & 0.898 & 0.899 & 0.898 & 0.898 & 0.807 & 0.783 & 0.735 & 0.748 \\ \hline \multirow{2}{*}{**Bi-GCN**} & \multirow{2}{*}{_Random_} & **0.892** & **0.893** & **0.885** & **0.887** & **-** & **-** & **-** & **-** & -** & - & - & - \\ \cline{2-14} & & _Forward_ & 0.843 & 0.843 & 0.834 & 0.835 & **-** & **-** & **-** & **-** & - & - & - & - \\ \cline{2-14} & & _Backward_ & 0.762 & 0.783 & 0.762 & 0.747 & **-** & **-** & **-** & **-** & - & - & - & - \\ \hline \multirow{2}{*}{**H-Trans / Hybrid**} & _Random_ & **0.955** & **0.956** & **0.955** & **0.955** & **0.959** & **0.960** & **0.959** & **0.959** & **0.853** & **0.818** & **0.829** & **0.823** \\ \cline{2-14} & & _Forward_ & 0.946 & 0.949 & 0.946 & 0.850 & 0.860 & 0.849 & 0.850 & 0.707 & 0.687 & 0.725 & 0.685 \\ \cline{2-14} & & _Backward_ & 0.792 & 0.833 & 0.785 & 0.793 & 0.940 & 0.938 & 0.935 & 0.938 & 0.821 & 0.782 & 0.805 & 0.791 \\ \hline \end{tabular}
\end{table}
Table 4: Experimental results of Weibo 16 & 20 and Sun-MM across three different data split strategies. Cells in **bold** indicate the best results from all models. Cells in gray indicate that the model trained using random splits achieves significantly better performance than using both forward and backward chronological splits. (\(p\) < 0.05, \(t\)-test).
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Models \& \multirow{2}{*}{**Splits**}} & \multicolumn{4}{c|}{**Twitter 15**} & \multicolumn{4}{c|}{**Twitter 16**} \\ \cline{3-14} & & \multirow{2}{*}{_Acc._} & _NR_ & \(F\) & \(T\) & \(U\) & _Acc._ & _NR_ & \(F\) & \(T\) & \(U\) \\ \cline{2-14} & & _F1_ & _F1_ & _F1_ & _F1_ & _F1_ & _Acc._ & **F1** & **F1** & **F1** & **F1** \\ \hline \multirow{2}{*}{**Weak Baseline**} & \multirow{2}{*}{0.240} & \multirow{2}{*}{0.224} & \multirow{2}{*}{0.246} & \multirow{2}{*}{0.238} & \multirow{2}{*}{0.254} & \multirow{2}{*}{0.248} & \multirow{2}{*}{0.174} & \multirow{2}{*}{0.250} & \multirow{2}{*}{0.300} & \multirow{2}{*}{0.264} \\ \cline{2-14} & & _Random_ & **0.739** & **0.727** & **0.701** & **0.803** & **0.728** & **0.709** & **0.697** & **0.602** & **0.858** & **0.663** \\ \cline{2-14} & _Forward_ & 0.413 & 0.589 & 0.366 & 0.092 & 0.304 & 0.373 & 0.523 & 0.226 & 0.297 & 0.214 \\ \cline{2-14} & & _Reverse_ & 0.353 & 0.590 & 0.462 & 0.063 & 0.062 & 0.380 & 0.520 & 0.103 & 0.411 & 0.368 \\ \hline \multirow{2}{*}{**BERT**} & \multirow{2}{*}{_Random_} & **0.615** & **0.561** & **0.593** & **0.692** & **0.599** & **0.598** & 0.381 & **0.615** & **0.698** & **0.625** \\ \cline{2-14} & _Forward_ & 0.366 & 0.382 & 0.226 & 0.457 & 0.328 & 0.38
contextual information. This indicates that temporalities are not commonly reflected in the majority of contextual information associated with rumors in social media. Notably, even without the source post, the H-Trans model can achieve competitive performance using chronological splits. For instance, it achieves up to 93.8% and 94.4% accuracy on Weibo 16 and Weibo 20, respectively, which is comparable to the performance of the Bi-GCN and original H-Trans models (which take the source post as input). We hypothesize that rumor debunking information may be present in the comments (for example, see Figure 1), which can assist in the decision-making process of the rumor classifier. Next we conduct linguistic analysis to elucidate the distinctions between comments from rumors and non-rumors in Weibo 16 & 20.
### Similarity Analysis
This section explores the impact of data split strategies on the content and contextual information in the respective training and test sets.
Source PostSimilar to Kochkina et al. (2023); Mu et al. (2023), we first measure the difference in textual similarity between training and test sets generated using random and chronological data splits using two standard metrics with ranges from 0 to 1.
Intersection over Union (IoU) Tanimoto (1958)\[IoU=\frac{|V^{Train}\cap V^{Test}|}{|V^{Train}\cup V^{Test}|}\] (1)
DICE coefficient (DICE) Dice (1945)\[DICE=\frac{2\times|V^{Train}\cap V^{Test}|}{|V^{Train}|+|V^{Test}|}\] (2)
where \(V^{Train}\) and \(V^{Test}\) refer to the set of unique words from training and test sets; and \(|V^{Train}\cap V^{Test}|\) and \(|V^{Train}\cup V^{Test}|\) indicate the number of unique words that appear in the _intersection_ and _union_ of training and test sets respectively. When the two sets have no shared vocabulary list, the IoU and DICE values will be 0, while if they are identical, the IoU and DICE values will be 1.
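The two measures of Eq. (1) and (2) can be computed directly from the training and test vocabularies, as in the minimal sketch below; the whitespace tokeniser is a simplification and both functions assume non-empty vocabularies.

```python
# Minimal sketch of the vocabulary-overlap measures IoU (Eq. 1) and DICE (Eq. 2).
def vocab(texts):
    return {token for text in texts for token in text.split()}

def iou(train_texts, test_texts):
    v_train, v_test = vocab(train_texts), vocab(test_texts)
    return len(v_train & v_test) / len(v_train | v_test)

def dice(train_texts, test_texts):
    v_train, v_test = vocab(train_texts), vocab(test_texts)
    return 2 * len(v_train & v_test) / (len(v_train) + len(v_test))
```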
We display the similarity of the source posts between training and test sets using different data split strategies in Table 7. Additionally, we provide the accuracy of the BERT model (which takes only the source post as input) for each dataset as a reference.
We demonstrate that using random splits leads to significantly higher IoU and DICE values (\(t\)-test, \(p\) < 0.001), indicating greater similarities between the training and test sets compared to both forward and backward chronological splits. This suggests that rumors with similar content, resulting from temporal concept drift, appear in both training and test sets when employing random data splits. Additionally, we discover a positive correlation (using Pearson's test) between model accuracy and the similarity of the training and test sets, as measured by both IoU (Pearson's \(r\) = 0.865, \(p\) < 0.05) and DICE (Pearson's \(r\) = 0.879, \(p\) < 0.001) values. In other words, higher textual similarities correspond to better classifier performance.
User Profile AttributesWe use cosine similarity to assess the difference between the mean values of user profile attributes from the training and test sets. However, we do not observe a significant difference in cosine similarity values between random and chronological data splits, as all rumor spreaders are unique across all datasets.
CommentsConsidering the comparable model performance, with accuracy of up to 93.8% and 94.4% on Weibo 16 and Weibo 20, when using only comments as input, we hypothesize that the comments from the two classes are significantly different. To identify the difference in comments that distinguish between rumors and non-rumors in Weibo 16 and Weibo 20, we employ the univariate Pearson's correlation test Schwartz et al. (2013). We observe that there is a large number of words related to debunking rumors (e.g., 'false', 'really?', and 'truth') in the comments associated with false rumors on both Weibo 16 and Weibo 20. On the other hand, comments associated with non-rumors contain more words related to the daily life of the public. Note that non-rumors in Weibo datasets are collected from mainstream media accounts.
ImagesAblation study results (see Table 6) show that only the ViT model, which uses images alone as input, is affected by the temporal data splits (i.e., the deterioration of model performance). We further explore the Sun-MM dataset and uncover that rumors with similar content are usually posted with similar images. We show examples in Figure 3 (see Appendix). Note that similar semantic objects (e.g., entities Sun et al. (2021)) can be extracted from similar images, which can impact the accuracy of the model.
## 6 How do we properly use static datasets?
Rather than prioritizing methodologies aimed solely at achieving high accuracy on rumor detection datasets, it is essential to develop a deeper understanding of the evaluation protocols we employ and to generate meaningful insights. Given the limitations raised by our experiments, we make the following practical suggestions for developing new rumor detection systems on static datasets:
* For practical applications that aim to detect **unseen rumors**, it is essential to consider chronological splits when evaluating all rumor detection approaches on static datasets, in addition to standard random splits. By using forward and backward chronological splits, we can assess the ability of the rumor classifiers to handle both earlier and older unseen rumors.
* Considering that temporalities (i.e., the temporal concentration of rumor topics) typically occur in widely used rumor detection datasets (e.g., Twitter 15 & 16 and Weibo 16 (Ma et al., 2016, 2017)), one can apply an additional data pre-processing measure to filter out rumor events with multiple near-duplicate posts (a minimal sketch of such a filtering step is given after this list). For instance, using out-of-the-box methods such as Levenshtein distance (Levenshtein et al., 1966) and BERTopic (Grootendorst, 2022), we identified a total of 9 similar rumors that resemble the false rumor depicted in Figure 1. After conducting a more in-depth error analysis of the predictions generated by the H-Trans model, which has the highest predictive performance on Weibo 16, we discovered that the model can accurately classify all of these rumors in the test set when random data splits are employed.
* Current evaluation metrics, such as accuracy and F1-measure, are unable to accurately assess the true capability of rumor classifiers in detecting unseen rumors. Therefore, there is a need for new measures to evaluate the accuracy of model predictions for unknown rumors. For example, one can calculate the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{**Splits**} & \multicolumn{6}{c|}{_Twitor 15_} & \multicolumn{6}{c|}{_Twitor 16_} \\ & & _Acc._ & _NR_ & \(F\) & \(T\) & **U** & _Acc._ & _NR_ & \(F\) & \(T\) & **U** \\ \hline \hline \multirow{2}{*}{**SVM-HF**} & _Random_ & **0.383** & 0.609 & 0.050 & 0.356 & **0.132** & 0.343 & 0.494 & 0.140 & **0.273** & **0.229** \\ \cline{2-13} & _Forward_ & 0.375 & **0.635** & 0.039 & **0.374** & 0.086 & **0.417** & **0.689** & **0.333** & 0.046 & 0.158 \\ \cline{2-13} & _Reverse_ & 0.361 & 0.590 & **0.133** & 0.359 & 0.050 & 0.328 & 0.499 & 0.178 & 0.170 & 0.021 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study of Twitter 15 & 16 datasets across three different data split strategies. Cells in **bold** indicate the best results from all models.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Splits**} & \multicolumn{3}{c|}{_Welio 16_} & \multicolumn{3}{c|}{_Welio 20_} & \multicolumn{3}{c|}{_Sum-MM_} \\ & & _Acc._ & \(P\) & \(R\) & _F1_ & _Acc._ & \(P\) & \(R\) & _F1_ & _Acc._ & \(P\) & \(R\) & _F1_ \\ \hline \multirow{2}{*}{**Twitter 16**} & _Random_ & 0.887 & 0.889 & 0.887 & 0.887 & 0.773 & 0.801 & 0.773 & 0.768 & 0.707 & **0.663** & 0.510 & 0.439 \\ \cline{2-13} & _Forward_ & **0.936** & **0.944** & **0.936** & **0.936** & 0.699 & 0.753 & 0.699 & 0.681 & 0.701 & 0.602 & 0.507 & 0.434 \\ \cline{2-13} & _Reverse_ & 0.683 & 0.698 & 0.684 & 0.678 & **0.831** & **0.837** & **0.831** & **0.830** & **0.713** & 0.655 & **0.52** & **0.453** \\ \hline \multirow{2}{*}{**H-Trans / Hybrid**} & _Random_ & 0.929 & 0.930 & 0.929 & 0.929 & 0.925 & 0.926 & 0.925 & 0.925 & **0.726** & **0.674** & **0.691** & **0.681** \\ \cline{2-13} & _Forward_ & **0.938** & **0.935** & **0.934** & **0.935** & 0.851 & 0.856 & 0.851 & 0.851 & 0.655 & 0.521 & 0.514 & 0.505 \\ \cline{2-13} & _Reverse_ & 0.730 & 0.795 & 0.732 & 0.715 & **0.944** & **0.945** & **0.944** & **0.944** & 0.623 & 0.516 & 0.514 & 0.514 \\ \hline \end{tabular}
\end{table}
Table 6: Ablation study of Weibo 16 & 20 and Sun-MM datasets across three different data split strategies. Cells in **bold** indicate the best results from all models.
accuracy of a rumor detection system by excluding known rumors (i.e., similar rumors appearing in the training set) from the test set.
* Given the limitations of the current pipeline that relies solely on static datasets, we argue that evaluation models should not be restricted to such datasets. By leveraging the consistent format of datasets collected from the same platform (as shown in Table 1), for example, one can explore **broader temporalities** by training a rumor classifier on Twitter 15 and evaluating its performance on Twitter 16. This protocol enables a more comprehensive examination of the generalizability of rumor detection systems, which is crucial for their practical applications in the real world (Moore and Rayson, 2018; Yin and Zubiaga, 2021; Kochkina et al., 2023).
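As referenced in the second recommendation above, a minimal sketch of such a near-duplicate filtering step is shown below. It uses a normalised Levenshtein ratio from the rapidfuzz package with an illustrative threshold; topic-based clustering (e.g., BERTopic) could be used instead.

```python
# Minimal sketch: keep only the first occurrence of each group of near-duplicate
# rumor posts, judged by a normalised Levenshtein similarity (0-100 scale).
from rapidfuzz import fuzz

def filter_near_duplicates(posts, threshold=90):
    kept = []
    for post in posts:
        if all(fuzz.ratio(post, kept_post) < threshold for kept_post in kept):
            kept.append(post)
    return kept
```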
## 7 Conclusion
In this paper, we evaluate the limitations of existing rumor detection models trained on static datasets. Through empirical analysis, we demonstrate that the use of chronological splits significantly diminishes the predictive performance of widely used rumor detection models. To better understand the causes behind these limitations, we conduct a fine-grained similarity analysis and an ablation study. Finally, we provide practical recommendations for future research in the advancement of new rumor detection systems.
## Limitations
We conducted an empirical study on current rumor detection models, utilizing both the source post and **standard contextual information** (such as comments, images, and user profile attributes) as input. However, previous research has employed hidden features, such as sentiment and entities, which can be extracted from the source post and contextual information (Rao et al., 2021; Sun et al., 2021). We consider this as future work and aim to explore additional feature settings. Besides, the current work is limited to English and Chinese, and we acknowledge that further research into more multilingual datasets should be considered in the future.
## Ethics Statement
Our work has been approved by the Ethics Committee of our university, and complies with the data policies of Twitter and Weibo. All datasets are obtained through the links provided in the source papers.
|
2309.05503 | Long-Range Transformer Architectures for Document Understanding | Since their release, Transformers have revolutionized many fields from
Natural Language Understanding to Computer Vision. Document Understanding (DU)
was not left behind with first Transformer based models for DU dating from late
2019. However, the computational complexity of the self-attention operation
limits their capabilities to small sequences. In this paper we explore multiple
strategies to apply Transformer based models to long multi-page documents. We
introduce 2 new multi-modal (text + layout) long-range models for DU. They are
based on efficient implementations of Transformers for long sequences.
Long-range models can process whole documents at once effectively and are less
impaired by the document's length. We compare them to LayoutLM, a classical
Transformer adapted for DU and pre-trained on millions of documents. We further
propose 2D relative attention bias to guide self-attention towards relevant
tokens without harming model efficiency. We observe improvements on multi-page
business documents on Information Retrieval for a small performance cost on
smaller sequences. Relative 2D attention revealed to be effective on dense text
for both normal and long-range models. | Thibault Douzon, Stefan Duffner, Christophe Garcia, Jérémy Espinas | 2023-09-11T14:45:24Z | http://arxiv.org/abs/2309.05503v1 | # Long-Range Transformer Architectures for Document Understanding
###### Abstract
Since their release, Transformers have revolutionized many fields from Natural Language Understanding to Computer Vision. Document Understanding (DU) was not left behind with first Transformer based models for DU dating from late 2019. However, the computational complexity of the self-attention operation limits their capabilities to small sequences. In this paper we explore multiple strategies to apply Transformer based models to long multi-page documents. We introduce 2 new multi-modal (text + layout) long-range models for DU. They are based on efficient implementations of Transformers for long sequences. Long-range models can process whole documents at once effectively and are less impaired by the document's length. We compare them to LayoutLM, a classical Transformer adapted for DU and pre-trained on millions of documents. We further propose 2D relative attention bias to guide self-attention towards relevant tokens without harming model efficiency. We observe improvements on multi-page business documents on Information Retrieval for a small performance cost on smaller sequences. Relative 2D attention revealed to be effective on dense text for both normal and long-range models.
Keywords: Document Understanding, Long-range Transformers, Relative Attention.
## 1 Introduction
Digital documents are everywhere around us. Whether born-digital PDFs or scanned paper, they carry much information and can be easily exchanged. They can be used to convey information from an issuer to its recipient or to archive content. Information is generally structured in some way depending on the document type. Invoices and scientific articles, for example, do not follow the same structure because their objectives differ. Both are codified to carry information very efficiently, such that most invoices and most articles look the same but differ in content. Document Understanding is a field that has gained increasing attention in the past years, as automating document-related processes can
drastically improve efficiency of information processing in a wide range of fields and industries. Recent advances in Neural Network architectures allowed better document understanding and enabled tackling more complex tasks: Question Answering [20], Layout Segmentation [18] and Information Extraction [11].
In particular, models based on Transformer architectures have led to a breakthrough in these domains since their first release in 2017 [28]. They have been widely used on Natural Language Processing tasks and their performance is unequaled [29]. The implementation of Transformer models in DU [32] was swift, revealing the potential of attention-based models in this domain. More recently, the trend is towards multi-modal models that combine multiple different information representations such as text, image, sound, layout. Those models have shown great results on short, single page documents, but are difficult to apply to long, multi-page or dense documents. This is because the self-attention time and space computational complexity is \(O(N^{2})\) where \(N\) is the length of the sequence. It effectively limits the usage of Transformer models on long sequences due to either long training or lack of GPU memory.
In this work, we explore several approaches and architectures in order to use Transformer models on long documents. For simplicity, we limit our study to text and layout (i.e., text position in the page) modalities, and choose to focus on document length to evaluate the model efficiency. We compare various encoder-only models on Sequence Tagging tasks with business and academic documents. We also study the impact of relative attention based on document layout instead of a linear token position, and its implementation for long-range Transformers.
## 2 Related Work
### From NLP to Document Understanding
This work derives from both long-range Transformers proposed for NLP tasks, which try to process longer sequences at once, and Transformer architectures adapted to DU. Before the proposal of Transformers, the _de facto_ architecture for NLP was Recurrent Neural Networks. Multiple improvements have been proposed, for example Long Short-Term Memory cells [9] to tackle vanishing gradients. Coupled with Conditional Random Fields, bidirectional LSTM encoders were then capable of most text understanding tasks [16]. For more complex Information Retrieval, where target information can span multiple tokens, BIESO tags allow better decoding by precisely locating the beginning and end of the information. Although long sequences can be processed with Recurrent Neural Networks, longer inputs negatively affect the performance of encoder-decoder architectures [2]. Hence, the attention mechanism was quickly adopted for those architectures as an "information highway" between the encoder and the decoder.
In addition to these new architecture developments, large progress has been made in the past years on how to learn usable word representations. Before, word embeddings were trained at the same time as the other model's parameters. Then, approaches like Word2Vec and GloVe [21, 22] showed that self-supervised learning improves finetuning on all tasks. Major improvements came
from contextual embeddings, first introduced by ELMo [23]. Contrary to static embeddings, contextual embeddings can better represent words with multiple meanings in accordance with their surroundings.
This is where Transformer models rose: heavily relying on (self-)attention and pre-training, they delivered unprecedented performance at the time. Most NLP challenge leaderboards were monopolized by BERT-like models, growing bigger and deeper by the day [4; 5; 19].
In parallel to those quick improvements, the DU community developed alternatives to bi-LSTM, using multiple modalities to provide more useful information to the model. Some used convolutions over a grid mixing image and text [6; 12], while others proposed graph-based models [33] to represent a document.
The revolution in DU came from Transformer architectures. Pre-trained models able to leverage large document collections outperformed all previous approaches. LayoutLM [32], for example, only introduced 2D positional embeddings over BERT and was pre-trained on the RVL-CDIP [17] collection. It opened the way to many other models applying Transformers to previous design [6], leveraging end-to-end capacities of encoder-decoder models [24], or providing image and text to the models like a visual Transformer [15]. Because the Transformer output is independent of the sequence order, positional embeddings are classically added to the input. It is also possible to introduce relative bias to the self-attention mechanism to promote local interactions inside the self-attention.
Most recent models for DU propose to leverage as much information as possible by using multiple modalities: text, layout and image. They do so either by combining Convolutional Neural Networks with Transformers [24; 31] or by mixing visual with classical Transformers [10; 13]. Even though those approaches provide superior results, we chose not to include image information in our architectures.
### Long Range Transformers
Since the introduction of BERT [7] and GPT [26], Transformers have demonstrated their capacity to understand and model language [29]. Their ability to manipulate words can be visualised through the amount of attention each token allows to other tokens. However, dot-product attention computation involves a \(O(N^{2})\) time and memory complexity where \(N\) is the sequence length. It limits the capacity of Transformer-based models in dealing with long sequences as they need too much GPU memory and/or take too long to process.
Many modifications have been proposed to replace the attention layer with some efficient approximation that can be computed in \(O(N)\) or \(O(N\log(N))\). They have been developed and tested on NLP tasks where long sequences are most likely to be found, like long text summarization and translation. Some models use attention patterns [1; 3; 34] to limit token attention to a fixed number of other tokens. Combinations of sliding window, global and random patterns provide simple but efficient attention. A balance needs to be found between more attention context and attention complexity. It is also possible to learn the attention pattern by limiting attention to tokens that share some locality-sensitive hash [14]. Others proposed to replace the \(N\times N\) attention matrix
with a low-rank approximation. Empirical observations on multiple NLP tasks show that the attention matrix can be replaced with a lower rank approximation without harming the attention process too much [30].
However, long range Transformer architectures have not yet been used on DU tasks, mostly due to datasets not containing lengthy documents.
## 3 Datasets
We used 2 document datasets, chosen mainly based on document length and the task itself. We wanted an NLP task that can be represented as Sequence Tagging in order to test the whole encoder with long inputs. Both datasets consist of English-only documents with close to perfect OCR extraction. They provide word-level axis-aligned bounding boxes that can be fed to the model as layout information. We use the OCR-provided order for the input sequence and do not further analyze documents to extract their structure.
### Business Documents
The first dataset consists of Customer Orders submitted to a commercial platform between 2018 and 2021. Due to privacy concerns, these documents cannot be shared. It contains 80k documents that can be divided among 9000 different issuers, with no more than 50 documents from the same issuer. Usually, an issuer only emits documents with the same template for convenience. About 55% of documents can be tokenized into a sequence of 512 tokens, which fits the default maximum length of classical Transformers. Only 5% of documents are longer than 2048 tokens, following a long-tailed distribution. In order to evaluate the models' generalization abilities, we split the data into train, validation and test sets such that templates in the test set have not been seen by the model during training.
The task consists of Information Extraction on multiple known classes: _document number_, _date_, _total amount_, _item ID numbers_ and _item quantities_. Some information only appears once in the document (e.g., _document number_, _date_ and _total amount_) while others are repeated for each line item in the business order. We call the fields occurring only once _header_ fields and the others _table_ fields, as the latter are most of the time structured in a table layout. There can be between 1 and 50 items present in any document; their number is not known in advance. Fig. 1 shows the labeling of a multi-page document. Even though header fields are sometimes repeated on each page, they are only labeled once in order to stay consistent across templates. Labels are provided at the word level based on manual customer document extraction. We also controlled labeling quality and rejected documents with missing mandatory fields or a wrong number of line items from the dataset.
A superset of this dataset was used for pre-training models on business documents. It consists of 300k Customer Orders and 100k Invoices from the same commercial platform. All documents were submitted and processed by the platform but later rejected due to labeling errors or bad habits. Fortunately, this
does not impact the OCR quality and allows us to pre-train our models on a large collection of recent documents. We chose to use it for pre-training instead of RVL-CDIP [8] because of the difference in OCR quality.
### DocBank
DocBank [18] is a dataset containing 500k public research article pages. It contains English documents spanning various research fields. Documents were obtained on arXiv and were annotated with PDFPlumber, a PDF parser that accurately extracts item bounding boxes. The task consists in document layout analysis. Li et al. [18] provide both pixel and word-level annotations for CV and NLP models. The order of words is defined from top-to-bottom and left-to-right, except for multicolumn documents where whole columns are ordered left-to-right. In this work we will only use textual information along the word 2D positions.
The DocBank segmentation task contains 12 categories (e.g. _title_, _paragraph_, _figure_ etc.) representing semantic parts of a research article. Because articles contain dense paragraphs, most pages are longer than 512 tokens once tokenized. In fact, only 11% of the test documents contain less than 512 tokens and 84% contain between 512 and 2048 tokens.
Figure 1: Sample pages with colored labels similar to those in the Business Documents dataset. Both pages come from the same document, the first page is on the left and the last page on the right. Some information is repeated across pages of a document.
## 4 Models
We compared LayoutLM, a Transformer for DU which is our baseline, with our long range contributions LayoutLinformer and LayoutCosformer1. They only differ by their implementation of self-attention: LayoutLM uses full self-attention like BERT, LayoutLinformer uses a low-rank approximation first proposed by [30] and LayoutCosformer uses a kernel-based method introduced in [25] as a replacement. We further detail how they work in the subsequent subsections.
Footnote 1: Models implementation and weights available at [https://github.com/thibaultdouzon/long-range-document-transformer](https://github.com/thibaultdouzon/long-range-document-transformer)
We chose those models over other efficient Transformers based on the convenience to adapt them from linear text to 2-dimensional documents. Efficient attention based on sliding windows [3, 34] does not transpose nicely to 2D documents because the sliding window mechanism is deeply linked to the linear order of words. Even though our approach tries to provide words in a natural order, in some documents it does not reflect the human reading order - for example for table content. To mitigate this issue, we preferred to rely on global attention or 2D local attention.
Similarly to how LayoutLM was adapted from BERT, we adapt Linformer and cosFormer models to process documents by adding a 2D positional embedding and a page embedding to the input. We chose to use learned embeddings to simplify weight transfer from LayoutLM to our long-range models.
Figure 2: DocBank sample image on the left and its corresponding segmentation on the right. Each color represents one class (black for _paragraph_, purple for _equation_,...).
### LayoutLM
LayoutLM [32] has proven its capacities on most tasks related to documents since its release. It reuses BERT [7] encoder and tokenizer, and only modifies the positional encoding by introducing a 2D encoding for word box boundaries and sizes. This modification allows the model to leverage layout information provided by the OCR. LayoutLM's computational bottleneck is the self-attention layer. In Transformers, self-attention [28] takes _queries_\(Q\), _keys_\(K\) and _values_\(V\) and computes a weighted average of _values_ for each input. The weights are given by the dot product between each pair of _queries_ and _keys_. It can be formulated \(\mathtt{softmax}(QK^{\top})V\), where \(Q,K,V\in\mathbb{R}^{N\times d}\) and \(N\) represents the sequence length and \(d\) the model hidden size. Fig. 3 describes the self-attention operation. The matrix \(\mathtt{softmax}(QK^{\top})\) is the attention matrix containing the intensity of attention between each pair of tokens. Xu et al. [32] pre-trained the model on RVL-CDIP [8], which contains 7 million scanned documents from the tobacco industry released in the 90s. Two versions of LayoutLM have been released, base and large, and it outperforms all preceding text-only language models on classification and information retrieval tasks.
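For illustration, a minimal NumPy sketch of the full self-attention operation as formulated above (single head, no scaling or learned projections; shapes are placeholders and this is not the implementation used in our experiments):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_self_attention(Q, K, V):
    """softmax(Q K^T) V with Q, K, V of shape (N, d).
    The (N, N) attention matrix is materialized explicitly,
    hence the O(N^2) time and memory complexity in N."""
    attention = softmax(Q @ K.T)   # (N, N)
    return attention @ V           # (N, d)

N, d = 512, 768
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
output = full_self_attention(Q, K, V)   # (512, 768)
```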
In our experiments, we only use the base model with maximum sequence length \(N=512\) and hidden size \(d=768\). For longer documents, we split the tokenized sequence into chunks of maximum length and process them separately.
### LayoutLinformer
Our first contribution, LayoutLinformer is based on the Linformer architecture [30] and adapted to document processing by adding 2D positional encodings and using LayoutLM pre-trained weights. Although true self-attention can only be computed in \(O(N^{2})\), it can be approximated very efficiently by leveraging the low rank of the attention matrix \(QK^{\top}\). In Fig. 4, we illustrate LayoutLinformer's attention mechanism. Keys and values sequence length dimension is projected on a smaller space of size \(k\) through a linear transformation: \(K^{\prime}=P_{K}K\) where \(P_{k}\in\mathbb{R}^{k\times N}\) is the learned projection matrix (respectively \(V^{\prime}=P_{V}V\) where
Figure 3: Illustration of the attention mechanism used in LayoutLM, normalization and multiple heads aside. In this example, \(N=5\) and \(d=2\). Due to the softmax operator, the product \(QK^{\top}\) must be computed, resulting in \(O(N^{2})\) complexity.
\(P_{V}\in\mathbb{R}^{k\times N}\)). This means the size of the new attention matrix \(Q(P_{K}K)^{\top}\) is \(N\times k\), reducing the complexity of self-attention to \(O(Nk)\).
An immediate drawback of this projection is the loss of ability to visualize the attention matrix in order to explain the model. It is also no longer possible to implement causal attention or any specific attention pattern. On the other hand, Linformer provides a simple modification to the Transformer in order to make it manage longer sequences with global attention. Most model weights are identical between the two architectures, allowing us to transfer LayoutLM pre-trained weights into LayoutLinformer before further pre-training.
Wang et al. [30] showed that it can obtain a performance comparable to RoBERTa [19] on multiple NLP benchmarks. They provided evidence that its performance is mostly determined by the projection dimension \(k\), and that increasing the sequence length \(N\) did not degrade results. Therefore, we chose to apply LayoutLinformer with \(N=2048\) and \(k=512\) in order to compare its performance with LayoutLM.
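The projection described above can be sketched as follows (NumPy, random matrices standing in for the learned projections \(P_K\) and \(P_V\); a simplified illustration rather than our actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, P_K, P_V):
    """softmax(Q (P_K K)^T) (P_V V): keys and values are projected along
    the sequence dimension to length k, so the attention matrix is only
    (N, k) and the cost drops from O(N^2) to O(N k)."""
    K_proj = P_K @ K                   # (k, d)
    V_proj = P_V @ V                   # (k, d)
    attention = softmax(Q @ K_proj.T)  # (N, k)
    return attention @ V_proj          # (N, d)

N, k, d = 2048, 512, 768
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
P_K, P_V = (rng.standard_normal((k, N)) / np.sqrt(N) for _ in range(2))
output = linformer_attention(Q, K, V, P_K, P_V)   # (2048, 768)
```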
### LayoutCosformer
Our second contribution, called LayoutCosformer, is based on the cosFormer [25] model which is another efficient alternative to the original Transformer. Similarly to LayoutLinformer, we transferred pre-trained weights from LayoutLM to LayoutCosformer thanks to the similarities between architectures. It achieves linear complexity by replacing the non-linear similarity computation between \(Q\) and \(K\) with a linear operation. More specifically, Qin et al. [25] proposed to replace \(\exp(QK^{\top})\) with \(\Phi(Q)\Phi(K^{\top})\) where \(\Phi\) is a nonlinear function. Fig. 5 illustrates in more detail how LayoutCosformer attention works. In order to keep values of the similarity matrix positive, a good choice is \(\Phi=\texttt{ReLU}\). Computations can then be reordered to decrease the complexity to \(O(N)\).
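A minimal sketch of this reordering (NumPy, single head, without the cosine re-weighting introduced next; shapes are illustrative and the row normalization follows the usual kernelized-attention form):

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention with Phi = ReLU.
    Computing Phi(K)^T V first yields a (d, d) matrix independent of N,
    so the overall cost is O(N d^2) instead of O(N^2 d)."""
    Qp = np.maximum(Q, 0.0)                            # Phi(Q), shape (N, d)
    Kp = np.maximum(K, 0.0)                            # Phi(K), shape (N, d)
    kv = Kp.T @ V                                      # (d, d)
    normalizer = Qp @ Kp.sum(axis=0)[:, None] + eps    # (N, 1)
    return (Qp @ kv) / normalizer                      # (N, d)

N, d = 2048, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
output = linear_attention(Q, K, V)                     # (2048, 64)
```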
In addition to its linear self-attention complexity, Qin et al. [25] include a relative self-attention bias towards nearby tokens. They cannot simply add the bias to the \(N\times N\) similarity matrix before multiplying with values because it
Figure 4: LayoutLinformer attention mechanism. In this example, \(N=5\), \(d=2\) and \(k=3\). Efficient matrix multiplication ordering reduces the complexity to \(O(Nk)\).
would mean a quadratic complexity. Their solution is to use functions that can be decomposed into a sum of products: \(f(x,y)=\sum_{n}g_{n}(x)\times h_{n}(y)\). If we call \(B\) the bias matrix where \(B_{i,j}=f(i,j)\), their biased similarity matrix can be written \(\Phi(Q)\Phi(K^{\top})\odot B\) where \(\odot\) is the element-wise product. Then when looking at the attention from token \(i\) to token \(j\) we obtain:
\[s_{i,j} =\Phi(Q_{i})\Phi(K^{\top}_{j})B_{i,j}\] \[=\Phi(Q_{i})\Phi(K^{\top}_{j})\sum_{n}g_{n}(i)\times h_{n}(j)\] \[=\sum_{n}\Phi(Q_{i})\Phi(K^{\top}_{j})g_{n}(i)h_{n}(j)\] \[=\sum_{n}(\Phi(Q_{i})g_{n}(i))\times(\Phi(K^{\top}_{j})h_{n}(j))\]
Using this trick, they proposed to use a cosine bias \(B_{i,j}=\cos(\frac{\pi}{2M}(i-j))\) which can be decomposed into \(B_{i,j}=\cos(\frac{\pi}{2M}i)\cos(\frac{\pi}{2M}j)+\sin(\frac{\pi}{2M}i)\sin( \frac{\pi}{2M}j)\). With the normalization constant \(M\) set to the maximum sequence length, they ensure \(0<B_{i,j}<1\) with a maximum when \(i=j\). In the next subsection, we demonstrate how it can also be applied to 2D relative attention.
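A minimal sketch of how this decomposition keeps the computation linear in \(N\) (NumPy, with the same ReLU kernel as above and the 1D cosine bias folded into queries and keys; a simplified illustration rather than the actual model code):

```python
import numpy as np

def cos_reweighted_attention(Q, K, V, eps=1e-6):
    """Linear attention with bias B_ij = cos(pi/(2M) (i - j)), computed via
    cos(a - b) = cos(a)cos(b) + sin(a)sin(b), so no (N, N) matrix is built."""
    N = Q.shape[0]
    M = N                                              # normalization constant
    angles = np.arange(N) * np.pi / (2 * M)
    c, s = np.cos(angles)[:, None], np.sin(angles)[:, None]
    Qp, Kp = np.maximum(Q, 0.0), np.maximum(K, 0.0)
    Qc, Qs, Kc, Ks = Qp * c, Qp * s, Kp * c, Kp * s    # all (N, d)
    numerator = Qc @ (Kc.T @ V) + Qs @ (Ks.T @ V)      # (N, d)
    denominator = Qc @ Kc.sum(axis=0)[:, None] + Qs @ Ks.sum(axis=0)[:, None]
    return numerator / (denominator + eps)

N, d = 2048, 64
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
output = cos_reweighted_attention(Q, K, V)             # (2048, 64)
```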
### 2D Relative attention
Global self-attention is a powerful tool for capturing long-range dependencies. However, although distant dependencies can be relevant, most attention should be toward close neighbors. Relative attention [24, 27] selectively focuses on specific parts of the input by biasing the base self-attention. This was proven useful on text, which can be represented as a linear sequence, but due to complex layouts, the sequence order is suboptimal for determining locality. In order to better capture local context in documents, we introduced 2D relative attention based on the token positions inside the document.
In LayoutLM, we pre-compute for each document an attention bias matrix \(B\) and modify the self-attention formula to take it into account. More precisely, we replace the self-attention with:
Figure 5: LayoutCosformer efficient attention mechanism with \(N=5\) and \(d=2\). The linear similarity enables computing \(\Phi(K^{\top})V\) first and factorizing \(\Phi(Q)\) out of the summation.
\[\text{RelativeAttention}(Q,K,V,B)=\left(\text{softmax}(QK^{\top})\odot B\right)V\]
Where \(\odot\) denotes element-wise multiplication. Directly multiplying the attention matrix by some bias is very flexible and allows for any bias matrix to be chosen. It also matches the way LayoutCosformer applies relative bias to its self-attention, thus allowing to compare them.
On the other hand, it is nontrivial to implement relative attention for global long-range Transformers. Because LayoutLinformer compresses the sequence dimension of the Key matrix, it is not possible to apply custom 2D attention bias to LayoutLinformer. For LayoutCosformer it is possible to reuse the same trick as in the 1D version with another bias function.
Because the function must remain separable into a sum of products, a good choice is to use exponentials and trigonometric functions. We first prove that the product of two separable functions is also itself separable. Let \(f^{1}=\sum_{n}g_{n}^{1}(x)\times h_{n}^{1}(y)\) and \(f^{2}=\sum_{m}g_{m}^{2}(x)\times h_{m}^{2}(y)\) be two functions separable into sum of products, then:
\[f^{1}(x,y)\times f^{2}(x,y) =\left(\sum_{n}g_{n}^{1}(x)\times h_{n}^{1}(y)\right)\times\left( \sum_{m}g_{m}^{2}(x)\times h_{m}^{2}(y)\right)\] \[=\sum_{n}\sum_{m}\left(g_{n}^{1}(x)\times h_{n}^{1}(y)\times g_{ m}^{2}(x)\times h_{m}^{2}(y)\right)\] \[=\sum_{n,m}(g_{n}^{1}(x)g_{m}^{2}(x))\times(h_{n}^{1}(y)h_{m}^{2 }(y))\]
Which can also be separated into a sum of products.
We chose to compare 2 different attention biases. The first one is simply the product of cosine biases along both the X and Y axes. It captures local context in every direction, with variations close to the Euclidean distance. We define \(B^{\text{squircle}}\)1 as follows:
Footnote 1: Squircles are intermediate shapes between square and circle, see [https://en.wikipedia.org/wiki/Squircle](https://en.wikipedia.org/wiki/Squircle). The contours of the surface described by \(B^{\text{squircle}}\) are not actually squircles but also range from square to circle.
\[B^{\text{squircle}}_{i,j}=\cos(\frac{\pi}{2M}(x_{i}-x_{j}))\times\cos(\frac{ \pi}{2M}(y_{i}-y_{j}))\]
Where \(x_{i}\) and \(y_{i}\) (resp. \(x_{j}\) and \(y_{j}\)) are positions of token \(i\) (resp. \(j\)) along X and Y axis. In practice we used the coordinates of the center of each token bounding box.
Although this bias correctly captures 2D locality, documents' complex layouts sometimes implicitly call for another definition of proximity in order to be understood. For instance, Fig. 6 shows a table from a purchase order.
In this configuration, in order to grasp correctly the meaning of a cell in the table, the model needs to make the connection with the table header positioned at the beginning of the page. When multiple line items are spanning
the whole page, we hypothesize that this relative attention might hurt the performance due to the long-distance separating tokens. To deal with this issue, we propose another bias pattern. Its objective is to allow attention to tokens that are aligned with each other along the X or Y axis. To this end, we define \(B^{\text{cross}}_{i,j}=\max\{\cos(\frac{\pi}{2M}(x_{i}-x_{j})),\cos(\frac{\pi}{2M }(y_{i}-y_{j}))\}\). We illustrate the differences with an example shown in Fig. 6. With cross relative attention bias, the highlighted token (the price of an item) can better attend to the column header "Unit Price" and to its related line. In general, tokens inside a table can fully attend to their corresponding column header and line. This should prove helpful for understanding tables by guiding the model attention towards semantically related tokens.
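For full-attention models, both bias patterns can be pre-computed from the box-center coordinates and applied as in the RelativeAttention formula above. A minimal NumPy sketch (coordinates assumed normalized to [0, 1000] as in the rest of the paper; random toy data, not the actual model code):

```python
import numpy as np

def relative_bias_2d(x, y, M=1000.0, pattern="squircle"):
    """(N, N) 2D relative attention bias from token box centers x, y."""
    dx = np.pi / (2 * M) * (x[:, None] - x[None, :])
    dy = np.pi / (2 * M) * (y[:, None] - y[None, :])
    if pattern == "squircle":
        return np.cos(dx) * np.cos(dy)
    return np.maximum(np.cos(dx), np.cos(dy))       # "cross" pattern

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention(Q, K, V, B):
    """Element-wise biased full attention: (softmax(Q K^T) * B) V."""
    return (softmax(Q @ K.T) * B) @ V

N, d = 512, 64
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, N), rng.uniform(0, 1000, N)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
B = relative_bias_2d(x, y, pattern="cross")
output = relative_attention(Q, K, V, B)             # (512, 64)
```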
## 5 Experiments
Our models are pre-trained on our Business Documents collection for 200k steps using Masked Visual-Language Modeling [32]. They are then finetuned on each dataset. For both tasks, we use BIESO tags to help the model decode predictions spanning multiple tokens. We performed our experiments on two RTX A6000 GPUs for pre-training and a single RTX A6000 for fine-tuning. LayoutLM models run with a batch size of 48 and a sequence length of 512, while long-range models (LayoutLinformer and LayoutCosformer) can only reach a batch size of 16 with a sequence length of 2048 on a single device. We accumulate gradients over 96 data samples before updating the model's weights. We use Adam with learning rate \(lr=2\cdot 10^{-5}\) and linear warmup for 5% of the training steps followed by a linear decrease.
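A hedged PyTorch sketch of this optimization schedule (linear warmup then linear decay, with gradient accumulation so weights are updated every 96 samples); the linear layer and random tensors below are placeholders for the actual model and token batches:

```python
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def warmup_then_linear_decay(total_steps, warmup_frac=0.05):
    """LR factor: linear warmup for 5% of the steps, then linear decay to 0."""
    warmup = max(1, int(total_steps * warmup_frac))
    def factor(step):
        if step < warmup:
            return step / warmup
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup))
    return factor

model = nn.Linear(768, 5)                        # placeholder for the tagger
num_batches, batch_size, accumulation = 1200, 16, 96 // 16
optimizer = Adam(model.parameters(), lr=2e-5)
scheduler = LambdaLR(optimizer,
                     warmup_then_linear_decay(num_batches // accumulation))

for step in range(num_batches):
    x = torch.randn(batch_size, 768)             # placeholder token features
    y = torch.randint(0, 5, (batch_size,))       # placeholder BIESO-style tags
    loss = nn.functional.cross_entropy(model(x), y) / accumulation
    loss.backward()
    if (step + 1) % accumulation == 0:           # update every 96 samples
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```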
### Long-Range
Theoretical results on model architectures hint that LayoutLinformer and LayoutCosformer become much more efficient the longer the sequence. We use a dummy inference task with increasing sequence lengths and compare our 2 models with the LayoutLM base architecture. The results are available in Table 1.
Figure 6: Contour plots for squircle and cross relative attention bias applied to token “210,80” (bottom-right corner). Because token positions are normalized between 0 and 1000, tokens along the same line cannot fully attend to each other on the left while they are unaffected on the right.
They reveal how the computational complexity of full self-attention disables LayoutLM when dealing with sequences longer than 1024. Its memory consumption limits our tests with LayoutLM to a sequence length of 4096; longer sequences could not fit into a single GPU. On the other hand, LayoutLinformer and LayoutCosformer performed as predicted, with LayoutCosformer being slightly slower and more memory-hungry than LayoutLinformer.
It turns out document length also greatly impacts model performance on the Customer Order dataset. For better visualization, we group documents into 3 length categories: short (document fits into 512 tokens), medium (between 513 and 2048) and long (2049 or more tokens). LayoutLM models can process short documents in a single sequence but need to split other documents into multiple independent sequences. Short and medium documents fit into the LayoutLinformer and LayoutCosformer sequence length, but long documents do not. When a model cannot process a document in a single sequence, we split the document into multiple sequences and process them separately.
\begin{table}
\begin{tabular}{c|c c c c c c c c c} & \multicolumn{8}{c}{Time in seconds / _Memory in GiB_} \\ & \multicolumn{8}{c}{Sequence length} \\ Model name & \multicolumn{2}{c}{512} & \multicolumn{2}{c}{1024} & \multicolumn{2}{c}{2048} & \multicolumn{2}{c}{4096} & \multicolumn{2}{c}{8192} & \multicolumn{2}{c}{16384} \\ \hline LayoutLM & 1.41 & _1.25_ & 2.83 & _2.50_ & 7.39 & _5.01_ & 23.43 & _13.69_ & - & - \\ LayoutLinformer & 1.18 & _1.35_ & 1.92 & _2.26_ & 3.54 & _3.28_ & 6.90 & _5.19_ & 13.08 & _8.96_ & 25.65 & _16.78_ \\ LayoutCosformer & 2.03 & _1.36_ & 2.50 & _2.37_ & 4.68 & _3.38_ & 9.00 & _5.38_ & 17.23 & _9.59_ & 33.96 & _17.59_ \\ \end{tabular}
\end{table}
Table 1: Duration and memory consumption of the 3 models for various sequence lengths on an inference task.
Figure 7: F1-score stacked bar plot of multiple models on the Business Orders dataset. In each document length categories, models are in the same order.
In Fig. 7, we compare our pre-trained LayoutLM models with LayoutLinformer and LayoutCosformer. First, we discovered LayoutLM is very sensitive to the split position for medium and long documents. Introducing a sequence split when a new page is started greatly improves performance; we call this model LayoutLM SplitPage. It performs better on _total amount_ (from 53.7% to 70%), _item ID number_ (from 62.7% to 75.6%) and _quantity_ (from 77.0% to 90.1%) recognition for medium and long documents. The repetitive structure of multipage documents, combined with the fact that most pages fit in a 512-token sequence, allows the model not to get lost. _Document number_ and _date_ are mostly not affected because they almost always occur at the beginning of the document, which is not affected by the splitting strategy.
Although LayoutLinformer and LayoutCosformer perform slightly worse than LayoutLM for short documents on all classes (around 74% F1 score on _item ID number_ versus 81% for LayoutLMs), their performance decreases less than LayoutLM's on medium documents. On those medium documents, even LayoutLM SplitPage drops from 88.2% to 70.1% F1 score on the _total amount_, while both long-range models only drop from roughly 87% to 80%. We also noticed date recognition performance degrades across all models with longer documents, which is not expected because _dates_ are usually at the top of the first page. The same can be noted for the _order number_ at a smaller scale. It might be due to a correlation between document length and layout: short and medium / long documents do not share layouts. And because there are twice as many short documents as longer ones, it is harder to generalize to new layouts. Overall, the performance of long-range models is more consistent across a wide variety of document lengths.
We performed the same experiments on the DocBank dataset, except for the page-splitting part as all documents are single-page. First, we compared model performance for each document length category in Table 2. It contains the average F1 score across all labels, weighted by the support of each label. It turns out length categories introduce bias in the composition of pages, with labels being very sparsely represented in some categories. This bias implicitly selects more first pages among short pages (with lower text density), while medium-sized pages contain a lot of paragraphs.
We observe the same drop in performance for long-range models on short documents, with LayoutLinformer providing better results across the board than LayoutCosformer. But we notice LayoutLM performs slightly better on medium documents than on short ones. Long-range models follow the same pattern with a greater difference between short and medium pages, LayoutCosformer almost
\begin{table}
\begin{tabular}{c|c c c} & \multicolumn{3}{c}{F1 weighted macro average} \\ Model name & Short & Medium & Long \\ \hline LayoutLM & 95.36 & 95.84 & 91.42 \\ LayoutLinformer & 95.20 & 96.49 & 91.41 \\ LayoutCosformer & 94.03 & 95.91 & 91.40 \\ \end{tabular}
\end{table}
Table 2: F1 weighted average for each model and document length categories. All models were first pre-trained on the Business Documents collection.
gaining 2 average F1 percentage points. There are almost 20 times fewer long documents than medium ones, which could explain part of the global performance loss. Unfortunately, due to those biases, it is difficult to draw conclusions on model performance.
Table 3 compiles results for LayoutLM and long-range models for all labels. First, we can make sure our training pipeline performs on par with what the DocBank authors reported for the LayoutLM base model by comparing their results and the ones we obtained by using public LayoutLM weights. Except for the _author_ and _title_ labels, both results are very close, and the macro average is almost identical. Secondly, pre-training on business documents negatively impacts LayoutLM performance on all labels, losing 1.4 F1 percentage points on average. This highlights the crucial role of pre-training data and its composition in later model finetuning results. Finally, long-range models performed on the same level as LayoutLM, LayoutLinformer even being more performant than our pre-trained LayoutLM. Overall, even though LayoutCosformer seems less performant on this task, both long-range models performed better than our pre-trained LayoutLM on _table_ and _equation_. Those two labels might benefit from long-range references, giving the model hints of their presence in the current sequence.
### Relative Attention
We conduct the same experiments on models with 2D relative attention and compare their performance with their flat attention counterparts. On the business order dataset, Table 4 shows slight gains when using squircle attention with LayoutLM. For all document lengths, information retrieval is improved by a few percentage points of F1 score over our previous LayoutLM Split Page implementation. However, we do not observe the same improvement with the cross-shaped attention pattern. This might indicate that focusing on very local neighbors helps LayoutLM make the right decision. Overall, relative attention improves results in some circumstances but not as much as splitting every page did. However, when combined with LayoutCosformer, we observe a significant degradation
\begin{table}
\begin{tabular}{c c|c c c c c c c c c c c|c} & \multicolumn{3}{c|}{2D Relative} & \multicolumn{6}{c|}{F1 score} & \multicolumn{3}{c|}{Macro} \\ Model name & attention & Abst. & Author & Caption & Equa. & Figure & Foot & List & Para. & Refe. & Section & Table & Title & average \\ \hline LayoutLM (Li et al. [18]) & - & 98.1 & 85.9 & 95.9 & 89.4 & **100.0** & 89.5 & **89.4** & **97.8** & 93.3 & **95.9** & 86.3 & **95.7** & 93.1 \\ LayoutLM (Xu et al. [32]) & - & 98.3 & 89.6 & 96.0 & 89.0 & 99.7 & 91.6 & 88.2 & 97.5 & **93.5** & 94.3 & 87.4 & 90.4 & 93.0 \\ LayoutLM (*) & - & 97.8 & 87.5 & 94.9 & 87.2 & 99.7 & 90.5 & 84.0 & 97.1 & 93.2 & 92.8 & 85.7 & 88.6 & 91.6 \\ LayoutLM (*) & Squiercle & **98.4** & 90.2 & **96.1** & 89.7 & 99.8 & 92.0 & 88.9 & 97.6 & 93.4 & 94.6 & 87.7 & 90.3 & **93.2** \\ LayoutLM (*) & Cross & **98.4** & **90.3** & 96.0 & 89.6 & 99.8 & **92.1** & 88.7 & 97.6 & 93.4 & 94.6 & 87.5 & 90.7 & **93.2** \\ LayoutLinformer (*) & - & 97.9 & 88.9 & 93.7 & 90.0 & 99.5 & 91.1 & 87.9 & 97.5 & 93.2 & 91.3 & 87.6 & 88.7 & 92.3 \\ LayoutCosformer (*) & - & 97.2 & 87.2 & 91.0 & 88.1 & 99.3 & 90.6 & 87.4 & 97.1 & 93.2 & 81.4 & 87.0 & 88.3 & 90.7 \\ LayoutCosformer (*) & Squiercle & 97.0 & 85.4 & 92.4 & 89.2 & 98.8 & 90.7 & 84.2 & 97.2 & 93.2 & 85.6 & 87.9 & 86.8 & 90.7 \\ LayoutCosformer (*) & Cross & 97.4 & 86.9 & 93.8 & **91.2** & 98.9 & 91.7 & 87.5 & 97.5 & 93.1 & 87.4 & **89.0** & 88.1 & 91.9 \\ \end{tabular}
\end{table}
Table 3: Results on Docbank dataset for LayoutLMs and long-range models. Models with a asterisk (*) are ours. They were pre-trained on the Business Documents collection before finetuning on Docbank.
in performance for all labels with the squircle attention while the cross pattern provides similar results as the raw LayoutCosformer.
On the DocBank task, relative attention provides noticeable performance gains for both LayoutLM and LayoutCosformer. We provide all results in Table 3. LayoutLM with relative attention stands out, going from 91.6% F1 score to 93.2% for both squircle and cross patterns. Most improvements are made on _author_, _equation_ and _list_, each gaining at least 2 F1 score points. Both resulting models even beat the DocBank authors' version by a thin margin. This is impressive knowing those models were pre-trained on the same business order dataset as our base LayoutLM, which suffered a 1.5 F1 score performance drop as a consequence. It turns out _author_, _equation_ and _list_ were also the fields where our LayoutLM performance dropped the most compared to stock LayoutLM. Applying cross-shaped relative attention to LayoutCosformer also improves performance across most labels. It even outperforms all other models on the _equation_ and _table_ fields, which benefit most from very long attention.
## 6 Conclusion
In this work, we showed the impact of document length on Transformer-based models applied to Document Understanding. Depending on the document's type and the task, model's performance on longer documents can be negatively impacted with F1 score dropping 20% for the most impacted. We explored several alternatives including another sequence split strategy and long-range layout-aware models based on Linformer and cosFormer architectures. They all proved to successfully reduce the performance gap between short and long documents (down to only 10% performance drop), sometimes at a small cost on short document's metrics. We also introduce relative attention based on 2D textual layout instead of the classical sequence order. It produces better results on dense text, significantly improving both LayoutLM and LayoutCosformer on the Docbank layout segmentation task.
In addition to other efficient Transformer architectures, we plan to investigate other ways to use longer sequences for DU. For example, in multi-modal models, this may allow fitting the whole text and visual patches of a document in a single sequence without needing more compute capabilities.
\begin{table}
\begin{tabular}{c c|c c c} & & & Macro average & F1 score \\ Model name & 2D Relative attention & Short & Medium & Long \\ \hline LayoutLM Split Page & - & 90.0 & 82.2 & 77.2 \\ LayoutLM Split Page & Squircle & **90.4** & 83.0 & 77.8 \\ LayoutLM Split Page & Cross & 90.0 & 82.0 & 77.6 \\ LayoutCosformer & - & 87.6 & 85.0 & **79.0** \\ LayoutCosformer & Squircle & 85.8 & 82.2 & 73.4 \\ LayoutCosformer & Cross & 87.6 & **85.2** & 77.2 \\ \end{tabular}
\end{table}
Table 4: Macro average F1 score on the Business Orders dataset with 2D relative attention. |
2309.13361 | Machine Learning with Chaotic Strange Attractors | Machine learning studies need colossal power to process massive datasets and
train neural networks to reach high accuracies, which have become gradually
unsustainable. Limited by the von Neumann bottleneck, current computing
architectures and methods fuel this high power consumption. Here, we present an
analog computing method that harnesses chaotic nonlinear attractors to perform
machine learning tasks with low power consumption. Inspired by neuromorphic
computing, our model is a programmable, versatile, and generalized platform for
machine learning tasks. Our model provides exceptional performance in clustering
by utilizing chaotic attractors' nonlinear mapping and sensitivity to initial
conditions. When deployed as a simple analog device, it only requires
milliwatt-scale power levels while being on par with current machine learning
techniques. We demonstrate low errors and high accuracies with our model for
regression and classification-based learning tasks. | Bahadır Utku Kesgin, Uğur Teğin | 2023-09-23T12:54:38Z | http://arxiv.org/abs/2309.13361v1 | Machine Learning with Chaotic Strange Attractors
## Abstract
Machine learning studies need colossal power to process massive datasets and train neural networks to reach high accuracies, which have become gradually unsustainable. Limited by the von Neumann bottleneck, current computing architectures and methods fuel this high power consumption. Here, we present an analog computing method that harnesses chaotic nonlinear attractors to perform machine learning tasks with low power consumption. Inspired by neuromorphic computing, our model is a programmable, versatile, and generalized platform for machine learning tasks. Our model provides exceptional performance in clustering by utilizing chaotic attractors' nonlinear mapping and sensitivity to initial conditions. When deployed as a simple analog device, it only requires milliwatt-scale power levels while being on par with current machine learning techniques. We demonstrate low errors and high accuracies with our model for regression and classification-based learning tasks.
## Introduction
Current computing methods and hardware limit machine learning studies and applications regarding speed, data resolution and deployed platforms. In particular, the power consumption of artificial neural networks has started to raise questions regarding its impact on the environment. Recent studies indicate that the carbon emissions of training a complex transformer learning model are roughly equivalent to the lifetime carbon emissions of five cars[1], and training a famous language model consumed the energy required to charge 13,000 electric cars fully[2]. Several computing paradigms have been proposed for machine learning studies to decrease training times and, therefore, mitigate the energy consumption issue. Among them, reservoir computing[3, 4] offers a promising path by using nonlinear systems with fixed weights to process information in high dimensional space. Various neuromorphic devices[5] were proposed to surpass chronic performance issues of conventional computing and high-power consumption issues. Optical computing methods[6, 7] and electronic memristive devices[8, 9, 10] were introduced as powerful reservoir computing platforms. The concept of fixed nonlinear high-dimensional mapping is common practice in several areas of machine learning, such as extreme learning machines[11] and support vector machines[12, 13].
In machine learning studies, chaotic systems were mainly employed as targets to learn dynamical systems[14, 15, 16]. Chaos theory examines deterministic but unpredictable dynamical systems that are extremely sensitive to initial conditions. These systems commonly occur in nature, inspiring art, science, and engineering[17]. Also, chaotic spiking dynamics of neurons have inspired several neuromorphic machine learning applications[18, 19]. In the past, chaotic systems were proposed for Boolean computation
and data processing, forming the concept of chaos computing. Early chaos computing devices operated on one-dimensional chaotic maps to perform logic operations[20, 21]. These dynamical systems were also suggested for reservoir computing but used in a stable state just below the bifurcation point, where order transitions to chaos[22]. Operating in a stable state, such systems could not benefit from chaos in learning and information processing for machine learning purposes. Following these attempts, systems with "weakly chaotic" architecture were proposed[23, 24]. However, these models and other similar approaches could not demonstrate competent performances[25].
Here, we propose an analog computing method based on controllable chaotic learning operators to perform high-dimensional nonlinear transformations on input data for machine learning purposes. Our method benefits from circuits designed to compute chaotic strange attractors for reservoir computing purposes, as demonstrated in Fig. 1. Since minor differences amplify and evolve, chaotic transformation processes information and improves performance for machine learning tasks. While previously reported physical reservoir computing hardware lacks flexibility, we introduce a controllable model by increasing overall versatility. Achieving this versatile platform allows us to enhance overall learning accuracy for various learning tasks through optimization. Our computing method intrinsically offers smaller footprints with power consumption levels as low as a milliwatt scale while preserving high accuracies. By providing complex and chaotic dynamics for the nonlinear transformation of data, our model performs on par with neural networks operating on conventional computing platforms. We present the generalizability of our approach by testing a wide variety of machine learning tasks, including image classification, and achieve high accuracies, reaching up to 99% for several tasks. Later, we explore how sensitivity to initial conditions in chaotic attractors improves learning accuracy and determines the power consumption required for training. Our method is a controllable, chaotic analog learning device that offers versatile and sustainable machine learning without compromising learning performance.
## Results
### Input / Output encoding and selection of the optimal attractor
As chaotic systems are extremely sensitive to initial conditions, we anticipate that the input method is highly correlated with output accuracy. We decide to input our data as initial conditions of the attractor. As we scale our data using z-score normalization, the initial conditions we use as inputs land in a scale that does not vitiate the physical model (see Supplementary Material for details). After the chaotic transformation is applied to our samples, we feed the transformed matrix to the regression or classification algorithm.
The pattern and average divergence pace between each chaotic attractor's close points are distinctive properties. We select six different chaotic attractors to evaluate how these unique properties translate into machine learning. We employ a nonlinear regression task on a randomly generated Sinus cardinal dataset (see Methods). We select the well-known Lorenz attractor[26], Rossler attractor[27], Burke-Shaw system[28], Sprott attractor[29], Chen's system[30], and Chua's Circuit[31] for this test. Our test attractors transformed randomly generated points for one hundred iterations and tried to predict
Sinus Cardinal function values corresponding to the transformed sample. After recording the lowest root mean squared error (RMSE) amongst iterations, we sort each result from smallest to largest RMSE value. The Lorenz attractor was the most successful attractor, with an RMSE of 0.143. We decide to proceed with further tests using only the Lorenz attractor and using the iteration with the lowest error after 100 iterations (see Supplementary Material for details).
### Sinus Cardinal regression
To assess the potential performance of machine learning with chaotic attractors, we run a simple regression task on a dataset of randomly generated samples and their values under the Sinus Cardinal function. In the aforementioned benchmarking tests, we measure the vanilla RMSE of the Lorenz attractor as 0.143. We apply the Bayesian Optimization algorithm to determine the best values for the Lorenz system parameters to minimize error and improve model performance. After completing three separate optimizations, we select the values that lead to the minimum error (\(\sigma\) = 10, \(\beta\) = 8/3, and \(\rho\) = 97). We use these coefficients in further tasks except for the Abalone dataset, where we applied a separate optimization. After the optimization, an RMSE of 0.105 is achieved.
To further increase our model performance, we add another layer that will apply the chaotic transformation to the input variable. First, two parallel Lorenz Attractor layers with different \(\rho\) values transform the same input simultaneously. These two distinct outputs are concatenated into a single matrix, and this matrix undergoes the learning process. Using two attractors, we increase the dimensionality and benefit from different \(\rho\) values (see Largest Lyapunov exponent and accuracy). Keeping \(\sigma\) and \(\rho\) as
Figure 1: Schematic displaying the architecture of our model. Input values are encoded as initial voltages for the circuit performing analog computation of Lorenz attractor. After the chaotic transformation of data, output voltages are transferred to a processing device as reservoir output. Via the device, the last layer is performed, mainly ridge regression and classification, completing the learning process.
constants, we apply Bayesian Optimization to determine optimal \(\rho\) values for our transformers. After the optimization process, we decrease model RMSE down to 0.03. (see Supplementary Material for the figure)
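A minimal end-to-end sketch of this two-transformer pipeline (NumPy and scikit-learn; the Runge-Kutta integration follows the Methods section, while the second \(\rho\) value below is only an example since the actual values come from Bayesian Optimization):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def lorenz_rk4(states, rho, sigma=10.0, beta=8 / 3, dt=1e-2, steps=100):
    """Vectorized RK4 integration of the Lorenz system over all samples;
    `states` has shape (n_samples, 3) and the final states are returned."""
    def deriv(s):
        x, y, z = s[:, 0], s[:, 1], s[:, 2]
        return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z],
                        axis=1)
    for _ in range(steps):
        k1 = deriv(states)
        k2 = deriv(states + dt / 2 * k1)
        k3 = deriv(states + dt / 2 * k2)
        k4 = deriv(states + dt * k3)
        states = states + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return states

rng = np.random.default_rng(0)
u = rng.uniform(-np.pi, np.pi, 2048)
targets = np.sinc(u / np.pi)                              # sin(u) / u
init = np.stack([u, np.full_like(u, 1.05), -u], axis=1)   # (u, 1.05, -u) encoding
X = np.hstack([lorenz_rk4(init, rho=97.0),                # first transformer
               lorenz_rk4(init, rho=28.0)])               # example second rho
X_tr, X_te, y_tr, y_te = train_test_split(X, targets, test_size=0.2,
                                          random_state=0)
rmse = np.sqrt(np.mean((Ridge().fit(X_tr, y_tr).predict(X_te) - y_te) ** 2))
```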
### Abalone dataset
Moving on with a relatively more complex and multivariable regression task, we test our chaotic model in the abalone dataset. This dataset, taken from ref[32], is composed of the eight physical measurements of sea snails and their ages. We normalize the ages on a scale between 0 and 1. We apply z-score normalization and deploy chaotic transformation with a single transformer to every single variable. We use Bayesian optimization to find the optimal parameters of the Lorenz transformer. After optimization, we achieve remarkable accuracy (RMSE 0.072884) with parameters: \(\sigma\) = 10, \(\beta\) = 2.667, \(\rho\) = 64.917. (see Supplementary Material for the figure)
### Iris dataset
We move on with classification tasks to challenge our model. The Iris dataset is one of the classical datasets that assess linear and nonlinear classification abilities. The dataset from ref[33] consists of four physical measures of iris flowers from three distinct species. While one class, iris-setosa, is linearly separable from the other two classes, iris-versicolor and iris-virginica require nonlinear applications to be separated. We employ Ridge classification as the last layer because it is a simple and linear method that is fast to execute. Departing from the usual method for visualizing the classifier decision boundary, we apply Linear Discriminant Analysis (LDA) to raw and transformed data (see Figure 2). Using LDA, we retrieve 2D matrices for raw and transformed data and perform Ridge classification on these 2D matrices. A high accuracy of 97.78% is achieved, gaining about 18% over the model accuracy before chaotic transformation (80.00%). After chaotic transformation, samples that belong to linearly non-separable classes (iris-versicolor and iris-virginica) all clustered almost perfectly (see Figure 2). As a result, the linear classifier we utilize can make an almost perfect classification. We also test other classifiers for benchmarking (see Methods and Supplementary Material Table 2 for details). A drastic increase in test accuracy of linearly inseparable classes is demonstrated in confusion matrices (see Figure 3).
### Liver disorders dataset
For this classification task, we test our methods on the liver disorders dataset. This dataset, taken from ref[34], comprises 12 features in blood samples taken from healthy people, hepatitis patients, fibrosis patients, or cirrhosis patients. After obtaining an even dataset (see Methods for details), we apply the same chaotic transformation method to our features. With chaotic transformation, we report an increase in the ridge classifier accuracy by about 11% from 81.71% to 92.82% and achieve an accuracy of 98.84% with Linear SVM (see Supplementary Material). With chaotic transformation, like previous results, classes are well-clustered and decision boundary lines are easier to draw (see Figure 2). Also, substantial improvement in the accuracies of every single class is displayed in confusion matrices (see Figure 3).
### MNIST dataset
We test our model for image classification after proving strong performance on numerical datasets. The MNIST dataset[35] contains 70,000 samples (60,000 training, 10,000 testing) of 10 handwritten digit classes. For this task, 28x28 images are flattened without any normalization, and a fast algorithm for dimensionality reduction (see Methods for details) is employed as a form of preprocessing. After reducing the dimensions of each flattened image from 1x784 to 1x7, we perform classification and set a baseline accuracy. After chaotic transformation, the accuracy of this Ridge classifier increases from 81.42% to 95.42%. Such a significant increase in accuracy highlights the effect of chaotic nonlinear transformation one more time.
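A sketch of this preprocessing and baseline stage (scikit-learn and umap-learn; the UMAP settings beyond the 7 output dimensions are assumptions, and the chaotic transformation of the reduced features, described in Methods, is omitted here for brevity):

```python
import umap                                   # umap-learn package
from sklearn.datasets import fetch_openml
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split

# Load and flatten MNIST, reduce 784 -> 7 dimensions with UMAP, then fit a
# linear Ridge classifier on the reduced features to obtain the baseline
# accuracy (before the chaotic transformation is applied).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X_reduced = umap.UMAP(n_components=7, random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.2,
                                          random_state=0)
baseline = RidgeClassifier().fit(X_tr, y_tr).score(X_te, y_te)
```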
Figure 2: Impact of chaotic nonlinear transformation on the decision boundaries and the data points. **a,b,** Decision boundary of ridge classifier in Iris dataset before (**a**) and after (**b**) the chaotic transformation. **c,d,** Distribution of datapoints of Liver dataset before (**c**) and after (**d**) the chaotic transformation.
### Largest Lyapunov exponent and learning accuracy
Next, we investigate the impact of sensitivity to initial conditions on our model's performance in machine learning tasks. We use the Largest Lyapunov Exponent (\(\lambda\), LLE)[36] to measure the pace of divergence in a chaotic system. An LLE that is larger than 0 indicates a chaotic system, and a larger LLE corresponds to faster diverging points.
In this test using the Liver Disorder dataset, we study a chaotic transformation with \(\rho\) values ranging from 1 to 100. Then, we record the best accuracy of Linear SVM. We evaluate the LLE of the Lorenz attractor with \(\rho\) values in the range from 1 to 100. When compared with a non-chaotic (\(\rho\) = 2) and a chaotic but less divergent model (\(\rho\) = 28), the
Figure 3: Confusion matrices of Ridge classifier accuracy in three classification tasks before and after chaotic transformation. The upper row (**a**, **b**, **c**) represents accuracies before chaotic transformation was applied and the lower row (**d**, **e**, **f**) represents accuracies after chaotic transformation. Confusion matrices of each dataset are represented as follows: Iris dataset (**a**, **d**), Liver Disorders Dataset (**b**, **e**), and MNIST Dataset (**c**, **f**). Confusion matrices are normalized row-wise (see Methods for details).
optimized model (\(\rho\) = 97) demonstrates higher accuracies in every single class (see Figure 4).
We also demonstrate a positive statistical relationship between the Largest Lyapunov Exponent and model accuracy after running Welch's t-test and Pearson's R-value test (see Supplementary Material for details). It should be noted that while we benefit from the aforementioned divergence at early iterations, as these dynamical systems evolve further, the attractor transforms the data to the point where it becomes unlearnable.
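For reference, a rough numerical estimate of the LLE can be obtained with a Benettin-style two-trajectory method; a minimal sketch (simple Euler steps and arbitrary initial conditions, so the printed values are only indicative):

```python
import numpy as np

def lorenz_step(state, rho, sigma=10.0, beta=8 / 3, dt=5e-4):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def largest_lyapunov(rho, n_steps=200_000, d0=1e-8, dt=5e-4):
    """Evolve two nearby trajectories, renormalize their separation to d0
    after every step, and average the logarithmic growth rate."""
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([d0, 0.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        a = lorenz_step(a, rho, dt=dt)
        b = lorenz_step(b, rho, dt=dt)
        d = np.linalg.norm(b - a)
        log_growth += np.log(d / d0)
        b = a + (b - a) * (d0 / d)      # rescale separation back to d0
    return log_growth / (n_steps * dt)

print(largest_lyapunov(rho=2.0))        # stable regime: non-positive LLE
print(largest_lyapunov(rho=28.0))       # chaotic regime: positive LLE
print(largest_lyapunov(rho=97.0))       # optimized, more divergent regime
```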
### Circuit simulations
Encouraged by our model's impressive performances, we study its analog implementations with circuit simulations using a specific circuit designed for the analog computation of the Lorenz attractor[37]. After running the circuit and performing chaotic transformation to the data, we use a decision layer like our previous tests. We tune the analog computing achieved via the circuit by changing the resistance of a resistor and adjusting the \(\rho\) value. Alternatively, a digital potentiometer can be utilized to actively set the effect of chaotic data transformation in the circuit.
Figure 4: Relation between sensitivity to initial conditions and model accuracy. **a,** Confusion matrices of the Ridge classifier on the Liver Disorder Dataset for three states of the Lorenz transformer: non-chaotic (stable), chaotic, and more chaotic. **b,** Color map visualization of the \(\rho\) value (x-axis), LLE (y-axis), and accuracy of Linear SVM on the liver disorder dataset (color values).
Our circuit simulations delivered the same performance as the numerical test results when \(\rho\) is set to 97, thus proving the feasibility of our proposed analog learning model (see Supplementary Material Table 1). In circuit simulations, we calculate the total power consumption of our analog chaotic systems. A single analog unit consumes about 350 milliwatts (see Supplementary Material for details) to perform the chaotic transformation of data.
## Discussion
The findings of this study present a promising computing platform for the field of machine learning. The study introduces a novel method that has demonstrated effectiveness in various machine learning applications. It significantly improves power consumption for image and numerical classification tasks, using a straightforward linear last layer following a chaotic nonlinear transformation. This methodology, showcased in the context of MNIST, Liver Disorder, Iris, Abalone, and Sinus Cardinal datasets, not only enhances accuracy but also maintains input data integrity and permits flexible adjustments of model parameters and architecture.
One intriguing aspect of this study is the integration of circuit simulations to validate the practicality of the analog chaotic reservoir computing paradigm. This approach also enables an in-depth examination of the relationship between the Largest Lyapunov exponent of the chaotic transformer and overall model accuracy. Moreover, the circuit architecture's speed and power efficiency on a milliwatt scale hold promise, particularly in light of contemporary concerns regarding energy consumption in machine learning applications.
The Lorenz attractor, serving as the primary chaotic transformer in this study, emerges as a noteworthy element, showcasing remarkable performance in clustering and pattern recognition. The potential for further research in related areas, particularly in image segmentation using chaotic pattern recognition, is a direction that warrants exploration. The study also highlights how optimizing the chaos parameter, \(\rho\), can lead to modest yet appreciable increases in model test accuracy. The positive correlation between model accuracy and the Largest Lyapunov Exponent raises intriguing possibilities for future research. Our method opens the door to various opportunities for further investigation, particularly in the realm of neuromorphic architectures that can harness chaos as a computational element. Similar chaotic computing techniques can be realized with silicon-on-insulator technology for chip-size footprints. Such architectures may offer innovative solutions and insights for advancing the field of machine learning.
## Methods
In our method, we compute the following set of ordinary differential equations for the Lorenz attractor to transform our data[26]:
\[\frac{dx}{d\tau}=\ -\sigma x+\ \sigma y\]
\[\frac{dy}{d\tau}=-xz+\rho x-y\]
\[\frac{dz}{d\tau}=xy-\beta z\]
where we use coordinates x, y, and z for both input and output recording, and use the parameter \(\rho\) to adjust chaos. We created a Python code using the NumPy[38] library that iterates the ordinary differential equations of chaotic strange attractors in time using the Runge-Kutta method[39, 40]. Because the attractor is three-dimensional, each variable is given to the simulation code as an (x, y, z) vector in (variable, 1.05, -variable) format. This code is then used to perform reservoir computation on the given input. Due to the high dimensionality of strange chaotic attractors, every one-dimensional predictor is transformed into a three-dimensional vector. With the exception of the Iris dataset, all the output vectors are used for the learning process. In the Iris dataset, after the transformation of all samples is complete, the Linear Discriminant Analysis method is applied to the data before the final layer to demonstrate that the decision layers and the learning process are consistent. A timestep of \(10^{-2}\) is used to simulate strange chaotic attractors. Unless stated otherwise, the coefficients of the attractors used were kept at their author-suggested values for the attractor benchmark test.
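A minimal sketch of this transformation step, assuming the classical Runge-Kutta (RK4) scheme and the (variable, 1.05, -variable) initial-condition convention described above, is given below; it is illustrative and not the authors' released code.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system used as the chaotic transformer."""
    x, y, z = state
    return np.array([-sigma * x + sigma * y,
                     -x * z + rho * x - y,
                     x * y - beta * z])

def rk4_step(state, dt, **params):
    """One classical Runge-Kutta (RK4) step."""
    k1 = lorenz_rhs(state, **params)
    k2 = lorenz_rhs(state + 0.5 * dt * k1, **params)
    k3 = lorenz_rhs(state + 0.5 * dt * k2, **params)
    k4 = lorenz_rhs(state + dt * k3, **params)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def chaotic_transform(value, n_iter, dt=1e-2, rho=28.0):
    """Map a scalar predictor to the 3D Lorenz state after n_iter steps,
    starting from the (value, 1.05, -value) initial condition."""
    state = np.array([value, 1.05, -value], dtype=float)
    for _ in range(n_iter):
        state = rk4_step(state, dt, rho=rho)
    return state  # (x, y, z) output vector fed to the decision layer

# Example: transform every feature of a (hypothetical) dataset at one iteration.
X = np.random.rand(100, 4)
X_chaos = np.array([[chaotic_transform(v, n_iter=50) for v in row] for row in X])
X_chaos = X_chaos.reshape(len(X), -1)   # flatten to (samples, 3 * features)
```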
Excluding the Sinus Cardinal data, every other dataset used was normalized using z-score normalization with a standard deviation of samples equal to one before being transformed with chaotic attractors. The Sinus Cardinal dataset is synthetically created and not normalized, with the predictor being 2048 randomly generated samples in the range [-pi, +pi] and the target values being the Sinus Cardinal function of the generated samples. The Liver dataset is imbalanced, which may result in biased learning; to prevent this, we used the Python implementation of the Synthetic Minority Oversampling Technique (SMOTE) to upsample the Liver dataset evenly. For the MNIST dataset, we flattened the images and applied dimensionality reduction using Uniform Manifold Approximation and Projection for Dimension Reduction (UMAP)[41]. UMAP reduced the predictor size to 1/112 of the original data (784 to 7). Dimensionality reduction lasted about two minutes. A ratio of 80% training set to 20% test set was used to divide the datasets into training and test sets. Only for the Iris dataset, a ratio of 70% training set and 30% test set was used. In all displayed results, datasets are split into training and test sets using random state zero.
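The preprocessing chain described above could be assembled roughly as follows, using scikit-learn, imbalanced-learn (for SMOTE), and umap-learn; the helper name and its default arguments are illustrative assumptions.

```python
from scipy import stats
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE   # SMOTE upsampling for the Liver data
import umap                                 # umap-learn, used for MNIST reduction

def preprocess(X, y, use_smote=False, umap_dims=None, test_size=0.2):
    # z-score normalisation (unit standard deviation per feature)
    X = stats.zscore(X, axis=0)

    # Optional SMOTE upsampling of the minority class (Liver Disorders)
    if use_smote:
        X, y = SMOTE(random_state=0).fit_resample(X, y)

    # Optional UMAP dimensionality reduction (MNIST: 784 -> 7)
    if umap_dims is not None:
        X = umap.UMAP(n_components=umap_dims, random_state=0).fit_transform(X)

    # 80/20 split (70/30 for Iris) with random state zero
    return train_test_split(X, y, test_size=test_size, random_state=0)
```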
For the regression tasks (Abalone and Sinus Cardinal), predictors of every sample are transformed with our code, and a simple Linear Regression algorithm is implemented as the final layer that completes the learning process. For classification tasks (MNIST, Liver Disease, and Iris), following the same transformation process, the Ridge Classification, Linear Kernel Support Vector Machine (SVM), Polynomial Kernel SVM, Gaussian Kernel SVM, K-Nearest Neighbors, and Multilayer Perceptron Classifier
algorithms are used as the last layer. All the last layers are implemented using the Scikit-learn[42] package. Unless stated otherwise in the results or methods section, all classifiers are used with their default settings in the scikit-learn package. The multilayer perceptron classifier utilized in the study comprises a learning rate of \(10^{-3}\), a tangent hyperbolic (tanh) activation function, and three hundred hidden neurons. Confusion matrices are normalized over true predictions (row-wise), and decimal numbers are rounded to the nearest whole number. The table results show the standard deviation of accuracies over 20 separate dataset splits.
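A sketch of the decision layers, with the stated non-default multilayer perceptron settings and scikit-learn defaults elsewhere, might look as follows; the dictionary structure and helper function are illustrative assumptions.

```python
from sklearn.linear_model import RidgeClassifier, LinearRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Decision layers for the classification tasks (defaults unless noted above).
classifiers = {
    "ridge":      RidgeClassifier(),
    "linear_svm": SVC(kernel="linear"),
    "poly_svm":   SVC(kernel="poly"),
    "rbf_svm":    SVC(kernel="rbf"),
    "knn":        KNeighborsClassifier(),
    "mlp":        MLPClassifier(hidden_layer_sizes=(300,), activation="tanh",
                                learning_rate_init=1e-3),
}

def evaluate(X_train, X_test, y_train, y_test):
    """Fit each decision layer on the chaotically transformed data and return
    its test accuracy."""
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}

# Regression tasks (Abalone, Sinus Cardinal) use a plain linear last layer.
regressor = LinearRegression()
```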
For the chaos and learning test, we estimated LLEs using the method proposed by Rosenstein et al.[43]. We utilized the MATLAB built-in function to estimate Lyapunov exponents. We measured the local LLE between iterations 1 and 200, as these iterations were the range used in our tests. We fixed the parameters \(\sigma\) and \(\beta\) (\(\sigma\) = 10, \(\beta\) = 8/3); we kept these parameters unchanged due to the high sensitivity of the Lorenz attractor to its initial conditions, which would otherwise complicate testing. We employed the MATLAB implementation of the Linear SVM for the chaos and learning tests, and we utilized SciPy[44] library functions for the statistical significance tests.
For the circuit simulations, we modified the schematic of the circuit that performs the analog computation of the Lorenz system[37] so that initial conditions can be supplied as inputs. We then converted this schematic to a netlist file that is fed to LTSpice (see Supplementary Material for the circuit schematic). This netlist file consists of the circuit structure and the commands that regulate the circuit simulations. As in the numerical simulations, we set the timestep of the circuit simulation to 10\(\upmu\)s and iterated the circuit one thousand times. Afterward, we created a Python code that works alongside the LTSpice simulation engine to perform parallel circuit simulations. For every variable of every sample in the dataset, this code initiates a circuit simulation after setting the initial condition to the variable's value. Results of the simulations are stored in a ".raw" format file, which requires another Python code to extract the output values. The code we created retrieves one thousand iterations of every sample from the result files and creates a matrix of output values. To complete the learning process, values are sliced iteration-by-iteration from the matrix, and the same final layers as in the numerical simulations are applied to the sliced values. We retrieved power consumption data by slicing power dissipation data from the same result files.
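A heavily simplified sketch of such a driver is shown below; the LTspice executable path, the "-b" batch flag, the "{X0}" netlist placeholder, and the assumption that a same-named ".raw" file is produced are all illustrative assumptions, and the extraction of output values from the ".raw" files is left to a separate parser.

```python
import subprocess
from pathlib import Path

# Assumed LTspice executable and a template netlist whose initial-condition
# line contains a "{X0}" placeholder; both paths are assumptions.
LTSPICE = r"C:\Program Files\LTC\LTspiceXVII\XVIIx64.exe"
TEMPLATE = Path("lorenz_template.net").read_text()

def run_sample(value, workdir=Path("runs")):
    """Write a netlist with the sample value as the initial condition and
    launch one batch-mode circuit simulation (the -b flag is assumed to
    request batch operation and to produce a .raw result file)."""
    workdir.mkdir(exist_ok=True)
    netlist = workdir / f"sample_{value:+.6f}.net"
    netlist.write_text(TEMPLATE.replace("{X0}", f"{value:.6f}"))
    subprocess.run([LTSPICE, "-b", str(netlist)], check=True)
    return netlist.with_suffix(".raw")   # parsed separately to extract x, y, z

# Example: launch one simulation per (normalised) predictor value.
# raw_files = [run_sample(v) for v in X_train.ravel()]
```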
### Data and code availability
Datasets that contain raw information are available in references[32, 33, 34, 35]. All numerical and circuit simulation test data and code are available upon reasonable request.
### Author contributions
B.K. performed simulations and tests, U.T. supervised and directed the project. All the authors participated in the data analysis and the manuscript's writing process.
### Competing interests
The authors declare no competing interests. |
2309.10640 | Characterising The Atmospheric Dynamics Of HD209458b-like Hot Jupiters
Using AI Driven Image Recognition/Categorisation | In-order to understand the results of recent observations of exoplanets,
models have become increasingly complex. Unfortunately this increases both the
computational cost and output size of said models. We intend to explore if
AI-image-recognition can alleviate this burden. We used DYNAMICO to run a
series of HD209458-like models with different orbital-radii. Training data for
a number of features of interest was selected from the initial outputs of these
models. This was used to train a pair of multi-categorisation
convolutional-neural-networks (CNN), which we applied to our
outer-atmosphere-equilibrated models. The features detected by our CNNs
revealed that our models fall into two regimes: models with shorter
orbital-radii exhibit significant global mixing which shapes the entire
atmosphere's dynamics, whereas models with longer orbital-radii exhibit
negligible mixing except at mid-pressures. Here, the initial non-detection of
any trained features revealed a surprise: a night-side hot-spot. Analysis
suggests that this occurs when rotational influence is sufficiently weak that
divergent flows from the day-side to the night-side dominate over
rotational-driven transport, such as the equatorial jet. We suggest that
image-classification may play an important role in future, computational,
atmospheric studies. However, special care must be paid to the data fed into
the model, from the colourmap, to training the CNN on features with enough
breadth and complexity that the CNN can learn to detect them. However, by using
preliminary-studies and prior-models, this should be more than achievable for
future exascale calculations, allowing for a significant reduction in future
workloads and computational resources. | F. Sainsbury-Martinez, P. Tremblin, M. Mancip, S. Donfack, E. Honore, M. Bourenane | 2023-09-19T14:21:11Z | http://arxiv.org/abs/2309.10640v1 | Characterising The Atmospheric Dynamics Of HD209458b-like Hot Jupiters Using AI Driven Image Recognition/Categorisation
###### Abstract
In order to understand the results of recent observations of exoplanets, models have become increasingly complex. Unfortunately, this increases both the computational cost and output size of said models. We intend to explore if AI-image-recognition can alleviate this burden.
We used DYNAMICO to run a series of HD209458-like models with different orbital-radii. Training data for a number of features of interest was selected from the initial outputs of these models. This was used to train a pair of multi-categorisation convolutional-neural-networks (CNN), which we applied to our outer-atmosphere-equilibrated models.
The features detected by our CNNs revealed that our models fall into two regimes: models with shorter orbital-radii exhibit significant global mixing which shapes the entire atmosphere's dynamics, whereas models with longer orbital-radii exhibit negligible mixing except at mid-pressures. Here, the initial non-detection of any trained features revealed a surprise: a night-side hot-spot. Analysis suggests that this occurs when rotational influence is sufficiently weak that divergent flows from the day-side to the night-side dominate over rotationally-driven transport, such as the equatorial jet.
We suggest that image-classification may play an important role in future, computational, atmospheric studies. However, special care must be paid to the data fed into the model, from the colourmap used, to training the CNN on features with enough breadth and complexity that the CNN can learn to detect them. By using preliminary studies and prior models, this should be more than achievable for future exascale calculations, allowing for a significant reduction in future workloads and computational resources.
Planets and Satellites: Interiors -- Planets and Satellites: Atmospheres -- Planets: HD209458b -- Methods: Data Analysis -- Methods: Artificial Intelligence
## 1 Introduction
Over the last decade models of Exoplanetary Atmospheres, particularly hot Jupiters, have become increasingly complex whilst also requiring increasingly large amounts of time to reach equilibrium (as deeper regions of the atmosphere are considered).
Briefly, this increase in model complexity (and hence computational resource requirements) has come about as a result of a desire to more accurately model atmospheres (specifically their chemical composition and radiative dynamics) in order to better match/recover/understand the high spectral resolution observations made possible by Hubble (e.g. Charbonneau et al., 2000; Ranjan et al., 2011), JWST (e.g. Jakobsen et al., 2022; Ferruit et al., 2022; Boker et al., 2022; Pontoppidan et al., 2022), and other next-generation observing missions. As a result, whilst earlier (and longer time-scale) studies ignored chemistry and/or modelled radiative dynamics in an equilibrium sense (e.g. Newtonian Cooling - Showman et al., 2008; Rauscher and Menou, 2010; Mayne et al., 2014), recently this has started to change thanks to next generation global circulation models (GCMs). These models now include explicit non-equilibrium chemistry, including cloud formation and condensation, and multi-banded, correlated-k, radiative transport schemes (see, e.g. Amundsen et al., 2016; Lee et al., 2021; Deitrick et al., 2022) which can simulate more accurate atmospheric feedback of evolving atmospheric dynamics. As a result, even when using high-performance, next generation, GCMs (such as LFRic - Adams et al., 2018; Maynard et al., 2020), both
the time required to run the model as well as the total output data generated has increased exponentially. This increase in data requirements is further exacerbated by recent studies (such as Sainsbury-Martinez et al., 2019, 2021; Sainsbury-Martinez et al., 2023) which have shown that accurately modelling both the outer atmosphere dynamics, as well as global observable features, also requires the model to include a (close to) equilibrium deep atmosphere (which is computationally expensive due to the long dynamical time-scales of deep atmospheric circulations).
Whilst some of this burden can be lifted via run-time data-pre-processing (e.g. the XIOS1 library, which can not only interpolate complex GCM grids onto a simple long-lat grid at run-time, but also perform temporal and spatial averages), this still leaves a lot of data that needs to be analysed, a time-consuming endeavour which only becomes more so when paired with the need to monitor on-going models in order to either confirm their stability or determine when they have equilibrated.
Footnote 1: [http://forge.ipsl.jussieu.fr/ioserver](http://forge.ipsl.jussieu.fr/ioserver)
Fortunately, recent developments in deep-learning (DL - a subset of general machine-learning methods, which allows for input data and trained features to be encoded at various levels of abstraction) driven image-classification provide a potential solution. Through the proper application of training data from GCM studies, it might be possible to train a DL-model to detect anticipated atmospheric features and dynamics, thus enabling a somewhat automated form of concurrent- or post-processing of simulation data. Note that we focus our investigation on image-classification models due to the maturity of algorithms in this area, driven by research into, for example, facial recognition, on-device image-recognition, and of course driverless cars.
In this paper we will explore how DL driven image-classification can aid in our understanding of exoplanetary atmospheres. More specifically, we will explore image-classification for a series of HD209458b-like models based upon those first performed in Sainsbury-Martinez et al. (2019), albeit with varying orbital radii (and hence surface irradiations and, synchronous, rotation rates) in order to expand upon the atmospheric features available for characterisation.
The structure of this work is as follows: In section 2 we introduce both the atmospheric models analysed here (which we calculate using the 3D GCM DYNAMICO - Dubos et al., 2015) as well as the deep-learning model used for the post-processing image classification (which itself is based on a multi-categorisation convolutional neural-network implemented in TensorFlow - Abadi et al., 2015; Developers, 2023). We follow this, in section 3, with our results: First we introduce some of the key features of exemplary HD209458b-like models, including the atmospheric features selected to train the DL-models. Note that we have made the decision to use computer vision to analyse our results since a) this allows us to use the visualisations for both the models and our own analysis, and b) computer vision deep-learning algorithms are highly mature. Then, after applying the trained neural-networks to our atmospheric models, we explore their performance at tagging (i.e. identifying) atmospheric features of interest. This includes a discussion of how the failure of the neural-networks to detect any features at some pressures of our slowly-rotating model is in and of itself an interesting result which led to the discovery of an uncommon/unusual atmosphere feature: a vertically coherent hot-spot on the night-side of our tidally-locked hot Jupiter. Finally we also discuss what features are easy/difficult for the network to detect, as well as how the format and breadth of the data being analysed impacts the final classifications - including factors ranging from the completeness of the training data-set, to the colourmaps used for each plot, which play an important role in transmitting information from the model to the CNN, with the choice of colourmaps playing a particularly outsized role in the detection of gradients in model data. We finish, in section 4, with concluding remarks, discussing the implications of our results and potential plans for future studies which will explore the atmospheric dynamics of variably rotating hot Jupiters in more detail.
## 2 Methods
In order to investigate how image classification can aid in the concurrent- and post-processing of exoplanetary atmospheric models (particularly hot Jupiters), we must introduce both the atmospheric models we intend to analyse (subsection 2.1) as well as the machine-learning model we will use to perform said analysis (subsection 2.2).
### Hot Jupiter Models
Following Sainsbury-Martinez et al. (2019, 2021) we use the GCM DYNAMICO to perform a series of 3D atmospheric models at different orbital radii (i.e. with different stellar-irradiation profiles and, tidally-locked, planetary rotation rates). Here we give a brief introduction to DYNAMICO, and its Newtonian Cooling approach to radiative transport (subsubsection 2.1.1) before introducing the exact model setups considered here
(Table 3), including how the outer atmosphere, equilibrium, Newtonian Cooling profiles were calculated (subsubsection 2.1.2).
#### 2.1.1 Dynamico
DYNAMICO is a highly computationally efficient GCM that solves the primitive equations of meteorology (see Vallis, 2006 for a review and Dubos and Voitus, 2014 for a more detailed discussion of the approach taken in DYNAMICO) on a spherical, icosahedral, grid (Dubos et al., 2015). It remains under development as a next-generation dynamical core for Earth and planetary climate studies at the Laboratoire de Meteorologie Dynamique and is publicly available2.
Footnote 2: DYNAMICO is available at [http://forge.ipsl.jussieu.fr/dynamico/wiki/](http://forge.ipsl.jussieu.fr/dynamico/wiki/)
In brief, DYNAMICO takes an energy-conserving Hamiltonian approach to solving the primitive equations of meteorology (see Showman et al., 2020; Mayne et al., 2019 for a discussion of the validity and limits to this approach).
Rather than the traditional latitude-longitude horizontal grid (which presents numerical issues near the poles due to singularities in the coordinate system - Williamson, 2007), DYNAMICO uses a staggered horizontal-icosahedral grid for which the total number of horizontal cells, \(N\), is defined by the number of subdivisions, \(d\), of each edge of the main spherical icosahedral3:
Footnote 3: Specifically, to generate the grid we start with a sphere that consists of 20 spherical triangles (sharing 12 vertex, i.e. grid, points) and then we subdivide each side of each triangle \(d\) times using the new points to generate a new grid of spherical triangles with \(N\) total vertices. These vertices then form the icosahedral grid.
\[N=10d^{2}+2. \tag{1}\]
In all the models considered here, we set the number of subdivisions to 30, which results in a total horizontal resolution of 9002 cells. This corresponds to an angular resolution of approximately \(2.4^{\circ}\). Additionally, at runtime, the output data is passed to XIOS which converts the horizontal-icosahedral grid onto a regular lat-long grid, with a resolution of 90x180 (i.e. \(2^{\circ}\)) in the lat/long directions respectively, in order to simplify analysis.
Vertically, DYNAMICO uses a pressure coordinate system whose levels can be defined by the user at runtime. In our models, this means 33 pressure levels that are linearly spaced in \(\log\left(P\right)\) space between \(10^{-3}\) and 200 bar.
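For concreteness, the grid size implied by Eq. (1) and the vertical discretisation described above can be reproduced with a few lines of NumPy; this is purely illustrative and independent of DYNAMICO itself.

```python
import numpy as np

d = 30                        # edge subdivisions of the icosahedral grid
N = 10 * d**2 + 2             # Eq. (1): total horizontal cells -> 9002

# 33 vertical levels, linearly spaced in log(P) between 200 bar and 10^-3 bar
p_levels = np.logspace(np.log10(200.0), np.log10(1e-3), 33)   # bar
print(N, p_levels[0], p_levels[-1])
```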
Finally, the boundaries of our simulations are closed and stress-free with zero-energy transfer (i.e. the only means of energy injection and removal are the horizontal numerical dissipation, i.e. hyperviscosity, and the Newtonian Cooling thermal relaxation scheme - see subsubsection 2.1.2).
We introduce the aforementioned horizontal numerical dissipation in order to stabilise the system against the accumulation of grid-scale numerical noise. This takes the form of a horizontal hyperdiffusion filter with a fixed hyperviscosity and a dissipation timescale at the grid-scale, \(\tau_{dissip}\), which acts to adjust the strength of the filtering (i.e. the longer the dissipation timescale, the weaker the dissipation strength).
For all the models presented here, we set a horizontal dissipation timescale of \(\tau_{dissip}=2500\) following the arguments of Sainsbury-Martinez et al. (2021) - a series of test cases with both faster and slower numerical dissipation were considered, and other than in the most extreme rotation cases (where we found that rapid dissipation led to model instabilities - hence our choice to set \(\tau_{dissip}=2500\) rather than \(\tau_{dissip}=1250\)) the results were found to be essentially \(\tau_{dissip}\) independent.
Note that this hyperviscosity is not a direct equivalent of the physical viscosity of the planetary atmosphere, but rather can be viewed as a form of increased artificial dissipation that both enhances the stability of the model and somewhat accounts for sub-grid-scale dynamics. This approach, known as the large eddy approximation, has long been standard practice in both the stellar (e.g. Miesch, 2005) and planetary (e.g Cullen and Brown, 2009) atmospheric modelling communities.
Finally, since DYNAMICO (like many other GCMs) does not include a dynamic time-step, the time-step for each model had to be manually set. For the models considered here, we simply followed prior studies and set the time-step to 120 seconds. Note that test cases with shorter time-steps were explored, with little to no difference found in the observed dynamics.
#### 2.1.2 HD209458b-like Model Atmospheres
In our HD209458b-like atmospheric models, we do not directly model either the incident thermal radiation on the day-side or the thermal emission on the night-side of the exoplanet. This is due to the high computational cost of modelling radiative dynamics directly with current-generation GCMs and the preliminary-science status of next-generation GCMs (e.g. LFRic). Instead we use a simple thermal relaxation scheme to model these radiative effects, with a spatially varying equilibrium temperature profile, \(T_{eq}\), and a radiative relaxation timescale, \(\tau_{rad}\), that increases with pressure throughout the outer atmosphere. Specifically, we model the radiation by adding a source term to the temperature evolution equation which takes the form
\[\frac{\partial T\left(P,\theta,\phi\right)}{\partial t}=-\frac{T\left(P,\theta, \phi\right)-T_{eq}\left(P,\theta,\phi\right)}{\tau_{rad}\left(P\right)}\,. \tag{2}\]
This method is known as Newtonian cooling and has long been applied within the 3D GCM exoplanetary community (e.g. Guillot and Showman 2002; Showman et al. 2008; Rauscher and Menou 2010; Showman and Polvani 2011; Mayne et al. 2014a; Guerlet et al. 2014; Mayne et al. 2014b) when a more complete/complex treatment of the radiative dynamics is unfeasible.
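As a minimal illustration of how this source term acts in practice, the sketch below applies Eq. (2) over one explicit time step; the array shapes and variable names are assumptions rather than DYNAMICO's internal representation.

```python
import numpy as np

def newtonian_cooling_step(T, T_eq, tau_rad, dt=120.0):
    """Apply the Eq. (2) relaxation source term over one explicit time step.

    T, T_eq : temperature and equilibrium temperature, shape (n_p, n_lat, n_lon)
    tau_rad : radiative relaxation timescale per pressure level, shape (n_p,)
    dt      : model time step in seconds
    """
    return T - dt * (T - T_eq) / tau_rad[:, None, None]
```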
However, in order to use this approach we must first find/set \(T_{eq}\left(P,\theta,\phi\right)\) for every orbital radius (i.e. every stellar insolation rate). At first glance, the simplest solution would be to use a 1D atmospheric model (like ATMO - Tremblin et al. 2015; Drummond et al. 2016) to calculate day-side and night-side profiles for each hot Jupiter model at each orbital radius. However, as discussed in Sainsbury-Martinez et al. (2021), this technique has a number of downsides, resulting in overly strong outer atmosphere dynamics thanks to an exaggerated day/night temperature difference (which itself results from the 1D models lacking horizontal advection which cools the day-side and heats the night-side). Similarly, we cannot use the solution proposed in Sainsbury-Martinez et al. (2021) since these models do not represent real exoplanets and hence we lack observations of the advected/observed day/night temperature contrast.
As such we must find an alternate method to define \(T_{eq}\left(P,\theta,\phi\right)\). Fortunately, since we are interested in the general dynamics of the atmosphere at different orbital radii rather than specific features in comparison to observations, we can take a slightly parametrised approach to deriving an approximate \(T_{eq}\left(P,\theta,\phi\right)\) profile from 1D atmospheric models. This approach can be understood as follows: Using ATMO we run a series of 1D models of HD209458b at 12 different latitudes between the sub-stellar point (which is the point of the tidally-locked planet's atmosphere which is closest to the host star) and the anti-stellar point (i.e. the point on the cold night-side furthest from the host star) - these models used HD209458-like stellar irradiation profiles based upon the work of Castelli and Kurucz (2003). We then determine which 1D models best match the known HD209458b temperature-pressure profile at different points in the atmosphere. Specifically, and in order to recreate the \(T_{eq}\left(P,\theta,\phi\right)\) profile from Sainsbury-Martinez et al. (2019), we try to match the day-side, equilibrium, and night-side temperature in the outer atmosphere (i.e. reproducing the observed day/night temperature contrast at \(10^{-2}\) bar) and the radiative-advective boundary temperature in the deep atmosphere (i.e. the temperature at 10 bar, where the day-night temperature difference is assumed to vanish). The result is that the temperature at 10 bar is best fit with a 1D profile with an irradiation angle of 45 degrees (i.e. halfway between the substellar point and the terminator) and the outer atmosphere day/equilibrium/night-side temperatures (at \(10^{-2}\) bar) are best fit by models with irradiation angles of 25/78/84 degrees respectively. As a consequence of this, and in order to estimate/calculate \(T_{eq}\left(P,\theta,\phi\right)\) for our HD209458b-like models at different orbital radii, specifically radii of 0.021AU and 0.192AU, we next run the aforementioned 1D models for each of the orbital radii of interest. With this, assuming that the same points in the 3D and 1D models continue to coincide, we can now extract the estimated day/equilibrium/night-side and convergence temperatures, and use them to create parametrised \(T_{eq}\left(P,\theta,\phi\right)\) profiles, as shown in Figure 1 and Table 1. At the same time, we can also calculate the radiative timescale by perturbing one of the temperature-pressure profiles (specifically the profile with a 45 degree irradiation angle) at every pressure level and measuring the time taken for the profile to settle (see Showman et al. 2008 for details about this approach). We plot the radiative timescale profile calculated by ATMO, as well as its linear in \(\log(P)\) parametrisation, in Figure 1 (right), with the values at the interpolation points given in Table 2.
Note that in the aforementioned fits, \(T_{eq}\left(P,\theta,\phi\right)\) has been split into two pressure-dependent components. This was done for compatibility with DYNAMICO's implementation of Newtonian cooling. The first is simply the 1D equilibrium profile, \(T_{eq-1D}\left(P\right)\), which we define using a series of linear in \(\log(P)\) space interpolations (with interpolation points given in Table 1). The second is the day/night temperature difference \(\Delta T(P)\)
\begin{table}
\begin{tabular}{c|c|c c c} \(a\) (AU) & \(\Delta T_{0}\) (K) & \(T_{eq}\) Profile Interp Points \(\left(\frac{T_{eq}}{1\,\mathrm{K}},\frac{P}{1\,\mathrm{bar}}\right)\) \\ \hline
0.021 & 800 & \((1300,10^{-6})\) & \((1800,10^{-2})\) & \((2600,10)\) \\
0.192 & 260 & \((360,10^{-6})\) & \((500,10^{-2})\) & \((1400,10)\) \\ \end{tabular}
\end{table}
Table 1: Equilibrium temperature profiles for our ‘hot’ and ‘cool’ HD209458b-like atmospheric models.
\begin{table}
\begin{tabular}{c|c c c} \(a\) (AU) & \(\tau_{rad}\) Profile Interp Points \(\left(\log\left(\frac{\tau_{rad}}{1\,\mathrm{s}}\right),\frac{P}{1\,\mathrm{bar}}\right)\) \\ \hline
0.021 & \((1.0,10^{-6})\) & \((2.9,10^{-2})\) & \((7.5,10)\) \\
0.192 & \((2.7,10^{-6})\) & \((4.2,0.1)\) & \((7.6,10)\) \\ \end{tabular}
\end{table}
Table 2: Newtonian cooling radiative timescale profiles for our ‘hot’ and ‘cool’ HD209458b-like atmospheric models.
which takes the form:
\[\Delta T(P)=\left\{\begin{array}{ll}\Delta T_{0}&\mbox{if $P<10^{-2}$}\\ \Delta T_{0}\,\frac{\log\left(P/10\right)}{\log\left(10^{-2}/10\right)}&\mbox{if $10^{-2}<P<10$}\\ 0&\mbox{if $P>10$},\end{array}\right. \tag{3}\]
Taken together, \(T_{eq}\left(P,\theta,\phi\right)\), is then given by a position dependent combination of the two profiles:
\[T_{eq}\left(P,\theta,\phi\right) =T_{eq-1D}\left(P\right)-\frac{\Delta T(P)}{2}\] \[+\Delta T(P)\cos\left(\theta\right)\max\left[0,\cos(\phi-\pi) \right]\,. \tag{4}\]
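The parametrised forcing can be assembled in a few lines; the sketch below uses the 'hot' model entries of Table 1 together with Eqs. (3) and (4). It is an illustrative reimplementation rather than the code used for the simulations, and the placement of the sub-stellar point at \(\phi=\pi\) follows Eq. (4).

```python
import numpy as np

# Table 1 ('hot' model, a = 0.021 au): (T_eq [K], P [bar]) interpolation points
INTERP_T = np.array([1300.0, 1800.0, 2600.0])
INTERP_P = np.array([1e-6, 1e-2, 10.0])
DT0 = 800.0   # day/night contrast at low pressure [K]

def t_eq_1d(P):
    """Piecewise-linear (in log10 P) 1D equilibrium profile from Table 1."""
    return np.interp(np.log10(P), np.log10(INTERP_P), INTERP_T)

def delta_t(P):
    """Eq. (3): day/night contrast, constant above 1e-2 bar and decaying
    (linearly in log P) to zero at 10 bar."""
    P = np.asarray(P, dtype=float)
    frac = np.clip(np.log10(P / 10.0) / np.log10(1e-2 / 10.0), 0.0, 1.0)
    return DT0 * frac

def t_eq(P, lat, lon):
    """Eq. (4): spatially varying equilibrium temperature.
    lat, lon in radians; the sub-stellar point sits at lon = pi."""
    day_side = np.maximum(0.0, np.cos(lon - np.pi))
    return t_eq_1d(P) - 0.5 * delta_t(P) + delta_t(P) * np.cos(lat) * day_side
```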
### Deep-Learning Setup
In order to autonomously analyse the outputs of our HD209458b-like atmospheric models, and hence enable the run-time detection of key/interesting atmospheric features, we decided to make use of a convolutional neural network, i.e. a CNN (specifically a Keras CNN, implemented as part of TensorFlow). The approach we take here is based upon the work of Lagerquist et al. (2019), who used a deep-learning model to identify large-scale (i.e. synoptic-scale) fronts in meteorological data, modified so as to be able to detect multiple different events (tags) in each image. A more in-depth discussion of the structure of the lightweight CNN considered here is given in the Appendix; here we give a broader overview.
Briefly, a convolutional neural network is a type of DL algorithm (which themselves are a subset of machine-learning data analysis tools) that is particularly well suited to image recognition, and whose design was inspired by how neurons in the visual cortex behave (Lecun et al., 1998a). The key feature which separates a CNN from other image classification algorithms is, as the name would suggest, the inclusion of convolution layers. These are neural-network layers whose primary purpose is the extraction and isolation of both low-level and high-level features from input data, resulting in a reduction in the size of the data set and thus allowing for the final, reduced, image classification to be performed by more conventional, that is to say fully-connected, neural-network layers. This reduction in the size of the data-set to be worked on is crucial because, for the type of images we are interested in classifying, the resolution means that each node (the equivalent of a neuron) in a fully-connected neural-network layer would contain far too many weights (i.e. learnable parameters) to fit in even the largest GPU's memory. Consider, for example, a fully connected neural-network layer analysing a 128x128 colour image: to maintain complete connection, each _individual_ node would have to contain 49152 weights, ballooning the memory and computational footprint of the network.
Briefly, this decrease in data dimensionality takes place via alternating convolution layers (which do the actual feature analysis and detection) with pooling layers in between (which reduce the dimensionality of the data). Note that, as the dimensionality of the data is reduced, the complexity of the upcoming convolutional layers is typically increased. See Figure 2 for an example of how the layers in a typical OCR CNN are arranged, Figure 13 for a flowchart showing how the layers in our CNN are arranged, and Fukushima & Miyake (1982); LeCun et al. (1989); Lecun et al. (1998b); Krizhevsky et al. (2012); O'Shea & Nash (2015); Albawi et al. (2017) for a more in-depth discussion of both the origins of CNNs, as well as their specialised neural-network layers.
But how does the CNN model know what features to extract? Via training. Training, specifically supervised training, is the process by which a 'blank' neural network is provided a data-set in which the expected tags (i.e. classifications) for each image are known - in our case the total training data set consisted of at least 50 images for each tag, although in some cases those images were artificially generated, via an over-sampling approach, in order to expand the available data set (see subsection 3.3 for details of how and why this was done). Note that, due to the configuration of our models, the bottom 10 (P\(>\sim\) 25 bar) and top 10 (P\(<\sim\) 10\({}^{-2}\) bar) layers were not considered when creating the training set, the former because the deep atmosphere had not equilibrated at the time, and the latter because we already had a wealth of features representing the very outer atmosphere's dynamics (an unadvected day-side hotspot which we refer to as locked). The network is
\begin{table}
\begin{tabular}{c|c|c} Quantity (units) & Description & ‘Hot’ HJ & ‘Cool’ HJ \\ \hline \hline dt \((s)\) & Time-step & 120 \\ \(N_{z}\) & No. of Pressure Levels & 33 \\ \(d\) & No. of Sub-divisions & 30 \\ \(N^{\circ}\) & Angular Resolution & 2.4\({}^{\circ}\) \\ \(P_{top}\) (bar) & Pressure at Top & \(7\times 10^{-3}\) \\ \(P_{bottom}\) (bar) & Pressure at Bottom & 200 \\ \(g\) (\(ms^{-1}\)) & Gravity & 8.0 \\ \(R_{HJ}\) (m) & Planetary Radius & \(10^{8}\) \\ \(a\) (au) & Orbital Radius & 0.021 & 0.192 \\ \(\Omega\) (\(s^{-1}\)) & Angular Rotation Rate & \(7\times 10^{-5}\) & \(2.54\times 10^{-6}\) \\ \(c_{p}\) (\(jkg^{-1}K^{-1}\)) & Specific Heat & 13226.5 \\ \(\mathcal{R}\) (\(jkg^{-1}K^{-1}\)) & Ideal Gas Constant & 3779.0 \\ \(T_{init}\) (K) & Init Adiabatic T @ 10b & 1400 & 2600 \\ \end{tabular}
\end{table}
Table 3: Parameters for HD209458b-like simulations
then evolved, via a sequence of forward and backward data propagation that uses a gradient descent algorithm to optimize a stochastic differentiable manifold (i.e. a surface of differential equations) to solve a very non-linear problem, until it is able to reproduce the set of known tags with its own set of assigned tags. For all the models discussed here this training was complete after approximately 30 epochs (i.e. iterations) with either no improvement or a regression in model accuracy (i.e. the ability of the model to reproduce the training data's identified tags without being guided) if the model was evolved further - a phenomenon known as over-fitting. For the CNN(s) discussed here our initial set of hand selected/identified data was split into separate training and validation pools, with 80% of the tagged images being used to train the CNN and 20% of images used for validation. The results of this validation procedure can be seen in Figure 3, which reveals that the trained CNN(s) are able to recover approximately 90% of expected features in the validation data set after 30 epochs of evolution, after which accuracy regressed.
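For reference, a minimal multi-label tagger of this kind can be written in a few lines of Keras; the layer sizes, input resolution, and tag count below are illustrative assumptions rather than the architecture detailed in the Appendix.

```python
from tensorflow.keras import layers, models

N_TAGS = 5  # e.g. asymmetric, banded, locked, butterfly, jet

def build_tagger(input_shape=(128, 128, 3), n_tags=N_TAGS):
    """Multi-label CNN: sigmoid outputs so several tags can fire per image."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_tags, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",   # independent per-tag loss
                  metrics=["binary_accuracy"])
    return model

# 80/20 train/validation split, ~30 epochs before over-fitting sets in.
# model = build_tagger()
# model.fit(x_train, y_train, validation_split=0.2, epochs=30)
```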
After this rather computationally expensive training process, we were left with a pair of multi-classification CNNs, one for thermal features and one for wind
Figure 1: Equilibrium temperature-pressure (left) and radiative timescale (right) profiles for the Newtonian Cooling schemes used in our exemplary ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. Note that in the above plots, the background solid lines correspond to the 1D ATMO models upon which the equilibrium profiles (dashed lines) are based/extrapolated.
features, that could rapidly analyse the results of new/evolved simulations. We discuss this analysis, as well as potential limitations/pitfalls (and how to overcome them) below.
## 3 Results
In order to explore the use of machine learning and image classification for the autonomous detection and characterisation of hot Jupiter atmospheres, we ran a large series of synchronously rotating, HD209458b-like, hot Jupiter atmospheric models with orbital radii between 0.012au and 0.334au. This orbital radius range was selected to span from near the inner edge of the distribution of known exoplanets (where irradiation is strong and synchronous rotation is rapid) out towards the upper limit of the highly irradiated and synchronously rotating regime (which occurs when tidal effects are weak enough that the synchronisation time approaches the system age). Within this range, orbital radii were selected to be a whole fraction/multiple of the orbital radius of HD209458b.
Early outputs of these variable orbital-radii models were then manually tagged with atmospheric features of interest (see subsection 3.2/subsection 3.3), after which this set of tagged data was used to train the CNNs. Finally, after the simulations had been run further and the outer atmospheres (\(P<1\)bar) had equilibrated, the full time-series of temporally averaged (in order to remove small-scale fluctuations) horizontal wind and temperature maps was fed into the CNNs and a series of multi-categorisation maps was generated - here we focus on this process, and its results, for two HD209458b-like models in different rotation/insolation regimes, one 'hot', with a short orbital radius and hence strong surface irradiation, and one 'cool', with a longer orbital radius and hence weak surface irradiation. We also briefly explore the ability of our trained CNN to detect atmospheric features for the adiabatically initialised model of Sainsbury-Martinez et al. (2019) - a model which is expected to contain examples of all the atmospheric features of interest. Finally, we also discuss the importance of selecting proper training/validation data for a CNN, including a discussion of how the initial categorisation of our hot Jupiter atmospheres led to the discovery of an uncommon/unusual, but robust, atmospheric feature on the night-sides of our 'cool' regime HD209458b-like atmospheric models.
### HD209458b-like Atmospheric Models
For the sake of brevity we have chosen to focus our analysis and discussion on two HD209458b-like models which, together, exhibit all the key features found in the broader set of models explored. Specifically we have chosen one model in each of the primary dynamical regimes observed: a short-orbital radius, \(a=0.021\)au, model (labelled 'hot') in which the dynamics are strongly
Figure 3: Convergence tests for our multi-categorisation CNN, i.e. thermal features CNN, showing the convergence of the loss function during both training (green) and validation (orange). The training is considered complete/optimised after 30 iterations since this is the point at which overfitting starts to occur, as represented by a regression in validation accuracy (i.e. increase in loss).
Figure 2: An example CNN layer structure used for modern optical character recognition. Note the use of stacked, and hence non-linear, convolution layers which are separated by dimensionality-reducing pooling layers, and which eventually feed into fully connected neural-network layers which perform the final image classification. Reproduced from O’Shea and Nash (2015).
influenced by both the strong stellar irradiation and significant rotational effects (i.e. strong Coriolis forcing), and a long-orbital radius, \(a=0.192\)au, model (labelled 'cool') in which the stellar insolation is significantly weakened, and the dynamics are much less rotationally influenced, leading to a weaker equatorial jet and a more divergence driven day/night wind.
The difference in dynamics between these two primary regimes of interest can easily be identified when exploring the zonal-mean (i.e. longitudinal-mean) zonal wind and meridional (i.e. vertical and latitudinal) streamfunction (where the meridional streamfunction is a measure of mass circulations on the meridional/latitudinal plane). We plot both of these quantities in Figure 4 for our two exemplary HD209458b-like atmospheric models.
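For readers wishing to reproduce these diagnostics from the regridded XIOS output, the sketch below computes the zonal mean and the standard meridional mass streamfunction, \(\psi(\theta,P)=\frac{2\pi R\cos\theta}{g}\int_0^P\bar{v}\,\mathrm{d}P'\); the array layout and variable names are assumptions rather than the exact analysis scripts used here.

```python
import numpy as np

G = 8.0          # gravity [m s^-2], Table 3
R_PLANET = 1e8   # planetary radius [m], Table 3

def zonal_mean(field):
    """Longitudinal mean of a (n_p, n_lat, n_lon) field."""
    return field.mean(axis=-1)

def meridional_streamfunction(v, p_levels, lat):
    """Meridional mass streamfunction psi(lat, P).

    v        : meridional wind, shape (n_p, n_lat, n_lon), top of atmosphere first
    p_levels : pressure levels in Pa, increasing from top to bottom, shape (n_p,)
    lat      : latitudes in radians, shape (n_lat,)
    """
    v_bar = zonal_mean(v)                                   # (n_p, n_lat)
    # cumulative pressure integral from the top of the atmosphere downwards
    integral = np.concatenate([np.zeros((1, v_bar.shape[1])),
                               np.cumsum(0.5 * (v_bar[1:] + v_bar[:-1])
                                         * np.diff(p_levels)[:, None], axis=0)])
    return 2.0 * np.pi * R_PLANET * np.cos(lat)[None, :] / G * integral
```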
Starting with the zonal wind (left) we find that, at steady-state, our 'hot' model (top) maintains a strong, deep, super-rotating (i.e. easterly) equatorial jet which is braced by significantly weaker, mid-latitude, westerly counterflows. This is reminiscent of a mix of the wind structure found when modelling HD209458b and SDSS1411b with DYNAMICO (Sainsbury-Martinez et al., 2019, 2021) and matches with the leading theory for the driving mechanism of super-rotating jets in hot Jupiters: the pumping of easterly angular momentum from mid-latitudes to the equator by standing Rossby and Kelvin waves, resulting in easterly acceleration at the equator and westerly acceleration at mid-latitudes
Figure 4: Longitudinally and temporally averaged zonal wind (left) and meridional circulation streamfunction (right) for both our ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. In the zonal wind maps, positive quantities correspond to eastward flows, whilst in the meridional circulation profiles clockwise circulations are shown in red and anti-clockwise in blue. Additionally the meridional circulation streamfunction is plotted on a log scale.
(Showman and Polvani, 2011; Tsai et al., 2014).
Moving onto the slower rotating, and more weakly radiatively driven, 'cool' model (bottom), the zonal-mean zonal wind profile reveals that whilst a super-rotating 'jet' appears to have once again formed, it is both significantly more vertically constrained and slower than the jet found in either the 'hot' model or prior HD209458b models. We will explore why these differences occur in more detail below (Figure 4), but briefly it can be linked to the relative strength of divergence driven (i.e. day-night) in comparison to rotationally driven (i.e. Rossby and Kelvin wave driven) flows - as divergent/rotational flows become stronger/weaker, respectively, the zonal-mean zonal jet weakens since divergent flows have little to no east/west preference, leading to them broadly cancelling out in the zonal-mean, and rotational winds are key to driving a super-rotating jet due to their role in angular momentum transport.
In turn these very different zonal winds drive very different meridional circulations. In our 'hot' model, the meridional circulation streamfunction reveals a pair of narrow clockwise (northern hemisphere - negative latitudes) and anti-clockwise (southern hemisphere) circulation cells which drive a downflow at the equator and, in conjunction with weaker high-latitude circulation cells, upflows at low to mid latitudes - specifically those latitudes at which the zonal wind transitions from being primarily easterly to westerly. In turn, as discussed in Sainsbury-Martinez et al. (2019), this circulation drives a significant vertical heat flux leading to the heating of the deep, advective, adiabat, and hence radius inflation (see Figure 9a) thanks to the increased internal entropy (Tremblin et al., 2017; Sainsbury-Martinez et al., 2019).
On the other hand, in our 'cool' model, the circulation is rather different. Not only are the circulation cells much wider latitudinally, with each cell occupying an entire hemisphere, but the circulation direction of the cells changes with pressure. In the outer atmosphere we have a clockwise/anti-clockwise circulation cell in the northern/southern hemisphere, respectively, which drives a downflow at the equator (with balancing upflows at the poles), whereas at higher pressures (down to 10 bar) we find that the sense of the circulation has changed, resulting in a slight upflow at the equator. The pressure at which this change in circulation regime occurs matches the pressure at which the zonal-mean zonal jet vanishes, which reinforces the conclusion of Tremblin et al. (2017); Sainsbury-Martinez et al. (2019) that flows associated with the equatorial jet are responsible for driving vertical flows that advect mass/temperature/enthalpy. This also explains why we see little to no heating of the deep, advective, adiabat (Figure 9b), and hence would expect to observe little to no radius inflation for a planet at a similar orbital radius and stellar insolation.
#### 3.1.1 A Note About Deep Atmosphere Equilibrium
It is important to note that, unlike Sainsbury-Martinez et al. (2019) and Sainsbury-Martinez et al. (2021), the deep atmosphere circulation profiles considered here reveal significant time-variability. This is simply a result of the deep atmosphere only being initialised close to its adiabatic steady-state and the high computational cost of running the models long enough for the dynamically slow deep atmospheres to completely thermally equilibrate - although this should not affect our results here as our focus is on the use of AI for analysis, and not the steady state dynamics in the deep atmosphere, the detailed analysis of which we leave to future studies.
However, whilst attempts have been made to solve the problem of the high computational-cost of resolving the steady-state deep atmospheres, these solutions come with their own problems. For example, Schneider et al. (2022) attempted to find the steady-state deep atmosphere of the ultra-hot-Jupiter WASP-76b by calculating the observed cooling at 650 bar in their model, including its rate of change, and then using this to extrapolate towards the steady-state T-P profile everywhere in the atmosphere (that is to say at pressures lower than 650 bar), which they find to be 'cold', implying that potential temperature advection is not driving any deep heating. However, this approach has a number of caveats which mean that extrapolation is generally a bad idea: To start, the pressure dependence of radiative heating, and radiative dynamics more generally, means that the 650 bar cooling rate is unlikely to be representative of the cooling rate at lower pressures. Specifically, at lower pressures, one needs to assess whether the atmosphere has reached equilibrium with the radiative dynamics or with the advection of potential temperature - an assessment which cannot be made via the dynamics at 650 bar, especially since even this region has not yet reached equilibrium. Furthermore, the model was initialised significantly hotter than the expected, inflated, steady state, resulting in an enhanced deep cooling rate driven by the deep atmosphere's need to expel excess energy. In turn, this deep cooling will drive very different deep circulations (as seen by Sainsbury-Martinez et al. (2019) when they modified the equilibrium temperature of the outer atmosphere of a previously equilibrated model), which take
time to evolve/settle back to deep heating once the atmosphere has cooled - in fact, evidence of this shift to deep heating can be seen in Figure 2 of Schneider et al. (2022): the extrapolated model shows signs of heating that is slowly pushing deeper with time, driven by potential temperature advection, and which will likely lead to a steady state somewhere between the two models at equilibrium. As such, we advise against estimating the deep atmosphere's temperature-pressure profile via extrapolation, and instead suggest that future studies focus upon next-generation GCMs which will be efficient enough to model the equilibrium state of the deep atmosphere within a reasonable, computational, timescale, and which will benefit greatly from pairing with a concurrent AI-analysis model.
### Initial Data Tags, Training, and Results
Whilst interesting, the aforementioned zonal-mean dynamics are not what we intend to explore with our neural-networks. Instead, we plan to look at the pressure dependent horizontal wind and temperature profiles that combine to give these zonal-mean flows, and which contain many more interesting, and unique, features for the CNN(s) to detect.
Initially, we decided to train our CNNs to detect the presence of: day-side hot-spots in which the zonal winds have caused significant horizontal thermal advection (and whose shape is typically referred to as a butterfly in the hot Jupiter community - e.g. Figure 5b/c), longitudinally homogenised and latitudinally symmetric thermal bands (e.g. Figure 5d), day-side hot-spots in which radiative effects dominate over advective dynamics (i.e. an irradiative hot-spot which has not been significantly advected by horizontal winds - e.g. Figure 5e and Figure 5a to a lesser extent), latitudinally asymmetric thermal structures (see, e.g. Figure 5h), and, finally, the presence of an equatorial zonal jet (see the arrows which trace the horizontal wind in, for example, Figure 5b/c), although, as discussed in subsubsection 3.4.1, this final categorisation, and hence the wind-based CNN more generally, did not pan out for a number of reasons.
This training was performed using early outputs from our complete simulation data-set, with orbital radii of between 0.012au and 0.334au, which was hand labelled/tagged such as to produce at least 50 examples of each feature of interest.
Once the training was complete, but before we explored how rotation impacts the detected atmospheric features, we first investigated how our trained neural-networks would behave when applied to a model which has manually been confirmed to contain examples of all of the current atmospheric features of interest, and from which no training or validation data was extracted.
Specifically, we consider the adiabatically initialised HD209458b model of Sainsbury-Martinez et al. (2019). As shown in Figure 6, and as anticipated, the thermal CNN successfully identifies all of the expected atmospheric features, whilst on the other hand the zonal jet, and the zonal wind more generally, proved to be highly intractable (subsubsection 3.4.1). The identified features include: an outer atmosphere which is dominated by the radiatively driven (tidally-locked) day-side hot-spot, a mid-atmosphere which shows signs of an advected hot-spot (i.e. a thermal butterfly), and a deep atmosphere which shows a mix of asymmetric and banded thermal structures, likely as a result of the ongoing deep heating in the model as it warms from its slightly cooler than steady-state adiabatic start.
Note that, by testing against a model from which no training or validation data was extracted, we are able to test the portability/generality of our trained CNN(s). However, it is important to note that the model we consider was also calculated using DYNAMICO, using a very similar setup for HD209458b. For future studies, we suggest that a more diverse range of models should be considered when testing generalisability. This could include testing the CNN(s) on the outputs of models run with different GCMs or on the outputs of models calculated using the same GCM, but for rather different objects, such as the brown Dwarf models of Sainsbury-Martinez et al. (2021).
Having confirmed that our CNN(s) can recover all the thermal atmospheric features for which they were trained, even when applied to a model from which no training data was taken, we next move on to exploring how rotation influences the detected dynamics. To do this, we consider our two exemplary models, which we ran until their outer atmospheres had reached equilibrium: i.e. for \(\sim 40\) Earth-years of simulation time. We then applied the trained CNNs to the full data-set, using a temporal averaging window of between 100 and 150 outputs in order to reduce the effects of small-scale oscillations on the final characterisation. Note that we used broader averaging windows for the HD209458b-like model shown (Figure 6) due to the longer time-series available for that model.
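Schematically, the post-processing loop looks like the following sketch, in which `frames` (the rendered horizontal maps), `model` (the trained tagger), and the 0.5 detection threshold are placeholders rather than the exact pipeline used here.

```python
import numpy as np

THRESHOLD = 0.5   # tag probability above which a detection is recorded

def categorise_run(frames, model, window=125):
    """Slide a temporal-averaging window over the rendered horizontal maps
    and tag each pressure level with the trained CNN.

    frames : rendered maps, shape (n_t, n_p, H, W, 3)
    model  : trained multi-label CNN returning per-tag probabilities
    Returns tag probabilities with shape (n_windows, n_p, n_tags).
    """
    results = []
    for start in range(0, frames.shape[0] - window + 1, window):
        mean_maps = frames[start:start + window].mean(axis=0)   # (n_p, H, W, 3)
        results.append(model.predict(mean_maps, verbose=0))      # (n_p, n_tags)
    return np.array(results)

# detections = categorise_run(frames, model) > THRESHOLD
```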
As was the case for the zonal-mean dynamics, we find that the horizontal dynamics, and hence the detected atmospheric features, changes significantly with rotation rate. Specifically, we find little crossover in the features detected in our exemplary 'hot' and 'cool'
models:
Starting with the 'hot' regime, whose detected characteristics are shown in Figure 7, we find that the dynamics are dominated by three features: at very low pressures (which are slightly higher near initialisation, before advection kicks in) the thermal structure is dominated by a (tidally) locked day-side hot-spot. However this changes as we move towards higher pressures, with the advective time-scale becoming comparable to (and eventually faster than) the radiative time-scale, leading to significant horizontal advection and hence a transition to a thermal butterfly. Note that this detection of a butterfly tag does eventually vanish at later times, but as we discuss in subsubsection 3.4.1, this is a problem with our initial training data set not properly accounting for the influence of very rapid rotation on horizontal thermal advection. Finally, the deep atmosphere is generally dominated by latitudinally symmetric, and longitudinally homogeneous, thermal bands, which indicate that it is generally well homogenised longitudinally, likely thanks to the zonal jet. Note, however, that we do occasionally find that the CNN assigns the asymmetric tag to the outer deep atmosphere - this is due to ongoing heating of the lowest pressure regions of the deep atmosphere.
On the other hand, analysis of our 'cool' regime model, as shown in Figure 8a/b, reveals a rather different set of identified dynamics. Here we again find that the outer atmosphere is dominated by a tidally-locked, day-side, hot-spot, but unlike in our 'hot' model, this does not transition into a thermal butterfly with increasing pressure. Instead, we find that a) the locked-profile extends significantly deeper than in the 'hot' model, except near initialisation where the radiative-forcing time-scale dominates the dynamics, and b) this detection transitions into a region of non-detection within the mid-atmosphere (i.e. between pressure levels 20/21 (\(\sim\) 1bar) and 29 (\(\sim\) 0.05bar)). The former effect can likely be explained by the weaker role that zonal winds play in the atmospheric dynamics of 'cool' regime models (Figure 4c and subsection 3.4), whilst we explore the latter lack of detected features in more detail in subsection 3.3. Finally, in the deep atmosphere, we find that detected dynamics are generally dominated by weakly asymmetric thermal bands. Note that, unlike in the 'hot' regime, these asymmetric structures are not being driven by deep heating; instead, analysis (subsection 3.4) suggests that it occurs because the deep atmosphere is rather quiescent (Figure 4c), with very weak vertical heat transport, leading to persistent, but weak, horizontal
Figure 5: Temporally averaged zonal wind (arrows) and temperature profile (map) at four different pressure levels (0.0026 bar - left, 0.016 bar - middle left, 0.2 bar - middle right, and 10 bar - right) for both our ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. Each profile has been labelled with the (class of) tag assigned to it by the CNN.
temperature gradients in the deep atmosphere.
Note that the differences between the detected features in Figure 8a and b are due to the preliminary nature of the CNN models used for the former, which lacked thermal inversion detection (see below - subsection 3.3) and used a different colormap and averaging scheme to the final CNN models presented in subsection 2.2.
We explore why these differences in detected atmospheric features occurs in subsection 3.4.
### Updated Data Tags: A Night-Side Thermal Inversion
In an effort to understand the region of non-detection identified for our 'cool' atmospheric models (Figure 8a), we elected to explore this region of the atmosphere in more detail, and if appropriate, update our thermal CNN with what we find. As shown in Figure 5f/g, our analysis reveals that, once we are deep enough for advective transport to start to dominate over radiative forcing (via Newtonian Cooling), rather than a butterfly-like thermal structure on the day-side, as found in the classical hot Jupiter regime (e.g. Figure 5c), we instead find that a hotspot has formed on the cold night-side, slightly to the west of the anti-stellar point (see Figure 5f). This was of particular interest since its presence may have significant implications for both the atmospheric dynamics and observable features of more weakly irradiated Jupiters (so called 'warm' Jupiters). Furthermore, it came as somewhat of a surprise since prior studies (e.g. Sainsbury-Martinez et al.
Figure 6: AI multi-categorisation map for the HD209458b atmospheric model of Sainsbury-Martinez et al. (2019). Here we show the categories detected (with the opacity of each detection point representing the strength of the detection) at each pressure level (where an increase in pressure level corresponds to a decrease in actual pressure as we move to higher altitudes) against time (where t=0 corresponds to solid-body, adiabatic, initialisation, and each later point corresponds to the centre of the temporal mean). The categories in question correspond to the detection of a north-south asymmetric thermal structure in blue, a banded (i.e. horizontally homogenised) thermal structure in orange, a radiative-dominated thermal structure (driven by tidal-locking) in green, a butterfly thermal structure (i.e. eastward equatorial advection flanked by slight, off-equator, westward advection) in red, and an equatorial (wind) jet in purple. Note that this model was run at lower resolution than the other models considered here, which has slightly impacted the ability of the CNN to discriminate between different atmospheric features.
Figure 7: AI multi-categorisation map for our ‘hot’ HD209458b-like atmospheric model. Here we show the categories detected (with the opacity of each detection point representing the strength of the detection) at each pressure level (where an increase in pressure level corresponds to a decrease in actual pressure as we move to higher altitudes) against time (where t=0 corresponds to solid-body, adiabatic, initialisation, and each later point corresponds to the centre of the temporal mean). The categories in question correspond to the detection of a north-south asymmetric thermal structure in blue, a banded (i.e. horizontally homogenised) thermal structure in orange, a radiative-dominated thermal structure (driven by tidal-locking) in green, a butterfly thermal structure (i.e. eastward equatorial advection flanked by slight, off-equator, westward advection) in red, a vertical thermal inversion in purple (increasing T with P) and brown (decreasing T with P), and an equatorial (wind) jet in pink.
other HD209458b-like models (e.g. Figure 7), as well as our initial training tags suggested that weak (due to the slower rotation rate and weaker surface irradiation at higher orbital radii) butterfly-like features should have been detected at these pressure levels. One of the main potential impacts of this night-side hot-spot is its associated thermal inversion, which can be clearly seen in longitudinally sliced temperature-pressure profiles of the 'cool' model (Figure 9b). Briefly, as shown here, a thermal inversion occurs when the temperature profile switches from cooling as the pressure decreases to warming with decreasing pressure - this is similar to the stratosphere on Earth and can have significant effects on observed atmospheric dynamics and chemistry (see, for example, Hubeny et al., 2003; Fortney et al., 2008; Spiegel et al., 2009; Zahnle et al., 2009; Madhusudhan & Seager, 2010; Molliere et al., 2015; Beatty et al., 2017; Lothringer et al., 2018; Gandhi & Madhusudhan, 2019 for a discussion of thermal inversions in highly-irradiated hot Jupiters).
Of course, since this feature was not anticipated, our initial training data did not include it, hence the blank regions on the multi-categorisation maps. As such, and in order to better explore how wide-spread this feature is, we updated our training data-set (and the underlying thermal CNN) to include a pair of additional tags designed to cross-correlate temperature structures on the night-side and hence detect hot-spots: one indicating a night-side hot-spot which cools as the pressure decreases ('inversion-down') and one indicating the opposite ('inversion-up').
However, adding these new tags to the CNNs was not a simple matter since we had but a few examples of this phenomenon to use as training data (Figure 10a). To resolve this, and thus properly train our CNNs to detect night-side hot-spots/thermal inversions, we turned to interpolative oversampling.
Figure 8: AI multi-categorisation map(s) for our ‘cool’ HD209458b-like atmospheric model. Here we show the categories detected (with the opacity of each detection point representing the strength of the detection) at each pressure level (where an increase in pressure level corresponds to a decrease in actual pressure as we move to higher altitudes) against time (where t=0 corresponds to solid-body, adiabatic, initialisation, and each later point corresponds to the centre of the temporal mean) for two different multi-categorisation CNNs, one without thermal inversion detection (left) and one with (right). Note that the CNN that lacked thermal inversion detection also differs from the CNN used throughout the rest of this work in a number of other regards, being an earlier iteration of the final model. This includes changes to the colour map, resolution, and averaging period of the input data. The categories in question correspond to the detection of a north-south asymmetric thermal structure in blue, a banded (i.e. horizontally homogenised) thermal structure in orange, a radiative-dominated thermal structure (driven by tidal-locking) in green, a butterfly thermal structure (i.e. eastward equatorial advection flanked by slight, off-equator, westward advection) in red, a vertical thermal inversion (in the RHS figure) in purple (increasing T with P) and brown (decreasing T with P), and an equatorial (wind) jet in purple (LHS) or pink (RHS).
That is to say, we used interpolation to create a series of artificial tagged images from our limited sample of hand-tagged examples. For the 'inversion-up' tag, we generated three artificial tagged images per input image, whilst for the 'inversion-down' tag, which was significantly more numerous in our hand-labelled data than the 'inversion-up' tag, we merely generated a single artificial tagged image per input image. Consequently, we now had at least 50 examples of each new feature (Figure 10b) which could be used to update our multi-categorisation thermal CNN to detect the new features of interest.
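As an illustration of this interpolative oversampling, the following minimal sketch blends random pairs of same-tag images to create artificial training examples; the function name, the simple linear blend, and the weight range are our own illustrative assumptions rather than the exact scheme used for our training data.

```python
import numpy as np

def oversample_by_interpolation(images, n_new_per_image, seed=0):
    """Create artificial training images by linearly interpolating between
    random pairs of hand-tagged examples that share the same tag.

    images          : array, shape (n_samples, ny, nx, n_channels)
    n_new_per_image : number of artificial images to generate per real image
    """
    rng = np.random.default_rng(seed)
    artificial = []
    for img in images:
        for _ in range(n_new_per_image):
            partner = images[rng.integers(len(images))]
            w = rng.uniform(0.25, 0.75)               # interpolation weight
            artificial.append(w * img + (1.0 - w) * partner)
    return np.asarray(artificial)

# e.g. three artificial images per 'inversion-up' example, one per 'inversion-down':
# up_augmented   = oversample_by_interpolation(up_images, 3)
# down_augmented = oversample_by_interpolation(down_images, 1)
```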
The results of this updated analysis can be seen, for our 'cool' regime model, in Figure 8b, which reveals that the mid-atmosphere region of non-detection has been replaced with both 'inversion-up' and 'inversion-down' tags, which, when taken together, indicate the peak of the night-side hot-spot (and also the pressure level at which the hottest point shifts from the day-side to the night-side). Further analysis of our models reveals that this feature is robustly present in, but unique to, the 'cool' regime. For example, our 'hot' regime model reveals no detected inversion tags, as expected from our visual analysis. But why does this feature only occur in our 'cool' models? As we discuss below, it appears to be linked to the relative influence of rotation on the atmospheric dynamics and thermal energy transport.
### How Atmospheric Dynamics Impacts the Observed Features
As previously alluded to, the HD209458b-like atmospheric models we consider here fall into two distinct regimes depending upon their (tidally-locked) orbital radii (i.e. surface irradiation and rotation rate).
Figure 9: Latitudinally and temporally averaged (at the equator) T-P profiles for our ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. Each plot includes profiles from 6 different longitudes, ranging from the sub-stellar point (whose equilibrium profile is shown in red) eastwards to the anti-stellar point (whose equilibrium profile is shown in blue) on the night-side. Note that we have excluded the deep atmospheres (\(P>10\)bar) from these plots since they have not fully equilibrated.
Figure 11: Helmholtz decomposition of the radially and temporally averaged horizontal wind for both our ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. To the left we plot the divergent component (\(\mathbf{u}_{div}\)) of the Helmholtz decomposition, in the middle, the rotational component (\(\mathbf{u}_{rot}\)), and on the right the eddy component (\(\mathbf{u}_{eddy}=\mathbf{u}_{rot}-\langle\mathbf{u}_{rot}\rangle\)) of the rotational component of the wind. A video version of this plot is available online: [https://www.youtube.com/watch?v=-7AX5_owg](https://www.youtube.com/watch?v=-7AX5_owg) and [https://www.youtube.com/watch?v=toDpbqper2e4](https://www.youtube.com/watch?v=toDpbqper2e4)
Figure 10: Distribution of (thermal) training/validation data before (top) and after (bottom) applying an oversampling technique to the inversion data-set so as to generate artificial training data. The total count of items in each training data set is shown in grey. Note that we do not include the amount of training data used for the jet detection for two reasons: a) unlike all the other features that were trained on the thermal structure, the jet tag was trained on the horizontal wind, and b) the jet detection model did not make it into our final analysis due to problems with detecting highly symmetric structures (subsubsection 3.4.1).
At short orbital radii (i.e. the 'hot' regime), the zonal-mean atmospheric dynamics are dominated by a strong, super-rotating, equatorial jet which extends deep into the atmosphere (Figure 4a), driving significant downflows (Figure 4b) that result in strong vertical mixing, and hence deep heating (and radius inflation - Tremblin et al., 2017; Sainsbury-Martinez et al., 2019). On the other hand, at longer orbital radii (i.e. the 'cool' regime), this is no longer the case; instead, the zonal-mean zonal jet is significantly weaker and shallower (Figure 4c), and thus associated with significantly reduced vertical mixing (Figure 4d), which in turn drives little to no deep heating.
However, in order to fully understand the dynamics identified by our CNNs, we must look at more than just the zonal-mean dynamics. Specifically we are interested in the differences in horizontal wind between the 'hot' and 'cool' regimes, and how these differences affect horizontal and vertical energy (enthalpy) transport, leading to the various atmospheric features identified by our CNNs, particularly the night-side hot-spot found in the 'cool' regime.
To that end, we next explore the Helmholtz decomposition of the zonal wind, a decomposition which has previously been used to study both the atmosphere of Earth (Dutton, 1986) and hot Jupiters (Hammond and Lewis, 2021). Briefly, a Helmholtz decomposition can be used to split the horizontal wind at each pressure level, \(\mathbf{u}=(u,v)\), into divergent (i.e. 'vorticity free') and rotational (i.e. 'divergence free') components (Dutton, 1986):
\[\mathbf{u} =\mathbf{u}_{div}+\mathbf{u}_{rot} \tag{5}\] \[=-\mathbf{\nabla}\chi+\mathbf{k}\times\mathbf{\nabla}\psi, \tag{6}\]
where \(\chi\) is the velocity potential function, \(\psi\) is the velocity streamfunction, and both can be linked to the divergence \(\delta\) / vorticity \(w\) directly:
\[\nabla^{2}\chi =\delta \tag{7}\] \[\nabla^{2}\psi =w. \tag{8}\]
Additionally, in order to further isolate the equatorial zonal jet from other wind dynamics, we further split the rotational component, \(\mathbf{u}_{rot}\) into a zonal-mean component \(\mathbf{u}_{zonal}\) and an eddy component \(\mathbf{u}_{eddy}\):
\[\mathbf{u}_{zonal} =\langle\mathbf{u}_{rot}\rangle \tag{9}\] \[\mathbf{u}_{eddy} =\mathbf{u}_{rot}-\mathbf{u}_{zonal}, \tag{10}\]
where \(\langle\rangle\) indicates the zonal-mean.
As for what these components represent: \(\mathbf{u}_{div}\) represents flows that diverge from the hot-spot on the day-side and converge on the cold night-side, forming a closed cycle when combined with the upwelling below the day-side hot-spot and the downwelling on the night-side; \(\mathbf{u}_{rot}\) represents dynamics driven by angular momentum transport via stationary Rossby and Kelvin waves - in typical hot Jupiters these standing waves transport angular momentum from mid-latitudes to the equator, resulting in slight westward flows at mid-latitudes and a super-rotating jet at the equator (Showman and Polvani, 2011); and finally, as mentioned above, \(\mathbf{u}_{eddy}\) and \(\mathbf{u}_{zonal}\) are used to split \(\mathbf{u}_{rot}\), allowing us to explore the transport by standing waves in cases where the presence of a super-rotating equatorial jet would completely dominate the dynamics.
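As a concrete illustration of Equations 5-10, the sketch below splits a horizontal wind field into these components using a spectral projection; note that, for simplicity, it assumes a doubly periodic Cartesian grid rather than the spherical geometry of the actual model output, and the function name and arguments are our own.

```python
import numpy as np

def helmholtz_decompose(u, v, dx, dy):
    """Split horizontal wind (u, v), on a doubly periodic (ny, nx) grid with
    x = longitude (axis 1) and y = latitude (axis 0), into divergent,
    rotational and eddy components (cf. Equations 5-10)."""
    ny, nx = u.shape
    ikx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)      # i * k_x
    iky = 2j * np.pi * np.fft.fftfreq(ny, d=dy)      # i * k_y
    IKX, IKY = np.meshgrid(ikx, iky)
    lap = IKX**2 + IKY**2                             # spectral Laplacian
    lap[0, 0] = 1.0                                   # avoid 0/0 for the mean mode

    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = IKX * uh + IKY * vh                       # spectral divergence (delta)

    # Divergent ('vorticity free') part, obtained by projection -- equivalent
    # to solving the Poisson equation for the velocity potential chi.
    u_div = np.real(np.fft.ifft2(IKX * div_h / lap))
    v_div = np.real(np.fft.ifft2(IKY * div_h / lap))

    # Rotational ('divergence free') part is the remainder.
    u_rot, v_rot = u - u_div, v - v_div

    # Eddy part: rotational wind minus its zonal mean.
    u_eddy = u_rot - u_rot.mean(axis=1, keepdims=True)
    v_eddy = v_rot - v_rot.mean(axis=1, keepdims=True)
    return (u_div, v_div), (u_rot, v_rot), (u_eddy, v_eddy)
```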
In Figure 11, we plot \(\mathbf{u}_{div}\) (left), \(\mathbf{u}_{rot}\) (centre), and \(\mathbf{u}_{eddy}\) (right), radially averaged over the outer atmosphere, for both our 'hot' (top) and 'cool' (bottom) atmospheric models. We also, online, give a 3D view of each component of the horizontal wind for our 'hot' atmospheric model\({}^{\text{b}}\).
Starting with said 'hot' atmospheric model, we can clearly see that by magnitude the rotational component (\(|\mathbf{u}_{rot}|=929ms^{-1}\)) significantly dominates over the divergent component (\(|\mathbf{u}_{div}|=191ms^{-1}\)). When combined with the difference between the rotational and eddy/zonal components of the wind, this suggests that the main driving force behind the horizontal dynamics seen here is, as expected, the presence of a strong equatorial jet. Further, as suggested by Showman and Polvani (2011), and confirmed by both \(\mathbf{u}_{rot}\) and \(\mathbf{u}_{eddy}\), the super-rotating equatorial jet appears to be driven by standing Rossby and Kelvin waves. More specifically, we find an m=1 standing wave pattern which has become significantly tilted from west to east by a combination of both a strong Coriolis effect at high latitudes (thanks to rapid rotation), and equatorial, eastwards, angular momentum transport. Finally, the divergent component of the wind plays a much more minor role, only transporting energy from the day-side hot-spot towards the terminators and poles, with a slight, rotationally influenced, preference for easterly flows, and little to no transport on the cold night-side. This wind balance is typical of hot Jupiters, and leads to the primarily equatorial heat transport that we discuss below, heat transport which is responsible for many of the atmospheric features detected by our thermal feature CNN, in particular the advected butterfly.
On the other hand, the wind dynamics in the 'cool' regime are rather different. Not only does the divergent component of the wind (\(|\mathbf{u}_{div}|=238ms^{-1}\)) dominate over the rotational component (\(|\mathbf{u}_{rot}|=119ms^{-1}\)), but the difference between the rotational component and the
eddy wind is small (\(|\mathbf{u}_{eddy}|=87ms^{-1}\)). Taken together, this suggests that, unlike in the 'hot', or even classical hot Jupiter, regimes, the primary driver of this regime's horizontal dynamics is a divergent flow of material from the day-side hot-spot towards the colder night-side which dominates over a much weaker equatorial jet. However, the mechanism by which this very weak equatorial jet forms remains the same: \(\mathbf{u}_{rot}\) and \(\mathbf{u}_{eddy}\) both reveal an m=1 standing wave pattern which is slightly shifted west of the sub-stellar point, and which, thanks to the relatively weak influence that rotation has on the dynamics, is essentially untilted. Finally, it is also interesting to note that the divergent component of the wind converges on the night-side just west of the anti-stellar point, nearly exactly where the hotspot can be found in Figure 5e/f/g - as we discuss below, this is not a coincidence.
Given that our two models are in very different dynamical regimes, with very different horizontal wind structures, we next explore how these differences are reflected in the horizontal and vertical energy transport. More specifically, in Figure 12, we explore the zonal, latitudinal, and vertical enthalpy flux transport, \(E_{(u,v,w)}=\rho\,c_{p}\,T\,\mathbf{u}_{(u,v,w)}\) (where \(\rho\) is the density, \(T\) is the temperature, and \(c_{p}\) is the specific heat), at select pressures which were chosen in order to emphasise the differences in outer-atmosphere energetics/dynamics.
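For reference, the enthalpy fluxes themselves are straightforward to compute from the model outputs; a minimal sketch (with variable names and the default specific heat value chosen purely for illustration, not taken from the paper) is:

```python
import numpy as np

def enthalpy_fluxes(rho, T, u, v, w, cp=1.3e4):
    """Zonal, meridional and vertical enthalpy fluxes, E = rho * cp * T * (u, v, w).
    cp is the specific heat capacity (J kg^-1 K^-1); the default here is only a
    placeholder for a hydrogen-dominated atmosphere."""
    h = rho * cp * T          # enthalpy density
    return h * u, h * v, h * w
```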
Starting in the 'hot' regime, we find that, at all but the lowest of pressures where the radiative time-scale is very short (and hence advection is suppressed), the zonal enthalpy transport (e.g. Figure 12a/b) is dominated by strong eastward advection at the equator and significant westward advection off-equator. This advection can explain the thermal structure identified in the outer atmosphere by the CNNs: zonal advection leads to a shift in the day-side hotspot towards the east at the equator, and towards the west at mid-latitudes, leading to the well known butterfly-like structure (Figure 5b/c) identified by the CNNs (Figure 7).
Moving on to the latitudinal enthalpy advection (Figure 12c), we find that it is strongly correlated with the eddy component of the Helmholtz wind decomposition. For example, by comparing this transport with Figure 11c, we see that the poleward and equatorward transport aligns well with the tilted standing wave pattern. This includes a peak in latitudinal enthalpy transport that occurs near the equator and west of the sub-stellar point, and which corresponds to a similar convergence/divergence point found in the eddy wind component. This correlation is reinforced by the relative magnitude of the latitudinal advection in comparison to the zonal transport - much like the eddy wind is much slower on average than the rotational wind (which includes the zonal jet), the latitudinal heat transport is much weaker than the zonal advection.
Interestingly, this difference in energy transport remains as we move into the deep atmosphere. Here, whilst we do see some slight signs of an asymmetric thermal structure around 10 bar, deeper than this, the strong vertical heat transport and significant longitudinal mixing have resulted in a deep atmosphere in which _zonal_ temperature differences have almost completely vanished (Figure 5d), leaving bands of temperature that vary latitudinally, with an enhancement in temperature near the poles thanks to the off-equator downflows (Figure 12d). Note: It is possible, and likely (Sainsbury-Martinez et al., 2019), that with enough time, the deep atmosphere will eventually mix further, reducing the latitudinal temperature differences and resulting in a deep atmosphere that is mostly horizontally homogenised, and hence may be identified by the CNNs as asymmetric if any small scale, residual, temperature variations are present.
An example of this vertical enthalpy transport, in the equilibrated outer atmosphere, can be seen in Figure 12d, where red/blue flows indicate upward/downward transport respectively. Here we see a slight upwelling on the day-side which can be linked to the hot-spot, surrounded by downwelling near the terminators, on the night-side, and at higher latitudes. This transport extends deep into the atmosphere, growing stronger as the pressure (and hence also the density of the material being transported) increases, explaining the observed deep heating (Figure 9a).
We next turn to the 'cool' regime, in which the wind dynamics, and hence also the enthalpy transport, differ significantly from the 'hot' regime. This is best illustrated by the outer atmosphere zonal and latitudinal enthalpy flux, which we plot in Figure 12e/g respectively. Here we find that the mean zonal and latitudinal enthalpy fluxes are approximately equal, and are very strongly shaped by the divergent component of the wind (Figure 11d) - with clear transport occurring from the hot day-side to the cooler night-side, both zonally and latitudinally across the poles, converging just west of the anti-stellar point, exactly where the deeper night-side hot-spot, and thermal inversion, is found (e.g. Figure 5e/f/g). This divergence driven transport also explains why we do not detect/observe the day-side butterfly that is typically associated with hot Jupiter atmospheres: due to the relatively low influence that
rotation plays on the dynamics, the day-night energy transport is primarily, isotropically, divergent, rather than the highly anisotropic (equatorial) transport found for the 'hot' regime. As such the temperature structure in the outer atmosphere remains largely 'locked' (and maybe a little broadened) until enough energy has been transported by the divergent flows to form a night-side hotspot (hence changing the tag applied by the CNN - see Figure 8b).
Moving deeper into the atmosphere, we start to see the impact that the night-side hot spot has on the mid-atmosphere's horizontal enthalpy transport. Here we find that the zonal-enthalpy flux (Figure 12f), much like the rotational component of the zonal wind (Figure 11e), is dominated by an eastwards flow from the night-side hotspot towards the relatively cool day-side. This suggests that the divergent flows are dominant in the outer atmosphere, where the day-night forcing is strong, and 'rotational' flows are dominant in the mid-atmosphere, where the forcing has switched to being driven by the night-side hotspot, reinforced by the eddy winds (Figure 11f).
Finally we come to the vertical enthalpy transport (see for example, Figure 12h), which is both weaker than the vertical energy transport found in the 'hot' regime, as well as being more vertically confined. More specifically, we find that the main region of strong downward enthalpy transport is focused on the night-side around the hot-spot, and like the hot-spot itself, this downward transport does not extend into the deep atmosphere. Instead we find that vertical mixing is weak in the deep atmosphere, helping to explain the lack of observed deep heating (Figure 9b) and hence radius inflation. In fact, mixing is generally weak in all directions in the deep atmosphere, although it is slightly stronger in the zonal direction than vertically or latitudinally. This helps to explain the slight asymmetry seen in the deep atmosphere (and identified by the thermal CNN). Small amounts of thermal energy are transported to the deep atmosphere by vertical mixing (in an essentially random way since the flow is so weak), leading to slight temperature variations that become smoothed out longitudinally but not latitudinally - hence leaving us with a weak asymmetric thermal structure, as seen in Figure 5h. Note that, unlike in the 'hot' regime above, the particularly slow dynamical timescales of the deep atmosphere found here mean that we do not expect complete horizontal homogenisation to occur on any reasonable timescale.
#### 3.4.1 Difficulties, Warnings, and Advice for Using CNN(s) to Analyse Atmospheric Dynamics
Whilst the above results show some promising outcomes of pairing deep-learning neural networks with (atmospheric) simulations, this process is not without its own problems and limitations, which we discuss here.
One particularly important factor that can impact the ability of CNN(s) to detect atmospheric features is the choice of colourmap used to visualise the data since this can significantly impact the ability of the neural-network to detect the edges and gradients that are key to encoding detectable features during the training process. As part of this work, we tested training the thermal, and wind, CNN with every available colourmap included as part of MatPlotLib, using training data that was, other than the colourmap, identically tagged. Whilst the results were similar for many of the colourmaps chosen, a few stood out both positively and negatively. Note that the effectiveness of each set of variable colourmap networks was evaluated using their confusion matrices, allowing for a direct comparison of the ability of each set of models to reproduce the set of known tags.
For example, the colourmap 'Jet', which used to be the default map used in MatPlotLib, and which remains a staple of astrophysics research to this day, resulted in particularly poor detection and characterisation of atmospheric features. This occurs for exactly the same reason why it was replaced as the default colourmap in MatPlotLib: due to rapid colour and brightness changes (i.e. the colourmap is not perceptually uniform), 'Jet' tends to both exaggerate and suppress small changes/gradients in the underlying data - whilst this can be an advantage when quickly inspecting data visually (but even this is in doubt), it also significantly hampers the ability of the multi-categorisation CNN to detect and isolate atmospheric features, either through mis-training, because the feature is hidden, or even because the colourmap has warped the morphology of the feature by exaggerating/suppressing gradients.
Interestingly, even the modern replacements, such as Viridis, Inferno, or Plasma, which were designed to solve this non-perceptual-uniformity problem, are also poor choices for use with a CNN. This can be linked to their use of a large array of colours, which again can act to disguise, morph, and exaggerate/suppress features, as demonstrated by the best colourmaps for use with image-recognition algorithms: monochromatic colourmaps.
Simply put, by using a monochromatic colourmap (such as Grays, Reds, Gist_gray, etc), simulations with the same atmospheric feature (but with different magnitudes and horizontal gradients, due to, for example,
differences in surface irradiation strength) are more likely to produce outputs that are visually near identical, making it easier for a CNN to both learn and detect said features. Furthermore, the use of monochromatic colourmaps can slightly reduce the memory footprint of the CNN by reducing the initial data-set-size by a factor of three, from RGB, to greyscale, thus, theoretically, enabling either faster data processing whilst the simulations run, or larger and more complex CNNs (with either more complex layers, or more layers generally) to be developed in the same memory footprint. However, as detailed in the Appendix, the CNN(s) we consider here are already fairly lightweight when compared to those used for, say, facial recognition, and so the possibilities for computational savings are minimal - for example, the 'high-resolution', full RGB, network only takes 4 minutes to train on four K80s (which themselves are relatively old). Furthermore, testing the training of our thermal CNN using normalised raw data rather than images produced with matplotlib resulted in only a small reduction (0.3%) in the size of the neural network. This occurred at the cost of introducing significant problems with the optimiser, leading to significant oscillations during the validation phase of training the network. Whilst fixing such issues should be possible, it would require modifications to the structure of the network and individual layers, complicating issues for a non-AI expert, all for a minimal reduction in computational cost and very limited potential improvement over a well-chosen colourmap.
However this all comes at the cost of making the data harder to interpret visually/manually. The compromise, in terms of generating data that is suitable for use with both CNNs and humans, albeit at the cost of not reducing the memory footprint, is to use a diverging colourmap: i.e. a colour map with only two primary colours which diverge from a neutral colour, such as white, at either a fixed point (e.g. zero wind) or at the data mean, as used in, for example, Figure 5. These colourmaps result in similar accuracy to monochromatic colourmaps for the thermal CNN when trying to recover training data, whilst also generating plots with which humans can easily visualise atmospheric features/dynamics.
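To illustrate the kind of image preparation discussed above, the snippet below renders a single 2D field with either a monochromatic or a diverging colourmap at a fixed pixel size suitable for CNN input; the resolution, colourmap names, and normalisation are illustrative choices and may differ from those used to generate our actual training data.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_for_cnn(field, fname, cmap="RdBu_r", centre=None, size=(360, 180)):
    """Save a (lat, lon) field as a borderless image for CNN input.
    cmap='Greys' gives a monochromatic map, 'RdBu_r' a diverging one; if centre
    is given (e.g. 0 for winds, or the field mean for temperature) the colour
    scale is made symmetric about it."""
    if centre is not None:
        half = np.max(np.abs(field - centre))
        vmin, vmax = centre - half, centre + half
    else:
        vmin, vmax = field.min(), field.max()

    dpi = 100
    fig = plt.figure(figsize=(size[0] / dpi, size[1] / dpi), dpi=dpi)
    ax = fig.add_axes([0, 0, 1, 1])       # fill the figure: no ticks, labels or margins
    ax.axis("off")
    ax.imshow(field, cmap=cmap, vmin=vmin, vmax=vmax, origin="lower", aspect="auto")
    fig.savefig(fname, dpi=dpi)
    plt.close(fig)

# e.g. render_for_cnn(T_map, "T_level20.png", cmap="RdBu_r", centre=float(T_map.mean()))
```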
Note that, whilst the above results are fairly robust, we advise that future studies, particularly those that intend to use multi-categorisation CNNs to analyse data unsupervised (i.e. during a simulation's runtime), should fine-tune both the colourmaps as well as the data boundaries in order to ensure suitability before
Figure 12: A selection of maps showing the temporally-averaged Zonal, Meridional, or Vertical Enthalpy transport at selected pressure levels for our exemplary ‘hot’ (top) and ‘cool’ (bottom) HD209458b-like atmospheric models. Here, positive (red) fluxes represent eastward/polar/outwards flows for the zonal/meridional/vertical enthalpy flux maps respectively. Note: We include two maps for the zonal enthalpy transport, at different pressure levels, in order to emphasise how the zonal advection changes with both height and orbital radius.
committing to the run.
In addition to ensuring that the data is prepared in such a way as to be suitable for use with a CNN, we also have to ensure that the features being searched for are detectable and are properly trained for.
This proved to be an issue here. As discussed in subsection 3.2, our original plan included using a separate CNN to explore the presence of zonal jets/horizontal winds in our simulation data. This proved to be highly intractable for a variety of reasons. To start, we initially tried to train the CNN by using the overlaid arrow quivers shown in Figure 5. However, this ended up being nearly impossible since, at the resolution of the CNN's analysis, the arrows were essentially undetectable. Furthermore, even when we adjusted the plots to make the quivers more visible/bolder, or increased the resolution of the initial layers of the network, small changes in the horizontal wind structure led to mis/failed detection of the jet - it appears that the localised nature of the quivers not only made the CNN very sensitive to changes in their structure but also meant that the model struggled to detect large scale structure in the first place. It is possible that a significantly larger training data set, larger quivers, higher resolution images, or a different kernel size might solve this issue.
Rather than testing the aforementioned ideas, which we leave to future work, we instead tried to avoid the quiver issue completely by replacing them with horizontal wind maps, much like those used with the thermal CNN. This greatly improved the ability of our CNN to detect the zonal winds, but it also revealed two additional limitations. Firstly the zonal wind is not as horizontally homogenised as the zonal-mean zonal wind would suggest, thus making it trickier for the CNN to detect unless very carefully trained with a wide variety of zonal wind structures, and secondly, even when carefully trained, the CNN can have a difficult time detecting zonal jets due to their highly symmetric structure (similar difficulties are also faced when searching for the banded structure with the thermal CNN, however it is less of a problem there as alternative structures, which may lead to misidentification, are not so prevalent due to the relative quiescence of the deep atmosphere). More specifically, our CNNs can find it more difficult to isolate a band of temperature/wind than a more complex structure, such as a night-side hot-spot or a thermal butterfly. This is because said complex structures have more features (edges) for the neural-network to latch onto and learn, increasing the complexity of the neural-network and hence also its ability to internalise features. As a consequence of all of the above, we have not focused on the detection of zonal winds in this work, and we suggest that future studies which wish to implement data analysis via neural networks should focus on more detailed and/or derived atmospheric features, ranging from thermal structures to complex wind dynamics, such as vortices or (e.g. by using a Helmholtz decomposition) standing waves (although here we again caution against using quivers due to their localised nature).
Another example of the possible difficulties faced when using our thermal CNN to search for atmospheric features is the non-detection of the thermal butterfly structure at later times in our exemplary 'hot' regime model (Figure 7). Comparisons between the training data, which was based upon early outputs of our full simulation set, and the steady-state outer atmosphere thermal profiles reveal the simple reason why this is the case: the strong equatorial jet and off-equator, low-latitude, counterflows have resulted in a butterfly structure that is much more longitudinally extended/stretched than any profiles included in the initial training data. As such, our trained CNN was unable to properly identify the evolved/advected feature. This serves as an important caveat of CNNs and neural networks in general: they are only as effective as their training data, and are not able to identify new or highly evolved/warped features. However, as previously discussed, this lack of detection itself can also be an interesting result, identifying new, uncommon, or unexpected features. Yet this is a cold comfort when trying to use a CNN, or AI more generally, to concurrently process an ongoing simulation - here the best solution is a broad training set which contains multiple examples of all the features expected. For example, now that we have steady-state training data for a wide variety of HD209458b-like atmospheric models at different orbital radii, and with equilibrium outer atmospheres, it should be possible to build CNNs to accurately, and quickly, analyse future simulations within, or near, this parameter space on the fly.
## 4 Discussion and Conclusion
In this paper, we have explored the role that AI driven image-classification, via the use of convolutional neural-networks, can have in the concurrent- and post-processing of simulations of planetary atmospheres, specifically HD209458b-like hot Jupiters at various orbital radii.
### Model and AI (CNN) Setup
To that end, we started by running a series of HD209458b-like atmospheric models with different
orbital radii, and hence different surface irradiation and synchronous planetary rotation rates. The orbital radii considered here varied between 0.012au and 0.334au, but for the sake of brevity we chose to focus on models at two different orbital radii, with dynamics that are characteristic of their contemporaries. Specifically, we focused on one model with an orbital radius of 0.021au, which we refer to as our 'hot' model, and one with an orbital radius of 0.192au, which we refer to as our 'cool' model. These simulations are based on those first presented in Sainsbury-Martinez et al. (2019), but modified such that their outer atmosphere temperature forcing is derived from 1D models calculated using ATMO at every orbital radius of interest. From the first outputs of these simulations, we then selected and labelled a number of thermal and wind atmospheric features that we wished for our AI model to detect/characterise. These features included the presence of a tidally locked day-side hot-spot, a horizontally advected day-side hot-spot (better known as a butterfly-like thermal structure), a latitudinally asymmetric deep atmosphere, a deep atmosphere which is fully longitudinally homogenised, and in which latitudinal temperature variations remain small (which we refer to as banded), and the presence of a super-rotating equatorial jet, although this latter tag/feature proved to be difficult to identify.
This hand labelled data-set was then fed into a pair of multi-categorisation convolutional neural-networks - i.e. a neural-network which is particularly suited to image recognition tasks (see subsection 2.2 for a more detailed description) and detecting multiple features non-exclusively. Once trained, this neural network was then applied to the full time-series outputs of our exemplary simulations (which had been run to steady-state in the outer atmosphere - i.e. at lower pressures).
### Identified Features, or Lack Thereof, with CNNs
Applying the trained CNN(s) to our exemplary models revealed our first key result: at higher orbital radii, i.e. in the 'cool' regime, the thermal CNN multi-classification map (which shows the identified features versus both pressure and time) contained a region with no identified atmospheric features. The resulting analysis of this model, as well as other 'cool' regime models, revealed a mid-pressure atmosphere (i.e. \(0.05\text{bar}\to 1\text{bar}\)) that was behaving in a rather unusual way: The hottest region of the atmosphere had shifted from the irradiated day-side to the night-side, ending up just west of the anti-stellar point. Once this data was added to the thermal CNN (using interpolative oversampling to generate artificial training data, a necessity due to the small sample size available as this feature only occurs in the 'cool' regime and over a limited pressure range), we were able to successfully identify this hot-spot, and its associated thermal inversion, in all models with an orbital radius \(>0.11\text{au}\) - i.e. all 'cool' regime models.
This is just one example of the differences observed between models in the 'cool' and 'hot' regimes, differences which extend throughout the model atmospheres and which appear to be highly linked to the mixing/transport/circulation regime that the models fall into. Only our analysis of the low-resolution and adiabatically initialised HD209458b model of Sainsbury-Martinez et al. (2019) revealed every original feature (i.e. excluding the night-side hot-spot), which was to be expected since it was HD209458b's dynamics that formed the basis for the original features selected for detection.
Starting in the 'cool' regime, the identified features tended to correspond to weaker mixing and isotropic horizontal energy transport - that is to say that the outer atmosphere's dynamics remain highly radiatively forced (despite the relatively weak stellar irradiation), the mid-atmosphere is dominated by isotropic (i.e. divergent) energy transport from the day-side to the night-side, which eventually leads to the formation of a night-side hot-spot and associated thermal inversion, and the deep atmosphere is highly quiescent with weak mixing and deep heating that allow for slight latitudinal temperature gradients to develop and be maintained, which the CNN identifies as an asymmetric thermal structure.
On the other hand, for models that fall into the 'hot' regime, we found that the identified dynamics correspond to strong zonal energy transport and significant horizontal, and vertical, mixing. For example, we find that in the outer atmosphere, once the radiative time-scale is long enough (i.e. not at very low pressures), the day-side hot-spot becomes significantly horizontally advected by both the equatorial jet as well as the associated mid-latitude counterflows, resulting in the well known butterfly-like thermal-structure on the day-side, with the exact shape depending upon the strength of the rotational influence, and hence the jet structure. This strong advection/mixing extends to the deep atmosphere, where not only do we find significant heating thanks to vertical potential temperature (enthalpy) transport, but also that the deep horizontal advection, which is strongest in the longitudinal direction, has resulted in strong zonal homogenisation paired with a weak latitudinal temperature gradient - this is the
temperature structure we refer to as banded.
### Understanding the Different Dynamical Regimes
In order to try and understand the differences between these two dynamical regimes, and also how the unusual night-side hot-spot and thermal inversion forms in the 'cool' regime, we next explored the wind and energy transport (specifically enthalpy flux) in more detail.
Starting with the wind, we use a Helmholtz decomposition to split the horizontal wind into its divergent (i.e. 'vorticity free'), rotational (i.e. 'divergence free'), and eddy (i.e. perturbations to the rotational wind) components. This reveals that, as expected, the wind dynamics differ significantly between the 'cool' and 'hot' regimes.
In the 'hot' regime, strong stellar irradiation and rapid surface rotation mean that the wind dynamics are dominated by the rotational component of the wind. This is in agreement with Showman and Polvani (2011), who suggest that the rotational component of the wind is correlated with standing Rossby and Kelvin waves that in turn drive a strong, highly advecting, equatorial jet: here the rotational component of the wind reveals both a strong equatorial jet as well as a significant, m=1, standing wave pattern.
On the other hand, in the 'cool' regime where the irradiation is weaker and the surface rotation is slower, the winds fall into a completely different dynamical regime: The divergent component of the wind dominates over the rotational component, although signs of the latter remain, exhibiting a weak, but stable, m=1, standing wave pattern that fails to drive a significant equatorial jet. As for the dominant divergent wind, this flows, isotropically, from the sub-stellar point to the unirradiated night-side, converging just west of the anti-stellar point, i.e. exactly the same location as the mid-atmosphere night-side hot-spot.
The horizontal enthalpy transport reveals that this is not a coincidence. For instance, our 'cool' model reveals zonal and latitudinal enthalpy transport that is highly shaped by the divergent component of the wind, and hence, also converges just west of the anti-stellar point. This explains the formation of the night-side hot-spot at mid-pressures: In the very outer atmosphere, radiative forcing is too strong for significant advection to occur; however, as we move deeper, the radiative time-scale lengthens and day-night advection starts to occur (isotropically thanks to the wind structure), leading to the formation of the night-side hotspot. This hot-spot then dominates the mid-atmosphere's energy transport, leading to the unusual scenario that is night-day heat transport. However, relative to the transport seen in HD209458b, or the 'hot' regime, this transport is weak and does not extend into the deep atmosphere. This is reflected in the vertical enthalpy transport profile, which reveals that vertical advection is focused on maintaining the night-side hot-spot, leading to little to no deep heating and a quiescent deep atmosphere. This link between the night-side hot-spot and the divergent wind is further reinforced by more rapidly rotating 'cool' regime models: As the influence that rotation has on the atmosphere rises, the location of both the hot-spot, as well as the divergent wind convergence point, shifts westwards, likely as a result of the slight tilt introduced to the divergent wind by off-equator Coriolis forces.
A complementary result is found in the 'hot' regime, although here, as discussed above, the enthalpy transport is controlled by the rotational wind, resulting in transport that is primarily driven by the zonal-jet, with weaker off-equator transport linked to the m=1 standing wave pattern. As such, we find significant horizontal advection near the equator, either eastwards where the jet dominates, or westwards off-equator where the standing-wave driven transport is strongest. Taken together, this transport results in the formation of the well known butterfly-like thermal structure. At higher latitudes, we also find evidence for the influence that Coriolis forces have on enthalpy transport, with a significant westward tilt developing as we move towards more rapid rotation. As for the off-equator longitudinal and latitudinal enthalpy transport, this is highly correlated with the same Rossby and Kelvin standing wave pattern that drives the equatorial flows, including the significant westwards tilt as we move to higher latitudes, a tilt which is highly dependent upon the planetary rotation rate. Note however that, at very high rotation rates, off-equator heat transport is suppressed as the Coriolis force suppresses higher latitude winds. Finally, as suggested by Sainsbury-Martinez et al. (2019), the strong zonal wind, and hence zonal enthalpy transport, that develops in this model results in significant vertical heat transport that extends from the outer atmosphere all the way to the bottom of the simulation domain, increasing the entropy of the internal adiabat, and hence driving significant radius inflation (as shown in Figure 9a).
### Limitations and Advice for Future Pairings of CNNs with Atmospheric Models
Whilst our above results show some promising outcomes of pairing neural networks with atmospheric
simulations, they also reveal some of the limitations of this approach, as well as potential pitfalls and avenues for improvement.
To start, CNNs cannot detect any features for which they are not robustly trained. For example, our initial training and validation data set did not include the uncommon night-side hot-spot found in the 'cool' regime, and thus, when we fed the data into our networks, no tags were assigned to the mid-atmosphere. More subtly, whilst we did train our thermal CNN to detect butterfly-like features, at rapid rotation rates the strong zonal jet and latitudinally compressed mid-latitude counterflows resulted in a butterfly-like thermal structure that was notably different from those included in our training set. Consequently, the thermal CNN was only able to assign the butterfly tag in our exemplary 'hot' model near initialisation, before advection had significantly changed its structure.
Both of these examples emphasise how important a robust training set is when using neural-networks (and CNNs in particular) to analyse data. This is doubly true when trying to use a neural-network to concurrently analyse and process a running model, since a result that is only revealed through further study of an unusual region of non-identification occurs too late to be of use in deciding which runs to continue, which to discard, or which have reached equilibrium.
However this does not mean that CNNs (and neural-networks more generally) cannot be used for concurrent-processing. One area for which they are particularly well suited is when paired with next-generation exascale super-computers. These next-generation machines will allow for incredibly high resolution, and long-timescale, simulations, albeit at the cost of vast amounts of computational resources that will be in high demand (and which carry a high cost both financially and environmentally). As such it is critical to ensure that the allocated computational resources are being used efficiently, as well as minimising researcher time required to analyse the vast outputs of these models. Thankfully, as part of the process of designing an exascale-calculation, it is typical to run a series of lower-cost (i.e. lower-resolution or shorter timescale), preliminary, simulations in order to both get a sense of the value/significance of a very-large-scale simulation, as well as the exact model parameters to use (for example, with DYNAMICO, it is important to recalibrate the diffusion time-scale since the model's hyper-diffusion is resolution dependent). These preliminary models might provide a source of training data which can be used to generate a set of CNNs to analyse the final, production, simulation. In a similar vein, with our series of outer atmosphere equilibrium HD209458b-like models at different orbital radii, we now have enough data to train a robust thermal-data CNN to analyse any models we want to run at intermediary orbital radii. For example, we showed that the networks trained here on our 'high' resolution, HD209458b-like models, could be applied to analyse the lower resolution HD209458b models of Sainsbury-Martinez et al. (2019). Furthermore, the validity of such a technique could be further verified, in future studies, by analysing the CNN(s) with a post-processing tool such as GradCAM (Selvaraju et al., 2016), which highlights important features in a sample image, allowing for confirmation that the CNN(s) predictions can be trusted.
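As a sketch of the kind of Grad-CAM check referred to above, the snippet below produces a heat-map of the pixels that most influenced the detection of a single tag; it assumes a TensorFlow/Keras implementation of the CNN (an assumption on our part), and the function and argument names are our own.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, tag_index):
    """Basic Grad-CAM heat-map for one tag of a multi-categorisation CNN.
    image is a single input of shape (ny, nx, channels); conv_layer_name is
    the name of the last convolution layer; tag_index selects the feature."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, tag_index]
    grads = tape.gradient(score, conv_out)                 # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))           # channel importance
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                               # keep positive contributions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()     # normalised heat-map
```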
### Future Perspectives for Atmospheric Modelling
As we have alluded to above, the primary differentiator between the two regimes discussed here appears to be the relative influence that rotation has on the dynamics, specifically on the horizontal wind structure. Furthermore, understanding these differences and their impacts may prove crucial to our understanding of hot Jupiters in longer orbits, orbits which are now becoming accessible to observations thanks to next generation telescopes, such as JWST (James Webb Space Telescope) or TESS (Transiting Exoplanet Survey Satellite). One example of this is the unusual feature we detected in the atmospheres of our more slowly rotating hot Jupiter models, a night-side hot-spot, which was first detected via our thermal CNN, thus reinforcing the value of pairing long-timescale, and computationally complex, models with trained networks for both concurrent- and post-processing. If this feature proves to be robust, it may have implications for our understanding of hot Jupiter atmospheric chemistry. For example, a thermal inversion on the night-side may significantly impact the distribution of chemical compounds by acting as a cold trap in which denser materials condense, essentially raining out of the outer atmosphere, thus becoming depleted.
As a result, we have a number of suggestions for future studies: Firstly, we strongly encourage any future studies with high-resolution and long-time-scale simulations to consider pairing them with an AI model, if only to reduce the initial analysis burden, with the AI helping to identify regions of interest or uncommon dynamics. Secondly, we propose a more in-depth study isolating how rotation alone affects irradiated atmospheric dynamics, with a particular focus on day-night winds and energy transport in weakly irradiated Jupiters: i.e. on the unusual night-side hotspot. If this result still proves to be robust, we further suggest that this work be followed up with a next-generation GCM, which includes both
non-equilibrium chemistry and robust radiative dynamics, to fully explore what possible impacts a night-side hot-spot may have on observable dynamics.
## Appendix
Following and expanding upon the work of Lagerquist et al. (2019), we implement a pair of multi-categorisation, lightweight, convolutional neural networks (CNN), one for thermal features and one for horizontal wind features. Here, as in the paper, we focus our discussion on the model designed for thermal feature detection.
This model follows the structure shown in Figure 2 (and Figure 2 of Lagerquist et al., 2019). After input, the image data is analysed and reduced by a series of 4 convolution blocks of decreasing dimensionality and increasing complexity, as shown in Figure 13. Each convolution block consists of three layers: the 2D convolution layer itself, which has a kernel size of \(5\times 5\) and includes \(16/32/64/64\) filters in the first/second/third/fourth block respectively, a 2D max pooling layer which downsamples the data, halving the resolution in both dimensions (i.e. latitudinally and longitudinally), and finally a dropout layer, which is only active during training, and which sets a fraction of the input array's values to 0 (rescaling the remainder of the array), in order to reduce over-fitting. During the convolution process, this fraction is set to 0.25, whilst in the fully connected layers the fraction is increased to 0.5. Note that, in the convolution layers, the \(5\times 5\) kernel represents the matrix used to enhance features, i.e. detect edges. This kernel is swept across the entire input moving horizontally and vertically with a stride length of 1, hence reducing the dimensionality of the input by 4 in both the horizontal and vertical direction (since we do not include any padding). The exact form that this kernel matrix takes is a result of the training process undertaken, with the kernel being optimised to recover the feature of interest. Of course since we are looking for multiple features and since these features vary spatially, running a single kernel per convolution layer would be highly inefficient. Instead we consider and train multiple kernels per layer, with
Figure 13: Flowchart showing the layers and blocks of a half resolution (initial image resolution is 180 by 360, half that produced by matplotlib for print quality figures) image recognition network designed to detect the 6 features of interest in the thermal structures of our HD209458b-like atmospheric models. Note that in each box we give the name of the layer, the output resolution of the layer, as well as the number of trainable parameters, when non-zero.
the number of kernels, referred to as the number of filters, increasing as the dimensionality (and hence size) of the data set decreases: i.e. 16 filters in our first convolution block and 64 in the last.
Once the convolution blocks have processed the data, and the total dimensionality has been reduced to \(7\times 18\) with 64 filters (i.e. \(7\times 18\times 64\)), the data-set is small enough that, after flattening (so that the dense layers connect all points), it can be fed into a series of 'low-resolution' fully-connected blocks which consist of fully-connected dense layers, followed by over-fitting reducing dropout layers. The final dense layer returns 8 values corresponding to the probability of detection of each of the trained features. This process is repeated for every pressure level and every time-averaged point in order to generate the multi-categorisation maps shown throughout this paper. It is important to note that, even after the dimensionality reduction associated with the convolution process, the first dense layer, which fully connects the 8064 points in the flattened output of the convolution process with only 128 output points, contains over a million weights, although this is small compared with the almost 25 million weights that would be required to fully-connect the initial image with a similar sized output (a process which is likely to lead to spurious outputs due to the massive single-step decrease in dimensionality from 194,000 to 128 points!).
In addition to the above it is important to note, for reproducibility, that: a) other than the final fully-connected dense layer (which uses a sigmoid function to generate the final probabilities), all the neural-network layers considered here include rectified linear unit (ReLU) activation, which is commonly used in CNNs, is believed to improve the efficiency of deep-learning (Nair & Hinton, 2010), and which essentially works by zeroing out any negative values in the output of the associated neural-network layer. b) the learning rate of the model was 0.001 and it made use of the 'adam' optimiser. c) the initial resolution of the network discussed here is half that of the input, print quality, image file - however tests with higher initial resolutions were performed, and whilst the accuracy slightly improved, the computational cost significantly ballooned. And d) the training/validation data split was 80%/20% with pseudo-random assignment between the two categories (that is to say we made use of the random state parameter to ensure that the train/test split was consistent between models).
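For completeness, a minimal sketch of a network with the architecture described above is given below; it assumes a TensorFlow/Keras implementation, and the number of fully-connected blocks, their widths beyond the first (128-unit) layer, and the binary cross-entropy loss are illustrative assumptions rather than details quoted in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_thermal_cnn(input_shape=(180, 360, 3), n_tags=8):
    """Multi-categorisation CNN: four convolution blocks (5x5 kernels with
    16/32/64/64 filters, 2x2 max pooling, dropout 0.25), flattening, then
    fully-connected blocks (dropout 0.5) and one sigmoid output per tag."""
    model = models.Sequential()
    model.add(tf.keras.Input(shape=input_shape))            # half-resolution input image
    for n_filters in (16, 32, 64, 64):
        model.add(layers.Conv2D(n_filters, kernel_size=(5, 5), activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())                              # 7 x 18 x 64 = 8064 points
    for n_units in (128, 64):
        model.add(layers.Dense(n_units, activation="relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(n_tags, activation="sigmoid"))    # non-exclusive tag probabilities
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy",                          # assumed multi-label loss
        metrics=["accuracy"],
    )
    return model

# Training would then use an 80%/20% train/validation split with a fixed
# random state, as described in point (d) above.
```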
F. Sainsbury-Martinez and P. Tremblin would like to acknowledge and thank the ERC for funding this work under the Horizon 2020 program project ATMO (ID: 757858). F. Sainsbury-Martinez would also like to thank UK Research and Innovation for additional support under grant number MR/T040726/1.
The authors also wish to thank Idris, CNRS, University Paris-Saclay, and MDLS for access to the supercomputer Ruche, without which the long time-scale calculations featured in this work would not have been possible. Additionally this work was granted access to the HPC resources of IDRIS (Jean-Zay) and CEA-TGCC (Irene/Joliot-Curie) under the 2020/2021 allocation - A0080410870 made as part of the GENCI Dari A8 call. This work was supported by French government funding managed by the National Research Agency under the Investments for the Future program (PIA) grant ANR-21-ESRE-0030 (CONTINUUM).
Finally, the authors wish to thank the referee (and editor) for useful comments, questions, and suggestions which have significantly improved the readability of this manuscript.
|
2309.06022 | Impact of modified gravity theory on neutron star and nuclear matter
properties | New observational data, measured with a high degree of accuracy, of compact
isolated neutron stars and binary stars in gravitational wave remnants have the
potential to explore the strong field gravity. Within the framework of
energy-momentum squared gravity (EMSG) theory we study its impact on several
properties of neutron stars and plausible modifications from the predictions of
general relativity. Based on a representative set of relativistic nuclear mean
field models, non-relativistic Skyrme-Hartree-Fock models and microscopic
calculations, we show deviations of neutron star mass-radius sequence in EMSG
theory as compared to general relativity. The variation in the effective
nuclear equation of state in EMSG, results in distinct magnitudes in the
reduced pressure, speed of sound, and maximum compactness at the center of
neutron stars. We perform extensive correlation analysis of the nuclear model
parameters with the neutron star observables in light of the new observational
bounds. Perceptible modifications in the correlations are found in the models
of gravity that provide different estimates of the slope and curvature of
nuclear matter symmetry energy. The available neutron star data however do not
impose stringent enough constraints for clear evidence of deviations from
general relativity. | Naosad Alam, Subrata Pal, A. Rahmansyah, A. Sulaksono | 2023-09-12T07:47:49Z | http://arxiv.org/abs/2309.06022v2 | # Impact of modified gravity theory on neutron star and nuclear matter properties
###### Abstract
New observational data, measured with a high degree of accuracy, of compact isolated neutron stars and binary stars in gravitational wave remnants have the potential to explore the strong field gravity. Within the framework of energy-momentum squared gravity (EMSG) theory we study its impact on several properties of neutron stars and plausible modifications from the predictions of general relativity. Based on a representative set of relativistic nuclear mean field models, non-relativistic Skyrme-Hartree-Fock models and microscopic calculations, we show deviations of the neutron star mass-radius sequence in EMSG theory as compared to general relativity. The variation in the effective nuclear equation of state in EMSG results in distinct magnitudes in the reduced pressure, speed of sound, and maximum compactness at the center of neutron stars. We perform extensive correlation analysis of the nuclear model parameters with the neutron star observables in light of the new observational bounds. Perceptible modifications in the correlations are found in the models of gravity that provide different estimates of the slope and curvature of nuclear matter symmetry energy. The available neutron star data, however, do not impose stringent enough constraints for clear evidence of deviations from general relativity.
## I Introduction
Understanding the stellar structures, such as the compact neutron stars, relies entirely on the physics of high-density matter [1]. The two major impediments to a precise determination of neutron star (NS) properties at supranuclear densities are the lack of detailed knowledge of the nuclear interaction in particular [2; 3; 4] and of the gravitational interaction [5; 6; 7; 8; 9]. The repulsive nuclear equation of state (EoS: characterizing the dependence of matter pressure on energy density) and the balancing attractive strong-field gravitational physics are intertwined via the Tolman-Oppenheimer-Volkoff (TOV) equations [10; 11] for hydrostatic equilibrium of the star configuration; hence their uncertainties could impact the predictions of structure and properties of neutron stars.
While terrestrial experiments and _ab initio_ calculations provide a description of nuclear matter only around the saturation density, one relies on several sophisticated nuclear many-body interaction theories [2; 3] for the high-density behaviour. These models by construction reproduce the ground state nuclear matter properties; as a consequence, the higher density predictions of these EoSs are very diverse and remain largely unconstrained. Particularly uncertain are the supranuclear-density behavior of the nuclear symmetry energy \(e_{\rm sym}\), its slope and curvature at saturation density, and thus the EoS of neutron-rich matter [1; 12; 13]. Considerable attempts have been made to put stringent constraints on the EoS by employing the combined measurements of neutron star masses and radii, and the observed tidal deformability bound from the detected gravitational waves [14].
On the other hand, the impact of various theories of gravity in the strong-field regime remains largely unexplored [5; 8]. While Einstein's General theory of Relativity (GR) continues to be a very effective theory of gravitational interaction at various scales, especially with the detection of gravitational waves, sufficient motivation to investigate alternate viable theories of gravity arises from unexplained dark matter and dark energy at the galactic and cosmological scales, and the presence of singularity in the early universe and inside black holes [6]. Being a superdense object with strong gravitational field, neutron stars offer an exciting avenue for investigation of general relativity in the strong field or high curvature domain and open up a direction for the study of new gravitational physics. Hence, it will be appealing to explore and test alternative theories of gravity in the case of superdense stars in addition to the traditional approach based on GR.
The predictions of several alternative theories of gravity are compatible with those of GR in vacuum. However, these theories provide deviations from GR in the presence of matter, not only at high density but also in low-curvature domains [15]. Technically, the field equations of this class of modified gravity theories alter the expressions of the TOV equations, and hence the astrophysical properties of the stars could be affected. In the present study, we shall employ the energy-momentum squared gravity (EMSG), which belongs to this class of modified theories of gravity [6; 7; 9; 16; 17; 18]. EMSG is a modification of GR that allows a self-contraction of the energy-momentum tensor \(T^{\mu\nu}\), thereby incorporating in the generic functional action an additional term of the form \(f(T_{\mu\nu}T^{\mu\nu})=\alpha T_{\mu\nu}T^{\mu\nu}\), where \(\alpha\) is a parametric constant. This approach modifies the source side of Einstein's field equations \(\mathcal{R}_{\mu\nu}-g_{\mu\nu}\mathcal{R}/2=\kappa T_{\mu\nu}\), in the usual notation [6; 7]. Interestingly, the field equations with the standard physical energy-momentum tensor can be mapped into the GR Einstein field equations, but with an effective or modified energy-momentum tensor. This enables a rather straightforward calculation of the moment of inertia, tidal deformation, and other properties of the stars [19].
The EMSG theory suggests a bounce in the early universe due to a maximum energy density and correspondingly a minimum length scale factor, thereby addressing the problem of the big-bang singularity as well as the current cosmic accelerated expansion. This theory also correctly predicts cosmic behaviour and follows the actual progression of cosmological eras. Since EMSG is proposed to resolve the singularities classically, it is expected that the deviations from GR appear in the properties of compact stars [6]. Recently, the parameter \(\alpha\) in the EMSG model has been constrained [7], by using the \(2.0M_{\odot}\) maximum-mass NS constraint and the physicality of certain effective EoSs at the center of NSs, to be in the range \(-10^{-38}\lesssim\alpha\lesssim 10^{-37}\) cm\({}^{3}\)/erg. Binary pulsar observations [9] yield a compatible value of \(-6\times 10^{-38}\lesssim\alpha\lesssim 10^{-36}\) cm\({}^{3}\)/erg, whereas the bounds reported from solar system tests are considerably weaker, \(-4\times 10^{-27}<\alpha<10^{-26}\) cm\({}^{3}\)/erg.
In this article, we have employed a comprehensive set of relativistic mean field (RMF) models for the nuclear interaction that provide a Lorentz-covariant extrapolation from sub- to supra-saturation densities. These models have been extensively applied in the description of several finite-nuclei properties and in studies of NS structure. Further, we have employed a representative set of non-relativistic Skyrme-Hartree-Fock (SHF) models, and two microscopic theories based on Brueckner-Hartree-Fock (BHF) and variational approaches. Within these model EoSs, which have diverse high-density behavior, we shall explore the EMSG and the GR effects on the neutron star properties. Furthermore, we will examine the correlations of the NS properties, namely mass, radius, tidal deformability, with the key nuclear EoS parameters (as well as between the thermodynamic variables, namely the pressure and speed of sound). Only if tight correlations between NS observables and EoS parameters in various models of gravity can be established can one then provide suitable (model-independent) bounds on these nuclear matter quantities by employing the precisely measured NS observables. Alternatively, these relations can be used to constrain an astrophysical observable from the knowledge of the correlated nuclear matter observables. In fact, by using a large number of unified EoSs, extensive correlation analyses in general relativity have been conducted between the neutron star mass \(M\), radius \(R\), etc., and the parameters of the nuclear EoS, such as the nuclear matter incompressibility \(K(\rho_{0})\), its slope \(M(\rho_{0})\), the nuclear symmetry energy slope \(L(\rho_{0})\) and curvature \(K_{\rm sym}(\rho_{0})\), at the saturation density \(\rho_{0}\approx 0.16\) fm\({}^{-3}\)[20], and their linear combinations [21; 22; 23; 24; 25], as well as with the tidal deformability \(\Lambda\)[22] and corrections to the mass-weighted tidal deformability [23] of the detected gravitational waves GW170817 [26]. While the individual EoS parameters were found to be weakly correlated, their specific linear combinations showed a rather strong correlation [21; 22; 23; 25; 27]. It will be instructive to investigate and understand how these correlations between the astrophysical observables and the nuclear EoS behave in the alternative energy-momentum squared gravity (EMSG) model as compared to the predictions of General Relativity, and whether an approximate universal constraint can be imposed on the EoS parameters that is independent of the nuclear and gravitational interactions.
The outline of the paper is as follows. In Sec. II, we briefly describe the modified field equations in EMSG. The modified TOV equations for neutron stars in EMSG are discussed in Sec. III. We then discuss the methodology to compute the moment of inertia in the slow-rotation approximation in Sec. IV and the tidal deformability parameter in Sec. V. Next, we provide a brief review of the key EoS parameters and the EoSs used in the analysis in Sec. VI. Our results on the calculations of neutron star configurations within EMSG and GR are presented in Sec. VII. Within the various diverse EoSs, correlations between the parameters of the EoSs and the NS properties in the EMSG modified theory of gravity will also be discussed. Finally, the conclusions are drawn in Sec. VIII. We will adopt the system of units \(\hbar=c=G=1\) throughout the manuscript.
## II Energy-momentum squared gravity
In the energy-momentum squared gravity theory, the Einstein-Hilbert action is modified by the addition of a scalar term \(f(T_{\mu\nu}T^{\mu\nu})=\alpha T_{\mu\nu}T^{\mu\nu}\) leading to [7; 8; 9]:
\[S=\int\left[\frac{1}{2\kappa}\left(\mathcal{R}-2\Lambda\right)+ \alpha T_{\mu\nu}T^{\mu\nu}+\mathcal{L}_{\rm m}\right]\sqrt{-g}\,\mathrm{d}^{ 4}x, \tag{1}\]
where \(\kappa=8\pi G\), with \(G\) Newton's constant, \(\mathcal{R}\) denotes the Ricci scalar, \(g\) is the determinant of the metric, and \(\Lambda\) the cosmological constant. The Lagrangian density \(\mathcal{L}_{\rm m}\) represents the matter source, described by the energy-momentum tensor, which can be defined as usual:
\[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{\rm m})}{ \delta g^{\mu\nu}}=g_{\mu\nu}\mathcal{L}_{\rm m}-2\frac{\partial\mathcal{L}_{ \rm m}}{\partial g^{\mu\nu}}, \tag{2}\]
Consequently, Einstein's field equation for the modified action becomes
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu}+\kappa\alpha\left(g_{\mu\nu}T _{\sigma\epsilon}T^{\sigma\epsilon}-2\theta_{\mu\nu}\right), \tag{3}\]
where \(G_{\mu\nu}=\mathcal{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\mathcal{R}\) is the Einstein tensor and the new tensor \(\theta_{\mu\nu}\) is defined as
\[\theta_{\mu\nu}= T^{\sigma\epsilon}\frac{\delta T_{\sigma\epsilon}}{\delta g^{ \mu\nu}}+T_{\sigma\epsilon}\frac{\delta T^{\sigma\epsilon}}{\delta g^{\mu \nu}}\] \[= -2\mathcal{L}_{\rm m}\left(T_{\mu\nu}-\frac{1}{2}g_{\mu\nu}T \right)-TT_{\mu\nu}\] \[+2T_{\mu}^{\gamma}T_{\nu\gamma}-4T^{\sigma\epsilon}\frac{\partial ^{2}\mathcal{L}_{\rm m}}{\partial g^{\mu\nu}\partial g^{\sigma\epsilon}}. \tag{4}\]
Here \(T=g^{\mu\nu}T_{\mu\nu}\) is the trace of the energy-momentum tensor. We consider the star to be a perfect fluid (i.e. non-viscous and stress-free), with energy-momentum tensor \(T_{\mu\nu}=(\rho+P)u_{\mu}u_{\nu}+Pg_{\mu\nu}\), where \(\rho\) is the energy
density, \(P\) is the isotropic pressure, and \(u_{\mu}\) is the four-velocity. Since the definition of matter Lagrangian for the perfect fluid described via the energy-momentum tensor is not unique, one can consider \(\mathcal{L}_{\text{m}}=P\) or \(\mathcal{L}_{\text{m}}=-\rho\); both these choices lead to the same \(T^{\mu\nu}\) in the case of minimal coupling of matter with gravity as in GR. In contrast, for non-minimal coupling as in EMSG, it gives rise to distinct theories with different predictions [28; 29]. In this work we consider the former choice of \(\mathcal{L}_{\text{m}}=P\) that has been commonly employed [28; 29; 30]. The covariant divergence of Eq. (3) then becomes
\[\nabla^{\mu}T_{\mu\nu}=-\alpha g_{\mu\nu}\nabla^{\mu}(T_{\sigma\epsilon}T^{ \sigma\epsilon})+2\alpha\nabla^{\mu}\theta_{\mu\nu}, \tag{5}\]
Note that the covariant divergence \(\nabla^{\mu}T_{\mu\nu}\) does not vanish identically for \(\alpha\neq 0\), i.e. the standard local energy-momentum conservation is not satisfied. Using Eqs. (3), (4) and the above definition of \(T_{\mu\nu}\), one finally obtains [7]
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa\rho\left[\left(1+\frac{P}{ \rho}\right)u_{\mu}u_{\nu}+\frac{P}{\rho}g_{\mu\nu}\right]\] \[+\alpha\kappa\rho^{2}\left[2\left(1+\frac{4P}{\rho}+\frac{3P^{2} }{\rho^{2}}\right)u_{\mu}u_{\nu}+\left(1+\frac{3P^{2}}{\rho^{2}}\right)g_{\mu \nu}\right]. \tag{6}\]
Equation (6) can be recast into GR Einstein's field equation
\[G^{\mu\nu}+\Lambda g^{\mu\nu}=\kappa T_{\text{eff}}^{\mu\nu}, \tag{7}\]
with an effective energy momentum tensor \(T_{\text{eff}}^{\mu\nu}=(\rho_{\text{eff}}+P_{\text{eff}})u^{\mu}u^{\nu}+P_{ \text{eff}}g^{\mu\nu}\) for an ideal fluid, where the effective energy density and pressure are given by
\[\rho_{\text{eff}} =\rho+\alpha\rho^{2}\left(1+\frac{8P}{\rho}+\frac{3P^{2}}{\rho^{ 2}}\right), \tag{8}\] \[P_{\text{eff}} =P+\alpha\rho^{2}\left(1+\frac{3P^{2}}{\rho^{2}}\right). \tag{9}\]
Using the mapped expression for the field equations, the NS configuration can be easily obtained.
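As an illustration of this mapping, Eqs. (8)-(9) reduce to a simple algebraic substitution; a minimal Python helper (illustrative only, not the code used to produce the results in this work) is:

```python
def emsg_effective(rho, P, alpha):
    """Effective energy density and pressure of Eqs. (8)-(9),
    for rho, P and alpha given in mutually consistent units."""
    rho_eff = rho + alpha * (rho**2 + 8.0*rho*P + 3.0*P**2)
    P_eff = P + alpha * (rho**2 + 3.0*P**2)
    return rho_eff, P_eff
```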
## III **TOV equations in EMSG**
To obtain the Tolman-Oppenheimer-Volkoff equations [10; 11] for a non-rotating star in the EMSG description, we adopt the general spherically symmetric metric as
\[\text{d}s^{2}=-e^{2\nu(r)}\text{d}t^{2}+e^{2\lambda(r)}\text{d}r^{2}+r^{2} \text{d}\theta^{2}+r^{2}\sin^{2}\theta\,\text{d}\phi^{2}, \tag{10}\]
where the metric functions \(\nu(r)\) and \(\lambda(r)\) depend only on the radial coordinate \(r\). Using Eqs. (3) and (10), one obtains the \((tt)\) and \((rr)\) components of the EMSG field equations as
\[\frac{1}{r^{2}}-\frac{e^{-2\lambda}}{r^{2}}\left(1-2r\frac{\text {d}\lambda}{\text{d}r}\right) =\kappa\rho_{eff}, \tag{11}\] \[-\frac{1}{r^{2}}+\frac{e^{-2\lambda}}{r^{2}}\left(1+2r\frac{ \text{d}\nu}{\text{d}r}\right) =\kappa P_{eff}, \tag{12}\]
where \(\rho_{\text{eff}}\) and \(P_{\text{eff}}\) are the effective values of mass density and pressure at a distance \(r\) from the center of NS. By defining the metric function \(\lambda\left(r\right)\) in terms of the mass function \(m(r)\) as
\[e^{-2\lambda(r)}=1-\frac{2m\left(r\right)}{r}, \tag{13}\]
and the metric function \(\nu(r)\) via the pressure as [7]
\[\frac{\text{d}\nu}{\text{d}r}= -\left[\rho\left(1+\frac{P}{\rho}\right)\left\{1+2\alpha\rho \left(1+\frac{3P}{\rho}\right)\right\}\right]^{-1}\] \[\times\left[\left(1+6\alpha P\right)\frac{\text{d}P}{\text{d}r}+2 \alpha\rho\frac{\text{d}\rho}{\text{d}r}\right], \tag{14}\]
one obtains the modified TOV equations in EMSG:
\[\frac{\text{d}m}{\text{d}r}= 4\pi r^{2}\rho\left[1+\alpha\rho\left(1+\frac{8P}{\rho}+\frac{3P ^{2}}{\rho^{2}}\right)\right], \tag{15}\] \[\frac{\text{d}P}{\text{d}r}= -\frac{m\rho}{r^{2}}\left(1+\frac{P}{\rho}\right)\left(1-\frac{2 m}{r}\right)^{-1}\] \[\times\left[1+\frac{4\pi r^{3}P}{m}+\alpha\frac{4\pi r^{3}\rho^{2 }}{m}\left(1+\frac{3P^{2}}{\rho^{2}}\right)\right]\] \[\times\left[1+2\alpha\rho\left(1+\frac{3P}{\rho}\right)\right] \left[1+2\alpha\rho\left(\frac{\text{d}\rho}{\text{d}P}+\frac{3P}{\rho}\right) \right]^{-1}, \tag{16}\]
The structure of relativistic stars, i.e. the mass and radius, can be obtained by solving Eqs. (15)-(16) simultaneously with an input EoS \(P\equiv P(\rho)\), which describes the relation between the pressure \(P(r)\) and the density \(\rho(r)\) of the matter. It is evident from Eqs. (15)-(16), or equivalently from Eqs. (8)-(9), that the EMSG modifications to GR for the NS configurations stem from the additional terms contributing to the energy density, \(\rho_{\text{EMSG}}=\alpha(\rho^{2}+8\rho P+3P^{2})\), and pressure, \(P_{\text{EMSG}}=\alpha(\rho^{2}+3P^{2})\).
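A minimal numerical sketch of this procedure, assuming geometrized units (\(G=c=1\)), a user-supplied barotropic EoS \(\rho(P)\) with its derivative \(d\rho/dP\), and SciPy for the integration, could read as follows (an illustration only, not the code used to produce the results in this paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def tov_emsg_rhs(r, y, rho_of_P, drho_dP_of_P, alpha):
    """Right-hand side of the modified TOV equations (15)-(16)."""
    m, P = y
    rho = rho_of_P(P)
    # Eq. (15): dm/dr with the EMSG correction to the energy density
    dmdr = 4.0 * np.pi * r**2 * (rho + alpha * (rho**2 + 8.0*rho*P + 3.0*P**2))
    # Eq. (16): modified pressure gradient
    dPdr = (-(m * rho / r**2) * (1.0 + P/rho) / (1.0 - 2.0*m/r)
            * (1.0 + 4.0*np.pi*r**3*P/m
               + alpha * 4.0*np.pi*r**3*rho**2/m * (1.0 + 3.0*P**2/rho**2))
            * (1.0 + 2.0*alpha*rho*(1.0 + 3.0*P/rho))
            / (1.0 + 2.0*alpha*rho*(drho_dP_of_P(P) + 3.0*P/rho)))
    return [dmdr, dPdr]

def solve_star(P_c, rho_of_P, drho_dP_of_P, alpha, r_max=30.0):
    """Integrate from a small seed radius outwards until the pressure vanishes."""
    r0 = 1e-6
    m0 = 4.0/3.0 * np.pi * r0**3 * rho_of_P(P_c)      # regular centre
    surface = lambda r, y: y[1] - 1e-12 * P_c          # stop near P = 0
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_emsg_rhs, (r0, r_max), [m0, P_c],
                    args=(rho_of_P, drho_dP_of_P, alpha),
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0][0]   # radius R and mass M
```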
## IV Moment of inertia
In this section we briefly present the calculation of the moment of inertia of a rotating neutron star in the energy-momentum squared gravity. We consider that the star rotates uniformly with a stellar frequency \(\Omega\) which is much lower in comparison with the Kepler frequency at the equator, i.e. \(\Omega\ll\Omega_{\text{max}}\approx\sqrt{M/R^{3}}\). The moment of inertia of such an axially symmetric and uniformly rotating neutron star [31; 32] in EMSG can be written as
\[I\equiv\frac{J}{\Omega}=\frac{8\pi}{3}\int_{0}^{R}r^{4}e^{-\nu(r)}\frac{ \bar{\omega}(r)}{\Omega}\frac{\left[\rho_{\text{eff}}(r)+P_{\text{eff}}(r) \right]}{\sqrt{1-2m(r)/r}}dr. \tag{17}\]
Note that the effective energy density \(\rho_{\text{eff}}\) and pressure \(P_{\text{eff}}\) of Eqs. (8) and (9) enter the expression. \(J\) is the angular momentum, \(\nu(r)\) and \(\bar{\omega}(r)\) are the metric functions. In the slowly rotating approximation, the line element for
the background metric of a stationary and axially symmetric star can be taken as
\[ds_{r}^{2}= -e^{2\nu(r)}dt^{2}+e^{2\lambda(r)}dr^{2}+r^{2}d\theta^{2}\] \[+r^{2}\sin^{2}\theta d\phi^{2}-2\omega(r)r^{2}\sin^{2}\theta dtd \phi\;. \tag{18}\]
Here the metric functions \(\nu(r)\) and \(\lambda(r)\) will be identical to the case of a static and spherically symmetric neutron star, and simply follow Eqs. (13) and (14).
To calculate the moment of inertia, we further require the form of metric function \(\omega(r)\) which appears due to the slow rotation of the star. The dimensionless relative frequency, defined as
\[\bar{\omega}(r)\!\equiv\!\frac{\Omega-\omega(r)}{\Omega}, \tag{19}\]
obeys the differential equation
\[\frac{d}{dr}\left[r^{4}j(r)\frac{d\bar{\omega}(r)}{dr}\right]+4r^{3}\frac{dj( r)}{dr}\bar{\omega}(r)=0, \tag{20}\]
where
\[j(r)=e^{-\nu(r)-\lambda(r)}=\begin{cases}e^{-\nu(r)}\sqrt{1-2m(r)/r}&\text{if $ r\leq R$},\\ 1&\text{if $r>R$}.\end{cases} \tag{21}\]
The solution to the above equation can be obtained by using the following two boundary conditions:
\[\bar{\omega}^{\prime}(0)=0, \tag{22a}\] \[\bar{\omega}(R)+\frac{R}{3}\,\bar{\omega}^{\prime}(R)=1. \tag{22b}\]
To solve the differential equation (20), one can start with a guess value of the central frequency \(\bar{\omega}_{c}\!=\!\bar{\omega}(0)\) and numerically integrate the equation up to the surface of the star. Since we start with an arbitrary value of \(\bar{\omega}_{c}\), the boundary condition at \(R\) will usually not be satisfied; however, since Eq. (20) is linear in \(\bar{\omega}\), it can be enforced by simply rescaling \(\bar{\omega}_{c}\) by an appropriate constant. Once we have the solution for \(\bar{\omega}(r)\), the moment of inertia can be calculated from Eq. (17). After obtaining the solutions of \(\bar{\omega}(r)\) and \(I\), the consistency of the formalism may be verified from the condition \(\bar{\omega}^{\prime}(R)=6GI/R^{4}\)[31; 32].
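A schematic implementation of this rescaling procedure is sketched below; it is an illustration only, in which the profile interpolants \(j(r)\), \(dj/dr\), \(\rho_{\rm eff}(r)\), \(P_{\rm eff}(r)\), \(m(r)\) and \(\nu(r)\) are assumed to come from a previously solved stellar model, and all names are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp, simpson

def moment_of_inertia(R, r_grid, j, djdr, rho_eff, P_eff, m, nu):
    """Moment of inertia from Eqs. (17) and (20), using the dimensionless
    relative frequency of Eq. (19); r_grid must start at a small nonzero radius."""
    # Eq. (20) as a first-order system in (omega_bar, u = r^4 j domega_bar/dr)
    def rhs(r, y):
        w, u = y
        return [u / (r**4 * j(r)), -4.0 * r**3 * djdr(r) * w]
    sol = solve_ivp(rhs, (r_grid[0], R), [1.0, 0.0], dense_output=True, rtol=1e-8)
    w_R, u_R = sol.y[:, -1]
    dw_R = u_R / (R**4 * j(R))
    # rescale so that omega_bar(R) + (R/3) omega_bar'(R) = 1  [Eq. (22b)];
    # a single rescaling suffices because Eq. (20) is linear in omega_bar
    scale = 1.0 / (w_R + R/3.0 * dw_R)
    w_grid = scale * sol.sol(r_grid)[0]
    # Eq. (17): I = (8 pi / 3) * integral of
    #   r^4 e^{-nu} omega_bar (rho_eff + P_eff) / sqrt(1 - 2 m / r) dr
    integrand = (r_grid**4 * np.exp(-nu(r_grid)) * w_grid
                 * (rho_eff(r_grid) + P_eff(r_grid))
                 / np.sqrt(1.0 - 2.0*m(r_grid)/r_grid))
    return (8.0*np.pi/3.0) * simpson(integrand, x=r_grid)
```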
## V Tidal deformability
The phase of the gravitational wave signal resulting from the merger of two neutron stars carries valuable information about the tidal deformability parameter that is directly related to the internal structure and composition of the star, particularly the equation of state of nuclear matter. It quantifies the deformations induced in the star due to an external tidal field of the companion star. The tidal deformability parameter \(\lambda\) can be expressed as [27; 33; 34; 35; 22; 36],
\[\lambda=-\frac{Q_{ij}}{\mathcal{E}_{ij}}, \tag{23}\]
where \(Q_{ij}\) represents the components of the induced quadrupole moment tensor and \(\mathcal{E}_{ij}\) denotes the components of the tidal field tensor. In terms of the Love number \(k_{2}\), the mass normalized dimensionless tidal deformability parameter is given by
\[\Lambda\equiv\frac{\lambda}{M^{5}}=\frac{2}{3}k_{2}\left(\frac{R}{M}\right)^{5}=\frac{2}{3}\,k_{2}\,C^{-5}, \tag{24}\]
where \(R\) and \(M\) are the radius and mass of the star, and \(C\equiv M/R\) is its compactness.
The tidal Love number \(k_{2}\) depends on the underlying EoS of the star and it can be expressed in terms of the dimensionless compactness parameter \(C\) as [33; 34; 35; 36],
\[k_{2} =\frac{8C^{5}}{5}\left(1-2C\right)^{2}\left[2+2C\left(y_{R}-1 \right)-y_{R}\right]\] \[\times\left\{2C\left[6-3y_{R}+3C(5y_{R}-8)\right]\right.\] \[+4C^{3}\left[13-11y_{R}+C(3y_{R}-2)+2C^{2}(1+y_{R})\right]\] \[+3(1-2C)^{2}\left[2-y_{R}+2C(y_{R}-1)\right]\ln\left(1-2C\right) \right\}^{-1}\!. \tag{25}\]
The function \(y_{R}\equiv y(r)|_{r=R}\) is related to the metric perturbation and satisfies the following differential equation:
\[r\frac{dy(r)}{dr}+y(r)^{2}+y(r)F(r)+r^{2}Q(r)=0\;, \tag{26}\]
where the functions \(F(r)\) and \(Q(r)\) are given by
\[F(r) =\frac{r-4\pi r^{3}\left[\rho_{\rm eff}(r)-P_{\rm eff}(r)\right]} {e^{-2\lambda(r)}}\;,\] \[Q(r) =\frac{4\pi}{e^{-2\lambda(r)}}\Big{[}5\rho_{\rm eff}(r)+9P_{\rm eff }(r)+\frac{\rho_{\rm eff}(r)+P_{\rm eff}(r)}{\partial P_{\rm eff}(r)/\partial \rho_{\rm eff}(r)}\] \[-\frac{6}{4\pi r^{2}}\Big{]}-4\left[\frac{m(r)+4\pi r^{3}P_{\rm eff }(r)}{r^{2}e^{-2\lambda(r)}}\right]^{2}\,. \tag{27}\]
For a spherically symmetric star, the Love number and the tidal deformability parameter \(\Lambda\) can be determined by simultaneously solving Eq. (26) and the TOV equations (Eqs. (15)-(16)), with the boundary conditions \(P(0)\!=\!P_{c}\) and \(m(0)\!=\!0\), in addition to \(y(0)=2\), which arises from the perturbative expansion of the deformed metric up to second order.
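For completeness, a direct transcription of Eqs. (24)-(25) into a small Python helper is given below (illustrative only; \(y_{R}\) must be obtained by integrating Eq. (26) alongside the TOV equations):

```python
import numpy as np

def love_number_k2(C, yR):
    """Tidal Love number k2 of Eq. (25) from the compactness C = M/R and y_R."""
    num = (8.0*C**5/5.0) * (1.0 - 2.0*C)**2 * (2.0 + 2.0*C*(yR - 1.0) - yR)
    den = (2.0*C*(6.0 - 3.0*yR + 3.0*C*(5.0*yR - 8.0))
           + 4.0*C**3*(13.0 - 11.0*yR + C*(3.0*yR - 2.0) + 2.0*C**2*(1.0 + yR))
           + 3.0*(1.0 - 2.0*C)**2*(2.0 - yR + 2.0*C*(yR - 1.0))*np.log(1.0 - 2.0*C))
    return num / den

def dimensionless_tidal_deformability(C, yR):
    """Lambda of Eq. (24): (2/3) k2 C^{-5}."""
    return (2.0/3.0) * love_number_k2(C, yR) / C**5
```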
## VI Nuclear matter equations of state
In the parabolic approximation, the equation of state of isospin asymmetric nuclear matter at a given density \(\rho\) and asymmetry \(\delta\) can be written as [37; 13]
\[e(\rho,\delta)=e_{0}(\rho)+e_{\rm sym}(\rho)\delta^{2}+\mathcal{O}(\delta^{4})\,, \tag{28}\]
where \(e(\rho,\delta)\) is the total energy per nucleon at a nucleon density \(\rho=\rho_{n}+\rho_{p}\), and \(\delta=(\rho_{n}-\rho_{p})/\rho\) is neutron-proton asymmetry parameter, with \(\rho_{n}\) and \(\rho_{p}\) the neutron and proton densities, respectively. The first term
on the right-hand side, \(e_{0}(\rho)\equiv e(\rho,\delta=0)\), represents the EoS for symmetric nuclear matter, and the second term, \(e_{\rm sym}(\rho)\equiv\frac{1}{2}\frac{\partial^{2}e(\rho,\delta)}{\partial\delta^{2}}|_{\delta=0}\), is the nuclear symmetry energy. The isoscalar part \(e_{0}(\rho)\) and the isovector part \(e_{\rm sym}(\rho)\) can be further Taylor expanded around the saturation density \(\rho_{0}\) as
\[e_{0}(\rho) =e_{0}(\rho_{0})+\frac{K_{0}}{2}\chi^{2}+\frac{Q_{0}}{6}\chi^{3}+ \mathcal{O}(\chi^{4}), \tag{29}\] \[e_{\rm sym}(\rho) =e_{\rm sym}(\rho_{0})+L\chi+\frac{K_{\rm sym}}{2}\chi^{2}+ \mathcal{O}(\chi^{3}), \tag{30}\]
where the dimensionless variable \(\chi=(\rho-\rho_{0})/3\rho_{0}\) gives the deviation of density from the saturation value \(\rho_{0}\). The saturation parameters for the symmetric nuclear matter are the binding energy per nucleon \(e_{0}\equiv e_{0}(\rho_{0})\), the incompressibility \(K_{0}=9\rho_{0}^{2}\frac{\partial^{2}e_{0}(\rho)}{\partial\rho^{2}}|_{\rho_{0}}\), and the skewness coefficient \(Q_{0}=27\rho_{0}^{3}\frac{\partial^{3}e_{0}(\rho)}{\partial\rho^{3}}|_{\rho_{0}}\). Similarly, the parameters for the symmetry energy expansion are the symmetry energy coefficient \(J\equiv e_{\rm sym}(\rho_{0})\), and the slope and curvature of the symmetry energy, i.e. \(L=3\rho_{0}\frac{\partial e_{\rm sym}(\rho)}{\partial\rho}|_{\rho_{0}}\) and \(K_{\rm sym}=9\rho_{0}^{2}\frac{\partial^{2}e_{\rm sym}(\rho)}{\partial\rho^{2}}|_{\rho_{0}}\), respectively.
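As a simple numerical illustration of these definitions, the saturation parameters of the symmetry energy can be extracted from any model's \(e_{\rm sym}(\rho)\) curve by finite differences; the toy parametrization below is purely illustrative and is not one of the EoSs used in this work.

```python
rho0 = 0.16  # saturation density in fm^-3, as quoted in the text

def sym_energy_params(esym, rho0=rho0, h=1e-4):
    """J, L and K_sym at saturation from central finite differences."""
    d1 = (esym(rho0 + h) - esym(rho0 - h)) / (2.0*h)
    d2 = (esym(rho0 + h) - 2.0*esym(rho0) + esym(rho0 - h)) / h**2
    return esym(rho0), 3.0*rho0*d1, 9.0*rho0**2*d2

# toy parametrisation e_sym(rho) = 32 (rho/rho0)^0.7 MeV  ->  J = 32 MeV, L = 67.2 MeV
esym_toy = lambda rho: 32.0 * (rho / rho0)**0.7
print(sym_energy_params(esym_toy))
```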
The slope of the incompressibility, \(M_{0}=M(\rho_{0})=3\rho_{0}\frac{\partial K_{0}(\rho)}{\partial\rho}|_{\rho_ {0}}\), at the saturation density can be expressed in terms of \(Q_{0}\) and \(K_{0}\) as [27]
\[M_{0}=Q_{0}+12K_{0} \tag{31}\]
and the symmetry energy incompressibility is defined as \(K_{\tau}=9\rho_{\delta}^{2}\frac{\partial^{2}e_{\rm sym}(\rho)}{\partial\rho^ {2}}|_{\rho_{\delta}}\), where \(\rho_{\delta}\) is the saturation density of asymmetric nuclear matter corresponding to the asymmetry \(\delta\). The symmetry energy parameters \(K_{sym}\) and \(K_{\tau}\) are related by the following expression [27],
\[K_{\tau}=K_{sym}-6L-\frac{Q_{0}}{K_{0}}L. \tag{32}\]
For the analysis of neutron star properties, we employ a representative set of 18 relativistic mean field (RMF) models [2; 3], 24 non-relativistic Skyrme-Hartree-Fock (SHF)-type models, and 2 microscopic calculations, one of which uses the Brueckner-Hartree-Fock (BHF) approach with the Argonne \(V_{18}\) plus 3-body Urbana-type nuclear potentials [38; 39], and the other a variational approach, namely the Akmal-Pandharipande-Ravenhall (APR) EoS [40; 41].
In the RMF model, the nucleon-nucleon interactions are described by the exchange of the scalar-isoscalar \(\sigma\), vector-isoscalar \(\omega\), and vector-isovector \(\rho\) mesons. Over the years, the model has been improved by the inclusion of non-linear self- and cross-couplings between the mesons. Based on the form of the interactions in the Lagrangian density, the RMF models that we have employed in this study can be broadly classified as: NL-type with a nonlinear \(\sigma\) term [42; 43]; NL3-type with additional \(\sigma\)-\(\rho\) and \(\omega\)-\(\rho\) terms [44], NL3\(\sigma\rho\)4, NL3\(\sigma\rho\)6 [45], NL3\(\omega\rho\)02 [46], NL3\(\omega\rho\)03 [47]; TM-type with a nonlinear \(\omega\) term, TM1 [48], TM1-2 [49]; FSU-type with an additional form of nonlinear \(\omega\) coupling, FSU2 [50]; the BSR families with more nonlinear couplings [51; 52]; and DD-type with density-dependent couplings, DD2 [53], DDH\(\delta\)[54], DDH\(\delta\)Mod [41], DDME1 [55], DDME2 [56], TW [57], and the GM1 [58].
The SHF models we have taken in the present calculation are SKa, SKb [59], SkI2, SkI3, SkI4, SkI5 [60], SkI6 [61], Sly2, Sly9 [62], Sly230a [63], Sly4 [64], SkMP [65], SKOp [66], KDE0V1 [67], SK255, SK272 [68], Rs [69], BSk20, BSk21 [70], BSk22, BSk23, BSk24, BSk25, and BSk26 [71]. The coupling constants are obtained by sophisticated fitting procedures to finite-nuclei properties, such as the binding energies and charge radii, and to infinite nuclear matter properties at the saturation density \(\rho_{0}\).
All the models considered here have been successful in reproducing various experimental data for finite nuclei. These models are also in harmony with the \(2M_{\odot}\) constraint from the maximum mass measurement of a neutron star. The SHF-type models can often exhibit a causality problem at very high densities. The SHF models that we have selected in this study do not become acausal up to the central density of a \(2M_{\odot}\) neutron star. To obtain the EoS for neutron star matter, we have employed a unified inner-crust-core EoS, i.e. the inner crust EoS and the core EoS have been calculated using the same nuclear model, and the outer crust EoS is taken from the work of Baym-Pethick-Sutherland [72].
The values of the EoS parameters at \(\rho_{0}\) and the corresponding properties of neutron stars obtained in these models show a significant variation. In this regard, we note that large-scale analyses [3] of experimental data from finite nuclei and heavy-ion collisions with various model calculations have provided reliable bounds on the incompressibility of symmetric nuclear matter, \(210\leq K_{0}\leq 260\) MeV [73; 74], the symmetry energy, \(28\leq e_{\rm sym}(\rho_{0})\leq 34\) MeV [75], from combined analysis of observational data, and a reasonable constraint on the slope of the symmetry energy, \(46\leq L\leq 106\) MeV [76; 77; 78], at the saturation density \(\rho_{0}\). However, other nuclear matter parameters are not constrained and exhibit wide variations even at the saturation density. The large set of models of different classes employed in the present study will predict different NS configurations and thus will allow us to perform the correlation analysis between the nuclear matter parameters and NS observables with more accuracy.
## VII Results and Discussions
In this section, we first discuss with a few selected nuclear EoSs how the results in the EMSG model for gravity differ from GR for the NS configurations due to modifications of the hydrostatic equilibrium. Thereafter we will focus on the correlation analysis between the nuclear matter parameters and properties of neutron stars composed of neutrons, protons, electrons and muons in \(\beta\)-equilibrium.
### Neutron star properties in EMSG theory
It is useful to estimate the effects of EMSG modifications to GR on the observational properties of neutron stars using three nuclear EoSs with diverse high-density behaviour. Figure 1 displays the mass-radius relations using three different representative EoSs: namely the relativistic NL3 [45; 46; 47] based on the RMF model, the relativistic BSR2 [51; 52] which is an extended version of RMF with non-linear meson-meson cross-couplings, and the non-relativistic Sly4 [64] based on the SHF approach. To explore the EMSG modifications to GR, one ensures that the magnitude of the parameter \(\alpha\) is such that it only induces perturbative changes in the structure of the NS compared to GR. To this end, we consider the maximum value of \(\alpha_{\rm max}\approx 10^{-37}\) cm\({}^{3}\)/erg as estimated in Ref. [7] from combined constraints (at the 68% confidence level) from \(M-R\) measurements of NSs in low-mass X-ray binaries. For the NL3, BSR2, Sly4 models we also find a similar upper bound on \(\alpha\), beyond which the NS structures change dramatically. The minimum value of \(\alpha<0\) is enforced by the NS conditions: \(dm/dr>0\) from the surface (\(r=R\)) to the center (\(r=0\)) of the star, \(dP/dr<0\) from the central pressure \(P_{c}\) to the surface value \(P=0\), as well as the stability criterion \(dM/d\rho_{c}\geq 0\), where the equality provides the maximum mass \(M_{\rm max}\) at the central energy density \(\rho_{c}\). From the TOV equations (15)-(16), the \(dm/dr>0\) condition is satisfied by \(\alpha\rho_{c}(1+8P_{c}/\rho_{c}+3P_{c}^{2}/\rho_{c}^{2})<1\) and the condition \(dP/dr<0\) enforces \(\alpha[6P+2P^{2}({\rm d}\rho/{\rm d}P)^{2}]<1\). In Fig. 1 we present mass-radius results with the minimum values of \(\alpha_{\rm min}=-(1.9,1.2,0.8)\times 10^{-37}\) cm\({}^{3}\)/erg determined using these stability conditions for the (NL3, BSR2, Sly4) EoSs. (However, in the correlation analysis involving several diverse sets of EoSs, we shall use the minimum value of \(\alpha_{\rm min}\simeq-10^{-38}\) cm\({}^{3}\)/erg [7], which can be obtained by inserting, in the stability condition for \(dm/dr>0\), the typical (lower) central values \(P_{c}/\rho_{c}\sim 0.2\) and \(\rho_{c}\sim 10^{37}\) erg cm\({}^{-3}\).)
For our choice of the three EoSs, the NL3 has the stiffest \(P-\rho\) variation and hence reveals the largest maximum mass \(M_{\rm max}\) and correspondingly the largest radius \(R_{\rm max}\). As compared to GR, the EMSG model, in general, causes an effective stiffening of the EoS at low densities and softening at high densities for \(\alpha>0\), and conversely for \(\alpha<0\). The maximum masses are found to remain almost unaffected, whereas the radii increase (decrease) appreciably for the maximum (minimum) values of \(\alpha\) employed here. For the constant positive value of \(\alpha\equiv\alpha_{\rm max}\approx 10^{-37}\) cm\({}^{3}\)/erg, the softest Sly4 has the largest increase in radius \(\Delta R\), whereas for \(\alpha<0\), the stiffest NL3 (with smallest \(\alpha_{\rm min}=-1.9\times 10^{-37}\) cm\({}^{3}\)/erg) exhibits the maximum decrease in \(\Delta R\). In fact, the maximum variation of the radius \(\Delta R\approx 0.6\) km is seen for the
Figure 2: The ratio \(P_{c}/\rho_{c}\) for central values of pressure and energy densities as a function of (a) neutron star mass \(M\), (b) moment of inertia \(I\), (c) Love number \(k_{2}\), (d) tidal deformability \(\Lambda\) in the NL3, BSR2, and Sly4 nuclear EoSs. The results are in GR and EMSG with \(\alpha\) parameters as given in Fig. 1.
Figure 1: Neutron star mass-radius curves in the BSR2, NL3, and Sly4 nuclear EoSs. The results are in GR (dotted lines) and in EMSG with a maximum value for \(\alpha\) parameter of \(\alpha_{\rm max}=10^{-37}\) cm\({}^{3}\)/erg (solid lines) and minimum values of \(\alpha_{\rm min}=-(1.9,1.2,0.8)\times 10^{-37}\) cm\({}^{3}\)/erg (dashed lines) that correspond to stable configuration stars in NL3, BSR2, Sly4 EOSs, respectively (see text for details). The contours and bands refer to \(M\)-\(R\) constraints from NICER measurements of PSR J0030+0451 [79] and PSR J0740+6620 [80], the pulsars PSR J0348+0432 [81] and PSR J1614-2230 [82; 83], the secondary component of the gravitational waves GW190814 with mass \(2.59^{+0.08}_{-0.09}M_{\odot}\)[84] (horizontal bands), and from the GW170817 event [26] (orange and grey contours).
\(M\sim 0.5M_{\odot}\) NS, relative to the GR calculation. These EMSG modifications can be understood by noting that the TOV equations can be represented by a single relevant dimensionless quantity \(P/\rho\). This translates to the dimensionless compactness parameter \(C=GM/Rc^{2}\) of a star given that a larger degenerate pressure \(P\) essentially leads to a larger star radius \(R\)[12]. As discussed in Sec. III, the corresponding ratio in the EMSG model turns out to be \(P_{\rm EMSG}/\rho_{\rm EMSG}=[1+8\rho P/(\rho^{2}+3P^{2})]^{-1}\). Finite limits can be placed at \(P/\rho=0\) (vacuum), \(P/\rho=1/3\) (ultra-relativistic Fermi gas: conformal bound) and \(P/\rho\leq 1\) (causality condition), which translate respectively to \(P_{\rm EMSG}/\rho_{\rm EMSG}\in(1,1/3,\leq 1/3)\). This implies from Eqs. (15), (16) that for \(\alpha>0\), EMSG stiffens the effective EoS below the conformal bound \(P/\rho=1/3\) and softens the effective EoS above the bound, and conversely for \(\alpha<0\). Figure 2(a) illustrates such a variation of the ratio \(P_{c}/\rho_{c}\) in the NS centers with the masses of the star sequence. Note that for each EoS shown, most of the stars in the sequence are confined within \(P_{c}<\rho_{c}/3\), leading to stiffening (softening) for positive (negative) values of \(\alpha\), and correspondingly predict larger (smaller) star radii. On the other hand, the stars at and near the maximum mass \(M_{\rm max}\) are located above the conformality bound \(P_{c}/\rho_{c}=1/3\) and well within the causality constraint \(P_{c}/\rho_{c}\leq 1\).
Profound implications may follow in EMSG theory for negative \(\alpha\). For example, various parametrizations of the nuclear EoSs strive to simultaneously describe the observational tidal deformability bound of \(\Lambda_{1.4}\leq 580\) of a canonical \(1.4M_{\odot}\) NS inferred from the GW170817 event [26] and the maximum mass bound \(M_{\rm max}\gtrsim 2M_{\odot}\). The current tension can be effectively addressed in EMSG (for \(\alpha<0\)), which predicts smaller radii (and thereby even smaller \(\Lambda\sim(R/M)^{5}\)) for low-mass neutron stars but is relatively insensitive to the maximum mass of NSs. Furthermore, a star of extremely small mass \(M=0.77^{+0.20}_{-0.17}M_{\odot}\) and radius \(R=10.4^{+0.86}_{-0.78}\) km is estimated within the supernova remnant HESS J1731-347 [85], which has posed the interesting possibility of exotic strange stars. We emphasize that even a pure nucleonic star, owing to its small radius in the EMSG strong-field gravity, can be an exciting viable alternative.
Figure 3 explores the EMSG effects on the NS observables: the moment of inertia \(I\), tidal Love number \(k_{2}\) and the tidal deformability \(\Lambda\) as a function of the compactness parameter \(C=M/R\) for each of the NL3, BSR2, Sly4 EoSs. The variation of these observables with the central pressure to central energy density ratio \(P_{c}/\rho_{c}\) for the star sequence is shown in Fig. 2(b)-(d). As discussed above, the dimensionless \(C\) naturally translates into a measure of the neutron star's pressure and energy at the center via the relation \(M/R\sim P_{c}/\rho_{c}\). The moment of inertia can be a useful probe of the EMSG effects since the dimensional relation \(I\propto MR^{2}\) gives relatively larger ranges from changes in the radius; moreover, the accuracy of radius estimations is largely limited by uncertainties. Although the moment of inertia depends on the underlying stiffness/softness of the EoS, one notices in Fig. 3(a) small effects of gravity on the variation of the moment of inertia with the dimensionless parameter \(M/R\) in any individual nuclear EoS. This can be traced from Fig. 2(b) to the subdued effect of EMSG on the ratio \(P_{c}/\rho_{c}\), irrespective of the choice of nuclear EoS.
On the other hand, the variation of the Love number \(k_{2}\) of Eq. (25) with compactness in Fig. 3(b) shows noticeable modifications in EMSG primarily near the \(k_{2}\) peak at \(C\approx 0.1\), which corresponds to \(M\approx 1M_{\odot}\). In contrast, \(k_{2}\) is found to be relatively independent of the details of the models of gravity as well as the EoSs at small compactness \(C\lesssim 0.05\), which is dominated by the large crustal radii of these small stellar masses. At large \(C\gtrsim 0.25\), near the maximum mass configurations, the values of \(k_{2}\) become much smaller than their peak values. Although \(k_{2}\) is seen here to be quite sensitive to the EoS, the EMSG modifications to GR in this strong-gravity regime are, however, smaller compared to the observed spread in \(k_{2}\) for \(1M_{\odot}\) stars.
Figure 3: Dependence of neutron star compactness parameter \(M/R\) on (a) moment of inertia \(I\), (b) dimensionless Love number \(k_{2}\), (c) dimensionless tidal deformability \(\Lambda\) and quadrupole polarizability \(\lambda\) (inset) in the NL3, BSR2, and Sly4 nuclear EoSs in the GR and EMSG as referred to in Fig 1.
Tidal fields from inspiraling binary neutron stars induce a quadrupole polarizability \(\lambda=(2/3)k_{2}R^{5}\) or a dimensionless tidal deformability \(\Lambda=(2/3)k_{2}(R/M)^{5}\), which may be sensitive to the models of gravity due to the \(R^{5}\) dependence. The inset of Fig. 3(c) depicts a larger variation of \(\lambda\) with \(M/R\) in the EMSG, especially near \(1M_{\odot}\) as seen at the \(k_{2}\)-peak. In contrast, a strong correlation between the dimensionless tidal deformability \(\Lambda\) and compactness (as well as between \(\Lambda-P_{c}/\rho_{c}\); see Fig. 2(d)) appears, irrespective of the models of gravity and the three representative EoSs. In fact, such a tight correlation was found between \(\Lambda_{1.4}\) and \(R_{1.4}\) for canonical \(1.4M_{\odot}\) pure nucleonic stars [86] and with a nucleon-quark phase transition [87], suggesting the possibility to constrain the radius and perhaps the symmetry energy \(e_{\rm sym}(\rho_{0})\).
### Correlation analysis between neutron star properties and nuclear matter parameters
In the following we shall explore the possible correlations between the NS observables (\(R\), \(I\), \(\Lambda\)) with the nuclear matter (NM) saturation parameters (\(K_{0}\), \(Q_{0}\), \(M_{0}\), \(J\), \(L\), \(K_{\rm sym}\)) and linear combination of two NM parameters (such as \(K_{0}+\beta L\), \(M_{0}+\eta L\), \(M_{0}+\zeta K_{\rm sym}\)), and their impact due to the EMSG theory. To facilitate the correlation study we include all the RMF, SHF and microscopic models described in section VI. Hereafter, we shall employ the fixed maximum and minimum values of the parameter \(\alpha\) estimated in the EMSG theory [7], \(\alpha_{\rm max}=10^{-37}\) cm\({}^{3}\)/erg and \(\alpha_{\rm min}=-10^{-38}\) cm\({}^{3}\)/erg for all the EoSs employed, which ensure stable configurations for all the neutron stars.
Before attempting such NS-NM correlations, it is instructive to employ the causality bound on the speed of sound squared, \(c_{s}^{2}=dP/d\rho\leq 1\), to impose limits on the maximum value of \(P_{c}/\rho_{c}\) at the center and its natural counterpart \(R_{\rm max}/M_{\rm max}\) for the superdense NS matter. Figure 4 displays the correlation between the central sound speed squared \(c_{s}^{2}\) and the reduced central pressure \(\widetilde{P}_{c}\equiv P_{c}/\rho_{c}\) from all the diverse EoSs in the EMSG theory for the parameter \(\alpha=10^{-37}\) cm\({}^{3}\)/erg (red solid symbols) and in GR (green open symbols). The central speed of sound for the maximum mass stars increases with the reduced central pressure, which is a measure of the stiffness of dense nuclear matter inside a NS. Reasonably good correlations are found, given that a broad class of EoSs is employed. The correlations are, however, found to be distinct in EMSG and GR, which is a direct consequence of the strong-field gravity. Compared to GR, a stiffer EoS in EMSG below the conformality bound \(P=\rho/3\) and a softer EoS above this bound for \(\alpha>0\) manifest in an increase in \(c_{s}^{2}\) at \(\widetilde{P}_{c}\lesssim 0.45\) and a reduced \(c_{s}^{2}\) at larger \(\widetilde{P}_{c}\). We find that the conformal bound (\(c_{s}^{2}\leq 1/3\)) appears to be violated at the central densities reached in all the stars. Also depicted in Fig. 4 are the linear regressions between \(c_{s}^{2}\) and \(P_{c}/\rho_{c}\) in EMSG (solid lines) and GR (dashed lines) with slope and intercept as:
\[c_{s}^{2} =(1.880\pm 0.120)\frac{P_{c}}{\rho_{c}}+(-0.113\pm 0.054),\ \ [{\rm EMSG}] \tag{33}\] \[c_{s}^{2} =(2.341\pm 0.102)\frac{P_{c}}{\rho_{c}}+(-0.304\pm 0.050),\ \ [{\rm GR }]. \tag{34}\]
Setting \(c_{s}^{2}=1\) in these fits enables us to place an upper bound on the reduced central pressure \(\widetilde{P}_{c}\) that is enforced by the causality requirement \(c_{s}^{2}\leq 1\). Our analysis suggests a central upper bound of \(\widetilde{P}_{c}\lesssim 0.592\) in EMSG and \(\widetilde{P}_{c}\lesssim 0.557\) for the effectively stiffer EoS in GR.
Figure 4: Correlation between speed of sound squared \(c_{s}^{2}\) and the ratio of central values of pressure and energy densities \(P_{c}/\rho_{c}\) corresponding to maximum mass stars in the RMF, SHF and microscopic models of EoS. The results are in EMSG with \(\alpha=10^{-37}\) cm\({}^{3}\)/erg (red solid symbols) and in GR (green open symbols). The lines represent the linear best-fit and the shaded regions correspond to 95% confidence band.
Figure 5: Same as Fig. 4 but for correlation between \(c_{s}^{2}\) and the ratio \(M_{\rm max}/R_{\rm max}\) for the maximum mass and corresponding radius of neutron stars.
The dependence of \(c_{s}^{2}\) on \(\widetilde{P}_{c}\) translates into its dependence on \(R_{\rm max}/M_{\rm max}\) for the maximum mass configurations at the NS centers, as displayed in Fig. 5. The intrinsic structures in the TOV equations, however, prevent a perfect dimensionless mapping, leading to some scatter in the correlation with the compactness parameter. In fact, some model-dependence is revealed, viz. the relatively stiffer EoSs in the relativistic mean field model generate stars with large \(M_{\rm max}\) but also with fairly large radius \(R_{\rm max}\), and thus yield less compact stars with smaller \(c_{s}^{2}\) as compared to those in the non-relativistic Skyrme-Hartree-Fock models. In general, the central speed of sound is found to increase with the compactness of the NSs [88]. The EMSG theory, which predicts slightly larger \(R_{\rm max}\) for \(\alpha>0\), has a smaller sound speed compared to GR. Also depicted in Fig. 5 are our constructed linear regressions between \(c_{s}^{2}\) and \(M_{\rm max}/R_{\rm max}\) with 95% confidence bands by accounting for the EoS scatter in the EMSG and GR as:
\[c_{s}^{2}=(6.259\pm 0.808)\frac{M_{\rm max}}{R_{\rm max}}+(-1.071 \pm 0.223),\ \ \mbox{[EMSG]} \tag{35}\] \[c_{s}^{2}=(9.995\pm 1.167)\frac{M_{\rm max}}{R_{\rm max}}+(-2.3172 \pm 0.347),\ \ \mbox{[GR]}. \tag{36}\]
From the causality condition, we obtain a central upper bound on the compactness of about \(C_{\rm max}\equiv M_{\rm max}/R_{\rm max}\lesssim 0.338\), which corresponds to a lower limit on the radius \(R_{\rm max}/{\rm km}\gtrsim 4.370M_{\rm max}/M_{\odot}\) in EMSG theory. Similarly, in GR, a maximum compactness of \(C_{\rm max}\lesssim 0.314\) translates to the radius bound of \(R_{\rm max}/{\rm km}\gtrsim 4.704M_{\rm max}/M_{\odot}\). These compactness bounds are much smaller than Buchdahl's upper limit \(C_{\rm max}^{\rm up}=4/9\)[89]. A direct comparison of our estimated radius bound can be made with the NICER observations for the PSR J0740+6620 [80] radius of about \(12.39^{+1.30}_{-1.98}\) km with a mass \(M\approx 2.08^{+0.07}_{-0.07}M_{\odot}\) and for the PSR J0030+0451 radius \(\approx 12.71^{+1.14}_{-1.19}\) km with a mass \(\approx 1.34^{+0.15}_{-0.16}M_{\odot}\)[79]. Clearly, our estimated lower bounds on the radii in the two models of gravity are well consistent with the NICER measurements. Conversely, constraints on the EoS variables can be obtained by using the NS measurements. For example, the central mass-radius values (\(M=2.08M_{\odot}\), \(R=12.39\) km) of PSR J0740+6620 yield, from Eq. (35), a central sound speed squared of \(c_{s}^{2}\approx 0.481\), which (from Eq. (33)) corresponds to a reduced central pressure of \(\widetilde{P}_{c}\approx 0.316\).
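The quoted numbers can be reproduced by simple arithmetic from the fits above, using \(GM_{\odot}/c^{2}\simeq 1.477\) km to form the dimensionless compactness (an illustrative check, not part of the original analysis):

```python
M_SUN_KM = 1.4766                 # GM_sun / c^2 in km
M, R = 2.08, 12.39                # PSR J0740+6620 central values (M_sun, km)
C = M * M_SUN_KM / R              # dimensionless compactness M/R ~ 0.248
cs2 = 6.259 * C - 1.071           # Eq. (35), EMSG central fit   -> ~0.481
Ptilde_c = (cs2 + 0.113) / 1.880  # inverted Eq. (33), EMSG      -> ~0.316
print(round(cs2, 3), round(Ptilde_c, 3))
```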
Various approximate universal relations connecting the NS observables, such as the compactness \(C=M/R\), dimensionless moment of inertia \(\widetilde{I}\equiv I/M^{3}\propto C^{-2}\), dimensionless tidal deformability \(\Lambda\propto C^{-5}\), have been established that are insensitive to the microscopic details of the high-density EoSs [22; 23; 14; 25]. It is useful
Figure 6: Correlation between \(P_{c}/\rho_{c}\) and several neutron star quantities: (a) compactness \(M/R\), (b) scaled moment of inertia \(I/M^{3}\), (c) tidal deformability in the RMF, SHF, and microscopic models of EoS. The results are in GR (green circles) and EMSG with \(\alpha=10^{-37}\) cm\({}^{3}\)/erg (magenta stars). The lines are fifth-order polynomial fits to the correlations.
to test and validate these relations with respect to our collection of EoSs and to the models of gravity as well. Figure 6 shows correlations of the reduced central pressure \(\widetilde{P}_{c}\equiv P_{c}/\rho_{c}\) with the dimensionless quantities \(C,~{}\widetilde{I},~{}\Lambda\) of the NSs. Remarkably tight correlations do exist that are largely insensitive to the EoSs and to the gravitational interactions. Measurements of these NS observables thus provide accurate estimates of the pressure at the fiducial densities, which can be invoked in the model EoSs to constrain nuclear interactions and the nuclear matter parameters. Polynomial fits up to fifth order of the form \(\ln\widetilde{P}_{c}=\sum_{i=0}^{5}a_{i}\mathcal{S}^{i}\), where \(\mathcal{S}\equiv(C,~{}\ln\widetilde{I},~{}\ln\Lambda)\), are shown in EMSG and GR.
To explore the impact of tidal deformability on the structure of a star in EMSG, we display in Fig. 7 the correlation between \(\Lambda_{1.4}\) and radius \(R_{1.4}\) for a star of \(M=1.4M_{\odot}\) computed for all the EoSs. The increase of \(R_{1.4}\) with \(\Lambda_{1.4}\) is simply due to the fact that \(\Lambda\) quantifies the variation of the gravitational field relative to a point-mass object. The proportionality of \(\Lambda\) to \(R^{5}\) results in a tight correlation, i.e. an approximate universal relation independent of the input EoSs. Interestingly, the increase in \(R\) for positive \(\alpha\) values (as seen in the \(M\)-\(R\) curves of Fig. 1) enforces distinct classes of universality for the EMSG and GR gravity models. In fact, the correlations can be expressed as \(\Lambda_{1.4}=\mathcal{A}\:R_{1.4}^{\xi}\), which are practically EoS-insensitive and reveal a small but finite dependence on the models of gravity, as evident from the parameters \(\mathcal{A}=8.37(9.67)\times 10^{-5}\) and \(\xi=6.15(6.12)\) for EMSG (GR). A bound on \(\Lambda_{1.4}=190^{+390}_{-120}\) at 90% confidence was extracted by LIGO-VIRGO from the observed binary neutron-star merger GW170817 event using Bayesian analysis with a common EoS for the compact binaries [90]. A more stringent constraint of \(R_{1.4}=12.9^{+0.8}_{-0.7}\) km and \(\Lambda_{1.4}=616^{+273}_{-158}\) at 90% credible level was estimated in GW190814 [84] from the coalescence of a massive \((22.2-24.3)M_{\odot}\) black hole and a compact object (assumed to be a NS) of mass \((2.50-2.67)M_{\odot}\), which can provide an intriguing opportunity to test modifications of GR due to the large asymmetry in the masses. Imposing the current observed bounds in Fig. 7 (grey shaded region), we find that the rather small sensitivity to the gravity models cannot be disentangled from the \(R_{1.4}-\Lambda_{1.4}\) relation. In this respect, we note that the higher multipole moments of the gravitational signal, which enable tests of the multipolar structure of gravity, do not show any deviations from the predictions of GR [84]. While our derived correlations are consistent with the GW190814 constraints, the bound clearly favours EoSs with soft symmetry energy \(e_{\rm sym}(\rho)\) at density \(\rho\sim 2\rho_{0}\) and rules out the super-stiff EoSs that predict large radii.
We next analyze the correlation between the neutron star bulk observables presented above with the key nuclear matter (NM) parameters of the EoS, namely \(K_{0}\), \(M_{0}\), \(L\), \(K_{\rm sym}\) and a few selected linear combinations of these parameters. In particular, we also explore the influence of the EMSG modifications to GR on these correlations. The Pearson correlation coefficient, \(\mathcal{C}[a,b]\), has been used for a quantitative analysis of a linear correlation between two quantities \(a\) and \(b\), which can be expressed as [91],
\[\mathcal{C}[a,b]=\frac{\sigma_{ab}}{\sqrt{\sigma_{aa}\sigma_{bb}}}\,, \tag{37}\]
Figure 8: Neutron star mass \(M\) dependence of the Pearson correlation coefficients \(\mathcal{C}\) between NS observables and nuclear EoS parameters within the RMF, SHF and microscopic models. The correlations involve NS radii \(R\) (top panels), moment of inertia \(I\) (middle panels), tidal deformability \(\Lambda\) (bottom panels), with the EoS parameters \(b\in(K_{0},L,M_{0},K_{\rm sym})\) and their linear combinations: \(K_{0}+\beta L\), \(M_{0}+\eta L\), \(M_{0}+\zeta K_{\rm sym}\). The results are in the EMSG gravity model with coupling parameter \(\alpha_{\rm min}=-10^{-38}\) cm\({}^{3}\)/erg (dashed lines) and \(\alpha_{\rm max}=10^{-37}\) cm\({}^{3}\)/erg (solid lines).
where the covariance, \(\sigma_{ab}\), is given by
\[\sigma_{ab}=\frac{1}{N_{m}}\sum_{i}a_{i}b_{i}-\left(\frac{1}{N_{m}}\sum_{i}a_{i} \right)\left(\frac{1}{N_{m}}\sum_{i}b_{i}\right). \tag{38}\]
The index \(i\) runs over the number of models \(N_{m}\) used in the analysis; \(a_{i}\) and \(b_{i}\) respectively refer to the NS properties (such as radius, moment of inertia, deformability) at a fixed mass, and the NM parameters in the EoS. A correlation coefficient \(\mathcal{C}[a,b]=\pm 1\) would suggest a perfect correlation/anticorrelation between the two quantities of interest, and \(\mathcal{C}[a,b]=0\) would indicate no correlation.
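The correlation measure of Eqs. (37)-(38) is straightforward to evaluate; a minimal sketch (illustrative only) for a set of model predictions, including a simple scan to optimize the coefficient of a two-parameter combination such as \(M_{0}+\eta L\), is:

```python
import numpy as np

def pearson(a, b):
    """Pearson coefficient of Eq. (37), with the covariance of Eq. (38)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.mean(a*b) - np.mean(a)*np.mean(b)
    return cov / np.sqrt((np.mean(a*a) - np.mean(a)**2) *
                         (np.mean(b*b) - np.mean(b)**2))

def best_linear_combination(a, M0, L, etas):
    """Scan eta to maximise |C[a, M0 + eta*L]| over a set of model EoSs."""
    M0, L = np.asarray(M0, float), np.asarray(L, float)
    corrs = [abs(pearson(a, M0 + eta*L)) for eta in etas]
    i = int(np.argmax(corrs))
    return etas[i], corrs[i]
```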
Figure 8 displays the NS mass dependence of the Pearson correlation coefficients between the NS quantities (\(R,\ I,\ \Lambda\)) and the EoS parameters in the EMSG model with coupling parameter \(\alpha_{\rm min}=-10^{-38}\ {\rm cm}^{3}/{\rm erg}\) (dashed lines) and \(\alpha_{\rm max}=10^{-37}\ {\rm cm}^{3}/{\rm erg}\) (solid lines), corresponding to the minimum and maximum estimated bounds [7]. Noticeable effects of the parameter \(\alpha\) on the correlation coefficients are seen. The isovector parameter \(L\), corresponding to the slope of the symmetry energy, induces a somewhat enhanced correlation due to the larger radius (smaller compactness \(M/R\)) for positive \(\alpha_{\rm max}\). In contrast, the correlations with the isoscalar parameters \(K_{0},\ M_{0}\) show the opposite dependence on \(\alpha\). On the other hand, for the isovector symmetry curvature \(K_{\rm sym}\), the correlation strengths for the positive and negative \(\alpha\) show an inversion at \(M\approx 1.2M_{\odot}\). Further, the low mass NSs exhibit much stronger sensitivity to \(L,\ K_{\rm sym}\) (characterized by large correlation coefficients), which gradually decreases with increasing NS mass, and eventually at \(M\gtrsim 1.4M_{\odot}\) the isoscalar parameters \(K_{0},\ M_{0}\) dominate the correlations. Such trends can be understood from the expression for the pressure in terms of the energy density (i.e. the EoS) when these are expressed in terms of the NM parameters [21]. The linear combinations \(K_{0}+\beta L\), \(M_{0}+\eta L\) and \(M_{0}+\zeta K_{\rm sym}\) indicate the strongest sensitivity to the NS observables over the entire mass range, wherein the correlations are designed to yield optimum values by tuning the coefficients \(\beta,\eta,\zeta\). This means that these combinations have a stronger correlation compared to that for the individual nuclear parameters. Interestingly, the correlation with \(M_{0}+\zeta K_{\rm sym}\), which shows an increasing trend with NS mass, has the largest value near the canonical \(1.4M_{\odot}\) star in the case of all the NS observables \(R,\ I,\ \Lambda\). For orientation, we have listed in Tables 1 and 2 the correlation coefficients of the NS quantities with the individual NM parameters and their linear combinations for a 1.4 solar mass NS.
In Fig. 9 we display the envisaged strong correlations of the NM parameters \(K_{0}+\beta L\), \(M_{0}+\eta L\) and \(M_{0}+\zeta K_{\rm sym}\) with the radii \(R_{1.4}\) (left panels) and tidal deformability \(\Lambda_{1.4}\) (right panels) for a \(1.4M_{\odot}\) NS
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
 & \multicolumn{2}{c}{\(K_{0}+\beta L\)} & \multicolumn{2}{c}{\(M_{0}+\eta L\)} & \multicolumn{2}{c}{\(M_{0}+\zeta K_{\rm sym}\)} \\
\cline{2-7}
 & \(\beta\) & \(\mathcal{C}\) & \(\eta\) & \(\mathcal{C}\) & \(\zeta\) & \(\mathcal{C}\) \\
\hline
\(R_{1.4}^{<}\) & 0.878 & 0.848 & 16.972 & 0.923 & 5.351 & 0.914 \\
\(R_{1.4}^{>}\) & 0.963 & 0.858 & 18.484 & 0.926 & 5.596 & 0.907 \\
\(\Lambda_{1.4}^{<}\) & 0.670 & 0.837 & 13.846 & 0.919 & 5.008 & 0.941 \\
\(\Lambda_{1.4}^{>}\) & 0.768 & 0.856 & 15.979 & 0.925 & 5.482 & 0.937 \\
\(I_{1.4}^{<}\) & 0.614 & 0.823 & 12.917 & 0.907 & 5.136 & 0.950 \\
\(I_{1.4}^{>}\) & 0.710 & 0.839 & 14.820 & 0.912 & 5.548 & 0.945 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Pearson correlation coefficients \(\mathcal{C}\) between the NS quantities and linear combinations of EoS parameters in the RMF, SHF, and microscopic models. The EMSG parameters and notations are the same as in Table 1.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
 & \(K_{0}\) & \(Q_{0}\) & \(M_{0}\) & \(J\) & \(L\) & \(K_{\rm sym}\) & \(K_{\tau}\) \\
\hline
\(R_{1.4}^{<}\) & 0.704 & 0.572 & 0.743 & 0.559 & 0.743 & 0.713 & \(-0.686\) \\
\(R_{1.4}^{>}\) & 0.698 & 0.554 & 0.728 & 0.576 & 0.762 & 0.717 & \(-0.693\) \\
\(\Lambda_{1.4}^{<}\) & 0.730 & 0.605 & 0.778 & 0.507 & 0.696 & 0.719 & \(-0.662\) \\
\(\Lambda_{1.4}^{>}\) & 0.729 & 0.572 & 0.757 & 0.521 & 0.732 & 0.736 & \(-0.670\) \\
\(I_{1.4}^{<}\) & 0.729 & 0.609 & 0.781 & 0.451 & 0.673 & 0.731 & \(-0.625\) \\
\(I_{1.4}^{>}\) & 0.724 & 0.582 & 0.761 & 0.473 & 0.706 & 0.745 & \(-0.635\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Pearson correlation coefficients \(\mathcal{C}\) between the parameters in the RMF, SHF, microscopic nuclear models and the radii \(R_{1.4}\), moment of inertia \(I_{1.4}\) and tidal deformability \(\Lambda_{1.4}\) of a \(1.4M_{\odot}\) mass NS. The nuclear matter EoS parameters are the incompressibility \(K_{0}\), skewness \(Q_{0}\), slope of incompressibility \(M_{0}\), symmetry energy \(J\), its slope \(L\), and curvature \(K_{\rm sym}\), and the parameter \(K_{\tau}\), all calculated at the saturation density. The correlations are calculated in the EMSG model with coupling parameter \(\alpha_{\rm min}=-10^{-38}\ {\rm cm}^{3}/{\rm erg}\) and \(\alpha_{\rm max}=10^{-37}\ {\rm cm}^{3}/{\rm erg}\), and denoted respectively by superscript \(<,>\) on the NS quantities.
in the EMSG theory. Such strong correlations can be traced essentially to the increase in the NS radii with the increase of isoscalar and symmetry energy pressures at \(\rho\sim(1.5-2)\rho_{o}\)[92]. Correspondingly, the stiffer effective EoS for \(\alpha>0\) at this density range leads to a smaller correlation strength relative to \(\alpha<0\), as evident from the linear regression fits to these correlations. For \(\alpha_{\rm max}=10^{-37}\) cm\({}^{3}\)/erg, the constructed linear regressions (solid lines) can be represented as:
\[\begin{split}& K_{0}+\beta L=(36.42\pm 3.36)R_{1.4}+(-167.01\pm 44.51),\\ & M_{0}+\eta L=(762.49\pm 47.82)R_{1.4}+(-6045.71\pm 632.56),\\ & M_{0}+\zeta K_{\rm sym}=(698.48\pm 50.02)R_{1.4}+(-6656.65\pm 661.64),\\ & K_{0}+\beta L=(0.10\pm 0.01)\Lambda_{1.4}+(232.44\pm 6.80),\\ & M_{0}+\eta L=(2.14\pm 0.13)\Lambda_{1.4}+(2352.40\pm 101.82),\\ & M_{0}+\zeta K_{\rm sym}=(2.16\pm 0.12)\Lambda_{1.4}+(1057.95\pm 93.13).\end{split} \tag{39}\]
Here \((K_{0},L,M_{0},K_{\rm sym})\) are in units of MeV and \(R_{1.4}\) is in km. We use the above set of relations (and the coefficients \(\beta,\eta,\zeta\) listed in Table 2) in conjunction with the GW190814 bound on \(R_{1.4}=12.9^{+0.8}_{-0.7}\) km and \(\Lambda_{1.4}=616^{+273}_{-158}\)[84] to estimate the nuclear matter parameters. We utilize the quite accurately constrained nuclear matter incompressibility at the saturation density, \(K_{0}=240\pm 20\) MeV, obtained from analysis of isoscalar giant monopole resonance (ISGMR) collective excitations in \({}^{90}\)Zr and \({}^{208}\)Pb nuclei [73; 74; 93]. The central values are estimated to be (\(L=65.22\), \(M_{0}=2585.15\), \(K_{\rm sym}=-41.35\)) MeV for the \(R_{1.4}\) constraint, and (\(L=70.36\), \(M_{0}=2546.36\), \(K_{\rm sym}=-28.79\)) MeV for the \(\Lambda_{1.4}\) constraint. The obtained values of the slope of symmetry energy are in line with \(L=(50.0\pm 15.5)\) MeV extracted from available nuclear masses of heavy nuclei [76], as well as the reported values of \(L=(106\pm 37)\) MeV [77] and \(L=(54\pm 8)\) MeV [78] from analysis of neutron skin thickness measurements of \({}^{208}\)Pb by the PREX-II experiment. Further, our estimated slope of the incompressibility \(M_{0}\) is consistent with the empirical constraint \(M_{0}=(1800-2400)\) MeV determined by comparing Skyrme-like energy density functionals and the energies of the ISGMR in \({}^{132}\)Sn and \({}^{208}\)Pb nuclei [94; 95]. Our estimate of the curvature parameter of symmetry energy lies well within the present fiducial value \(K_{\rm sym}=-107\pm 88\) MeV obtained from combined analysis of NS observables of the GW170817 signal [22; 96; 23], energy density functionals constrained by terrestrial experiments and observational data [97], and metamodeling of the nuclear EoS with these constraints [98].
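As a concrete check of how the regression relations translate the GW190814 bounds into NM parameters, the short numpy-free sketch below inverts the first relation of Eq. (39) for \(L\), using \(K_{0}=240\) MeV, the central value \(R_{1.4}=12.9\) km, and the coefficient \(\beta=0.963\) from Table 2 (row \(R_{1.4}^{>}\), i.e. \(\alpha_{\rm max}\)); the numbers are taken directly from the text and tables above.

```python
# Invert K0 + beta*L = a*R14 + b for L (EoS-parameter values in MeV, R14 in km).
a, b = 36.42, -167.01        # slope and intercept of the first relation in Eq. (39)
beta = 0.963                 # Table 2, row R_{1.4}^{>} (alpha_max)
K0, R14 = 240.0, 12.9        # ISGMR incompressibility and GW190814 central radius

L = (a * R14 + b - K0) / beta
print(f"L = {L:.2f} MeV")    # ~65.2 MeV, matching the quoted central value
```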
Likewise, the linear regressions for correlations with the EMSG parameter value \(\alpha_{\rm min}=-10^{-38}\) cm\({}^{3}\)/erg (dashed lines in Fig. 9) can be utilized to extract the NM parameters from the GW190814 constraints. The deviations from \(\alpha_{\rm max}\) are found to be at the level of about 18% for \(L\) and \(\sim 32\%\) for \(M_{0}\). The symmetry energy curvature \(K_{\rm sym}\) was found to have a large sensitivity to the \(\alpha\) parameter. Combining all these results, our estimated central values are found to be \(65.22\lesssim L\lesssim 77.88\) MeV, \(1951.32\lesssim M_{0}\lesssim 2589.12\) MeV, and \(-41.35\lesssim K_{\rm sym}\lesssim 117.49\) MeV. Although the EMSG theory suggests different classes of approximate universal relations for various \(\alpha\), the nuclear matter parameters obtained are within the current bounds from various model analyses of terrestrial and observational measurements.
## VIII Conclusions
Using a representative set of accurately calibrated models of nuclear equations of state, we have investigated within the energy-momentum squared gravity theory (EMSG: a non-minimal matter-coupling extension to general relativity) the impact of strong-field gravity on several properties of dense neutron stars. In particular, correlations between nuclear matter parameters at saturation density and the neutron star observables were studied in the EMSG theory to ascertain the effectiveness of the theory and to quantify its modifications to the predictions in general relativity (GR). By using three realistic EoSs (NL3, BSR2, SLy4), we first showed that for a _fixed value_ of the coupling strength \(\alpha\) in EMSG, the NS mass-radius curves are affected differently as compared to GR predictions. The softest EoS, SLy4, enforces the largest increase in the radii, especially of low-mass stars, for the positive \(\alpha\) value. In contrast, the stability conditions, \(dm/dr>0\) and \(dP/dr<0\) (from center to surface of the NS), allow the smallest \(\alpha<0\) in the stiffer NL3 EoS and correspondingly provide the largest decrease in the radii near the \(1M_{\odot}\) neutron star. While the variations of the NS compactness \(C=M/R\) with the moment of inertia and tidal deformability, in particular, are quite small, the peak value of the tidal Love number \(k_{2}\) was found to have appreciable modifications relative to GR in the EMSG model.
We next explored the correlations between the NS observables and nuclear matter EoS parameters in EMSG and GR. An approximate universal correlation, independent of the nuclear model EoSs, was established for the variation of the central speed of sound squared \(c_{s}^{2}\) with the reduced pressure \(\widetilde{P}_{c}\equiv P_{c}/\rho_{c}\) and its natural transform, the compactness \(C_{\rm max}\equiv M_{\rm max}/R_{\rm max}\), at the center of the stars. We found that \(c_{s}^{2}\) increases linearly with \(\widetilde{P}_{c}\) and \(C_{\rm max}\). However, the universality is violated to some extent by the strong-field gravity, which induces distinct correlations for different values of the parameter \(\alpha\) in EMSG. For instance, the causality bound on the NS mass-radius curve in EMSG suggested a lower limit on the star radius of \(R_{\rm max}/{\rm km}\gtrsim 4.37M_{\rm max}/M_{\odot}\), in direct contrast to the GR bound of \(R_{\rm max}/{\rm km}\gtrsim 4.70M_{\rm max}/M_{\odot}\).
We also demonstrated that gravity modifications have marginal effects on the universal relations of the compactness, \(\widetilde{I}\), and tidal deformability with the reduced central pressure, and thus \(\widetilde{P}_{c}\) could be inferred from measurements of NS properties. A truly universal correlation within the realms of current observational bounds was found between the measurable radius and tidal deformability of a \(1.4M_{\odot}\) NS; it is practically EoS-insensitive and shows only a marginal separation between the different classes of universality in EMSG and GR. The correlations of the nuclear matter incompressibility \(K_{0}\), its slope \(M_{0}\), the symmetry energy slope \(L\) and curvature \(K_{\text{sym}}\), and their linear combinations with the NS radii and deformability revealed the previously studied behaviour. These relations were found to be quite sensitive to the EMSG theory of gravity, which may hinder a precise estimation of the nuclear matter parameters from these correlations. To conclude, our study emphasizes that certain neutron star observables are insensitive to nuclear EoSs and gravity modifications and can be employed as approximate universal relations to determine the EoS parameters, whereas small, yet detectable, signatures of gravity effects are evident in some neutron star observables.
###### Acknowledgements.
N.A. and S.P. acknowledge financial support by the Department of Atomic Energy (Government of India) under Project Identification No. RTI 4002.
|
2309.09311 | Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal
Intervention | Many studies focus on improving pretraining or developing new backbones in
text-video retrieval. However, existing methods may suffer from the learning
and inference bias issue, as recent research suggests in other
text-video-related tasks. For instance, spatial appearance features on action
recognition or temporal object co-occurrences on video scene graph generation
could induce spurious correlations. In this work, we present a unique and
systematic study of a temporal bias due to frame length discrepancy between
training and test sets of trimmed video clips, which is the first such attempt
for a text-video retrieval task, to the best of our knowledge. We first
hypothesise and verify the bias on how it would affect the model illustrated
with a baseline study. Then, we propose a causal debiasing approach and perform
extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2,
and MSR-VTT datasets. Our model overpasses the baseline and SOTA on nDCG, a
semantic-relevancy-focused evaluation metric which proves the bias is
mitigated, as well as on the other conventional metrics. | Burak Satar, Hongyuan Zhu, Hanwang Zhang, Joo Hwee Lim | 2023-09-17T15:58:27Z | http://arxiv.org/abs/2309.09311v1 | # Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention
###### Abstract
Many studies focus on improving pretraining or developing new backbones in text-video retrieval. However, existing methods may suffer from the learning and inference bias issue, as recent research suggests in other text-video-related tasks. For instance, spatial appearance features on action recognition or temporal object co-occurrences on video scene graph generation could induce spurious correlations. In this work, we present a unique and systematic study of a temporal bias due to frame length discrepancy between training and test sets of trimmed video clips, which is the first such attempt for a text-video retrieval task, to the best of our knowledge. We first hypothesise the bias and verify how it affects the model, illustrated with a baseline study. Then, we propose a causal debiasing approach and perform extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2, and MSR-VTT datasets. Our model outperforms the baseline and SOTA on nDCG, a semantic-relevancy-focused evaluation metric which shows that the bias is mitigated, as well as on the other conventional metrics.1
Footnote 1: [https://buraksatar.github.io/FrameLengthBias/](https://buraksatar.github.io/FrameLengthBias/)
## 1 Introduction
In text-video retrieval, nowadays, the state-of-the-art models [1, 2, 3, 4, 5, 6, 7] can achieve promising performance on famous benchmarks [5, 6, 7]. However, recent studies [5, 6, 7] demonstrate that many existing visual-text models are overly affected by superficial correlations. For instance, some works [1, 2, 3] address the static appearance bias for action recognition. While [1] focuses on object co-occurrences that bring spurious correlations specifically in the spatial domain, [5, 6] examine the same topic in the temporal domain. Some other works reveal the correlation between the start-end time of the actions and the actions themselves in untrimmed videos on video moment retrieval [5, 7] and temporal sentence grounding [5, 6] tasks. Unlike these studies, we focus on a temporal bias that has yet to be addressed in text-video-related tasks. Frame length discrepancy between training and test sets of trimmed video clips causes non-relevant retrieved items.
For example, in the case of text-to-video retrieval, as shown in Figure 1, the top twenty retrieved clips' average frame length is similar to the training class's average frame length, indicating that some irrelevant clips are retrieved just because of the bias coming from the discrepancy. We refer to 'class' as a joint combination of 'verb class' and 'noun class' by considering the verb and noun tokens together. Some classes can be semantically similar. For instance, 'take' and 'pick up' would be in the same verb class. Thus, we use the notion of class to calculate a matrix measuring semantic relevancy among verbs and nouns. In addition, we utilise it to identify the biases in Figure 2, showing the discrepancy. Only one recent work [], which is closer to our approach, attempts to mitigate video duration bias in watch-time prediction for video recommendation. However, the proposed model uses the video duration as textual input and does not consider any visual feature via any visual sampling method. They apply causal inference based on video duration while the discrepancy in video duration is not considered, and it is followed by a pretext task of watch-time prediction to increase the effect.
To address overlooked frame length bias in text-video retrieval, we first apply baseline debiasing methods, which delete either the shortest or longest video clips in a class to reduce the discrepancy between train and test sets. However, the effect is limited. Then, we intervene in the causal graph to remove the frame length's unwanted impact by applying
Figure 1: a) Motion semantics may differ between long and short video clips when the frames are uniformly sampled. If it is unbalanced between training/test sets, this may introduce frame length bias. b) AFLC denotes the average frame length of a class, meaning <verb, noun> pairs (classes) which affect the retrieved clips. We propose a novel causal intervention method to remove this spurious correlation.
Figure 2: The figures show the discrepancy among all the <verb, noun> pairs (classes) in each dataset, which is calculated by the average frame length difference between the training and test sets. The number of pairs in the X-axis is 1,144 and 125, respectively. See the Supplementary Material for more details regarding verifying the bias in the three datasets.
the backdoor adjustment principle []. Specifically, we divide the training data into splits regarding frame length; for each split, we learn a similarity matrix using the same text-video retrieval model. Then, we sum the similarity matrices. Note that we also consider the discrepancy within the splits regarding frame length to increase the debiasing effect. The contributions of this paper are threefold: **i)** To the best of our knowledge, we are the first to address a temporal bias in text-video retrieval tasks and also the first to address frame length bias in any text-video-related tasks. We verify the bias and illustrate it with various methods. **ii)** We propose a causal inference approach via backdoor adjustment to mitigate the frame length bias. **iii)** The experiments and ablation study verify the advantages of the proposed approach over the baseline and SOTA studies by evaluating retrieved clips semantically via normalized Discounted Cumulative Gain (nDCG) as well as Recall and mAP.
## 2 Related Work
**Text-Video Retrieval.** In text-video retrieval, which aims to rank samples in a modality given another modality, deep learning-based approaches have emerged as promising techniques due to their ability to learn high-level features directly from the data. One popular method is to encode text and video features into a common space [], [], where the similarity can be measured using various distance metrics. Another approach is to utilise the semantic relationships between text and video features [], [], []. For instance, Chen _et al_. [] use semantic role labelling to capture the relationship between verbs and nouns in text and actions and objects in videos. Besides, Falcon _et al_. [] implements a positive and negative sampling strategy based on semantic similarities between verb and noun pairs. Recent models based on visual transformers have shown promising results with the help of pre-training on giant datasets []. For example, Bain _et al_. [] use raw video frames rather than extracted features and apply attention mechanisms for pre-training on various exocentric video datasets. On top of this work, Lin _et al_. [] pre-train the modified model on an enormous egocentric dataset curated from Ego4D []. Nevertheless, further research is needed to address existing biases in the task.
**Biases in Video-Language.** Recent studies have highlighted the presence of biases in video-language tasks, which can affect the performance of models since the models can rely on spurious correlations in the data rather than genuine causal relationships. For instance, temporal [] and spatial [],
example, whereas a sentence could define global features, local features are represented by words that refer to actions and entities. We establish the connection between actions and entities as \(r_{ij}\), where \(i\) denotes action nodes and \(j\) denotes entity nodes. Subsequently, the semantic role matrix \(W_{r}\), which is designed to accommodate various semantic roles, is multiplied with the initialised node embeddings, \(g_{i}^{0}=g_{i}\odot W_{r}r_{ij}\), such that \(g_{i}\in\{g_{e},g_{a},g_{o}\}\). The one-hot vector \(r_{ij}\) indicates the edge type from node \(i\) to node \(j\), while \(\odot\) signifies element-wise multiplication. Then, a graph-attention network is employed to process adjacent nodes. The \(W_{t}\) matrix, which is utilised for all relationship varieties, exploits the attended nodes, as shown in Eq. 1. When attention is applied to each node, the result is referred to as \(\beta\). Once these formulas are applied, we obtain the textual representations for global and local features \(c_{i}\in\{c_{e},c_{a},c_{o}\}\), for the sentence node, verbs, and words, respectively.
\[g_{i}^{l+1}=g_{i}^{l}+W_{t}^{l+1}\sum_{j\in N_{i}}\beta_{ij}(g_{j}^{l}) \tag{1}\]
**Video Encoding.** Disentangling videos into hierarchical features can be challenging, although it is comparatively simple to parse language queries into hierarchical features. To this end, we employ three distinct video embeddings that concentrate on different levels of video aspects. Given a video \(V\), represented as a sequence of frame-wise features \(\{f_{1},\ldots,f_{M}\}\), we apply different weights to generate embeddings for three different levels, which are then incorporated with a soft attention mechanism.
\[v_{x}=\sum_{i=1}^{M}W_{x}^{v}f_{i},\qquad x\in\{e,a,o\} \tag{2}\]
**Text-Video Matching.** The matching score is computed by averaging the cosine similarities between the video and textual embeddings. We use the contrastive ranking loss [], which encourages the similarity of positive pairs to exceed that of negative pairs by a predetermined margin during training. Suppose \(v\) and \(c\) symbolise visual and textual representations; positive and negative pairs can be formulated as \((v_{p},c_{p})\) and \((v_{p},c_{n})\) / \((v_{n},c_{p})\), respectively. A pre-set margin \(\Delta\) is used in the contrastive loss.
\[s(V,C)=\sum_{i=1}^{3}\frac{<v_{i},c_{i}>}{||v_{i}||_{2}||c_{i}||_{2}} \tag{3}\]
\[L(v_{p},c_{p})=[\Delta+s(v_{p},c_{n})-s(v_{p},c_{p})]_{+}+[\Delta+s(v_{n},c_{p})-s(v_{p},c_{p})]_{+} \tag{4}\]
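To make the matching objective concrete, a minimal PyTorch sketch of the level-wise cosine similarity (Eq. 3) and the margin-based contrastive loss (Eq. 4) with in-batch negatives is given below; the tensor shapes and the negative-sampling strategy are illustrative assumptions rather than the exact training code of the model.

```python
import torch
import torch.nn.functional as F

def similarity(v_levels, c_levels):
    """Eq. 3: sum of cosine similarities over the three levels (event/action/entity).
    v_levels, c_levels: lists of three tensors of shape (batch, dim)."""
    return sum(F.cosine_similarity(v, c, dim=-1) for v, c in zip(v_levels, c_levels))

def contrastive_ranking_loss(v_levels, c_levels, margin=0.2):
    """Eq. 4 with in-batch negatives: matched (video, caption) pairs should score
    higher than mismatched pairs by at least `margin` (hinged at zero)."""
    # Pairwise score matrix: scores[i, j] = similarity(video_i, caption_j).
    scores = sum(
        F.normalize(v, dim=-1) @ F.normalize(c, dim=-1).t()
        for v, c in zip(v_levels, c_levels)
    )
    pos = scores.diag().unsqueeze(1)                                    # s(v_p, c_p)
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    loss_c = (margin + scores - pos).clamp(min=0).masked_fill(mask, 0)      # s(v_p, c_n)
    loss_v = (margin + scores - pos.t()).clamp(min=0).masked_fill(mask, 0)  # s(v_n, c_p)
    return (loss_c + loss_v).mean()
```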
### Baseline Debiasing Method
```
1:\(V\leftarrow\) videos in ascending order based on the frame length
2:\(x\leftarrow\) average frame length of the class in the training set & \(y\leftarrow\) average frame length of the class in the test set
3:while\(y\geq x+\delta\)do
4: Delete the first (shortest) clip \(v_{0}\) from the training set and update \(x\)
5:if len(V) \(\leq\alpha\)then
6: break;
7:endif
8:endwhile
```
**Algorithm 1** Delete the shortest clips. For each class in the common class set:
We can naively remove this bias by the following two methods. In the first method, _RmvOne_, we delete the shortest and longest class samples so that the training set's average frame length becomes similar to the test set's for only one class. Note that the class notion refers to <verb, noun> pairs to group the captions semantically. However, this method affects only a few samples and thus barely changes the evaluation metrics. Thus, another simple method, _RmvAll_, can be suggested. We do the same as in _RmvOne_, but considering all classes, such that the high discrepancy will be reduced across the whole dataset to a pre-set margin \(\delta\). We set the minimum number of video clips of a class in training as \(\alpha\) so that there are enough samples for each class. It aims to reduce discrepancies between training and test sets for the same classes. Specifically, Algorithm 1 presents the procedure for the case where the test-set average is higher than the training-set average, removing the shortest clips. The same logic applies in the opposite case, in which the longest clips are deleted.
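A compact Python sketch of the _RmvAll_ procedure (Algorithm 1 together with its mirror case) might look as follows; the data structures (a per-class list of training clip lengths and pre-computed test-set averages) are assumptions for illustration.

```python
def rmv_all(train_clips, test_avg, delta=10, alpha=60):
    """train_clips: {class_id: list of training clip frame lengths}.
    test_avg:    {class_id: average frame length of that class in the test set}.
    Removes the shortest (or longest) training clips per class until the
    train/test average frame lengths differ by less than `delta`, keeping at
    least `alpha` clips per class."""
    removed = {}
    for cls, lengths in train_clips.items():
        lengths = sorted(lengths)
        dropped = []

        def avg(xs):
            return sum(xs) / len(xs)

        while len(lengths) > alpha and abs(test_avg[cls] - avg(lengths)) >= delta:
            if test_avg[cls] > avg(lengths):      # test clips are longer on average
                dropped.append(lengths.pop(0))    # drop the shortest training clip
            else:                                 # test clips are shorter on average
                dropped.append(lengths.pop(-1))   # drop the longest training clip
        train_clips[cls] = lengths
        removed[cls] = dropped
    return removed
```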
### Method with Causal Intervention
Many works use extracted features that are uniformly sampled in order to remove the effect of frame length. However, these features may still contain bias due to the sparsity or density of the sampled frames, as shown in Figure 1. Thus, the model learns whether an action appears dense or sparse rather than its motion semantics. The ideal case would be to have all the video clips at the same length. This is not just impractical but would also not reflect real-world applications. For example, while some actions take more time, others take less time intrinsically. Thus, while we need to keep this natural connection, we should remove the spurious correlation on video features that would occur because of the discrepancy in terms of frame length. Figure 3 shows our structural causal model (SCM) to illustrate how our model works. V, Q, Y and L denote video representation, textual representation, text-video matching and frame length, respectively. The link from (V, Q) to Y is for capturing the similarity between the textual and visual features. The link from L to Y signifies the frame length effect on similarity, suggesting that while some actions can take less time, others would take more time. Moreover, the link from L to V implies that frame length would affect the video encoder such that various videos could be retrieved not because of their semantic similarity to the query but because of their frame length. If this bias is not addressed, densely sampled video features would be memorised if the training set contains mostly shorter clips than the testing set.
Figure 4 shows the implementation of two splits for a dataset based on the frame length. As a high-level idea, we follow the principle of backdoor adjustment to remove bias by splitting the dataset based on frame length. We formalise our causal method in Formula 5 by using the law of iterated expectations. Note that L becomes independent of (V, Q) under the intervention. As shown in the last row of the formula, the final estimation can be created by individually estimating \(P(L)\) and \(E[Y|V,Q,L]\) and then combining those estimates. We divide the training samples into \(M\) equal portions based on frame length to cut off the link, discretising the \(P(L)\) distribution into separate components. These frame length groups are denoted by
Figure 3: Structural causal model.
\(\{L_{k}\}_{k=1}^{M}\). We estimate the deconfounded model via this approximation. Note that \(f_{k}(v,q)\) is the similarity score for each frame length group \(L_{k}\).
\[\begin{split} E[Y|do(V,Q)]&=\sum_{l}P(L=l|V,Q)\,E[Y|V,Q,L=l]\\ &=\sum_{l}P(L=l)\,E[Y|V,Q,L=l]\\ &\approx\sum_{k=1}^{M}P(L_{k})\,E[Y|V,Q,L\in L_{k}]\\ &\triangleq\sum_{k=1}^{M}P(L_{k})\,f_{k}(V,Q)\end{split} \tag{5}\]
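In practice, the intervention in Eq. (5) amounts to training one copy of the retrieval model per frame-length split and combining the resulting similarity matrices, weighted by the split probabilities \(P(L_{k})\). A schematic PyTorch-style sketch is shown below; `RetrievalModel`, the split threshold, and the empirical-frequency weighting are illustrative assumptions, not the exact implementation.

```python
import torch

def split_by_frame_length(dataset, threshold):
    """Partition training samples into two splits around a frame-length threshold
    (e.g. the mean frame length of the test set, as used in this work)."""
    short = [s for s in dataset if s["num_frames"] <= threshold]
    long_ = [s for s in dataset if s["num_frames"] > threshold]
    return [short, long_]

def deconfounded_similarity(models, video_feats, text_feats, split_sizes):
    """Eq. (5): sum_k P(L_k) * f_k(V, Q), where f_k is the similarity matrix
    produced by the model trained on split k."""
    probs = torch.tensor(split_sizes, dtype=torch.float)
    probs = probs / probs.sum()
    sim = 0.0
    for p, model in zip(probs, models):
        sim = sim + p * model.similarity(video_feats, text_feats)
    return sim

# Sketch of usage: one RetrievalModel (hypothetical class) per split.
# splits = split_by_frame_length(train_set, threshold=test_mean_frames)
# models = [train(RetrievalModel(), split) for split in splits]
# sim = deconfounded_similarity(models, V, Q, [len(s) for s in splits])
```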
## 4 Experiments
**Datasets.** We use three datasets for our experiments. **i)**_Epic-Kitchens-100 (EK-100)_[**B**], a collection of unscripted egocentric action data gathered worldwide using wearable cameras. The annotated videos display diverse daily kitchen activities, accompanied by captions provided by human annotators that include at least one verb and one or more nouns. The dataset comprises 67,217 training and 9,668 test set pairs. **ii)**_YouCook2_[**B**, **B**], which is from cooking-related videos via third-person viewpoint collected from YouTube with 89 different recipes. The video clips are recorded from a third-person viewpoint within diverse kitchen settings. Imperative English sentences and temporal boundaries referencing the actions are used to label the video clips, and human annotators are used. There are 10,337 pairs in the training set and 3,492 pairs in the test set. **iii)**_MSR-VTT dataset_[**B**] comprises 10,000 video clips with 20 descriptions for each video, a combination of human annotation and a commercial video search engine. The dataset offers several train/test splits, with one of the most popular ones being the 1k-A split, consisting of 9,000 clips for training and 1,000 clips for testing. The full split, which consists of 6,513 video clips for training, 2,990 video clips for testing, and 497 video clips for validation, is another often-used split.
**Implementation details.** We use the video features that TBN [**B**] has extracted for Epic-Kitchens-100. Each video clip included RGB, flow, and audio features. Note that we
Figure 4: The implementation of the causal model for training. Similarity matrices are constructed using the same retrieval model on different splits that are arranged with a causal perspective to mitigate frame length bias. Then, they are summed up. No change is needed for the inference.
use the frame itself rather than extracted features when replicating a SOTA work, EgoVLP. We utilise S3D features from _Li et al._ [L] pretrained on HowTo100M [L] for YouCook2. Since the test set is not made available to the public, we feed our model with the validation dataset for evaluation in accordance with other studies. For MSR-VTT, appearance-level features of the ResNet-152 model provided by Chen _et al._ [L] are implemented. The number of epochs is set to 100 for all datasets. \(\Delta\) is determined as 0.2 by following the baseline model. \(\delta\) and \(\alpha\) are chosen as 10 and 60fps, respectively, for our baseline debiasing method. For the SOTA methods RAN and RANP, the negative and positive sampling thresholds are selected as 0.75 and 0.20, respectively. We report the best results out of three repetitions.
**Evaluation metrics.** We use the nDCG [L] by considering non-binary similarity to show how the bias affects various retrieved video clips and how the causal model mitigates the bias. Given a caption query \(q_{i}\) and a ranked list of video clips \(X_{r}\), it [L] is defined as \(nDCG(q_{i},X_{r})=\frac{DCG(q_{i},X_{r})}{IDCG(q_{i},X_{r})}\). Then, DCG is calculated as \(DCG(q_{i},X_{r})=\sum_{j=1}^{N_{r}}\frac{R(q_{i},x_{j})}{\log_{2}(j+1)}\), where the ranking list only considers the first \(N_{r}\) items and \(x_{j}\) is the \(j\)-th item in the list \(X_{r}\). \(IDCG\) is the DCG of the ideal case where \(X_{r}\) is ordered by relevance. R, the relevancy matrix, is between 0 and 1 and represents the mean Intersection over Union (IoU) for the verb and noun classes. We follow [L] to define the R matrix between a caption \(q_{i}\) and a video \(x_{j}\) by averaging the IoU of the verb and noun classes. While \(q_{i}^{V}\) refers to the collection of verb classes in the caption, \(x_{j}^{N}\) denotes the set of noun classes in the video clip.
\[R(q_{i},x_{j})=\frac{1}{2}\Bigg{(}\frac{|q_{i}^{V}\cap x_{j}^{V}|}{|q_{i}^{V}\cup x_{j}^{V}|}+\frac{|q_{i}^{N}\cap x_{j}^{N}|}{|q_{i}^{N}\cup x_{j}^{N}|}\Bigg{)} \tag{6}\]
By using the same logic, \(nDCG\) is defined for a query video \(x_{i}\) and a set of captions \(C_{r}\). We follow the scripts provided by [L] to create the relevancy matrices for the datasets. We utilise the mean average precision (mAP) and Recall (R@k) for a fair comparison.
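A small numpy sketch of the nDCG computation with the IoU-based relevancy of Eq. (6) is given below; the class-set inputs and the example ranking are simplified assumptions for illustration.

```python
import numpy as np

def relevancy(query_verbs, query_nouns, item_verbs, item_nouns):
    """Eq. (6): mean IoU of the verb-class and noun-class sets."""
    def iou(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0
    return 0.5 * (iou(query_verbs, item_verbs) + iou(query_nouns, item_nouns))

def ndcg(relevances_ranked):
    """nDCG for one query: `relevances_ranked` lists R(q, x_j) in retrieval order."""
    rel = np.asarray(relevances_ranked, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))   # 1 / log2(j + 1)
    dcg = float((rel * discounts).sum())
    ideal = float((np.sort(rel)[::-1] * discounts).sum())   # IDCG: relevance-sorted list
    return dcg / ideal if ideal > 0 else 0.0

# Example: ranked relevancies of the top-5 retrieved clips for one caption query.
print(ndcg([1.0, 0.5, 0.0, 0.75, 0.0]))
```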
### Results
**Quantitative Results.** The first baseline debiasing method, _RmvOne_, is impractical to repeat for all video classes, although it works for many examples. Table 1 shows the result of the second baseline debiasing method, _RmvAll_, for Epic-Kitchens-100. Specifically, we delete 2,392 clips from 164 classes, equivalent to 3.6% of all the data, applying Algorithm 1. It reaches marginally higher results on nDCG, even though it uses less data for training. Considering that we lose some information for many classes, such as diverse and complex visual cues, it is reasonable not to see a sharp increase from this naive method. We also compare it to a model, called _RmvRand_, that randomly deletes the same number of video clips, showing that knowing which clips to remove is essential. Although the ensemble approach outperforms the baseline, it still falls short of our method. Besides, its training takes three times longer than ours; more importantly, its nDCG score is much lower than that of our approach, showing that the ensemble method does not address the bias as much as our causal model.
Tables 1-3 show the results of the causal method when M is chosen as 2. Specifically, the dataset is divided into two splits based on the frame length by considering the distribution of the dataset. Rather than having equal splits, we make one split that has more video clips than the other to have less discrepancy within the splits in terms of frame length. We choose the mean length of the test set as a threshold for splitting. Considering the baseline comparison, the average scores for nDCG increase by more than 2 points in each dataset. We see a similar trend for Recall and mAP metrics. A reasonable increase is observed when we apply
our method to SOTA methods. Since these methods implement a specific scheme for positive and negative sampling, they force fewer pairs to match anchor samples when the dataset is split, which may limit the increase. Note that 'T2V' refers to text-to-video, and 'V2T' refers to video-to-text. Refer to the Supplementary Material to see the results of the causal method on the MSR-VTT's full split and more detail on baseline debiasing experiments.
**Qualitative Results.** Figure 5 shows qualitative examples, proving that the bias is mitigated. We utilise the nDCG metric, knowing that we cannot examine this by using only Recall or mAP metrics due to their nature of binary similarity. Regarding text-to-video retrieval on the left side of the figure, the top retrieved video clips and the neighbour clips become more relevant than the baseline in the first example. In the second example, the top retrieved clip is already related to the query in the baseline model; however, the causal model eliminates most of the unrelated clips and provides more relevant clips in total. The third example's query is complex, but our approach still outperforms the baseline. On the right side, for video-to-text retrieval, queries are videos, and we retrieve the textual queries. However, for simplicity, we report their corresponding captions. Darker colours refer to higher relevancy. Please refer to the Supplementary Material for more analysis.
| Method | nDCG V2T | nDCG T2V | nDCG AVG | mAP V2T | mAP T2V | mAP AVG |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | 39.40 | 38.91 | 39.15 | 40.47 | 36.60 | 38.54 |
| Baseline + RmvRand | 39.69 | 38.42 | 39.06 | 40.37 | 35.7 | 38.04 |
| Baseline + RmvAll | 40.06 | 38.82 | 39.44 | 41.01 | 36.34 | 38.67 |
| Baseline + Ensemble | 40.38 | 39.15 | 39.76 | 43.17 | 38.80 | 40.98 |
| **Baseline + Ours** | **42.73** (+3.33) | **40.61** (+1.70) | **41.67** (+2.52) | **45.36** (+2.89) | **37.80** (+1.20) | **41.58** (+3.04) |

Table 1: Baseline comparison on text-video retrieval for Epic-Kitchens-100.
| Method | R@1 | R@5 | R@10 | MedR | MnR | Rsum | nDCG V2T | nDCG T2V | nDCG AVG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **YouCook2** | | | | | | | | | |
| Baseline | 13.17 | 36.31 | 50.74 | 10 | 66.47 | 100.23 | 49.42 | 49.70 | 49.56 |
| **Baseline + Ours** | **14.60** (+1.43) | **37.80** (+1.49) | **51.58** (+0.84) | 10 | **63.18** (-3.29) | **103.98** (+3.75) | **51.92** (+2.50) | **51.39** (+1.69) | **51.65** (+2.09) |
| RAN | 13.29 | 36.37 | 50.40 | 10 | 64.85 | 100.06 | 50.17 | 50.35 | 50.26 |
| RAN + Ours | 14.92 (+1.63) | 37.37 (+1.00) | 50.86 (+0.46) | 10 | 63.78 (+1.07) | 103.75 (+3.09) | 50.97 (+0.80) | 51.35 (+0.90) | 51.71 (+0.85) |
| RANP | 13.63 | 35.65 | 50.32 | 10 | 64.34 | 99.60 | 50.49 | 50.19 | 50.34 |
| RANP + Ours | 15.23 (+1.60) | 37.60 (+1.95) | 51.58 (+1.26) | 10 | 61.34 (-3.00) | 104.41 (+4.81) | 51.53 (+1.08) | 51.05 (+0.86) | 51.29 (+0.95) |
| **MSR-VTT 1k Split** | | | | | | | | | |
| Baseline | 20.76 | 47.29 | 59.92 | 6 | 41.10 | 127.97 | 59.77 | 60.84 | 60.30 |
| **Baseline + Ours** | **24.64** (+3.88) | **52.99** (+5.70) | **66.09** (+6.17) | **5** (-1) | **26.26** (-1.484) | **143.72** (+15.75) | **62.67** (+2.90) | **62.33** (+1.49) | **62.50** (+2.20) |
| RAN | 21.08 | 47.98 | 60.95 | 6 | 42.28 | 130.01 | 59.49 | 60.15 | 59.82 |
| RAN + Ours | 24.54 (+3.46) | 53.50 (+3.52) | 66.70 (+3.75) | 5 (-1) | 26.91 (-1.357) | 144.74 (+1.43) | 60.95 (+1.46) | 61.86 (+1.71) | 61.41 (+1.59) |
| RANP | 21.14 | 47.72 | 60.32 | 6 | 41.66 | 129.18 | 59.94 | 60.55 | 60.25 |
| RANP + Ours | 24.03 (+2.89) | 53.24 (+5.52) | 66.53 (+6.21) | 5 (-1) | 27.35 (-14.31) | 143.81 (+14.63) | 61.54 (+1.60) | 61.58 (+1.03) | 61.56 (+1.31) |

Table 3: Baseline and SOTA comparison on text-video retrieval for YouCook2 and MSR-VTT (Recall metrics are text-to-video). The lower, the better for MedR and MnR metrics; the higher, the better for the rest.
### Analysis
**Ablation Study.** Table 4 examines three questions: **i)**_How to split the dataset?_ When we adjust the splits based on the frame length distribution of the dataset rather than dividing them into two equal splits, we obtain higher results. Adjusted splits have higher entropy, bringing better cooperation between splits. **ii)**_Which split affects more?_ When the splits are adjusted according to the frame length distribution, they share a similar score in nDCG, even though the second split has fewer video clips. Also, the first split brings a higher score in mAP/Recall, as expected. **iii)**_How many splits do we need?_ The more splits we have, the lower the scores we get. To obtain the adjusted splits when \(M>2\), we first put the videos in ascending order according to frame length, then divide them into two and continue to divide the remainder until we reach enough splits for the experiments.
**Computational Analysis.** Table 5 presents the computational cost breakdown, measured with the THOP library [], on the YouCook2 dataset, where the dimension of the video embedding and the batch size are 1024 and 64, respectively. Considering the causal method reaches better results with two splits, we highlight the following points: **i)** If it is used sequentially, there is no need for extra resources compared to the baseline method. The advantage of the
| Method | EK-100 nDCG (avg)↑ | EK-100 mAP (avg)↑ | YouCook2 nDCG (avg)↑ | YouCook2 R@10↑ | YouCook2 MnR↓ | MSR-VTT nDCG (avg)↑ | MSR-VTT R@10↑ | MSR-VTT MnR↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 39.15 | 38.54 | 49.56 | 50.74 | 66.47 | 60.30 | 59.92 | 41.10 |
| Baseline + Ours (Equal 2 Splits) | 41.19 | 41.07 | 51.01 | 51.15 | 66.22 | 62.18 | 66.98 | 26.65 |
| Baseline + Ours (Adjusted 2 Splits) | 41.67 | 41.58 | 51.65 | 51.58 | 63.18 | 62.50 | 66.09 | 26.26 |
| Baseline + Ours (Adjusted 3 Splits) | 41.06 | 39.48 | 51.45 | 49.28 | 68.94 | 62.48 | 64.56 | 30.54 |
| Baseline + Ours (Adjusted 4 Splits) | 39.89 | 37.54 | 51.64 | 47.11 | 74.33 | 62.23 | 61.93 | 33.74 |
| First Split Only (Adjusted) | 37.24 | 38.59 | 48.86 | 48.42 | 80.44 | 61.02 | 52.79 | 50.71 |
| Second Split Only (Adjusted) | 38.00 | 34.07 | 50.48 | 36.77 | 115.87 | 59.75 | 53.72 | 50.17 |

Table 4: Ablation study for the causal method.
Figure 5: Qualitative results for text-video retrieval. The semantic relevancy, calculated based on nDCG, of the top 50 retrievals given a query from each dataset. The darker the colour, the more relevant retrievals to the query, varying from 0 to 1. While the left side is for T2V, the right side is for V2T. Best viewed in colour.
causal method is that the training run time is about 20% lower. **ii)** If the splits are trained simultaneously, the run time can drop by 50%, at the cost of doubling parameters and GFLOPs. **iii)** We observe a similar trend in all measures for the EK-100 and MSR-VTT datasets. The only difference is that the parameters and GFLOPs are proportional to the feature dimension. For instance, the visual feature dimensions in EK-100 and MSR-VTT are 2048 and 3072, respectively. Thus, our approach provides a faster run time without extra resources or latency.
**Spatial vs Temporal features.** Table 6 shows the importance of temporal features in the Epic-Kitchens-100 dataset such that removing them affects the result drastically. While we specifically focus on an overlooked temporal bias in this study, we note that biases in both domains should be addressed in the ideal case even though no study has achieved it yet.
**The method's effect on transformer-based models.** Noting that our approach is model-agnostic, Table 7 shows the results of applying it to transformer-based models. While limited computation resources prevented our model from converging in the EgoVLP experiment, other modalities may affect our approach in the MMT experiment. Either way, we notice that the method's effect becomes limited on transformer-based models; we share our related conjectures, which could be related to spatial biases, in the Supplementary Material.
## 5 Conclusion
To the best of our knowledge, this is the first attempt to study the effect of a temporal bias caused by a frame length mismatch between training and test sets of trimmed video clips and to show improvement with debiasing on the text-video retrieval task. We then discuss detailed experiments and ablation studies using our causal approach on the Epic-Kitchens-100, YouCook2 and MSR-VTT datasets. Benchmarking with the nDCG metric demonstrates that the bias has been reduced. We note the following limitations for future work: **i)** Long video clips may contain ambiguity, including various actions irrelevant to the annotated action. **ii)** Other temporal biases may still affect the model, e.g. the order of the actions.
## Acknowledgements
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project A18A2b0046).
| Method (Epic-Kitchens-100) | AVG | Method (YouCook2) | AVG | Method (MSR-VTT) | AVG |
| --- | --- | --- | --- | --- | --- |
| EgoVLP [] | 12.53 | TACo [] | 53.53 | MMT [] | 63.79 |
| EgoVLP + Ours | 13.06 | TACo + Ours | 54.03 | MMT + Ours | 63.94 |

Table 7: Comparison with transformer-based models on text-video retrieval on the nDCG metric. |
2309.16231 | Controllable Text Generation with Residual Memory Transformer | Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have
brought great success in text generation. However, it is still an open
challenge to control the generation process of CLM while balancing flexibility,
control granularity, and generation efficiency. In this paper, we provide a new
alternative for controllable text generation (CTG), by designing a
non-intrusive, lightweight control plugin to accompany the generation of CLM at
arbitrary time steps. The proposed control plugin, namely Residual Memory
Transformer (RMT), has an encoder-decoder setup, which can accept any types of
control conditions and cooperate with CLM through a residual learning paradigm,
to achieve a more flexible, general, and efficient CTG. Extensive experiments
are carried out on various control tasks, in the form of both automatic and
human evaluations. The results show the superiority of RMT over a range of
state-of-the-art approaches, proving the effectiveness and versatility of our
approach. | Hanqing Zhang, Sun Si, Haiming Wu, Dawei Song | 2023-09-28T08:13:33Z | http://arxiv.org/abs/2309.16231v1 | # Controllable Text Generation with Residual Memory Transformer
###### Abstract
Large-scale Causal Language Models (CLMs), e.g., GPT3 and ChatGPT, have brought great success in text generation. However, it is still an open challenge to control the generation process of CLM while balancing flexibility, control granularity, and generation efficiency. In this paper, we provide a new alternative for controllable text generation (CTG), by designing a non-intrusive, lightweight control plugin to accompany the generation of CLM at arbitrary time steps. The proposed control plugin, namely Residual Memory Transformer (RMT), has an encoder-decoder setup, which can accept any types of control conditions and cooperate with CLM through a residual learning paradigm, to achieve a more flexible, general, and efficient CTG. Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations. The results show the superiority of RMT over a range of state-of-the-art approaches, proving the effectiveness and versatility of our approach. Click here for source code.
## 1 Introduction
Controllable text generation (CTG) focuses on generating text while adhering to specific constraints Hu and Li (2021); Zhang et al. (2022). These constraints can range from high-level semantic elements, such as emotions, topics, and toxicity avoidance, to finer-grained content, e.g., including specific concepts or key elements in the generated text. With the development of generative AI based on large language models, social media will be flooded with AI-generated content. Therefore, CTG will be critical in real-world Web applications to establish safer, more reliable and practical AI-driven social systems Krause et al. (2021); Liu et al. (2021); Zhang et al. (2022).
The state-of-the-art CTG methods are based on Large Language Models (LLMs) that build upon the Transformer structure Vaswani et al. (2017) and have gained significant attention due to their remarkable ability to understand and generate text. Recently, large-scale causal Language Models (CLMs), i.e., decoder-only language models, show particular advantages in zero/few-shot scenarios Brown et al. (2020), resulting in a series of successors such as ChatGPT and GPT4. This has been seen as a milestone towards the realization of Artificial General Intelligence. Despite the success of these models, CLMs still face certain challenges, especially in CTG.
Considering the significant scale of CLMs and the substantial cost of training such models, current mainstream CLM-based CTG methods fall into two categories, i.e., prompt-based and post-processing approaches. The prompt-based methods Zhang and Song (2022); Yang et al. (2022); Qian et al. (2022); Lu et al. (2022); Zhou et al. (2023) concatenate the control-prompt with the input head of the generative model to instruct more controllable text generation. As previous studies have revealed Zou et al. (2021); Carlsson et al. (2022), the control effectiveness tends to deteriorate with the increasing distance from the prompt. Additionally, inserting a control-prompt into a well-trained model may harm the model's original generative stream, thus losing the flexibility of control Carlsson et al. (2022). On the other hand, most post-processing methods leverage an auxiliary module to adjust the probability of naturally producing a token by the generative model at the decoding phase Krause et al. (2021); Liu et al. (2021); Yang and Klein (2021); Lu et al. (2022), hindering the model's capacity for content planning and thus limiting fine-grained control. Furthermore, more recent decode-time methods Li et al. (2022); Mireshballah et al. (2022); Qin et al. (2022) improve the control granularity through iterative sampling or editing, but at the expense of generation efficiency. Therefore, the flexibility, control granularity and generation efficiency need to be better balanced, which demands a more versatile
CLM-based CTG framework1.
Footnote 1: We conclude current CTG methods in Appendix B.
In this paper, we propose a novel CTG plugin named Residual Memory Transformer (RMT), which borrows the paradigm of residual learning (He et al., 2016; Zhang et al., 2020) and only makes late fusion with a frozen CLM to noninvasively steer the generation process. Unlike prompt-based approaches, this paradigm does not disturb the original generative stream of the base CLM model, allowing for a better flexibility of CTG (i.e., control with a plug-and-play manner). In addition, the RMT architecture consists of an encoder-decoder structure, where the encoder handles different types of control information and influences the generation process, so as to achieve fine-grained control. Meanwhile, RMT utilizes cross-attention to uniformly apply control conditions to each generated token, avoiding the negative effect varying with context length. In particular, different from the vanilla decoder of Transformer, an additional causal attention is introduced to extract the prior knowledge from the generative stream of CLM, allowing RMT not to deviate too far away from the original generative model while its implied high-level semantics is leveraged. The reuse of the base-CLM's output allows RMT to achieve effective control with a tiny network, resulting in an improved generation efficiency.
The training of RMT includes two stages: pre-training and fine-tuning. The pre-training of RMT aims to reconstruct noisy text into a complete sentence, facilitating RMT's understanding of the semantics of various control conditions, while aligning with the generative process of the CLM. During the fine-tuning stage (i.e., residual learning), the logits of the RMT are directly added to those of the fixed CLM, and the goal of training is to learn the parameters of RMT such that the joint model distribution gets close to the desired text distribution. Since the gradient does not need to be backpropagated into the base CLM, the training of RMT is very efficient. For instance, in our experiments based on GPT2-large, the entire pre-training stage of RMT with 4M samples could be completed in approximately 30 hours, using a single NVIDIA A6000 GPU.
We conduct extensive experiments to explore the superiority of our approach in three aspects. **(1) Flexibility:** Theoretically, the proposed RMT has the capability to intervene in the generation process at any step. Experimentally, it maintains the same level of control effectiveness in both with-context and without-context settings, showing that RMT enables long-distance control throughout the generation process. **(2) Control Granularity:** We test our approach on a range of CTG tasks of different granularity levels (i.e., fine-grained control tasks including word inclusion and sentence length control; and attribute control based on sentiment). RMT achieves a control effectiveness comparable to the state-of-the-art approaches, while guaranteeing the text quality. **(3) Efficiency:** The results show that three layers of RMT blocks are enough to achieve a competitive CTG effectiveness, and the time cost of text generation is almost the same as for the original CLM.
## 2 Preliminary
### Attention in Transformer
**Self-Attention** mechanism is a critical component in Transformer (Vaswani et al., 2017), which is used to effectively capture long-range dependencies between words in the input sequence. Specifically, it is defined as a mapping between a query and a collection of key-value pairs. The values are weighted according to a compatibility function between the query and each corresponding key, and then summed up, eventually obtaining the output vector. The self-attention mechanism can be formatted as follows:
\[\mathrm{Att}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V, \tag{1}\]
where \(Q,K,V\) represent the transformed representations of the sequence using the corresponding learnable matrix, and \(\sqrt{d_{k}}\) is the scaling factor. In the Transformer model, the key, query and value vectors are always derived from a shared sequence.
**Causal Attention** is a particular branch of self-attention, also called masked self-attention, which is usually applied in the decoder module of the Transformer. Different from normal self-attention, the queries in causal attention are confined to their preceding key-value pairs' positions and their current position to maintain the auto-regressive property. Usually, it can be implemented by a mechanism that masks the invalid positions and sets them to negative infinity:
\[\mathrm{Att}(Q,K,V)=\mathrm{softmax}\left(\frac{QAK^{T}}{\sqrt{d_{k}}}\right)V, \tag{2}\]
\[A_{ij}=\begin{cases}1\text{ if }i\geq j,\\ -\infty\text{ else}.\end{cases} \tag{3}\]
**Cross Attention** is another branch of self-attention in the decoder part of the Transformer, which aims to capture the interaction between the encoder and decoder. Specifically, the key and value parts are obtained from the previous step outputs of the encoder, and the query is from the decoder, enabling the decoder to attend to every position of the encoder output at the same time.
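A minimal PyTorch sketch of these attention variants is given below; it follows the common additive-mask implementation of the causal mask (masked positions set to \(-\infty\) before the softmax), which is one standard way to realise Eqs. (2)-(3) and is an assumption rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, causal=False):
    """Scaled dot-product attention (Eq. 1). With causal=True, position i may only
    attend to positions j <= i (Eqs. 2-3), implemented via an additive -inf mask.
    q: (B, Tq, d), k/v: (B, Tk, d)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5          # (B, Tq, Tk)
    if causal:
        mask = torch.ones(q.size(1), k.size(1), device=q.device).triu(1).bool()
        scores = scores.masked_fill(mask, float("-inf"))   # block future positions
    return F.softmax(scores, dim=-1) @ v

# Self attention:   q, k, v all come from the same sequence x.
# Causal attention: attention(x, x, x, causal=True), as in decoder-only models.
# Cross attention:  q from the decoder states, k and v from the encoder outputs,
#                   e.g. attention(dec_states, enc_out, enc_out).
```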
### Causal Language Model
A **Causal Language Model (CLM)** eliminates the need for a separate encoder component and typically leverages a decoder-only Transformer Vaswani et al. (2017), in which only the causal attention is used to perform step-wise density estimation, i.e., predicting the next token. Suppose a CLM is parameterized with \(\theta\). Given a partial sequence \(x_{<t}\), it assigns a probability \(P_{\theta}(x_{t}|x_{<t})\) over a vocabulary \(\mathcal{V}\) for next-token \(x_{t}\) generation. When generating a sequence of text \(X_{n}=\{x_{1},x_{2},\ldots,x_{n}\}\), this can be formulated by the chain rule as below:
\[P_{\theta}\left(X_{n}\right)=\prod_{t=1}^{n}P_{\theta}\left(x_{t}\mid x_{<t} \right). \tag{4}\]
The entire generation process is carried out iteratively. First, a token is sampled from the distribution \(P_{\theta}\left(x_{t}\mid x_{<t}\right)\). Then the selected token is concatenated with the input for the next step of generation.
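The step-wise factorisation in Eq. (4) corresponds to the familiar autoregressive decoding loop; a schematic sketch is shown below, where `clm` stands for any callable returning next-token logits (an assumption for illustration, not a specific library interface).

```python
import torch

@torch.no_grad()
def generate(clm, input_ids, max_new_tokens=20, greedy=True):
    """Autoregressive decoding per Eq. (4): repeatedly evaluate p(x_t | x_<t),
    pick the next token, and append it to the context.
    `clm` is any callable mapping token ids (1, T) to logits (1, T, |V|)."""
    ids = input_ids
    for _ in range(max_new_tokens):
        logits = clm(ids)[:, -1, :]                         # distribution for the next token
        probs = torch.softmax(logits, dim=-1)
        if greedy:
            next_id = probs.argmax(dim=-1, keepdim=True)
        else:
            next_id = torch.multinomial(probs, num_samples=1)   # sample x_t
        ids = torch.cat([ids, next_id], dim=1)              # condition the next step on x_t
    return ids
```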
## 3 Methodology
This section introduces the proposed Residual Memory Transformer (RMT) to control the text generation with a causal language model (CLM). We first provide an overview of the RMT-enhanced controllable text generation (CTG) framework, followed by a detailed description of RMT, and finally, the dedicated pre-training and fine-tuning methods for the proposed framework.
### CTG Framework with RMT
As the dashed diagram in Figure 1 illustrates, RMT operates in a non-intrusive control mode as a CLM plug-in, where the control signal passes through RMT independently and affects the output distribution of a frozen CLM through residual learning (\(y_{t}=y_{t}^{g}+y_{t}^{c}\)) without interfering with the CLM's free-generation passageway. Compared to intrusive controllable paradigms, e.g., prompt-based approaches, RMT allows for more flexibility in switching between freestyle generation and controllable generation modes. The proposed non-intrusive control paradigm is considered more challenging. Specifically, RMT is decoupled from the generation process of the base CLM and has the potential for dynamic control intervention. Meanwhile, it also avoids tuning the base CLM into a task-specific model, ensuring the universality of the base CLM. Hence, it is more flexible and promising. More explanation can be found in Appendix B.
Formally, given a partially generated text \(x_{<t}\) and a control instruction sequence \(C\), the proposed framework aims to generate eligible text \(X_{n}=\{x_{1},...,x_{n}\}\) that meets the control conditions:
\[P_{\Theta}\left(X_{n}\right)=\prod_{t=1}^{n}P_{\Theta}\left(x_{t}\mid x_{<t}; C\right), \tag{5}\]
where \(\Theta=\{\tilde{\theta};\phi\}\) represents the CTG model's parameters, which comprise both the frozen parameters inherited from the CLM (\(\tilde{\theta}\)) and the tunable parameters derived from the RMT (\(\phi\)). The entire generation process of our approach consists of the following four steps:
**CLM's Raw Generation.** At the generation step \(t\), CLM first maps the generated text \(x_{<t}=\{x_{1},...,x_{t-1}\}\) to hidden states \(\mathbf{h}_{<t}^{g}=\{\mathbf{h}_{1}^{g},...,\mathbf{h}_{t-1}^{g}\}\). Afterward, a linear layer and softmax normalization are used to project the hidden state of the last token \(\mathbf{h}_{t-1}^{g}\) to the probability distribution \(\mathbf{y}_{t}^{g}\) over the vocabulary:
\[\mathbf{y}_{t}^{g}=P_{\tilde{\theta}}\left(x_{t}|x_{<t}\right)=\text{softmax }(\text{Linear}(\mathbf{h}_{t-1}^{g})), \tag{6}\]
where the CLM naturally generates the next token \(x_{t}\) given the context \(x_{<t}\) as per its training.
**RMT's Control Encoding.** Next, the RMT's encoder is responsible for encoding the control instruction \(C=\{c_{1},...,c_{m}\}\) into the control memory \(\mathbf{C}=\{\mathbf{c_{1}},...,\mathbf{c_{m}}\}\), which is used to guide the controllable text generation in the RMT's decoder:
\[\mathbf{C}=\text{RMT-Enc}(C;\phi_{\text{enc}}). \tag{7}\]
**RMT's Control Decoding.** After encoding the control signal, the RMT's decoder maps the
generated text \(x_{<t}\) to new hidden states \(\mathbf{h}_{<t}^{c}=\{\mathbf{h}_{1}^{c},...,\mathbf{h}_{t-1}^{c}\}\) by considering both the control memory \(\mathbf{C}\) and CLM's vanilla hidden states \(\mathbf{h}_{<t}^{g}\), and synthetically predicting next token's probability distribution \(\mathbf{y}_{t}^{c}\) over the vocabulary:
\[\mathbf{h}_{<t}^{c}=\{\mathbf{h}_{1}^{c}...,\mathbf{h}_{t-1}^{c}\}= \text{RMT-Dec}(x_{<t},\mathbf{h}_{<t}^{g},\mathbf{C};\phi_{\text{dec}}), \tag{8}\] \[\mathbf{y}_{t}^{c}=P_{\phi}\left(x_{t}|x_{<t};C\right)=\text{ softmax}(\text{Linear}(\mathbf{h}_{t-1}^{c})).\]
**Residual Learning to Generate.** We employ residual learning to fuse the output distributions from the CLM (Eq. 6) and the RMT (Eq. 8), allowing the framework to obtain the joint predictions for the next token and achieve a noninvasive CTG:
\[\mathbf{y}_{t}=P_{\Theta}\left(x_{t}|x_{<t};C\right)=\mathbf{y}_{t}^{g}+ \mathbf{y}_{t}^{c}. \tag{9}\]
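Putting the four steps together, one decoding step of the RMT-augmented CLM can be sketched as below; `clm`, `rmt_enc` and `rmt_dec` are stand-ins for the frozen base model and the control plug-in (their interfaces are assumptions), and the late fusion of Eq. (9) is the only point where the two streams meet.

```python
import torch

@torch.no_grad()
def controlled_step(clm, rmt_enc, rmt_dec, x_prev_ids, control_ids):
    """One decoding step of the RMT-augmented CLM (Eqs. 6-9).
    clm:     frozen base model, returns hidden states and next-token distribution
    rmt_enc: RMT encoder, maps control tokens to the control memory C
    rmt_dec: RMT decoder, conditions on x_<t, the CLM hidden states, and C."""
    h_g, y_g = clm(x_prev_ids)                       # Eq. 6: raw CLM prediction
    memory = rmt_enc(control_ids)                    # Eq. 7: control memory
    y_c = rmt_dec(x_prev_ids, h_g, memory)           # Eq. 8: control-aware prediction
    y = y_g + y_c                                    # Eq. 9: residual late fusion
    # Renormalise before sampling (an added assumption, since y_g + y_c sums to 2).
    next_id = torch.multinomial(y / y.sum(dim=-1, keepdim=True), num_samples=1)
    return torch.cat([x_prev_ids, next_id], dim=1)
```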
### Residual Memory Transformer (RMT)
The detailed structure of RMT is shown in Figure 1. Specifically, RMT adopts an encoder-decoder architecture and reuses the CLM's suite of word and position embeddings.
**RMT Encoder.** The RMT encoder is responsible for encoding the control description \(C\) into the control memory \(\mathbf{C}\) (Eq. 7). The encoder is composed of a stack of \(M\) identical blocks. Each block comprises a self-attention layer (Eq. 1) and a fully connected feed-forward layer. Additionally, it incorporates a residual connection around each layer, followed by layer normalization.
**RMT Decoder.** The RMT decoder aims to predict the probability distribution of the next token \(\mathbf{y}_{t}^{c}\) (Eq. 8) in the control mode. The decoder is also composed of a stack of \(M\) identical blocks. Each block contains three carefully-designed attention layers and a fully connected feed-forward network. Similar to the encoder, residual connections and layer normalization are also applied. The three attention layers are directed at the already generated text \(x_{<t}\), the CLM's output hidden states \(\mathbf{h}_{<t}^{g}\), and the control memory \(\mathbf{C}\), respectively:
* _Causal Self Attention_. The first attention layer utilizes a causal attention operation
Figure 1: Illustration of controllable text generation with Residual Memory Transformer (RMT). In the top left, we present a miniaturization of our method framework, where the gray sector represents the frozen CLM, and the red box symbolizes our plug-in control module, i.e., RMT. The right part illustrates the embodiment of each detailed component.
(Eq. 2), whereby \(Q\), \(K\), and \(V\) are all first mapped from the generated text \(x_{<t}\) itself. This attention mechanism facilitates identifying and capturing the contextual features of the generated sequence from scratch.
* _Causal CLM Attention_. The second attention layer also employs causal attention (Eq. 2), but with a key difference: \(Q\) is sourced from the previous causal self-attention layer's output, while \(K\) and \(V\) are obtained from the CLM's last hidden states \(\mathbf{h}_{<t}^{g}\). This design establishes an inner residual connection with CLM, enabling RMT to consider the high-level contextual features and maximally interact with the generative stream of CLM.
* _Cross Control Attention_. The third attention layer is a cross-attention for the control memory from the RMT encoder. Specifically, \(Q\) is sourced from the previous causal CLM attention layer, while \(K\) and \(V\) are derived from the control memory \(\mathbf{C}\). This cross-attention layer bridges the RMT's encoder and decoder, introducing the control signals to the generation.
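To make the structure concrete, the following is a minimal PyTorch sketch of one RMT decoder block with the three attention layers described above; dimensions, activation and masking details are illustrative assumptions rather than the exact released configuration.

```python
import torch.nn as nn

class RMTDecoderBlock(nn.Module):
    """Sketch of one RMT decoder block with the three attention layers
    described above; dimensions, activation and masking details are
    illustrative assumptions, not the exact released configuration."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.clm_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ctrl_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, x, h_clm, control_memory, causal_mask):
        # (1) Causal self-attention over the generated prefix x_{<t}.
        a, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norms[0](x + a)
        # (2) Causal CLM attention: Q from the previous layer, K/V from the
        #     CLM's last hidden states h^g_{<t} (inner residual connection).
        a, _ = self.clm_attn(x, h_clm, h_clm, attn_mask=causal_mask)
        x = self.norms[1](x + a)
        # (3) Cross control attention: K/V from the control memory C.
        a, _ = self.ctrl_attn(x, control_memory, control_memory)
        x = self.norms[2](x + a)
        # Position-wise feed-forward, again with residual connection.
        return self.norms[3](x + self.ffn(x))
```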
### Model Training
**Pre-training.** To enable the RMT's semantic understanding capability and allow it to align with the CLM's generation process, we utilize the denoising auto-encoder pre-training method, whereby the encoder processes the corrupted text \(\hat{X}\) and the decoder reconstructs the original text \(X\).
Specifically, we corrupt the pretraining text (\(X\rightarrow\hat{X}\)) in four ways referring to (Lewis et al., 2020): _(1) Token Masking:_ randomly masking tokens with special tokens. _(2) Token Deletion:_ randomly deleting tokens in the text. _(3) Span Replacing:_ randomly replacing the text spans of different lengths with special tokens. Different from token masking, each span is replaced with a single special token. _(4) Text Rotation:_ randomly selecting a token and rotating the text around it, i.e., the text begins with that token and ends with its previous token. More details are supplied in Appendix A.1.
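A simplified sketch of the four corruption strategies is shown below; masking rates and span lengths are illustrative placeholders, and the exact settings used for pre-training are those given in Appendix A.1.

```python
import random

def corrupt(tokens, mask_token="<mask>", p=0.15):
    """Simplified sketch of the four corruption strategies; rates and span
    lengths are illustrative placeholders."""
    strategy = random.choice(["mask", "delete", "span", "rotate"])
    if strategy == "mask":      # (1) Token masking
        return [mask_token if random.random() < p else t for t in tokens]
    if strategy == "delete":    # (2) Token deletion
        return [t for t in tokens if random.random() >= p]
    if strategy == "span":      # (3) Span replacing: one special token per span
        i = random.randrange(len(tokens))
        span_len = random.randint(1, 5)
        return tokens[:i] + [mask_token] + tokens[i + span_len:]
    i = random.randrange(len(tokens))
    return tokens[i:] + tokens[:i]   # (4) Text rotation around a random token
```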
**Fine-tuning.** Once the pre-training is complete, RMT can be fine-tuned to perform various controllable text generation tasks. This allows the CLM to generate sequences that meet specific control requirements in an auto-regressive manner. The specific objective can vary according to different tasks: we use the _Maximum Likelihood Estimation (MLE)_ for the word inclusion and length control tasks; and use the unlikelihood training strategy (Zhang and Song, 2022) for the attribute control task.
It is worth noting that RMT is also efficient to train. Firstly, RMT efficiently reuses the output of the base CLM, which allows it to require less pre-training data and expedites the pre-training process. Additionally, RMT is lightweight and is built on top of the base CLM, so gradients do not need to propagate into the base CLM during backpropagation, which saves a significant amount of training time and GPU memory.
## 4 Experiments
### Word Inclusion Experiment
Two different settings are examined in this subsection. The first is the without-context setting, i.e., steering the CLM to generate a single sentence containing the keywords without any context. The second is the with-context setting, which requires the CLM to continue generating the target text under an extended context. These two modes are considered to test the flexibility of a CTG method, i.e., its ability to control the generation of the CLM in a position-independent way.
**Experimental Setting.** Following the NRP (Carlsson et al., 2022), we use the GPT2-large as the backbone CLM, and the training data for pre-training and fine-tuning also comes from Wikipedia. More details on training and inference are provided in Appendix A.2. We test our approach on CommonGen (Lin et al., 2020) (for without-context setting), and C2Gen (Carlsson et al., 2022)(for with-context setting).
**Baselines.** For the without-context setting, we first choose methods that are specifically trained for lexical constraint tasks: KG-BART (Liu et al., 2021), which fine-tunes all parameters of BART and augments it with a knowledge graph; and POINT (Zhang et al., 2020), which is an insertion-based method that iteratively injects words around the target words. We also employ a pure decode-time approach for lexical constraints, namely NeuroLogic (Lu et al., 2022), as a baseline. General CTG methods are also compared, including Non-Residual Prompting (Carlsson et al., 2022) (i.e., a recent state-of-the-art method using a position-independent prompting model to guide the
generation of GPT2-large), and approaches that directly instruct GPT2-large and ChatGPT to generate the target sentences. For the with-context setting, KG-BART and POINT cannot be applied and are thus removed. The test results of POINT, KG-BART, and NRP are adopted from the open source code3.
Footnote 3: [https://github.com/FreddeFallan/Non-Residual-Prompting](https://github.com/FreddeFallan/Non-Residual-Prompting)
**Evaluation.** Following NRP (Carlsson et al., 2022), we employ coverage (Cov), measuring the average coverage rate of the generated text with respect to the target words, to assess controllability. Perplexity (PPL) is used to evaluate fluency; we calculate PPL using an off-the-shelf GPT2-large model. Additionally, we utilize Self-Bleu-5 to evaluate the diversity of the generated text, with lower Self-Bleu values indicating greater syntactic diversity. Considering the non-intrusive setting (i.e., the CLM is fixed and the task is conducted in the open-ended text generation setting), our experiments omit task-specific metrics such as BLEU, METEOR, CIDEr, and SPICE for the CommonGen dataset. Additionally, we conduct a human evaluation to assess the conformity of the generated text to common sense (**CS**) and its **Fluency** from a human perspective. For the with-context setting, we introduce an additional manual evaluation metric called Relevance (Rel), where human evaluators score the relevance of the generated text to the given context. More detailed information can be found in Appendix C.
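As a rough illustration, the coverage metric can be computed as the fraction of target words found in each generated sentence, averaged over the test set; the sketch below ignores lemmatization and tokenizer details.

```python
def word_coverage(generated, target_words):
    """Fraction of target words that appear in one generated sentence
    (ignores lemmatization, so inflected forms are not matched)."""
    tokens = generated.lower().split()
    return sum(w.lower() in tokens for w in target_words) / len(target_words)

# Averaging over the test set (and multiplying by 100) gives the reported Cov.
```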
**Result and Analysis.** As shown in Table 1, in the without-context setting, the coverage of our method significantly outperforms the vanilla prompt method (i.e., Prompt+GPT2) and the pure decode-time approach (i.e., NeuroLogic) under the same decoding algorithm used by RMT. It also shows slight advantages over ChatGPT and NRP. Compared to task-specific models (i.e., POINT, KG-BART), the coverage of RMT is close to theirs, yet with lower PPL. It is worth noting that, in the with-context setting, the strong baselines, including ChatGPT and NRP, suffer from a 12.0% and 8.0% decline of control ability, respectively, compared to the no-context setting. RMT, however, maintains the same level of control ability as in the no-context setting and outperforms NRP and ChatGPT, which shows the superiority of RMT in that its control ability does not degrade with the context length.
In terms of human evaluation on CommonGen, POINT is an insertion-based generation framework that naturally suffers from worse text quality; thus
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **Cov (\(\uparrow\))** & **PPL (\(\downarrow\))** & **Self-Bleu (\(\downarrow\))** & **CS (\(\uparrow\))** & **Fluency (\(\uparrow\))** \\ \hline CommonGen (without-context setting) & & & & & \\ \hline POINT [11] & **98.0** & 65.6 & 27.7 & 4.8 & 4.0 \\ KG-BART [13] & 97.2 & 51.3 & 33.0 & 7.4 & 7.3 \\ NeuroLogit [13] & 77.7 & **23.3** & 56.2 & 4.6 & 4.5 \\ NRP [11] & 93.0 & 42.3 & 28.4 & 6.1 & 7.0 \\ Prompt+GPT2 & 70.9 & 61.3 & 48.3 & 6.3 & 7.5 \\ Prompt+ChatGPT & 90.2 & 59.1 & - & **7.9** & **8.2** \\ RMT (w/o CL) & 93.1 & 56.4 & **20.9** & 6.5 & 6.7 \\ RMT (_CL=15_) & 93.9 & 60.3 & 22.5 & - & - \\ RMT (_CL=18_) & 93.7 & 47.9 & 23.7 & - & - \\ RMT (_CL=20_) & 93.8 & 44.1 & 23.2 & - & - \\ \hline C2Gen (with-context setting) & & & & CS / Rel & \\ \hline Prompt+ChatGPT & 82.2 & 46.7 & **5.2** & **9.0 / 8.5** & **8.8** \\ NRP\({}^{*}\)[11] & 81.0 & - & - & - & - \\ RMT (w/o CL) & **91.8** & **46.2** & 5.8 & 7.1 / 8.3 & 6.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The experiment results on word inclusion. _CL_ represent the external control length setting in our experiment. We set the number of RMT block-layers to three (\(M=3\)). The result of NRP\({}^{*}\) in C2Gen is taken from what was reported in the original paper\({}^{2}\). For the ChatGPT baseline in CommonGen, we test it on a subset of 500 samples, due to the problem of API access.
the result of RMT is much better than POINT in terms of commonsense and fluency. Like NRP and Prompt+GPT2, we use the frozen GPT2-large as the backbone and do not introduce external knowledge. RMT achieves comparable results, which proves that the non-intrusive paradigm with residual learning maintains the language model's ability, thereby guaranteeing the text quality. Our approach falls behind KG-BART, which is enhanced by a knowledge graph, and ChatGPT, which has hundred-billion-level parameters. This is within expectation. If RMT were inserted into a larger-scale CLM, e.g., GPT-3, we believe the text quality of our approach would also improve. As for C2Gen, RMT produces results comparable to ChatGPT in contextual relevance, but there is still a gap in common sense. _More qualitative examples of different CTG approaches can be seen in Appendix D._
### Sentence Length Control Experiment
We test the length-control ability of RMT, i.e., instructing the CLM to generate a sentence which satisfies the word inclusion objective with an exact number of words. This setting increases the challenge of the word inclusion task, effectively probing the fine-grained control ability. We observe that RMT can plan accordingly, following the instructions on the required number of tokens and the target words.
We report control performance for \(CL=15,18,20\). As shown in Table 1, RMT maintains the same level of word coverage under different control lengths. The longer the target length, the lower the generated text's PPL. This result is quite intuitive, since it is harder to produce fluent text within a shorter required length.
To quantitatively measure the controllability of sentence length, we conduct tests on the CommonGen validation dataset to analyze the control effect of our proposed method. Some examples are presented in Table 2, which demonstrate that RMT effectively steers the CLM to generate texts with specific keywords and required sentence lengths. Furthermore, we calculated the mean offset (Mean) and the standard deviation (SD) for different control lengths. The results are visualized in Figure 2. We observe that the offset and standard deviation both remain within one, indicating that RMT successfully achieves sentence generation with the desired sentence lengths as instructed.
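A minimal sketch of these length-control statistics is given below, using the signed offset between the generated and requested token counts (the counts are obtained with the same tokenizer as in generation).

```python
import numpy as np

def length_control_stats(generated_lengths, control_length):
    """Mean offset and standard deviation of the generated token counts
    relative to the requested control length CL (signed offset shown here)."""
    offsets = np.asarray(generated_lengths) - control_length
    return offsets.mean(), offsets.std()
```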
### Attribute Control Experiment
**Setting and Baselines.** We follow DisCup Zhang and Song (2022) and use a discriminator-involved training strategy to fine-tune RMT; the checkpoint of the attribute classifier comes from the open source code4. During training, we optimize RMT using two different strategies: one uses a top-k=90 algorithm to select and re-rank candidate tokens, and the other uses top-p=0.95 to select candidate tokens. The top-k setting increases the depth of token sampling, thus the diversity and control ability of the model increase, yet with the burden of decreased text fluency. The top-p setting keeps the re-ranked tokens within the distribution of the base CLM, which could potentially achieve higher fluency but weaker control ability. More details can be found in Zhang and Song (2022). Different from DisCup, which optimizes a unique prompt for every attribute, we use the RMT module to directly encode different attribute instructions uniformly. GPT2-large is used as the backbone CLM, and the training data is the widely used Stanford Sentiment Treebank (SST-5) Socher et al. (2013), collected from movie reviews. We follow the commonly used setting in previous work Krause et al. (2021); Zhang and Song (2022); Lu et al. (2022). Specifically, we use 5K neutral prompts, 2.5K positive prompts, and 2.5K negative prompts as the test dataset (provided by DEXPERT Liu et al. (2021)), and every CTG approach generates 20 continuations for each given prompt, to achieve sentiment (i.e., positive and negative) controllable text generation. We collect the primary baselines reported by DisCup Zhang and Song (2022) and DEXPERT Liu et al. (2021), including the decode-time approaches with GPT2-large as the base CLM (i.e., PPLM (Dathathri et al., 2020), GEDI (Krause et al., 2021), and DEXPERT (Liu et al., 2021)), and the training approaches (i.e., CTRL (Keskar et al., 2019) and DisCup (Zhang and Song, 2022)).

Figure 2: Control length and control performance.
**Evaluation.** Following previous work (Krause et al., 2021; Zhang and Song, 2022; Lu et al., 2022), we use an external sentiment classifier provided by Huggingface.co to classify the generated texts, and get sentiment control accuracy (i.e., Correctness). PPL and Dist-1/2/3 are reported to test their fluency and diversity, respectively. For human evaluation, we introduce _Relevance_, i.e., how the text conforms to required sentiment; _Topicality_, i.e., how the generated continuations are context-consistent with the given prompts, and _Fluency_, i.e., the text's fluency evaluated from the human perspective. More details can be seen in Appendix C.
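The Dist-n diversity metric used above can be sketched as the ratio of distinct to total n-grams over all generated continuations, as in the simplified snippet below.

```python
def dist_n(continuations, n):
    """Dist-n: ratio of distinct n-grams to total n-grams over all generated
    continuations; higher values indicate more diverse generations."""
    total, distinct = 0, set()
    for text in continuations:
        tokens = text.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        distinct.update(grams)
    return len(distinct) / max(total, 1)
```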
**Result and Analysis.** As shown in Table 3, RMT in the top-k setting demonstrates superior control performance compared to all baselines, while maintaining comparable text quality and diversity, and RMT in the top-p setting shows better text quality yet weaker control performance. The closest performers to our approach are DEXPERT and DisCup. However, DEXPERT requires fine-tuning an external pair of CLMs, resulting in tuning twice as many parameters as the base CLM. In contrast, RMT is parameter-efficient, requiring the tuning of parameters that amount to only around 16% of the base CLM's parameters. Regarding DisCup, both approaches employ the same loss function; however, DisCup optimizes different continuous prompts for each attribute, while we utilize RMT to steer the CLM using residual learning. DisCup requires fine-tuning fewer parameters, while RMT is completely decoupled from the CLM and controls different attributes using a unified model, providing greater flexibility and a more powerful control capability.
The human evaluation result presented in Table 4 indicates that DEXPERT performs slightly better in terms of fluency and contextual relevance, but exhibits lower control ability, and RMT excels in attribute relevance. We speculate that the use of a classifier-based objective in RMT and DisCup might result in overly strong control, leading to the generation of abrupt turning points that may compromise fluency and topicality to some extent. DisCup performs relatively poorly, as too few control parameters limit its expression ability. _More detailed examples are presented in Appendix D._
### Further Analysis
**Block Layers and base-CLM size.** In order to test the scalability over the number of block layers and the size of the base CLM, we investigate the influence of the number of block layers on RMT's control performance for base CLMs of different sizes.
\begin{table}
\begin{tabular}{l|c|l} \hline \hline
**Target Words** & **Control Length** & **RMT’s Generated Text** \\ \hline \multirow{3}{*}{circle, sit, talk} & 13 & The group of people sitting around talking in a circle around them. \\ \cline{2-3} & 17 & The woman is sitting in a circle and talking to a man in a white shirt. \\ \cline{2-3} & 25 & The group of people are sitting in a circle, talking about what they’re going to do with their lives. \\ \hline \multirow{3}{*}{drink, sit, table, wine} & 13 & The man is sitting on a table and drinking wine from a glass. \\ \cline{2-3} & 17 & The man is sitting on a table with a drink in his hand and drinking wine. \\ \cline{2-3} & 25 & The man is sitting at a table with a drink in his hand, while another man is drinking wine from a glass. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of generated text under different control lengths. The generated length is counted with GPT2's tokenizer, so it may differ slightly from the number of actual words.
Figure 3: RMT block layers and control performance. RMT could achieve a consistently high and stable control performance for CLMs with different sizes. About three block layers are sufficient for optimal performance.
The results shown in Figure 3 indicate that a value of \(M=3\) is appropriate for achieving effective word inclusion and sentence length control. This suggests that RMT has the advantage of being parameter-efficient over similar approaches like NRP Carlsson et al. (2022), which necessitates an external prompting model of the same size as the original CLM. Moreover, RMT achieves a consistently high and stable control performance for CLMs of different sizes, which indicates that RMT can potentially be applied to CLM models of various sizes.
**Ablation Study.** To assess the effectiveness of the Residual Memory Transformer (RMT), we conduct an ablation study. The study involves three settings: removing the causal self-attention, excluding the causal CLM attention, and evaluating RMT's performance without pre-training. The results are presented in Table 5. Interestingly, under all three settings, we observe a significant decline in performance. This finding serves as compelling evidence that every component in RMT, as well as the pre-training stage, is crucial and indispensable for achieving optimal results.
**Generation Efficiency.** RMT is a lightweight plugin, allowing for efficient inference speeds comparable to the original CLM. Table 6 demonstrates that RMT outperforms typical CTG approaches in terms of generation speed, approaching the speed of the pure CLM (GPT2-large).
## 5 Related Work
As summarized by Zhang et al. (2022), CTG approaches can be divided into three categories, i.e., retraining/refactoring, fine-tuning, and post-processing. The first two categories refer to training approaches, which aim to fine-tune (e.g., reinforcement learning is used to fine-tune a pretrained language model (PLM) Ouyang et al. (2022); Lu et al. (2022), optimize continuous prompts Yang et al. (2022); Zhang and Song (2022), or train a prompting model Carlsson et al. (2022) to steer text generation) or retrain Chan et al. (2021); Keskar et al. (2019); Gururangan et al. (2020) a PLM to generate texts that meet the desired control condition. These methods have shown significant performance improvements in the field. However, with the increasing size of PLMs, they have become resource-intensive to fine-tune or retrain. As a result, post-processing approaches have become more
\begin{table}
\end{table}
Table 4: Human evaluation results of sentiment control. As for each metric, \(\uparrow\) indicates that the higher corresponding value is better, and \(\downarrow\) is the opposite.
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline \multirow{2}{*}{**Target Sentiment**} & \multicolumn{3}{c}{**Correctness** (\(\uparrow\))} & \multicolumn{2}{c}{**Fluency** (\(\downarrow\))} & \multicolumn{2}{c}{**Diversity** (\(\uparrow\))} \\ & \multicolumn{1}{c}{Positive} & \multicolumn{1}{c}{Neutral} & \multicolumn{1}{c}{Negative} & \multicolumn{1}{c}{PPL} & \multicolumn{1}{c}{Dist-1 / Dist-2 / Dist-3} \\ \hline \multirow{8}{*}{Positive} & PPLM Dathathri et al. (2020) & 52.68 & 8.72 & 113.54 & 0.39 / 0.83 / 0.89 \\ & CTRL Keskar et al. (2019) & 77.24 & 18.88 & 48.24 & 0.13 / 0.53 / 0.79 \\ & GEDI Krause et al. (2021) & 86.01 & 26.80 & 123.56 & 0.20 / 0.66 / 0.85 \\ & DEXPERT Liu et al. (2021) & 94.46 & 36.42 & 60.64 & 0.18 / 0.63 / 0.84 \\ & DisCup Zhang and Song (2022) & 94.20 & 60.40 & 46.6 & 0.14 / 0.51 / 0.78 \\ & RMT+top-k & **97.62** & **67.20** & 46.0 & 0.14 / 0.56 / 0.79 \\ & RMT+top-p & 94.50 & 42.60 & **17.3** & 0.13 / 0.45 / 0.65 \\ \hline \multirow{8}{*}{Negative} & PPLM Dathathri et al. (2020) & 10.26 & 60.95 & \multicolumn{1}{c}{} & 122.41 & 0.40 / 0.83 / 0.90 \\ & CTRL Keskar et al. (2019) & 20.95 & 62.37 & \multicolumn{1}{c}{} & 45.27 & 0.13 / 0.51 / 0.78 \\ & GEDI Krause et al. (2021) & 60.43 & 91.27 & \multicolumn{1}{c}{} & 138.93 & 0.19 / 0.66 / 0.86 \\ & DEXPERT Liu et al. (2021) & 64.01 & **96.23** & \multicolumn{1}{c}{} & 67.12 & 0.20 / 0.64 / 0.83 \\ & DisCup Zhang and Song (2022) & 62.80 & 91.40 & \multicolumn{1}{c}{} & 47.90 & 0.13 / 0.50 / 0.77 \\ & RMT+top-k & **77.16** & 95.92 & \multicolumn{1}{c}{} & 49.15 & 0.15 / 0.60 / 0.82 \\ & RMT+top-p & 56.4 & 92.7 & \multicolumn{1}{c}{} & **19.0** & 0.13 / 0.48 / 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The experimental results of sentiment controllable text generation. Among the table, \(\uparrow\) indicates that the higher corresponding value is better, and \(\downarrow\) is the opposite.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Models** & **Cov (\%)** \\ \hline RMT & **93.8** \\ _w/o causal Self-Att_ & 73.2 \\ _w/o causal CLM-Att_ & 75.1 \\ _w/o Pre-training_ & 74.5 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of RMT.
popular in the research community.
The post-processing methods are dedicated to guiding the language model toward desired texts in the decode-time stage using an auxiliary module. To generate attribute-specific texts, PPLM (Dathathri et al., 2020) leverages a simple attribute classifier to update the head hidden layer of the LM by gradient feedback. Diffusion-LM (Li et al., 2022) combines a diffusion language model with classifier-guided gradient updates to the continuous sequence of latent variables, achieving plug-and-play controllable generation. COLD (Qin et al., 2022) proposes a novel decoding framework by combining an energy-based model with gradient-based sampling. Fudge (Yang and Klein, 2021) uses a discriminator that contains future attribute information to re-rank the token distribution produced by a frozen GPT2 model. NeuroLogic (Lu et al., 2022, 2021) incorporates lexical constraints into the decode-time algorithm. In order to accelerate the generation process, GeDi (Krause et al., 2021) and DEXPERT (Liu et al., 2021) train smaller language models as generative discriminators to guide the generation of a base GPT2. Plug-and-Blend (Lin and Riedl, 2021) extends GeDi to controllable story generation by introducing a planner module. Moreover, controlling multiple control elements has also been explored (Kumar et al., 2021; Yang et al., 2022).
Most decode-time approaches either intervene with the LM only during the token selection phase of generation, lacking planning capabilities, or require multiple iterations, resulting in excessive generation times. Our approach shares the plug-and-play trait of decode-time approaches. However, the key distinction is that we use a lightweight residual model to integrate control information and multi-level contextual streams from the CLM, enabling fine-grained content planning and efficient text generation.
## 6 Discussion and Future Work
RMT still suffers from some limitations. (1) Challenges in applying RMT to closed-source CLMs. Presently, the application of RMT needs access to the last hidden states or the logits of the CLM, thus applying RMT to some commercial CLMs, e.g., GPT-4, still faces challenges. This is also a common problem for all plugin-style CTG methods. (2) RMT does not focus on commonsense, which may sometimes result in generating texts that do not conform to commonsense. This issue could potentially be relieved by introducing an external knowledge graph in future work.
Up to now, it is still challenging to avoid the factual errors that appear in the text generated by large-scale causal language models, and RMT does not address this either. A promising line of future work is to combine RMT with information retrieval systems to enhance the factual accuracy of these generative models. Moreover, RMT could also be used to encode personal profiles and build personalized chatbots, or be fused with image information so as to be applied to multi-modal scenarios.
## 7 Conclusion
In this paper, we have proposed a new CTG alternative, which leverages a residual model to steer the CLM to generate the desired text noninvasively. Additionally, we propose the Residual Memory Transformer, a novel encoder-decoder architecture, to fuse the raw contextual text, the generative stream of the CLM, and the control information in one shot, thus better collaborating with and controlling the generation of the CLM. The experiments show that RMT exhibits better performance in flexibility, control granularity, and efficiency, making it a compelling solution for controllable text generation.
|
2309.12057 | 3D Muographic Inversion in the Exploration of Cavities and Low-density
Fractured Zones | Muography is an imaging tool based on the attenuation of cosmic muons to
observe the density distribution of large objects, such as underground caves or
fractured zones. Tomography based on muography measurements -- that is, three
dimensional reconstruction of density distribution from two dimensional muon
flux maps -- brings up special challenges. The detector field of view covering
must be as balanced as possible, considering the muon flux drop at higher
zenith angles and the detector placement possibilities. The inversion from
directional muon fluxes to 3D density map is usually underdetermined (more
voxels than measurements) which can be unstable due to partial coverage. This
can be solved by geologically relevant Bayesian constraints. The Bayesian
principle results in parameter bias and artifacts. In this work, the linearized
(density-length based) inversion is applied, the methodology is explained,
formulating the constraints associated with inversion to ensure the stability
of parameter fitting. After testing the procedure on synthetic examples, an
actual high quality muography measurement data set from 7 positions is used as
input for the inversion. The result demonstrates the tomographic imaging of a
complex karstic crack zone and provides details on the complicated internal
structures. The existence of low density zones in the imaged space was verified
by samples from core drills, which consist altered dolomite powder within the
intact high density dolomite. | László Balázs, Gábor Nyitrai, Gergely Surányi, Gergő Hamar, Gergely Gábor Barnaföldi, Dezső Varga | 2023-09-21T13:24:48Z | http://arxiv.org/abs/2309.12057v1 | # 3D Muographic Inversion in the Exploration of Cavities and Low-density Fractured Zones
###### Abstract
Muography is an imaging tool based on the attenuation of cosmic muons to observe the density distribution of large objects, such as underground caves or fractured zones. Tomography based on muography measurements - that is, three dimensional reconstruction of density distribution from two dimensional muon flux maps - brings up special challenges. The detector field of view covering must be as balanced as possible, considering the muon flux drop at higher zenith angles and the detector placement possibilities. The inversion from directional muon fluxes to 3D density map is usually underdetermined (more voxels than measurements) which can be unstable due to partial coverage. This can be solved by geologically relevant Bayesian constraints. The Bayesian principle results in parameter bias and artifacts. In this work, the linearized (density-length based) inversion is applied, the methodology is explained, formulating the constraints associated with inversion to ensure the stability of parameter fitting. After testing the procedure on synthetic examples, an actual high quality muography measurement data set from 7 positions is used as input for the inversion. The result demonstrates the tomographic imaging of a complex karstic crack zone and provides details on the complicated internal structures. The existence of low density zones in the imaged space was verified by samples from core drills, which consist altered dolomite powder within the intact high density dolomite.
keywords: Inverse theory - Tomography - Muography - Numerical solutions - Fractures, faults, and high strain deformation zones
## 1 Introduction
Cosmic radiation on the surface of the Earth is a natural phenomenon known for a century, based on the pioneering discovery by Theodor Wulf (1909). Shortly after, the cosmic origin was demonstrated by the balloon measurements of Victor Hess, through the observed increase of radiation higher in the atmosphere (Hess 1912). Since then, several key properties of cosmic rays have been discovered, showing that most of those reaching the Earth's surface are muon particles: collision cascade and subsequent decay products of primary cosmic rays of Galactic origin. Nature is rather kind to us on Earth: unlike on other planets in the Solar System (Kedar et al. 2013, Leone et al. 2021a), the atmosphere is sufficiently thick to filter most of the hadrons, and thin enough to let through most of the produced muons. This allows one to precisely quantify the muon flux at any point on the surface of the Earth or up to a kilometer deep underground, with negligible time variation for the purposes of this paper.
Imaging with cosmic muons, or "muography" in short, was pioneered in the 60s, first applied to archaeology by Alvarez et al. (1970). Ever since, this emerging scientific field has had a long history (Kaiser 2019), an overview of which is well covered in a recent monograph by Olah et al. (2022). One of the basic settings is "underground muography": the flux reduces strongly and in a well predictable manner as a function of the material crossed, similar to X-ray radiography. The flux depends nearly precisely only on the integrated density along the measurement line and the zenith angle (Lesparre et al. 2010). In fact, the muon travels along a nearly straight line up to the point of stopping (Olah et al. 2019), which allows a directional measurement with a sufficiently precise muon tracking detector. It is also worth mentioning here that not only the attenuation of muons can be utilized for imaging. For example, muon scattering indicates the density and atomic number distribution inside an examined volume by measuring the direction of the particles before and after the target. This method was first proposed by a group from the Los Alamos National Laboratory (Borozdin et al. 2003, Priedhorsky et al. 2003), and recently an overview of similar studies was given by Barnes et al. (2023).
The aim of underground muography measurement is to investigate the "target", that is, the density distribution in the volume above the detector. In order to map the three dimensional structure, multiple measurement views of two dimensional measurements (tomography) may be invoked. This is inherently an inversion problem, since the muon flux is related to a complicated integral of the density.
This work demonstrates practical tomographic inversion of actual muographic data, and shows the structural determination of a three-dimensional (3D) crack zone by multi-view muography. The radiographic approach to image inhomogeneous density zones with muography has been applied earlier by Tanaka et al. (2007, 2011, 2020), Miyamoto et al. (2017), Olah et al. (2021) among others. A tomographic measurement is rather complex (Miyamoto and Nagahara 2022): one needs not only high statistics data, but the systematic errors of the multiple views also need to be well controlled. The inversion needs to use some existing information, that is, a Bayesian approach, and may invoke other measurement techniques such as gravity (Nishiyama 2022, Barnoud et al. 2019, Guardincerri et al. 2017, Cosburn et al. 2022) or electric resistivity (Lesparre et al. 2012). Muon tomography has been successfully applied earlier, such as to identify underground density increase by an ore body (Schouten et al. 2018) and decrease by cavities (Borselli et al. 2022, Cimmino et al. 2019, Liu et al. 2023), or to image the internal structure of a volcanic cone (Nagahara et al. 2022) or a nuclear reactor (Procureur et al. 2023). The present paper describes the adaptation of a maximum likelihood inversion method with the combination of geologically relevant Bayesian constraints for multi-view
muographic measurements. The method includes uncertainty propagation, quantifications for focus zone determination, and a synthetic data test.
The applicability for 3D density estimation is demonstrated for the first time on a high resolution muography survey of a karstic underground crack zone at shallow depth (40-60 m), and validated by drill samples. The measurements were performed in the Kiralylaki tunnel system near Budapest, as indicated in Fig. 1, along a horizontal tunnel. The location was chosen due to the convenient geometry of measurements along a straight line, while at the same time a complicated density structure was expected above the tunnel.
## 2 Linearized Muon-Tomographic Density Distribution Reconstruction
In linearized tomographic inversion the initial data system is the line integral of the density (density-lengths), obtained by transforming the directionally recorded pulse count rates of the muon detectors. More precisely, one quantifies the "integrated energy loss" along the given direction; however, this has little (or, if needed, quantifiable) dependence on the material composition (Lesparre et al., 2012). In this approximation, the response of the volume of interest (direct problem) can be described as a linear transformation of the discretized density distribution of the surveyed geological object, where the transformation is a function of the measurement geometry alone. To estimate the correct density distribution, the error distribution of the count rates (related to the flux) must also be transformed.
### Forward problem: the geometric model
In the case of absorption muon tomography, one tries to estimate the density distribution \((\rho(\mathbf{r}),\mathbf{r}\in V)\) of a simply connected continuous domain \((V)\) (partially bounded by the known surface topography). For that, multi-view radiographic muon transmission data are collected from the domain, defined as density-length
\[\gamma=\int_{L}\rho(\lambda)\,d\lambda, \tag{1}\]
where \(\lambda\) is the length measure along a muon trajectory \(L\), and the aim is to determine the density distribution by solving the inverse problem, similarly to X-ray based computed tomography (CT). The muon transmission derives from the attenuation of cosmic muons through the
Figure 1: Geographic location of the measurements in Budapest. The horizontal tunnel is oriented towards the west, with increasing overburden along its length.
medium, depending basically on the density properties of the medium due to energy loss. This means that with increasing density-length, the rate of cosmic muons decreases and the muon spectrum hardens. The initial - energy (\(E\)) dependent - muon flux spectrum on the surface of the Earth (\(\varphi_{0}(\mathbf{\theta},E)\)) also depends on the zenith angle of the arriving muon (its direction unit vector will be denoted by \(\mathbf{\theta}\)). In this paper, the method by Guan et al. (2015) has been used to approximate \(\varphi_{0}\). The muon flux measurements are performed below the domain \(V\) of interest at specific locations, that is, sampled at a few points of the domain edge (at the detector locations, indicated as rectangles in the bottom of Fig. 2) by direction sensitive measurements. At these points, a muon tracking detector registers count rates generated by muons with a surface energy greater than a threshold value (\(E_{\rm min}\)), which is required to survive through the medium. The actual measured data is a vector of muon count numbers \(\mathbf{y}\), which can be expressed as a count rate \(\hat{\mathbf{y}}=\mathbf{y}/\Delta t\), where \(\Delta t\) is the measurement live time. In turn, the count rate can be normalized to flux by dividing by \(\eta\): the product of detector efficiency and acceptance (effective sensitive area).
Muons are measured at different positions and discretized (binned) by direction angle, represented by the direction unit vectors \(\mathbf{\theta}_{i}^{0}\) (the bin center of the \(i^{\rm th}\) measurement) with disjoint solid angle ranges (\(\Delta\Omega_{i}\)) of the bins, over which the detector integrates the muon arrivals. The theoretically expected muon count rates in the measurements are:
\[\Psi_{i}=\int\limits_{\mathbf{\theta}\in\Delta\Omega_{i}}\int\limits_{E\geq E_{ \rm min}(\gamma_{i})}\eta_{i}\varphi_{0}(\mathbf{\theta},E)dEd\Omega\approx\eta_{ i}\Delta\Omega_{i}\int\limits_{E\geq E_{\rm min}(\gamma_{i})}\varphi_{0}(\mathbf{ \theta}_{i}^{0},E)dE \tag{2}\]
where \(\mathbf{\theta}\) is the measured muon direction, \(\mathbf{r}_{i}\) is the detector position of the \(i^{\rm th}\) measurement, the index \(i\in[1..N]\) where \(N\) is the total number of measurement bins, and \(\gamma_{i}\) is the density-length derived from the modeled medium. The expected muon count rate is useful for significance and detection limit calculations. With sufficiently small angular bins, one can use the efficiency calculated at the bin center, and the integration over solid angle translates to a multiplication with the bin size \(\Delta\Omega_{i}\).
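As an illustration, the expected count rate of Eq. 2 can be evaluated numerically once the surface flux model \(\varphi_{0}\) is tabulated on an energy grid; the following Python sketch, with illustrative variable names, performs the integration above the threshold energy.

```python
import numpy as np

def expected_counts(E_grid, phi0, E_min, eta, d_omega, live_time):
    """Expected muon counts in one angular bin (Eq. 2): integrate the surface
    flux model above the threshold energy E_min(gamma). `phi0` is the
    differential flux tabulated on `E_grid` for the bin's central direction
    (e.g. from the Guan et al. 2015 parameterization); variable names are
    illustrative."""
    above = E_grid >= E_min
    integral = np.trapz(phi0[above], E_grid[above])   # integral of phi0 dE for E >= E_min
    return eta * d_omega * integral * live_time
```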
The association between the density-length and the muon count rate can be derived from equation (2) through the \(E_{\rm min}(\gamma)\) relation. The minimum required energy for a muon to survive after energy deposition over a given density-length is well known in the literature; we use the method by Lesparre et al. (2010) for this calculation.
Although the direct measurement data is the muon count number \(\mathbf{y}\), it is preferable to take two further steps to produce the input data for the inversion. The first one is to determine the measured (energy integrated) flux \(\Phi^{m}\),
\[\Phi_{i}^{m}(\mathbf{\theta}_{i}^{0},\mathbf{r}_{i})=\frac{\hat{\mathbf{y}}}{ \eta_{i}\Delta\Omega_{i}}. \tag{3}\]
From here on, the superscript \(m\) denotes quantities derived from measurements. Even though the muon flux is derived directly from measured data, it is not easy to interpret; therefore, it is transformed to density-length in a second step. This transformation requires \(\varphi_{0}\) as input, as well as the above quoted \(E_{\rm min}(\gamma)\) dependence, resulting in the (implicitly expressed) measured density-length \(\gamma^{m}\):
\[\Phi_{i}^{m}=\int\limits_{E\geq E_{\rm min}(\gamma_{i}^{m})}\varphi_{0}(\mathbf{ \theta}_{i}^{0},E)dE. \tag{4}\]
Using \(\mathbf{\gamma}^{m}\) instead of \(\hat{\mathbf{y}}\) or \(\Phi^{m}\) (count rate or measured flux) not only results in an easier interpretation by geoscientists, but also linearizes the inversion problem, since density-length is an additive quantity.
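In practice, Eq. 4 is solved numerically for each bin, since the integrated model flux is a monotonically decreasing function of the density-length. A minimal sketch using a bracketing root finder is shown below; the bracket values are illustrative.

```python
from scipy.optimize import brentq

def density_length_from_flux(flux_meas, flux_of_gamma,
                             gamma_lo=1.0, gamma_hi=1.0e6):
    """Solve Eq. 4 for the measured density-length gamma^m. `flux_of_gamma`
    is the model flux integrated above E_min(gamma); it decreases
    monotonically with gamma, so a bracketing root finder is sufficient.
    The bracket values (in kg/m^2) are illustrative."""
    return brentq(lambda g: flux_of_gamma(g) - flux_meas, gamma_lo, gamma_hi)
```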
The directions of the measured muon trajectories are within the angular bin represented by its central direction vector \(\mathbf{\theta}_{i}^{0}\). The angular
bins are not always negligibly small, which means that one must average over all the directions within the \(\Delta\Omega\) bin size. The density-length can be readily calculated as the integral of the density distribution \(\rho(\mathbf{r})\) along (the averaged) trajectory.
For the tomographic reconstruction, the continuous density distribution of the volume \(V\) is expressed in a finite dimensional basis (in the simplest case a grid, covering \(V\)) as a form of parameter discretization. The grid elements \(\boldsymbol{\beta}_{k}\) are non-overlapping volume elements which provide a disjoint coverage of the domain \(V\) under study (\(k\in[1..K]\), where \(K\) denotes the number of volume elements). These grid elements (expressed as a vector) can be used to generate orthogonal basis functions to describe the density distribution:
\[\beta_{k}(\mathbf{r})=\begin{cases}1;\ \mathbf{r}\in\boldsymbol{\beta}_{k}\\ 0;\ \mathbf{r}\notin\boldsymbol{\beta}_{k}\end{cases}, \tag{5}\]
that is, the \(\beta_{k}(\mathbf{r})\) functions take the value of 1 only within the \(k\)-th volume element, and zero outside the element. This formalism is indicated schematically in Fig. 2. An element of the density vector (\(\boldsymbol{\rho}\)) is therefore constructed by the discretization of density distribution \(\rho_{k}=\rho(\mathbf{r})\cdot\beta_{k}(\mathbf{r})\). Let \(F_{i,k}\) denote the path length of (averaged) muon trajectories within grid elements, approximately indicated by the shaded cross section area in Fig. 2. This \(\mathbf{F}\) is the Jacobian matrix, transforming between the discretized density and density-length (index \(i\) runs through the measurement lines, while index \(k\) refers to the grid elements). The matrix includes all the geometric information needed for the inversion. Adding up the path lengths, weighted with the density, results the vector of density-lengths:
\[\boldsymbol{\gamma}=\mathbf{F}\boldsymbol{\rho}. \tag{6}\]
Given a suitable (error model dependent) metric, the distance from the measurement (vector norm of \(\Delta\boldsymbol{\gamma}=\boldsymbol{\gamma}^{m}-\mathbf{F}\boldsymbol{\rho}\), where \(\gamma_{i}^{m}\) vector elements are derived from Eq. 4) will be the basis for the reconstruction. According to Fig. 2 the topography defines the upper boundary of the volume of interest, that is, the path lengths and density distribution are calculated only below the topography (zero above).
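For illustration, the Jacobian \(\mathbf{F}\) can be assembled by stepping along each (averaged) measurement line and accumulating the step length in the voxel it traverses; the simple sampling sketch below stands in for an exact ray-voxel intersection, and all numerical settings are illustrative.

```python
import numpy as np

def path_length_matrix(ray_starts, ray_dirs, grid_origin, voxel_size,
                       grid_shape, ray_length=100.0, n_steps=2000):
    """Assemble the Jacobian F of Eq. 6 by stepping along each (averaged)
    measurement line and accumulating the step length in the voxel it
    traverses. Step counts and the ray length are illustrative."""
    grid_origin = np.asarray(grid_origin, dtype=float)
    F = np.zeros((len(ray_starts), int(np.prod(grid_shape))))
    ds = ray_length / n_steps
    for i, (r0, d) in enumerate(zip(ray_starts, ray_dirs)):
        r0 = np.asarray(r0, dtype=float)
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        for s in np.arange(0.5 * ds, ray_length, ds):
            idx = np.floor((r0 + s * d - grid_origin) / voxel_size).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.asarray(grid_shape)):
                F[i, np.ravel_multi_index(tuple(idx), grid_shape)] += ds
    return F   # then gamma = F @ rho for the discretized density vector rho
```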
Figure 2: Schematic representation of the relevant variables and their geometrical relations. The measurement is done in a finite solid angle bin, which intersects a specific voxel. The mean path length in the voxel is encoded in the \(\mathbf{F}\) matrix. (The voxel size and the solid angle cone is exaggerated relative to a typical configuration for the sake of clarity).
### Inverse problem and related errors
#### 2.2.1 The complex error model associated with tomographic inversion
Muon tomography is characterized by imaging over a relatively restricted angular range (looking only upwards), which is therefore a cone beam type limited angle transmission tomography. The one-sided arrangement implies the possibility of distortion of the fitted distribution along the projection lines and associated artifacts. The number of measurement points, angular resolution and the coverage (projection line network density) determine the possible resolution for the reconstruction, i.e. the finest grid subdivision that can be applied (Tanaka and Oshiro, 2017).
The method of parameter fitting (density reconstruction) is determined by the error model associated with the imaging. It also determines the functional (quadratic form, \(Q\)) to be minimized during the parameter fitting process. The error of the measured data can be considered to follow a Poisson distribution, for which the Maximum Likelihood (ML) principle leads (approximately) to weighted least squares (WLS) fitting, i.e., to minimizing the following functional of the measured data (\(\mathbf{y}\)) and the theoretical muon count rates (\(\mathbf{\Psi}\)):
\[Q_{\mathbf{y}}(\boldsymbol{\gamma})=\sum_{i=1}^{N}W_{\mathbf{y},i,i}(y_{i}- \Psi_{i}(\gamma_{i})\Delta t)^{2};\quad W_{\mathbf{y},i,i}\approx\frac{1}{ \mathbf{y}_{i}}\, \tag{7}\]
where \(\mathbf{W}_{\mathbf{y}}\) is the weight matrix (the inverse of the measurement covariance matrix) for the \(N\) measurements.
As explained above, the (\(\mathbf{y}\rightarrow\boldsymbol{\gamma}^{m}\)) transformation simplifies the tomographic inverse problem, but it also transforms the original error distribution. The change of the data variances requires the modification of the weights (\(\mathbf{W}_{\gamma}\)), taking care of error propagation. Since the transformation is non-linear, it causes a small bias (\(\delta\gamma_{i}=[\mathbb{E},\Psi^{-1}]y_{i}\)), which is negligible at the count rates relevant for muography (counts per bin above 100) (Szatmary, 2002). The functional associated with the fit becomes linear after the transformation:
\[Q_{\rho}=(\boldsymbol{\gamma}^{m}-\mathbf{F}\boldsymbol{\rho})^{T}\mathbf{W}_ {\gamma}(\boldsymbol{\gamma}^{m}-\mathbf{F}\boldsymbol{\rho}). \tag{8}\]
The weight matrix elements are related to the transformation between flux and density-length:
\[W_{\gamma,i,i}=(\eta_{i}\Delta t_{i})^{2}\left(\frac{\partial\Phi_{i}^{m}}{ \partial\gamma_{i}}\right)^{2}\frac{1}{y_{i}}. \tag{9}\]
The density-length covariance matrix (\(\mathbf{C}_{\gamma}\)) is assumed to be diagonal, neglecting the small-scale data correlation due to the transformation. Expressing with the weight matrix:
\[\mathbf{C}_{\gamma}=\sigma_{\gamma}^{2}\mathbf{W}_{\gamma}^{-1}\, \tag{10}\]
where \(\sigma_{\gamma}^{2}\) is the calibration factor for the posterior \(\mathbf{C}_{\gamma}\) which can be estimated from the post-fitting value of normalized \(Q_{\gamma}\).
Up to now, the grid structure is arbitrary, defined by Eq. 5. This grid may be adapted to the geometry of the domain to be mapped, using the direction-dependent weight of the measurements:
\[w_{k}=\sum_{i=1}^{N}F_{i,k}W_{\gamma,i,i}. \tag{11}\]
The weights can be used to filter out blind spots (low weight voxels) in a given measurement configuration. Increasing the grid size can locally improve the weight of a grid element, but at the same time reduces the variance of the density estimate of the grid element, degrades the spatial resolution and, with varying density, the fitted density value may become less and less representative of the environment. In the
case of cavity exploration, it is generally assumed that cavities are zero density regions in a rock mass with a relatively accurately known density. Increasing the size of grid elements may blur and reduce the density contrast. It is worth noting that a topographic error appears as a parameter error, a small near-surface density anomaly after the fit. The data system can also be checked for the detectability of possible density anomalies before the fit. The feasibility conditions for muographic surveys have been examined in multiple papers, e.g., in Leone et al. (2021b) or in Lesparre et al. (2010).
#### 2.2.2 Mathematical background of inversion
A typical muon tomography of cavity exploration sometimes requires a resolution of up to one meter, if possible, and a corresponding grid spacing. When designing the grid, it should be borne in mind that the resolution deteriorates as the distance from the measurement site increases (due to the angle uncertainty). The large number of grid elements (number of parameters) may lead to underdetermination of the parameters to be fitted (ambiguity, high parameter correlation) in parts of the mapped range. To make this task mathematically tractable, but also geologically relevant, Bayesian maximum _a posteriori_ probability (MAP) principle was used to set up the fitting criterion. This means that the likelihood function \(L(\mathbf{\gamma},\mathbf{\rho})\) is also multiplied by the _a priori_ density function for the parameter distribution \(p(\mathbf{\rho})\). Thus, the functional \(Q_{\gamma}\) from the ML principle is complemented by another quadratic form \(Q_{\rho}\) for the parameters to be fitted (Menke, 2018, Tarantola, 1987):
\[Q^{(0)}=Q^{(0)}_{\gamma}+Q^{(0)}_{\rho}=(\mathbf{\gamma}^{m}-\mathbf{F}\mathbf{\rho})^ {T}\mathbf{W}_{\gamma}(\mathbf{\gamma}^{m}-\mathbf{F}\mathbf{\rho})+(\mathbf{\rho}-\mathbf{ \rho}^{(0)})^{T}\mathbf{W}^{(0)}_{\rho}(\mathbf{\rho}-\mathbf{\rho}^{(0)})\, \tag{12}\]
where the matrix \(\mathbf{W}^{(0)}_{\rho}=(\mathbf{C}^{(0)}_{\rho})^{-1}\) is the inverse of the covariance matrix of the _a priori_ distribution in Bayes' theorem. The above functional provides a dual metric for fitting the density vector: the first part is the criterion in the measurement space, the second part is the _a priori_ requirement in the parameter space, with the assumed parameter distribution centered on \(\mathbf{\rho}^{(0)}\). Setting \(\mathbf{\rho}^{(0)}\) to a constant value corresponds to assuming a Gaussian prior around the well-defined solid rock density. The diagonal elements of \(\mathbf{W}^{(0)}_{\rho}\) can be seen as effective error terms for the _a priori_ distribution (Tarantola, 1987), whereas the matrix can also include various forms of damping or smoothing. Since the statistical errors are more complicated to evaluate in that (non-diagonal matrix) case, and the results seemed stable in the setting of muographic inversion, \(\mathbf{W}^{(0)}_{\rho}\) was set to be proportional to the unit matrix. The normal equations associated with the parameter fitting are:
\[\partial_{\rho}Q^{(0)}=-\mathbf{F}^{T}\mathbf{W}_{\gamma}\mathbf{\gamma}^{m}+ \mathbf{R}\mathbf{\rho}+\mathbf{W}^{(0)}_{\rho}(\mathbf{\rho}-\mathbf{\rho}^{(0)})=\mathbf{0}\, \tag{13}\]
where \(\mathbf{R}=\mathbf{F}^{T}\mathbf{W}_{\gamma}\mathbf{F}\) notation has been applied to the symmetric quadratic matrix in the formula. Hence the first order estimate of the density distribution (Tarantola, 1987):
\[\mathbf{\rho}^{(1)}=(\mathbf{R}+\mathbf{W}^{(0)}_{\rho})^{-1}(\mathbf{F}^{T} \mathbf{W}_{\gamma}\mathbf{\gamma}^{m}+\mathbf{W}^{(0)}_{\rho}\mathbf{\rho}^{(0)}). \tag{14}\]
The covariance matrix associated with the expected value (denoted by \(\mathbb{E}\)) of the density distribution vector variation (\(\delta\mathbf{\rho}\)) in equation (14) can be derived as follows:
\[\mathbf{C}_{\rho}^{(1)}=\mathbb{E}\left(\delta\mathbf{\rho}^{(1)}(\delta\mathbf{\rho}^{(1)})^{T}\right)=\\ \mathbb{E}\left[(\mathbf{R}+\mathbf{W}_{\rho}^{(0)})^{-1}(\mathbf{F}^{T}\mathbf{W}_{\gamma}\delta\mathbf{\gamma})(\delta\mathbf{\gamma}^{T}\mathbf{W}_{\gamma}\mathbf{F})(\mathbf{R}+\mathbf{W}_{\rho}^{(0)})^{-1}\right]+\\ \mathbb{E}\left[(\mathbf{R}+\mathbf{W}_{\rho}^{(0)})^{-1}(\mathbf{F}^{T}\mathbf{W}_{\gamma}\delta\mathbf{\rho})(\delta\mathbf{\rho}^{T}\mathbf{W}_{\gamma}\mathbf{F})(\mathbf{R}+\mathbf{W}_{\rho}^{(0)})^{-1}\right], \tag{15}\]
where the estimated density vector variation is a function of measured density-length variation and the prior density variation. The above formula combines the effect of these two sources of error in a weighted way (Menke, 2018). After some rearrangement:
\[\mathbf{C}_{\rho}^{(1)}=\left(\mathbf{R}+\mathbf{W}_{\rho}^{(0)}\right)^{-1}. \tag{16}\]
The Bayesian assumption introduces a bias with respect to the first-order asymptotically unbiased ML (Maximum Likelihood) estimate.
To estimate the bias, we can calculate the relationship between the true and the estimated densities:
\[\mathbb{E}\left[\mathbf{\rho}^{(1)}\right]=\left(\mathbf{R}+\mathbf{W}_{\rho}^{(0)}\right)^{-1}\left(\mathbf{F}^{T}\mathbf{W}_{\gamma}\mathbf{F}\mathbf{\rho}_{\rm real}+\mathbf{W}_{\rho}^{(0)}\mathbf{\rho}^{(0)}\right). \tag{17}\]
Hence the expected value of bias (\(\mathbf{b}^{(1)}=\mathbb{E}\left[\mathbf{\rho}^{(1)}-\mathbf{\rho}_{\rm real}\right]\)):
\[\mathbf{b}^{(1)}= \left(\mathbf{R}+\mathbf{W}_{\rho}^{(0)}\right)^{-1}\left(\mathbf{F}^{T}\mathbf{W}_{\gamma}\mathbf{F}\mathbf{\rho}_{\rm real}+\mathbf{W}_{\rho}^{(0)}\mathbf{\rho}^{(0)}\right)-\mathbf{\rho}_{\rm real} \tag{18}\] \[= \left(\mathbf{R}+\mathbf{W}_{\rho}^{(0)}\right)^{-1}\left((\mathbf{F}^{T}\mathbf{W}_{\gamma}\mathbf{F}-\mathbf{R}-\mathbf{W}_{\rho}^{(0)})\mathbf{\rho}_{\rm real}+\mathbf{W}_{\rho}^{(0)}\mathbf{\rho}^{(0)}\right)\] (19) \[= \left(\mathbf{R}+\mathbf{W}_{\rho}^{(0)}\right)^{-1}\mathbf{W}_{\rho}^{(0)}\left(\mathbf{\rho}^{(0)}-\mathbf{\rho}_{\rm real}\right). \tag{20}\]
The bias occurs just at the cavities, reducing their indication. The non-diagonal elements of \(\mathbf{R}\) contribute to the appearance of smaller artifacts.
Given the histogram of the estimated parameters, a critical level can be defined for the separation of cavities during post-processing. Recall also that almost any regularized version of the method of least squares is equivalent to some form of Bayesian estimation.
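The closed-form expressions of this subsection translate directly into a few lines of linear algebra. The sketch below computes the MAP estimate (Eq. 14), its covariance (Eq. 16), and the expected bias for a hypothetical true density (Eq. 20), using dense matrices for clarity; a realistic grid would require sparse storage and iterative solvers instead.

```python
import numpy as np

def bayesian_map_inversion(F, W_gamma, gamma_m, rho0, W_rho0):
    """MAP density estimate (Eq. 14) and its covariance (Eq. 16), using dense
    matrices for clarity; realistic grids call for sparse storage."""
    R = F.T @ W_gamma @ F                    # R = F^T W_gamma F
    A = R + W_rho0
    rho1 = np.linalg.solve(A, F.T @ W_gamma @ gamma_m + W_rho0 @ rho0)  # Eq. 14
    C_rho1 = np.linalg.inv(A)                # Eq. 16
    return rho1, C_rho1

def expected_bias(F, W_gamma, W_rho0, rho0, rho_real):
    """Expected bias of the MAP estimate for a hypothetical true density (Eq. 20)."""
    A = F.T @ W_gamma @ F + W_rho0
    return np.linalg.solve(A, W_rho0 @ (rho0 - rho_real))
```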
#### 2.2.3 Reducing the inversion problem from 3D to 2+1D
Inversion of a muon tomography measurement is an inherently three dimensional problem. In special cases, such as when the measurement locations are along a line - as is the case for the measurements presented in this paper - the configuration reduces to an independent set of 2 dimensional problems in planes containing the measurement line. The inversion planes never intersect anywhere other than the measurement line; therefore both the measured density-length values and the resulting density distributions are uncorrelated and independent. The inversion is solved in each plane independently, and the 3 dimensional density distribution is derived by projecting the solutions onto the voxel basis of the examined space. The general formulation remains fully valid, which means that one can expect the same bias-related artifacts in the inversion planes as in a "fully 3D" inversion.
## 3 Simulated measurements - preliminary tests
Based on the known top surface (topography) or possible underground structures of the domain under investigation, the efficiency and focal area of a given measurement system can be investigated prior to the measurements, and the parameters of the measurement system can be optimized. In this section, such a study is presented, mainly with the aim of verifying the performance and understanding the details of the output. In this example, a predefined 3 m diameter cavity (zero density) is located in a homogeneous measurement environment at 20 m depth and viewed by an intuitively organized series of measurements 50 m below a flat surface. Figure 3 shows the mapping method in two similar measurement configurations (wider and narrower range of positions), while in Figure 4 the focal range of the mapping can be analyzed based on the projected weights.
As can be seen from Figure 4, with differently tilted detectors the mapping is slightly differently focused, but the coverage is still weak at the edges.
The inversion results are shown in Fig. 5, assuming a prior density value \(\boldsymbol{\rho}^{(0)}\) = 2500 kg/m\({}^{3}\) and standard deviation \(\sqrt{\boldsymbol{C}_{\rho}^{(0)}}\) = 450 kg/m\({}^{3}\). The left panel is a noise-free (infinite statistics) calculation, which shows characteristic artifacts along the projection lines. Mathematically these are due to nonzero off-diagonal elements of the **R** matrix. In terms of the Bayesian approach, one can understand that the _a priori_ assumption is a flat density map; therefore the most probable _a posteriori_ density map does not fully describe the zero density anomaly (cavity). The fit fills the cavity and balances it with reduced density along the observation lines. One must note that the artifacts do not reach the magnitude of the real anomaly, but still allow a qualitative verification. Generally the artifacts are linked to the real cavity as characteristic radial patterns.
A key issue with real measurements is the finite statistics, that is, noise. The inversion with simulated noise (statistical uncertainty after collecting 1 month of data, assuming a 0.16 m\({}^{2}\) detector with 1\({}^{\circ}\) angular bins) is shown in the middle panel of Fig. 5. The qualitative picture does not change, and at the cost of the artifacts, the Bayesian approach suppresses the oscillations in the inversion result which would result from a standard maximum-likelihood fit.
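The simulated noise is generated by drawing Poisson-distributed counts around the expected rates of Eq. 2; a minimal sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed for a reproducible synthetic test

def synthetic_counts(expected_rate, live_time):
    """Draw Poisson-distributed counts per angular bin around the expected
    rate of Eq. 2 times the measurement live time (values illustrative)."""
    return rng.poisson(expected_rate * live_time)
```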
Figure 3: Projection lines of two measurement layouts under investigation: with different measurement points and detector tilt angles. The known cavity is located at the center. The voxel resolution is 1 m in horizontal and 0.5 m in vertical direction. The scale of axes are not the same.
## 4 Inversion of field measurements
### Geometrical configuration
The application of Bayes inversion is demonstrated on a real one-line field measurement in the Kiralylaki tunnel near Budapest, Hungary. Seven measurements have been performed by Close Cathode Chamber technology (Olah et al. 2012, Barnafoldi et al. 2012, Varga et al. 2013) along the tunnel (see Fig. 6) using a 0.16 m\({}^{2}\) detector with 1\({}^{\circ}\) angular bins. The detector was placed in the straight tunnel to study the density distribution of the overlying rocks for the purpose of cavity exploration.
The measurements were taken over a period of approximately one month per position to reach the proper variance and detection limit. Fig. 7 shows the topography of the examined region and the geometric configurations of the measurements, where the coordinates are on
Figure 4: The sensitivity map (the logarithms of back projected measurement weight from Eq. 11, \(\log(w_{k})\)) of different measurement configurations introduced on Fig. 3. The scale of axes are not the same.
Figure 5: Density inversion results with a hypothetical void in a homogeneous rock: without noise (left), with noise if 1 month measurements assumed (middle), and the standard deviation of estimated densities (right) in g/cm\({}^{3}\). The scale of axes are not the same.
the basis of the National Hungarian Grid (EOV). Run numbers and associated arrows indicate each detector position and viewing direction; data were collected in a \(\sim\)90\({}^{\circ}\) cone around each viewing direction. These specific configurations have been chosen based on preliminary surveys to focus on the most significant and closest density anomalies.
The detected density-length anomalies (difference between the measured density-length and that of a homogeneous rock) are presented in Fig. 8. The measured density-lengths were the input for the inversion.
### Results of inversion and quantification of uncertainties
The results of the Bayesian inversion are displayed on the Fig. 9 in each slice with 1.5 m resolution in horizontal and 0.5 m resolution in vertical direction, assuming density value \(\boldsymbol{\rho}^{(0)}\) = 2400 kg/m\({}^{3}\) and variance \(\sqrt{\boldsymbol{C}_{\rho}^{(0)}}\) = 450 kg/m\({}^{3}\). Note that this Bayes-prior variance is larger than the typical measurement error (5%), therefore gives higher weight to the measured data. The coordinates have been shifted: vertical origin is at the level of the top of the tunnel, the horizontal origin is fixed at the end of the Kiralylaki tunnel (EOV East 647000). The southern slices (-0.2 and -0.1) do not contain significant anomalies, accordingly, the resulting density distribution is essentially homogeneous in the middle part (in the focus area). The north slices show density anomalies reaching very close to the tunnel.
The inversion works efficiently in the focus area defined by high values of the back propagated weight (left panel of Fig. 10). However, this quantity falsely implies high sensitivity close to the detectors as well, since a voxel is taken into account by multiple measurement lines in this nearer region. The standard deviation of the density values (similar in all slices) is mapped in the right panel of Fig. 10.
The proper parameter fit in the measurement space is illustrated in Figure 11, where the data for a selected detector location are compared with the density-lengths calculated from the fitted densities. The asymmetry of the curves can be explained by the topography. The quality of the fit is similarly excellent at all measurement points. The standard deviation is minimal in the direction of the focus range. With these
Figure 6: Images of the tracking detectors consisting of Close Cathode Chambers, installed in the Kiralylaki tunnel.
measurements the problem of detectability is also demonstrated: the anomaly compared to the theoretical measurements calculated with the reference rock density can be detected at the selected 95% confidence level (1.65 sigma).
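The detectability criterion can be illustrated with a minimal counting-statistics sketch; the count numbers below are purely hypothetical and serve only to show how the 1.65 sigma (one-sided 95% confidence) threshold is applied.

```python
import numpy as np

# Hypothetical counts in one angular bin after ~1 month of data taking.
N_ref  = 12000.0   # counts expected for the homogeneous reference rock
N_meas = 12700.0   # counts actually observed (an excess indicates a density deficit)

# Poisson counting statistics: significance of the deviation in units of sigma.
significance = (N_meas - N_ref) / np.sqrt(N_meas)

# A one-sided 95% confidence level corresponds to 1.65 sigma.
print(f"significance = {significance:.1f} sigma; "
      f"detected at 95% CL: {significance > 1.65}")
```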
The histogram (Fig. 12) shows that the estimated error distribution (residual distribution) has an almost zero-mean Gaussian
Figure 8: The detected density-length anomalies (deviation from a homogeneous density) are shown at the measurement positions in the tunnel, each plotted on a square grid of horizontal coordinate system. Contour lines represent the detector-to-surface rock thicknesses from the given point of view.
Figure 7: Topography and the geometric conditions of the measurements and inversion. Left: the section of the topography perpendicular to the tunnel. The slices (indicated from -0.2 to 0.5 by the tangent of their zenith angles) are the 2D planes in which the tomographic inversion has been made (see Sec. 2.2.3). Right: the section of the topography parallel to the tunnel. Red arrows show the detector pointing of the measurements, dashed line indicates the section of the left figure.
distribution. The bias due to the Bayesian term is hardly noticeable. This may be explained by the relatively small fraction of fractured or cavity zones. The fitted parameters (density distribution) show a more complicated distribution, with the peak at the Bayesian prior of the homogeneous assumption.
Figure 9: The result of the tomographic inversion (_a posteriori_ density distributions) is shown in relevant slices (as Fig. 7). On the 0.1, 0.3, and 0.4 slices purple arrows show the validation drill locations.
## 5 Validation using drill core samples
The measurements, viewed both as transmission images and as tomographic results, indicated large anomalies - decreased density regions - which are promising targets for a direct verification. Drilling seemed possible from the inside of the tunnel, with positions and directions aiming for anomalies close to the tunnel ceiling.
The length of the control drill holes was limited to 10 m due to technical reasons. Altogether, three drill holes were bored at zenith angles between 3\({}^{\circ}\) and 22.5\({}^{\circ}\), with lengths of 5.4 m, 5.8 m and 9.2 m, indicated in Fig. 9 with purple arrows in slices 0.1, 0.3, and 0.4. Although none of the drills found empty voids except the 20-50 cm space between the brick wall of the tunnel and the original rock body, the
Figure 11: Left: The density-lengths for a selected measurement (Run 5, -0.1 tangent slope), the result of fitting quality, and the assumed density-lengths for homogeneous rock. Right: The density-length anomalies compared to the detection limit (95% confidence level).
Figure 10: Maps of quantities describing the focus area and uncertainties in a chosen 2D slice (0.2 tangent slope). Left panel shows the logarithmic weight factors (sensitivity map) which relates to the sum of count numbers from all the measurement lines crossing the given cell. Right panel shows the estimated errors, propagated by the bias calculations.
validation of the results was relevant. The low-density zones detected by the measurements were large fissures filled with significantly low density (\(\sim\)1.8 g/cm\({}^{3}\)) altered dolomite powder, while the density of the intact rock (cherty dolomite) was 2.6-2.7 g/cm\({}^{3}\) (see Fig. 13). The contact boundaries of the high- and the low-density zones (the walls of the fissure) were at the positions previously predicted by the muon measurements with 20 cm accuracy. One of the drill holes went completely through one of the low-density zones, reaching the far wall of the fissure. Most of the dolomite powder was washed away by the water of the diamond core driller, making it difficult to even continue the drilling operation. The extent of the low-density zone was the same as predicted. Generally, despite the differences between the predicted and the actual absolute densities measured on the drill cores, the overall geometrical structure of the fissure zones was detectable by muon tomography with very high precision.
## 6 Conclusions
Bayesian inversion tuned with a homogeneous _a priori_ density map proved effective in solving the largely one-sided tomographic problem of an actual multi-view muography measurement: the relevant Bayesian assumption made the inversion sufficiently stable. The measurements, performed with a gaseous muon tracking detector system, were originally aimed at the search for unknown cavities or density anomalies. High quality data were taken in the artificial Kiralylaki tunnel in Budapest, Hungary. In parallel to the muon tomography measurements, complex geophysical cross-check measurements were done: mapping, scanning, and, after evaluating the results, drilling for rock samples.
The angular resolution of the trackers, better than 1\({}^{\circ}\), enabled a spatial resolution of 1-2 meters to meet the needs of cavity exploration. Owing to the stability of the inversion and the resolution achieved, a sufficiently detailed estimate of the 3D distribution of crack zones in the rocks under investigation was obtained in a way that could be verified by drilling.
Figure 12: Left: The histogram of the estimated errors (residuals) for the fitted density-length data (ratio of estimations and measurements). Right: The histogram of the estimated densities.
One must note that the parameter bias due to the application of Bayes' principle is largest in the region of the cavities, or anywhere departing from the _a priori_ input, because the Bayes approach "prefers" a solution not too far from the assumption. Anomalies will not disappear due to this effect, but the density contrast is reduced. Once the data suggest the existence of anomalies, more focused, better positioned, or higher-statistics confirmation measurements may be planned to improve the quantitative evaluation. The principal merit of the Bayesian approach is that it offers a controlled and properly formulated method to extract the information from the limited available data.
## Acknowledgements
This work has been supported by the Joint Usage Research Project (JURP) of the University of Tokyo, ERI, under project ID 2020-H-05, the "INTENSE" H2020 MSCA RISE project under GA No. 822185, the "Mine.io" HEU project under GA No. 101091885, the Hungarian NKFIh research grant under ID OTKA-FK-135349 and TKP2021-NKTA-10, the Janos Bolyai Scholarship of the HAS and the ELKH-KT
Figure 13: Drill cores with different densities: (a) intact cherty dolomite (2.6-2.7 g/cm\({}^{3}\)); (b) slightly altered dolomite close to the walls of the fissures (2.4-2.5 g/cm\({}^{3}\)); (c) altered dolomite powder (less than 1.8 g/cm\({}^{3}\)); (d) the full extent of the last 2 m drill core from one of the drill holes. The dolomite powder was only partially recovered.
SA-88/2021 grant. Detector construction and testing was completed within the Vesztergombi Laboratory for High Energy Physics (VLAB) at Wigner RCP.
L. Balazs conceptualized the mathematical formalism, calculated the inversion results of the synthetic and field measurement, created figures, and wrote the first version of the draft. G. Nyitrai formalized research goals and ideas, planned and carried out the muographic survey and validation drills, produced data, created figures and texts, reviewed the manuscript. G. Suranyi formalized research goals, provided tools and study materials, planned and carried out the muographic survey and validation drills, created figures and texts, and reviewed the manuscript. G. Hamar formalized research goals, produced data, and acquired funds. G. G. Barnafoldi formalized research goals, planned and carried out the muographic survey and validation drills, and reviewed the manuscript. D. Varga formalized research goals and ideas, supervised and reviewed the formalism and the manuscript, added texts, and acquired funds.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.14426 | Finite Pulse-Time Effects in Long-Baseline Quantum Clock Interferometry | Quantum-clock interferometry has been suggested as a quantum probe to test
the universality of free fall (UFF) and the universality of gravitational
redshift (UGR). In typical experimental schemes it seems advantageous to employ
Doppler-free E1-M1 transitions which have so far been investigated in quantum
gases at rest. Here, we consider the fully quantized atomic degrees of freedom
and study the interplay of the quantum center-of-mass (COM) $-$ that can become
delocalized $-$ together with the internal clock transitions. In particular, we
derive a model for finite-time E1-M1 transitions with atomic intern-extern
coupling and arbitrary position-dependent laser intensities. We further provide
generalizations to the ideal expressions for perturbed recoilless clock pulses.
Finally, we show at the example of a Gaussian laser beam that the proposed
quantum-clock interferometers are stable against perturbations from varying
optical fields for a sufficiently small quantum delocalization of the atomic
COM. | Gregor Janson, Alexander Friedrich, Richard Lopp | 2023-09-25T18:00:03Z | http://arxiv.org/abs/2309.14426v3 | # Finite Pulse-Time Effects in Long-Baseline Quantum Clock Interferometry
###### Abstract
Quantum-clock interferometry has been suggested as a quantum probe to test the universality of free fall (UFF) and the universality of gravitational redshift (UGR). In typical experimental schemes it seems advantageous to employ Doppler-free E1-M1 transitions which have so far been investigated in quantum gases at rest. Here, we consider the fully quantized atomic degrees of freedom and study the interplay of the quantum center-of-mass (COM) - that can become delocalized - together with the internal clock transitions. In particular, we derive a model for finite-time E1-M1 transitions with atomic inter-extern coupling and arbitrary position-dependent laser intensities. We further provide generalizations to the ideal expressions for perturbed recoilless clock pulses. Finally, we show at the example of a Gaussian laser beam that the proposed quantum-clock interferometers are stable against perturbations from varying optical fields for a sufficiently small quantum delocalization of the atomic COM.
## I Introduction
Light-pulse atom interferometry (LPAI) has demonstrated its versatility in a myriad of applications: Starting from measuring gravitational acceleration [1; 2; 3] and rotation [4] to field applications [5; 6; 7; 8] and mobile gravimetry [6], the measurement of Newton's gravitational constant [9] as well as the so-far most accurate determination of the fine structure constant [10; 11]. In the last decade, there have been proposals for mid-band gravitational wave detection [12; 13; 14], complementary to LIGO/VIRGO and LISA, and recently construction has started on first prototypes which might be sensitive to ultra-light dark matter signals [15; 16; 17] and serve as testbeds for gravitational wave antennas [18; 19; 20] based on atom interferometry.
These advancements have paved the way to perform tests on the fundamental physical principles underlying today's best physical theories with high precision atomic sensors [21; 22; 23; 24]. On the other hand, ever-increasing precision goals require an upscaling of the interferometers' spacetime areas. For that reason, several very-large baseline projects are currently being planned globally, hoping to reach the kilometer scale: AION-km in the UK [25], MAGIS-km in the USA [18], MIGA/ELGAR in Europe [26; 27], and ZAIGA in China [28].
A side beneficiary of these endeavors will be new long baseline tests of the Einstein equivalence principle, encapsulated in its three pillars [29] consisting of local Lorentz invariance, the universality of free fall (UFF) and local position invariance which in turn contains the universality of gravitational redshift (UGR) and universality of clock rates (UCR). Together these principles form the backbone of general relativity [30; 31]. All aspects of the equivalence principle have proven to be extremely resilient to experimental challenges over an extremely large regime ranging from the microscopic to the cosmic scale [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. UFF in particular has been tested via LPAI by comparison of the free fall rates of different atomic isotopes and species [42; 43; 45; 24] as well as for different internal states [46] of the same atomic species. However, LPAI using quantum clocks as initial states has been shown [47] to be insensitive to UGR violations in a linear gravitational field without additional internal transitions _during_[48; 49; 50; 51] the interferometer. Recently, two different LPAI schemes were introduced for UGR and UFF tests by Roura [49] and Ufrecht et al. [52] In contrast to the original proposals by Zych et al. [53] and Sinha et al. [54] to detect general relativistic time dilation by the interference of quantum clocks in a gravitational field, both proposals are predicated on the essential step of initializing the atomic clock inside the interferometer in order to unequivocally isolate such a signal. Without this crucial step one is stuck with the no-go result [47]. Therefore the scheme of Roura [49] needs a superposition of internal states to gain UGR sensitivity (while being insensitive to UFF violations); the alternative approach of Ufrecht et al. [52] does not require superpositions of internal states (as seen by the laboratory frame). In turn it becomes sensitive to both UGR and UFF violations. Similarly, other proposals [29; 51] can test different aspects of local position invariance like UCR.
Ideally one would like to initialize an atomic clock inside the interferometer without disturbing the center-of-mass (COM) motion of the atomic test masses which serve as inertial reference. Hence, recoilless internal transitions are strongly beneficial or might even be necessary since they can ease the experimental constraints and the implementation significantly. In this study, we examine recoilless transitions implemented via two-photon E1-M1 couplings, i.e. two-photon transitions consisting of one electric dipole (E1) and one magnetic dipole (M1) transition. This type of two-photon process has previously been investigated for Doppler-free two-photon spectroscopy [55; 56] and for the application in optical atomic vapor clocks [57; 58] without COM motion. In contrast to these previous studies, we will consider the full quantum nature of all atomic degrees of freedom - internal and COM. Due to the quantized nature of the COM degrees of freedom one would _a priori_ expect that the LPAI phase shift suffers from the corresponding delocalizing light-matter interaction [59]. In particular, we find after incorporating COM motion that additional momentum kicks and thus branches appear when considering realistic spatial laser profiles. However, we show that the protocols of Roura [49] and Ufrecht et al. [52] are resilient to leading order effects in the induced COM spread when compared to the interferometer size. Nonetheless, our results can serve as a guide when such or similar corrections need to be accounted for or modeled in future high precision experiments.
### Overview & Structure
Our article is structured as follows: In Sec. II we will recapitulate the two interferometer schemes presented by Roura[49] and Ufrecht et al.[52], and put them into the context of the dynamical mass energy of composite particles. In Sec. III we will introduce an idealized model for E1-M1 transitions using plane waves for the electromagnetic field. The internal structure of the atom will be described by a three-level system that can be reduced to an effective two-level system using adiabatic elimination. To achieve the absorption of two counter-propagating photons a specific polarization scheme is needed[57; 58]. In Sec. IV we will extend these results and include the finite pulse-time effects for E1-M1 transitions using a more realistic model, i.e. taking into account position-dependent laser intensities. The generalized \(\pi\)- and \(\pi/2\)-pulse operators will be obtained in Sec. IV.1 for the experimentally relevant case of a Gaussian laser beam. In Sec. V we will come back to the two interferometer schemes[52; 49] and analyze the implications due to the finite pulse times, in particular their impact on the phase and visibility of the interferometers. We conclude with a summary, discussion and contextualization of our results in Sec. VI.
## II UGR and UFF tests with quantum clock interferometry
Here, we will briefly review the two interferometer schemes[52; 49] employing quantum clocks to test UGR and UFF. In the following we will denote the scheme proposed by Roura[49] as scheme (A) and the one proposed by Ufrecht et al.[52] as scheme (B). Before starting this discussion we will introduce the relevant aspects of the dynamical mass energy (or mass defect) of atoms which is the underlying connection to test the UGR and UFF in an interferometer with quantum clocks. We note that our introduction only serves as a sketch of the ingredients necessary for incorporating a description of dynamical mass energy perturbatively into atoms.
While Einstein's mass energy equivalence \(E=Mc^{2}\) has been known for more than 100 years now, its impact on quantum interference due to the possibility of obtaining which-path information for composite particles with time-evolving internal structure has only recently been highlighted in the works of Zych et al.[53] and Sinha et al.[54] How dynamical mass energy manifests in a Mach-Zehnder atom interferometer was first sketched by Giulini[48] in the context of the redshift debate[60; 61; 62; 63; 64; 65]. Based on these initial considerations, significant progress has been made. For a review of these initial discussions and proposed experiments beyond the ones discussed here[52; 49] see e.g. the works of Pikovski et al.[66] or Di Pumpo et al.[51] and references therein.
However, to the authors' knowledge, dynamical mass energy itself was already discussed in the works of Sebastian[67; 68] on semi-relativistic models for composite systems interacting with a radiation field. There the author indicates that the appearance of these terms (including dynamical mass energy) is intimately linked to relativistic corrections to the COM coordinates first derived by Osborn et al.[69; 70] and Krajcik et al.[71] over 50 years ago. The last few years have seen significant efforts and discussions devoted to providing first principles derivations from atomic physics of dynamical mass energy. Specifically, we refer to the works of Sonleitner et al.[72] and Schwartz et al.[73; 74] for systems with quantized COM motion, respectively without and with gravity. A field theoretical derivation has recently been performed by Assmann et al.[75] Moreover, Perche et al.[76; 77] contain a discussion under which conditions and by which guiding principles effective models for composite systems can be constructed in curved spacetime. Extensions examining the coupling of Dirac particles to gravitational backgrounds have recently also been discussed[76; 78; 79] yielding overall sensible but in the details slightly differing results in the weak-field limit. A general review discussing the issues and problems regarding such couplings of quantum matter to gravity is available in Giulini et al.[80]
### A Simple Model for the Dynamical Mass Energy of Atoms
In the non-relativistic limit, a first-quantized Hamiltonian description of a particle of mass \(M\) moving in a weak gravitational field is prescribed[81] by the sum of the kinetic COM energy and its gravitational potential energy \(U(\mathbf{R})\)
\[\hat{H}(\hat{\mathbf{R}},\hat{\mathbf{P}};M)=Mc^{2}+MU(\hat{\mathbf{R}})+ \frac{\hat{\mathbf{P}}^{2}}{2M}-\frac{\hat{\mathbf{P}}U(\hat{\mathbf{R}})\hat {\mathbf{P}}}{2M}, \tag{1}\]
with COM position \(\hat{\mathbf{R}}\) and momentum \(\hat{\mathbf{P}}\) where we have neglected any terms contributing at orders higher than \(1/M\). Practically, the gravitational potential can often be approximated as \(U(\hat{\mathbf{R}})=U(\mathbf{R}_{0})+\mathbf{g}^{\mathsf{T}}(\hat{\mathbf{R}}-\mathbf{R}_{0})+(\hat{\mathbf{R}}-\mathbf{R}_{0})^{\mathsf{T}}\Gamma\,(\hat{\mathbf{R}}-\mathbf{R}_{0})/2\) up to the gravity gradient contribution, where the gravity-gradient tensor \(\Gamma\) is symmetric and \(\mathbf{g}\) denotes the local gravitational acceleration. The internal dynamics of the atom, governed by the internal Hamiltonian \(\hat{H}_{A}\) with eigenstates \(|n\rangle\) and eigenenergies \(\mathcal{E}_{n}\), can be included by promoting the rest mass to the mass operator
\[\hat{M}=M+\frac{\hat{H}_{A}}{c^{2}}, \tag{2}\]
whose eigenvalues are the internal energies
scaled by the square of the speed of light and shifted by the rest mass. Since there is a one-to-one mapping between the eigenvalues and eigenstates of the mass operator and the internal Hamiltonian, no additional complexity of the system in terms of additional Hilbert spaces or new dynamics is gained, and to this order this looks like a simple reformulation in terms of different quantities.
Furthermore, if the smallest and largest eigenstate of the internal Hamiltonian \(\hat{H}_{A}\) are separated by an energy \(\Delta\mathcal{E}\ll Mc^{2}\), then the intern-extern coupling can be treated perturbatively. This is a useful approximation e.g. in the case of optical clock transitions in ytterbium or strontium where the (relevant part of the) internal Hamiltonian has a spectral range in the optical regime and we can thus estimate [47]\(\|\hat{H}_{A}\|/(Mc^{2})\simeq 10^{-11}\). Thus, often we can assume the perturbative identification [47]\(M^{-1}\big{(}1-\hat{H}_{A}/(Mc^{2})\big{)}\simeq M^{-1}\big{(}1+\hat{H}_{A}/(Mc ^{2})\big{)}^{-1}\) via the geometric series. Consequently, we can also replace \(M^{-1}\mapsto\hat{M}^{-1}\) in the terms in Eq. (1) describing the potential and kinetic energy. The overall Hamiltonian \(\hat{H}^{\rm(MD)}\), including the mass defect, accordingly takes the form
\[\hat{H}^{\rm(MD)}=\hat{H}(\hat{\mathbf{R}},\hat{\mathbf{P}};\hat{M})=\hat{M}c ^{2}+\hat{M}U(\hat{\mathbf{R}})+\frac{\hat{\mathbf{P}}^{2}}{2\hat{M}}-\frac{ \hat{\mathbf{P}}U(\hat{\mathbf{R}})\hat{\mathbf{P}}}{2\hat{M}}. \tag{3}\]
All but the first term in Eq. (3) induce a coupling of the internal atomic energies to the kinetic and potential energy of the COM. Alternatively, using the energy eigenstates \(|n\rangle\) and the mass operator eigenvalues \(M_{n}=M+\mathcal{E}_{n}/c^{2}\), the Hamiltonian \(\hat{H}^{\rm(MD)}\) can be rewritten as
\[\hat{H}^{\rm(ME)}=\sum_{n}\hat{H}_{n}^{\rm(ME)}\,|n\rangle\!\langle n|\quad \text{ with }\hat{H}_{n}^{\rm(ME)}=\hat{H}(\hat{\mathbf{R}},\hat{\mathbf{P}};M_{n}), \tag{4}\]
which is equivalent to a collection of single particles, characterized by the state-dependent masses \(M_{n}\). In this form the Hamiltonian directly embodies the equivalence of inertial and gravitational mass [81]. While we have omitted the coupling to external (electromagnetic) fields in all our considerations for simplicity, they are in principle instrumental to actually prepare and manipulate the atomic wave packet in experiments. These may be accounted for in an interaction Hamiltonian \(\hat{H}_{\rm int}\) added to Eq. (3) or Eq. (4), since the mass eigenstates are identical to the internal energy eigenstates except for an energy shift. The details of this interaction Hamiltonian can be quite complicated [72, 74, 75] when all corrections from the mass defect are included. However, to leading order it consists of the standard electric or magnetic dipole transitions described by
\[\hat{H}_{\rm int}=-\hat{\mathbf{d}}\cdot\mathbf{E}(t,\hat{\mathbf{R}})+\hat{ \boldsymbol{\mu}}\cdot\mathbf{B}(t,\hat{\mathbf{R}}), \tag{5}\]
where \(\hat{\mathbf{d}}\) and \(\hat{\boldsymbol{\mu}}\) are the atomic electric and magnetic dipole operators, respectively. Note that we have neglected here terms to leading order in the electric charge and Bohr radius that are further suppressed by the atomic mass \(M\), such as the Röntgen term [59]. In conclusion we arrive at the total model Hamiltonian (excluding higher order contributions for the electromagnetic field coupling)
\[\hat{\mathcal{H}}=\hat{H}^{\rm(MD)}+\hat{H}_{\rm int}=\hat{H}^{\rm(ME)}+\hat{H }_{\rm int} \tag{6}\]
for an atom with internal structure interacting with an external electromagnetic field.
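To make the block structure of Eq. (4) concrete, the following minimal sketch assembles \(\hat{H}^{\rm(ME)}\) for two internal states on a one-dimensional finite-difference grid. The species, internal energies and grid parameters are assumed for illustration only, and the gravity-gradient and \(\hat{\mathbf{P}}U\hat{\mathbf{P}}\) corrections are omitted.

```python
import numpy as np

# Toy 1D grid along Z and a finite-difference kinetic term.
n, dz = 200, 1e-7                          # grid points, spacing [m]
z = (np.arange(n) - n // 2) * dz
hbar, g, c = 1.054571817e-34, 9.81, 299792458.0

M = 87.906 * 1.66053906660e-27             # rest mass [kg], e.g. Sr-88 (assumed)
energies = [0.0, 2.7e-19]                  # internal energies E_g, E_e [J] (assumed)

def H_single_mass(M_n):
    """H(R,P;M_n) = P^2/(2 M_n) + M_n g Z; the constant M_n c^2 is returned separately."""
    lap = (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
           - 2.0 * np.eye(n)) / dz**2
    kinetic = -(hbar**2) * lap / (2.0 * M_n)
    return kinetic + np.diag(M_n * g * z), M_n * c**2

# Mass eigenvalues M_n = M + E_n/c^2 and the block-diagonal structure of Eq. (4).
blocks, rest_energies = zip(*(H_single_mass(M + E / c**2) for E in energies))
H_ME = np.block([[blocks[0], np.zeros((n, n))],
                 [np.zeros((n, n)), blocks[1]]])
print(H_ME.shape, rest_energies)
```

Each block is simply the single-particle Hamiltonian evaluated at the state-dependent mass \(M_{n}\), which is the "collection of single particles" picture stated above.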
Figure 1: Two LPAI proposals utilizing atomic clocks to detect UGR and UFF violations. Panel **(a)** Scheme **(A)** is doubly differential: The atom enters the interferometer sequence in the ground state \(|g\rangle\) and is split up onto two branches, e.g. via a Bragg pulse. At time \(T_{2}\) a recoilless E1-M1 pulse drives the atom into a superposition of excited (denoted by \(|e\rangle\)) and ground state, corresponding to the initialization of an atomic clock. The branches are recombined afterwards and one can measure the intensities in the ground and excited state channels. The experiment is then repeated with a different initialization time \(T_{2}^{\prime}=T_{2}+\tau\). The COM wave function is denoted by \(|\psi\rangle_{\rm cm}\). Panel **(b)** Scheme **(B)** on the other hand is symmetrical: The atom enters the interferometer sequence in the ground/excited state and is split up onto two branches, e.g. via a double Bragg diffraction pulse. In the middle segment, in between the times \(t=T\) and \(t=T+T^{\prime}\), the internal state is changed to the excited/ground state via a recoilless E1-M1 transition. After recombining the two branches the detection is performed. A single run of the experiment consists of two runs of the interferometer sequence with different initial internal states.
While our introduction here can only serve as a sketch, motivated by mass-energy equivalence, it turns out that the derivation of the intern-extern coupling can be made fairly rigorous [47; 73; 74; 75; 81; 82], however at the cost of a serious increase in the theoretical complexity of the model, depending on the setting as well as the starting point. Nevertheless, the basic premises and leading order results do not change significantly.
### Phase Shift in a Light-Pulse Atom Interferometer
There are multiple methods available to calculate the phase shift in a LPAI. In simple cases, with quadratic Hamiltonians and for instantaneous beam splitter pulses, one can often rely on path-integral methods [84]. However path-integrals become quite unwieldy in case of non-quadratic systems as there are no or only few standard methods available for their solution [85]. In these more involved cases, e.g. with multiple internal states and complicated external potentials involved, the Hamiltonian approach [86; 65; 87] offers a more versatile toolbox. Moreover, phase-space methods [88; 89; 90] are also available and sometimes helpful for interpretation.
However, in all cases the interference signal in an exit port of a two-path interferometer arises from the superposition of two branches characterized by the evolutions \(\hat{U}_{1}\) and \(\hat{U}_{2}\) and is determined by the expectation value [29; 65]
\[I_{\phi_{\text{exit}}}=\langle\psi_{0}|\,\hat{U}_{\text{tot}}^{\dagger}\hat{ \Pi}_{\text{exit}}\hat{U}_{\text{tot}}\,|\psi_{0}\rangle\,, \tag{7}\]
with the overall evolution given by \(\hat{\Pi}_{\text{exit}}\hat{U}_{\text{tot}}=\hat{U}_{1}+\hat{U}_{2}\). Here \(\hat{U}_{\text{tot}}\) is the total time evolution, \(\hat{\Pi}_{\text{exit}}\) is a projection operator with the property \(\hat{\Pi}_{\text{exit}}^{2}=\hat{\Pi}_{\text{exit}}\) characteristic to the detection process occurring in the exit port and \(|\psi_{0}\rangle\) is the initial state at the start of the interferometer. Note that here the individual evolutions \(\hat{U}_{1}\) and \(\hat{U}_{2}\) need not be unitary by themselves. In fact, even in a Mach-Zehnder interferometer they are not. This is due to the fact that only half of the atoms participates in each branch of the interferometer. Furthermore, the individual nature of the beam splitters creating the interferometer decides the balance between the interferometer branches. On the other hand the total evolution \(\hat{U}_{\text{tot}}\) usually is unitary, unless e.g. atom losses occur or not all paths are included in the modelling of the interferometer and thus \(\hat{U}_{\text{tot}}\) becomes an open system evolution. After expanding the sum over the individual branches in the exit port signal, defined in Eq. (7), it takes the form
\[\begin{split} I_{\phi_{\text{exit}}}=\langle\psi_{0}|&\,\hat{U}_{1}^{\dagger}\hat{U}_{1}\,\,|\psi_{0}\rangle+\langle\psi_{0}|\,\hat{U}_{2}^{\dagger}\hat{U}_{2}\,\,|\psi_{0}\rangle\\ &+\langle\psi_{0}|\,\hat{\mathcal{O}}_{21}\,|\psi_{0}\rangle+\text{c.c.}\end{split} \tag{8}\]
where we introduced the amplitude
\[\langle\hat{\mathcal{O}}_{21}\rangle=\langle\psi_{0}|\,\hat{U}_{2}^{\dagger} \hat{U}_{1}\,\,|\psi_{0}\rangle=\mathcal{V}_{21}\exp(\text{i}\Delta\phi_{21}) \tag{9}\]
of the so-called overlap operator [65]\(\hat{\mathcal{O}}_{21}=\hat{U}_{2}^{\dagger}\hat{U}_{1}\) between the branches. The absolute value of this amplitude is the visibility \(\mathcal{V}_{21}=|\,\langle\psi_{0}|\,\hat{U}_{2}^{\dagger}\hat{U}_{1}\,\,| \psi_{0}\rangle\,|\) of the interference signal, while the argument \(\Delta\phi_{21}=\arg\langle\psi_{0}|\,\hat{U}_{2}^{\dagger}\hat{U}_{1}\,\,| \psi_{0}\rangle\) is the interferometer phase [29; 52; 65].
In general, the situation in a realistic LPAI can be a bit more complex and the overall signal \(I_{\phi_{\text{exit}}}\) detected in an exit port results from the pair-wise interference of all paths through the interferometer contributing to the exit port. Practically, additional and often undesired paths can originate e.g. from imperfect diffraction processes [91; 92] or perturbing potentials acting during the interferometer.
However, any interfering pair of paths contributing to the signal amplitude of the exit port in such a multi-path LPAI has a contribution of the form of the expectation value of an overlap
\[I_{\phi_{\text{exit}}}^{(\ell m)}=\langle\psi_{0}|\,\hat{U}_{\ell}^{\dagger}\hat{U}_{m}\,\,|\psi_{0}\rangle+\text{c.c.}=\mathcal{V}_{\ell m}\exp(\text{i}\Delta\phi_{\ell m})+\text{c.c.} \tag{10}\]
Here we have introduced the relative path visibility \(\mathcal{V}_{\ell m}\) and relative phase between paths \(\Delta\phi_{\ell m}\) which generalize the same quantities from the two-path case. Summation over the signal amplitude contributions \(I_{\phi_{\text{exit}}}^{(\ell m)}\) with respect to the indices \(\ell\) and \(m\) directly leads to the overall exit port signal
\[I_{\phi_{\text{exit}}}=2\sum_{\ell\geq 1}\mathcal{V}_{\ell\ell}+\sum_{ \begin{subarray}{c}\ell,m\geq 1\\ \ell\neq m\end{subarray}}\mathcal{V}_{\ell m}\exp(\text{i}\Delta\phi_{\ell m }). \tag{11}\]
When we also note that the relative phase between paths obeys the relation \(\Delta\phi_{\ell m}=-\Delta\phi_{m\ell}\) we arrive at the expression
\[I_{\phi_{\text{exit}}}=2\Big{(}\sum_{\ell}\mathcal{V}_{\ell\ell}+\sum_{ \begin{subarray}{c}\ell,m\geq 1\\ \ell\neq m\end{subarray}}\mathcal{V}_{\ell m}\cos\Delta\phi_{\ell m}\Big{)} \tag{12}\]
for the exit port signal. This expression is a superposition of the cosines of the _relative path phases_ weighted by the _relative path visibilities_. In an (open) two-path interferometer the sums terminate after two terms, and the result is thus identical to Eq. (8).
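The bookkeeping of Eqs. (7)-(10) can be checked numerically. The toy sketch below uses randomly generated, non-unitary branch operators with invented weights (including a weak parasitic path) and verifies that summing the pairwise visibility and phase contributions reproduces the exit-port intensity obtained from the summed evolution.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6                                   # toy Hilbert-space dimension

def branch(weight, phase):
    """Toy (non-unitary) branch evolution: a scaled, phased random unitary."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return weight * np.exp(1j * phase) * q

psi0 = np.zeros(dim, complex)
psi0[0] = 1.0
branches = [branch(0.5, 0.0), branch(0.5, 0.8), branch(0.1, 2.0)]

signal = 0.0
for l, U_l in enumerate(branches):
    for m, U_m in enumerate(branches):
        amp = np.vdot(U_l @ psi0, U_m @ psi0)      # <psi0| U_l^dag U_m |psi0>
        if l == m:
            signal += amp.real                     # "diagonal" contribution
        elif l < m:
            V, dphi = abs(amp), np.angle(amp)      # relative visibility and phase
            signal += 2.0 * V * np.cos(dphi)
            print(f"paths ({l},{m}): V = {V:.3f}, dphi = {dphi:+.3f} rad")

U_tot_psi0 = sum(U @ psi0 for U in branches)       # summed evolution acting on psi0
print(signal, np.vdot(U_tot_psi0, U_tot_psi0).real)  # the two numbers agree
```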
### Interferometer Phase, (Classical) Action and Proper time
Usually, the interferometer phase in a LPAI is linked to the (classical) action by appealing to the relativistic action of a massive particle in a gravitational background [47; 51; 65; 84] and a subsequent non-relativistic expansion. The resulting expression is then quantized and introduced as governing action \(\mathcal{S}\) of an appropriate path integral for the particle. Afterwards one identifies the quantum mechanical phase [47] acquired along the trajectory via
\[\phi=-\omega_{C}\tau=-\frac{1}{\hbar}\int\!\!\mathrm{d}t\,\,\mathcal{L}( \mathbf{R},\dot{\mathbf{R}},t)+S_{\text{em}}/\hbar, \tag{13}\]
where \(\omega_{C}=Mc^{2}/\hbar\) is the Compton frequency and \(\mathcal{L}\) is the classical Lagrangian \(\mathcal{L}(\mathbf{R},\dot{\mathbf{R}})\) corresponding to the Hamiltonian, Eq. (1), of the particle. Here \(S_{\text{em}}\) is the action corresponding to the Lagrangian for the electromagnetic interaction, Eq. (5), needed for manipulation of the atom. Fundamentally, this interpretation originates from a semi-classical approximation for the Feynman path-integral [84; 85] being a valid approximation. This is due to the fact that only in the semiclassical limit the dominant contributions to the path-integral
come from the classical trajectories, resulting from solving the Euler-Lagrange equations for the (classical) Lagrangian [85]. Ultimately, this is what makes the identification between proper time and the action in Eq. (13) possible also for quantum particles but only in the semi-classical limit.
### UGR Sensitive Scheme (A)
The interferometer scheme (A) [49], shown in Fig. 1(a), initializes an atomic clock by a recoilless \(\pi/2\)-pulse so that the atoms that enter the interferometer in the ground state are in a 50:50 superposition of excited and ground state atoms after the clock initialization. Due to the atoms having a different mass \(M_{g,e}\) in their respective internal ground and excited states, the Compton frequency \(\omega_{g,e}\) becomes state-dependent. One can measure the frequency in the ground and excited state exit port between the two branches via the differential phase shift \(\Delta\phi_{g,e}\) and separate out the gravitational redshift by a double-differential measurement, i.e. calculating the phase difference \(\Delta\phi_{-}\) between the excited and ground state exit port and performing two runs of the experiment with different initialization times \(T_{2}\) of the atomic clock:
\[\Delta\phi_{-}(T_{2})-\Delta\phi_{-}(T_{2}+\tau)=-\frac{\Delta M}{\tilde{M}}gk _{p}\delta T\tau, \tag{14}\]
where
\[\Delta\phi_{-}(T_{2})=\Delta\phi_{g}(T_{2})-\Delta\phi_{e}(T_{2}), \tag{15}\]
\(\tilde{M}=(M_{e}+M_{g})/2\) is the mean mass, \(g\) is the gravitational acceleration, \(k_{p}\) is the wave number of the laser that drives the atoms onto the two branches, \(\delta T\) is the separation time, and \(\Delta M=M_{e}-M_{g}\) is the mass difference due to the mass defect. Since the rest mass \(M\) and the mean mass \(\tilde{M}\) are equivalent to our order of approximation, i.e. to order \(\mathcal{O}(c^{-2})\)[47], we may identify the mean mass as \(M\).
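For a rough sense of scale, Eq. (14) can be evaluated with assumed parameters: the \(10^{-11}\) mass-defect scale quoted above for optical clock transitions, a near-infrared beam-splitter wave number and second-scale timings. The numbers in this sketch are illustrative only.

```python
import numpy as np

dM_over_M = 1.0e-11            # mass defect over mean mass (optical clock scale)
g   = 9.81                     # gravitational acceleration [m/s^2]
k_p = 2 * np.pi / 689e-9       # beam-splitter wave number [1/m] (assumed wavelength)
dT  = 1.0                      # branch separation time [s] (assumed)
tau = 1.0                      # shift of the clock-initialization time [s] (assumed)

dphi = dM_over_M * g * k_p * dT * tau      # magnitude of the signal in Eq. (14)
print(f"doubly differential phase ~ {dphi:.1e} rad")
```

With these assumptions the doubly differential signal is of order \(10^{-3}\) rad, which sets the phase resolution such an experiment would have to reach.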
### UGR and UFF Sensitive Scheme (B)
The interferometer scheme (B) [52] (cf. Fig. 1(b)) is sensitive to both UGR and UFF. In contrast to scheme (A) it does not require a superposition of internal states. The sensitivity arises from the specific space-time geometry of the interferometer and a change of internal states so that the atoms are in the same state at equal times (in the laboratory frame). The total phase
\[\Delta\Phi=\Delta\Phi_{M}-\frac{\Delta Mc^{2}}{2\hbar}\sum_{n}\lambda_{\pm} \Delta\tau_{n} \tag{16}\]
consists of two contributions: the contribution \(\Delta\Phi_{M}\) is independent of the mass defect \(\Delta M\) and is obtained via the reference Hamiltonian \(\hat{H}_{M}\) at the mean mass \(M\). This part of the total phase can be used for tests of UFF [52]. The proper time differences \(\Delta\tau_{n}\) in each segment \(n\) of the interferometer enter the phase proportional to the mass defect \(\Delta M\) such that it can be associated with the ticking rate of an atomic clock [51]. The \(\lambda_{\pm}\) indicate the internal state for each segment: \(\lambda_{-}=-1\) for the ground state and \(\lambda_{+}=+1\) for the excited state. Since the sum \(\Delta\tau=\Delta\tau_{1}+\Delta\tau_{2}+\Delta\tau_{3}\) of the proper-time differences vanishes in this geometry, the proper-time difference in the middle segment can be written as \(\Delta\tau_{2}=-(\Delta\tau_{1}+\Delta\tau_{3})\). Changing the internal state in the middle segment (associated with \(\Delta\tau_{2}\)), the total phase becomes
\[\Delta\Phi=\Delta\Phi_{M}\pm\frac{\Delta Mc^{2}}{\hbar}\Delta\tau_{2}, \tag{17}\]
depending on the choice of the initial internal state. Again, performing two runs of the experiment with different initial internal states one can separate the UFF and the UGR effects by adding or subtracting the phases, respectively:
\[\Delta\phi_{+} =2\Delta\phi_{M}, \tag{18a}\] \[\Delta\phi_{-} =2\frac{\Delta Mc^{2}}{\hbar}\Delta\tau_{2}. \tag{18b}\]
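Operationally, separating the two signals from the two runs is a trivial post-processing step, as the following sketch with invented placeholder phase values shows.

```python
# Total phases from two runs with opposite initial internal states (Eq. (17));
# the values below are invented placeholders.
phi_run_1 = 1.2345678   # first run [rad]
phi_run_2 = 1.2343210   # second run, opposite initial internal state [rad]

dphi_plus  = phi_run_1 + phi_run_2   # = 2 * dPhi_M               (UFF test, Eq. (18a))
dphi_minus = phi_run_1 - phi_run_2   # = 2 * dM c^2 dtau_2 / hbar (UGR test, Eq. (18b))
print(dphi_plus, dphi_minus)
```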
### Common Challenges
Both interferometer schemes presented above require the manipulation of the internal states during the interferometer sequence. While this manipulation can be achieved by (technically challenging) optical Double-Raman diffraction [93] in scheme (B), i.e. kicking the atoms and changing the internal states simultaneously, scheme (A) requires recoilless internal transitions. There are several reasons why one would like to avoid Double-Raman diffraction: first of all, to drive this kind of transition one needs quite long laser pulses leading to finite pulse-time effects. Secondly, the single-photon detuning cannot be chosen arbitrarily large if one still wants to have significant Rabi frequencies. This constraint for the detunings leads to problems with spontaneous emission. Furthermore, Double-Raman diffraction requires a high stability for the difference of the two laser frequencies during the pulse. Replacing the Double-Raman diffraction by a momentum-transfer pulse, e.g. Double-Bragg diffraction, and a state-changing pulse could alleviate these issues. These recoilless transitions can be achieved by E1-M1 transitions where the atom absorbs two counter-propagating photons with equal frequency \(\omega\) so that the total momentum kick caused by the two-photon transition vanishes. Such E1-M1 transitions have so far only been investigated without (quantized) COM motion in the context of optical clocks [57; 58; 94]. However, in atom interferometry the COM motion plays a crucial role. Hence, its influence also needs to be included when modeling the pulses to account for possible corrections. This is the task of the following sections.
## III Idealized model for E1-M1 transitions
In this section we will derive an effective model for the E1-M1 transition processes during the LPAI schemes discussed in the previous section for an atomic cloud in a gravitational potential, see Fig. 2. The cloud will be modelled as a fully first-quantized
atom, including its quantized COM motion. In particular, this can be applied to general initial atomic wavepackets. We will assume, for now, that the electromagnetic fields of the laser beam are classical plane waves. The extension to realistic position-dependent laser intensities and finite pulse-time effects will be treated in Sec. IV.
### Model
For an arbitrary three-level atom of mass \(M\) in a gravitational field along \(-Z\) and via the dipole approximation the Hamiltonian reads
\[\hat{H}=\frac{\hat{\mathbf{P}}^{2}}{2M}+\sum_{n}\mathcal{E}_{n}\ket{n}\!\!\bra{n}-\hat{\mathbf{d}}\cdot\mathbf{E}(t,\hat{\mathbf{R}})+\hat{\boldsymbol{\mu}}\cdot\mathbf{B}(t,\hat{\mathbf{R}})+Mg\hat{Z}, \tag{19}\]
where \(\mathcal{E}_{n}\) are the atomic internal energies. For simplicity, we will assume the electric and magnetic fields, \(\mathbf{E}\) and \(\mathbf{B}\), to be plane waves with frequency \(\omega\) for now. Note that we have neglected the mass defect, cf. Eq. (2), during the interaction with the laser since the pulse time \(t\) is much smaller than the characteristic interferometer time \(T\). The internal-state dependent mass energy enters the phase via \(\Delta Mc^{2}\cdot t/\hbar\) and \(\Delta Mc^{2}\cdot T/\hbar\), respectively. The effects of the mass defect during the laser pulse compared to the effects during the rest of the interferometer sequence are therefore negligible. In particular, the \(\mathcal{O}(c^{-2})\) correction of Eq. (3) is subdominant with respect to the dipolar interaction terms. Moreover, for the same reason we have only retained the linear potential contribution from the gravitational potential energy and do not consider the higher order contributions due to gravity gradients and kinetic-energy to position couplings from Eq. (1). If necessary they could be included perturbatively, similar to the optical potentials in Sec. IV.
In order to describe the laser beam in a retro-reflective geometry, we consider two counter-propagating electromagnetic plane waves. To obtain a recoilless two-photon transition one has to ensure that the atom absorbs two counter-propagating photons. This can be achieved by choice of a certain polarization scheme as we will discuss later on in Sec. III.3; for now we will keep the polarization arbitrary. Then the fields can be written as:
\[\mathbf{E}(\hat{Z}) =\sum_{j=0}^{1}\mathrm{i}\,\mathbf{E}_{j}\mathrm{e}^{(-1)^{j}i \,k_{L}\hat{Z}}\mathrm{e}^{-\mathrm{i}\,\omega t}+\mathrm{h.c.}, \tag{20a}\] \[\mathbf{B}(\hat{Z}) =\sum_{j=0}^{1}\mathrm{i}\,\mathbf{B}_{j}\mathrm{e}^{(-1)^{j}i\, k_{L}\hat{Z}}\mathrm{e}^{-\mathrm{i}\,\omega t}+\mathrm{h.c.} \tag{20b}\]
We particularize to E1 transitions only between the ground state \(\ket{g}\) and the ancilla state \(\ket{a}\) and M1 transitions only between the ancilla state \(\ket{a}\) and the excited state \(\ket{e}\), i.e. the matrix elements \(\mathbf{d}_{ea}=\bra{e}\hat{\mathbf{d}}\ket{a}\) and \(\boldsymbol{\mu}_{ag}=\bra{a}\hat{\boldsymbol{\mu}}\ket{g}\) vanish. This can be ensured by considering the selection rules for electric and magnetic dipole transitions that are discussed subsequently. Thus, in the internal atomic eigenenergy basis, the electric and magnetic dipole moment operators reduce, respectively, to
\[\hat{\mathbf{d}}(t)=\mathbf{d}_{ag}\ket{a}\!\bra{g}+\mathrm{h.c.}, \ \hat{\boldsymbol{\mu}}(t)=\boldsymbol{\mu}_{ae}\ket{a}\!\bra{e}+\mathrm{h.c.} \tag{21}\]
We further define \(\hbar\omega_{ij}=\hbar(\omega_{i}-\omega_{j})\) as the energy spacings between the internal atomic states \(\ket{i}\) and \(\ket{j}\) (\(\{i,j\}\in\{a,e,g\}\)). Then, we can introduce the single-photon detuning \(\Delta=\omega_{ag}-\omega\) for the E1 transition between the ground state \(\ket{g}\) and the ancilla state \(\ket{a}\) and the overall detuning of the two-photon process, i.e. \(\delta=\omega_{eg}-2\omega\), as shown in Fig. **2**. The time dependence of the Hamiltonian with respect to the atomic frequencies can be simplified via the unitary transformation
\[\hat{U}=\mathrm{e}^{\mathrm{i}(\omega_{ae}+\Delta)t}\ket{a}\!\bra{a}+\mathrm{ e}^{\mathrm{i}(\omega_{e}+\delta)t}\ket{e}\!\bra{e}+\mathrm{e}^{\mathrm{i} \omega_{eg}t}\ket{g}\!\bra{g}, \tag{22}\]
leading to the interaction Hamiltonian in the (modified) internal
Figure 2: Panel **(a)** Incoming and reflected electromagnetic waves with frequency \(\omega\) and amplitudes \(\mathbf{E}_{i}\) and \(\mathbf{B}_{i}\) driving an E1-M1 transition in a three-level atom with quantized COM motion (black arrow). Panel **(b)** Three-level atom with ground state \(\ket{g}\), excited state \(\ket{e}\) and ancilla state \(\ket{a}\) modelled by the states \({}^{1}S_{0}\), \({}^{3}P_{0}\) and \({}^{3}P_{1}\), respectively. Counter-propagating fields drive E1-M1 transitions, i.e. an E1 transition between \(\ket{g}\) and \(\ket{a}\) with single-photon detuning \(\Delta\) and subsequent M1 transition between \(\ket{a}\) and \(\ket{e}\) leading to the overall detuning \(\delta\). The ancilla state \(\ket{a}\) then lies virtually between the energy levels of \(\ket{g}\) and \(\ket{e}\). Panel **(c)** Left: Two-photon excitation by absorbing two counter-propagating photons with momentum \(\pm\hbar\mathbf{k}_{L}\). The first absorption leads to a transition from \(\ket{g}\) to \(\ket{a}\) and a momentum kick \(\hbar\mathbf{k}_{L}\), the second absorption is a transition from \(\ket{a}\) to \(\ket{e}\) with momentum kick \(-\hbar\mathbf{k}_{L}\). Right: Two-photon decay by stimulated emission of two photons in opposite directions. The first emission leads to a transition from \(\ket{e}\) to \(\ket{a}\) and a momentum kick \(-\hbar\mathbf{k}_{L}\) and the second emission is a transition from \(\ket{a}\) to \(\ket{g}\) with momentum kick \(\hbar\mathbf{k}_{L}\). Both two-photon processes have a vanishing net momentum kick.
atomic interaction picture
\[\hat{H}_{\text{rot}}= \hat{U}^{\dagger}\hat{H}\hat{U}+\text{i}\hbar\left(\frac{\text{d}}{ \text{d}t}\hat{U}^{\dagger}\right)\hat{U}\] \[= \sum_{j=0}^{1}\left\{\text{e}^{\text{i}\omega t}\mathbf{\mathrm{d}}_{ ag}\cdot\left[\text{ }-\text{i}\mathbf{E}_{j}\text{e}^{(-1)^{j}\text{i}k_{L}\hat{Z}}\text{e}^{-\text{i} \omega t}+\text{h.c.}\right]\mid a\rangle\langle g|\right.\] \[\qquad+\text{e}^{-\text{i}\omega t}\mathbf{\mu}_{ae}\cdot\left[\text{ i}\mathbf{B}_{j}\text{e}^{(-1)^{j}\text{i}k_{L}\hat{Z}}\text{e}^{-\text{i} \omega t}+\text{h.c.}\right]\mid a\rangle\langle e|\right.\] \[\qquad+\text{h.c.}\left.\right\}+\frac{\hat{\mathbf{p}}^{2}}{2M}+ \hbar\Delta\left|a\rangle\langle a\right|+\hbar\delta\left|e\rangle\langle e \right|+Mg\hat{Z}. \tag{23}\]
Performing a displacement transformation
\[\hat{D}(t)=\exp\left(-\frac{\text{i}}{\hbar}(Z_{\text{cl}}(t)\hat{P}_{z}-P_{ \text{cl}}(t)\hat{Z})\right) \tag{24}\]
corresponding to \(\hat{Z}\rightarrow\hat{Z}+Z_{\text{cl}}(t)\) and \(\hat{P}_{z}\rightarrow\hat{P}_{z}+P_{\text{cl}}(t)\), with \(Z_{\text{cl}}(t)=-\frac{1}{2}gt^{2}\) and \(P_{\text{cl}}(t)=-Mgt\) being the solutions of the classical equation of motion in the gravitational potential, yields the Hamiltonian
\[\hat{H}_{\text{rot}}^{\prime}= \sum_{j=0}^{1}\left\{\text{e}^{\text{i}\omega t}\mathbf{\mathrm{d}}_ {ag}\cdot\left[\text{ }-\text{i}\mathbf{E}_{j}\text{e}^{(-1)^{j}\text{i}k_{L}\hat{Z}(t)}\text{e}^{ -\text{i}\omega t}+\text{h.c.}\right]\mid a\rangle\langle g|\right.\] \[\qquad+\text{e}^{-\text{i}\omega t}\mathbf{\mu}_{ae}\cdot\left[\text {i}\mathbf{B}_{j}\text{e}^{(-1)^{j}\text{i}k_{L}\hat{Z}(t)}\text{e}^{-\text{i }\omega t}+\text{h.c.}\right]\mid a\rangle\langle e|\right.\] \[\qquad+\text{h.c.}\left.\right\}+\frac{\hat{\mathbf{p}}^{2}}{2M}+ \hbar\Delta\left|a\rangle\langle a\right|+\hbar\delta\left|e\rangle\langle e \right|, \tag{25}\]
where a time-dependent energy shift acting on the identities of the Hilbert spaces is omitted. The quadratic time-dependency of the phase of the electromagnetic fields [95] via \(k_{L}\hat{Z}(t)=k_{L}\hat{Z}-k_{L}gt^{2}/2\) will be compensated through chirping in the following.
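As a back-of-the-envelope number, the chirp rate needed to compensate this quadratic phase is \(\alpha=k_{L}g\); the sketch below evaluates it for an assumed laser wavelength of 698 nm.

```python
import numpy as np

lam = 698e-9                 # assumed laser wavelength [m]
k_L = 2 * np.pi / lam        # wave number [1/m]
g = 9.81                     # gravitational acceleration [m/s^2]

alpha = k_L * g              # chirp rate compensating k_L * g * t^2 / 2 [rad/s^2]
print(f"alpha / 2 pi ~ {alpha / (2 * np.pi) / 1e6:.1f} MHz/s")
```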
### Adiabatic Elimination
Next, we wish to reduce the atomic three-level system to an effective two-level system by adiabatic elimination of the ancilla state \(\left|a\right\rangle\). The idea behind it is that if the detuning \(\Delta\) is large compared to the coupling frequencies, i.e. the single-photon Rabi frequencies, and the overall detuning \(\delta\), the ancilla state gets populated by the electric dipole transition and depopulated by the magnetic dipole transition so fast that the ancilla state is only virtually populated, i.e. the probability of finding the atom in the state \(\left|a\right\rangle\) is vanishingly small. To see this, we are forcing the atomic three-level system into a form where the ancilla state is separate from the other two by writing the Schrödinger equation as
\[\text{i}\frac{\text{d}}{\text{d}t}\begin{pmatrix}\left|\psi_{a}\right\rangle\\ \left|\psi_{e}\right\rangle\\ \left|\psi_{g}\right\rangle\end{pmatrix}=\text{i}\frac{\text{d}}{\text{d}t} \begin{pmatrix}\left|\psi_{a}\right\rangle\\ \left|\psi\right\rangle\end{pmatrix}=\begin{pmatrix}\Delta(\hat{\mathbf{P}})& \mathbf{\Omega}^{\dagger}(\hat{Z})\\ \mathbf{\Omega}(\hat{Z})&\delta(\hat{\mathbf{P}})\end{pmatrix}\begin{pmatrix}\left| \psi_{a}\right\rangle\\ \left|\psi\right\rangle\end{pmatrix}, \tag{26}\]
where we have collected the excited and ground state into the vector \(\left|\psi\right\rangle\) and defined the detuning operators
\[\Delta(\hat{\mathbf{P}})=\frac{\hat{\mathbf{p}}^{2}}{2M\hbar}+\Delta\text{ \ and \ }\delta(\hat{\mathbf{P}})=\begin{pmatrix}\frac{\hat{\mathbf{p}}^{2}}{2M\hbar}+ \delta&0\\ 0&\frac{\hat{\mathbf{p}}^{2}}{2M\hbar}\end{pmatrix}. \tag{27}\]
Furthermore we defined the transition operator between the ancilla state and the two-level system as
\[\mathbf{\Omega}(\hat{Z}) =\frac{\text{i}}{\hbar}\left(\begin{array}{c}\left(\mathbf{\mu}_{ae} ^{*}\cdot\mathbf{B}_{0}\right)\text{e}^{\text{i}k_{L}\hat{Z}}+\left(\mathbf{\mu}_{ae }^{*}\cdot\mathbf{B}_{1}\right)\text{e}^{-\text{i}k_{L}\hat{Z}}\\ \left(\mathbf{\mathsf{d}}_{ag}^{*}\cdot\mathbf{E}_{0}^{*}\right)\text{e}^{-\text{i} k_{L}\hat{Z}}+\left(\mathbf{\mathsf{d}}_{ag}^{*}\cdot\mathbf{E}_{1}^{*}\right)\text{e}^{ \text{i}k_{L}\hat{Z}}\end{array}\right)\] \[-\frac{\text{i}}{\hbar}\left(\begin{array}{c}\left(\mathbf{\mu}_{ae }^{*}\cdot\mathbf{B}_{0}^{*}\right)\text{e}^{-\text{i}k_{L}\hat{Z}}\text{e}^{2 \text{i}\omega t}+\left(\mathbf{\mu}_{ae}^{*}\cdot\mathbf{B}_{1}^{*}\right)\text{e}^{ \text{i}k_{L}\hat{Z}}\text{e}^{2\text{i}\omega t}\\ \left(\mathbf{\mathsf{d}}_{ag}^{*}\cdot\mathbf{E}_{0}\right)\text{e}^{\text{i}k_{L} \hat{Z}}\text{e}^{-2\text{i}\omega t}+\left(\mathbf{\mathsf{d}}_{ag}^{*}\cdot \mathbf{E}_{1}\right)\text{e}^{-\text{i}k_{L}\hat{Z}}\text{e}^{-2\text{i} \omega t}\end{array}\right). \tag{28}\]
The population of the ancilla state \(\left|\psi_{a}\right\rangle\) can be expressed in terms of the two-level system \(\left|\psi\right\rangle\) by defining the quasi-projector \(\hat{\Pi}\) which projects the two-level system onto the ancilla state via
\[\left|\psi_{a}\right\rangle=\hat{\Pi}\left|\psi\right\rangle. \tag{29}\]
Next, we shall derive an explicit expression for \(\hat{\Pi}\). For simplicity, let us assume that this projector does not depend on time, i.e. \(\hat{\Pi}\neq\hat{\Pi}(t)\). Note that corrections due to the time dependence will not be present to our order of expansion. We then obtain from Eq. (26) the differential equation
\[\text{i}\frac{\text{d}}{\text{d}t}\hat{\Pi}\left|\psi\right\rangle \approx \text{i}\hat{\Pi}\frac{\partial}{\partial t}\left|\psi\right\rangle\] \[= \Delta(\hat{\mathbf{P}})\left|\psi_{a}\right\rangle+\mathbf{\Omega}^{ \dagger}(\hat{Z})\left|\psi\right\rangle=\left(\Delta(\hat{\mathbf{P}})\hat{ \Pi}+\mathbf{\Omega}^{\dagger}(\hat{Z})\right)\left|\psi\right\rangle \tag{30}\]
for the ancilla state and
\[\text{i}\frac{\text{d}}{\text{d}t}\left|\psi\right\rangle=\delta(\hat{ \mathbf{P}})\left|\psi\right\rangle+\mathbf{\Omega}(\hat{Z})\left|\psi_{a}\right\rangle= \left(\delta(\hat{\mathbf{P}})+\mathbf{\Omega}(\hat{Z})\hat{\Pi}\right)\left| \psi\right\rangle \tag{31}\]
for the two-level system. Comparing these two equations - where the latter has to be multiplied by \(\hat{\Pi}\), i.e. projecting the two-level system onto the ancilla state - leads to the so-called Bloch equation [97; 96] given by
\[\Delta(\hat{\mathbf{P}})\hat{\Pi}+\mathbf{\Omega}^{\dagger}(\hat{Z})= \hat{\Pi}\delta(\hat{\mathbf{P}})+\hat{\Pi}\mathbf{\Omega}(\hat{Z})\hat{\Pi} \tag{32}\] \[\Leftrightarrow\hat{\Pi}=\Delta(\hat{\mathbf{P}})^{-1}\left(-\mathbf{ \Omega}^{\dagger}(\hat{Z})+\hat{\Pi}\delta(\hat{\mathbf{P}})+\hat{\Pi}\mathbf{ \Omega}(\hat{Z})\hat{\Pi}\right).\]
Assuming that the single-photon detuning \(\Delta\) is much larger than the coupling frequencies and the overall detuning \(\delta\) we can thus define adiabaticity parameters
\[\epsilon_{\Omega}=\frac{\left\|\mathbf{\Omega}(\hat{Z})\right\|}{\|\Delta(\hat{ \mathbf{P}})\|}\ll 1\text{ \ and \ }\epsilon_{\delta}=\frac{\|\delta(\hat{\mathbf{P}})\|}{\|\Delta(\hat{\mathbf{P}})\|} \ll 1, \tag{33}\]
where
\[\left\|\hat{A}\right\|=\left\langle\Psi\right|\hat{A}\left|\Psi\right\rangle/\left\langle\Psi|\Psi\right\rangle \tag{34}\]
is the norm of an operator \(\hat{A}\) conditioned on the state \(|\Psi\rangle\) of our wave packet. Solving the Bloch equation, Eq. (32), analytically is in most cases intractable and exact solutions are in general not known [97]. Hence, we use a perturbative ansatz
\[\hat{\Pi}=\sum_{k=0}\hat{\Pi}_{k}\text{ with }\hat{\Pi}_{k}\sim\Delta^{-(k+1)}( \hat{\mathbf{P}}), \tag{35}\]
where we expand in powers of the inverse operator-valued detuning \(\Delta(\hat{\mathbf{P}})\), which can be approximated by \(\Delta(\hat{\mathbf{P}})^{-1}=\Delta^{-1}\left(1+\hat{\mathbf{P}}^{2}/(2M\hbar \Delta)\right)^{-1}\approx\Delta^{-1}\) for sufficiently non-relativistic COM motion, i.e. \(\|\hat{\mathbf{P}}^{2}/(2M\hbar)\|\ll|\Delta|\). The \(\hat{\Pi}_{k}\) can then be determined recursively by
\[\begin{split}\hat{\Pi}_{k+1}=&\Delta(\hat{\mathbf{P}})^{-1}\hat{\Pi}_{k}\delta(\hat{\mathbf{P}})+\Delta(\hat{\mathbf{P}})^{-1}\sum_{j=0}^{k-1}\hat{\Pi}_{k-j-1}\boldsymbol{\Omega}(\hat{Z})\hat{\Pi}_{j},\\ \hat{\Pi}_{0}=&-\Delta(\hat{\mathbf{P}})^{-1}\boldsymbol{\Omega}^{\dagger}(\hat{Z}),\end{split} \tag{36}\]
where \(\hat{\Pi}_{0}\) follows directly from Eq. (32) since it has to solve the equation to the order \(\mathcal{O}(\Delta(\hat{\mathbf{P}})^{-1})\). For large detunings \(\Delta\) we can truncate this expansion after the first order, i.e., only keeping the lowest order term \(\hat{\Pi}_{0}\). Thus, the slowly evolving dynamics of Eq. (31) become an effective two-level transition:
\[\mathrm{i}\frac{\mathrm{d}}{\mathrm{d}t}\left|\psi\right\rangle=\left(\delta- \frac{1}{\Delta}\boldsymbol{\Omega}(\hat{Z})\boldsymbol{\Omega}^{\dagger}( \hat{Z})\right)\left|\psi\right\rangle. \tag{37}\]
Finally, Eq. (28) will be inserted into Eq. (37). The internal states' dynamics then contain position-dependent terms that correspond to two-photon transitions where the atom absorbs two photons from the same direction. These terms lead to unwanted momentum kicks. Here, the rotating-wave approximation (RWA) can be applied by neglecting all terms involving \(\mathrm{e}^{\pm 2\mathrm{i}\omega t}\) since the (rapidly) rotating terms average out during the pulse. Note, however, that it is important that the adiabatic elimination is carried out before the RWA [98], otherwise important terms of the form \(\mathrm{i}\,\mathbf{d}_{ag}\cdot\mathbf{E}_{i}^{*}\) and \(\mathrm{i}\,\boldsymbol{\mu}_{ae}\cdot\mathbf{B}_{i}\) will be lost. Later, we will see that in a retro-reflective geometry and for the \(\sigma^{+}\)- \(\sigma^{-}\) polarization scheme these terms lead to a doubling of the AC Stark shift.
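The quality of the adiabatic elimination can be checked numerically in the simplest setting, i.e. without COM motion and with real, constant couplings. The sketch below compares the full three-level dynamics with the effective two-level model of Eq. (37); all coupling strengths and detunings are assumed values chosen only such that the adiabaticity parameters of Eq. (33) are small.

```python
import numpy as np
from scipy.linalg import expm

Delta = 2 * np.pi * 100.0                  # single-photon detuning [rad/us] (assumed)
g1, g2 = 2 * np.pi * 1.0, 2 * np.pi * 1.5  # E1 and M1 coupling strengths [rad/us] (assumed)
delta = (g2**2 - g1**2) / Delta            # cancels the differential AC Stark shift

# Full three-level Hamiltonian in the rotating frame, basis (|a>, |g>, |e>).
H3 = np.array([[Delta, g1,  g2 ],
               [g1,    0.0, 0.0],
               [g2,    0.0, delta]])

# Effective two-level model of Eq. (37), basis (|g>, |e>): delta - Omega Omega^dag / Delta.
Omega = np.array([[g1], [g2]])
H2 = np.diag([0.0, delta]) - (Omega @ Omega.T) / Delta

for t in np.linspace(0.0, 20.0, 6):        # times in microseconds
    p_full = abs(expm(-1j * H3 * t)[2, 1]) ** 2   # |<e| U_full(t) |g>|^2
    p_eff  = abs(expm(-1j * H2 * t)[1, 0]) ** 2   # |<e| U_eff(t)  |g>|^2
    print(f"t = {t:5.1f} us:  P_e(full) = {p_full:.4f}   P_e(effective) = {p_eff:.4f}")
```

For the chosen parameters the two excited-state populations agree at the sub-percent level, with the residual deviation set by the adiabaticity parameter \(\epsilon_{\Omega}\).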
### Doppler-Free Two-Photon Transitions
In order to obtain a Doppler-free interaction without momentum kicks, one has to eliminate the position-dependent terms which can be done formally by setting the Rabi frequencies \(\Omega_{B0}=\Omega_{E1}=0\) (or vice versa). This means, recalling Fig. 2 and the field configuration shown there, that the two-photon transition is driven by counter-propagating photons. Practically, this can be done by using a certain polarization scheme suppressing the unwanted single-photon transitions [55; 57; 58]. To find the right polarization configuration, one has to apply the selection rules of single-photon dipole transitions. The selection rules for two-photon transitions can then be obtained by interpolating the sequential single-photon transitions.
#### iii.3.1 Selection Rules and Polarization Scheme
In the following, we make use of the well-known dipole selection rules [99; 100; 101; 102; 103; 104; 105]:
Electric Dipole TransitionsE1 transitions can only take place between two internal states with different parity and the change of angular momentum has to be \(\Delta L=\pm 1\).
Magnetic Dipole TransitionsM1 transitions can only take place between two internal states with the same parity. Therefore, the change of angular momentum has to be \(\Delta L=0\). However, in both cases (E1 and M1 transitions) the total angular momentum \(J=L+S\) has to change via \(\Delta J=0,\pm 1\) while transitions from \(J=0\) to \(J^{\prime}=0\) are forbidden. Furthermore, conservation of angular momentum leads us to selection rules for the magnetic quantum number \(\mathcal{M}\), which changes depending on the polarization of the light: linearly polarized light does not change the magnetic quantum number, i.e. \(\Delta\mathcal{M}=0\), while positive (negative) circularly polarized light changes the magnetic quantum number via \(\Delta\mathcal{M}=+1\) (\(\Delta\mathcal{M}=-1\)). Note that the distinction whether it is positive circular (\(\boldsymbol{\sigma^{+}}\)) or negative circular (\(\boldsymbol{\sigma^{-}}\)) depends on the propagation direction and the quantization axis. Coming back to our setup displayed in Fig. 2, the selection rules for the change of angular momentum \(\Delta L\) are fulfilled since between \(\left|g\right\rangle=^{1}\!\!S_{0}\) and \(\left|a\right\rangle=^{3}\!\!P_{1}\) (the E1 transition) we have \(\Delta L=1\) and between \(\left|a\right\rangle=^{3}\!\!P_{1}\) and \(\left|e\right\rangle=^{3}\!\!P_{0}\) (the M1 transition) we have \(\Delta L=0\).
To suppress unwanted transitions, i.e. ensuring that the atom absorbs two counter-propagating photons, we use now a \(\sigma^{+}\)- \(\sigma^{-}\) scheme, where the two counter-propagating laser beams have positive (negative) circular polarization, respectively. The above selection rules together with this polarization scheme require \(\mathbf{d}_{ag}\cdot\mathbf{E}_{0}\neq 0\) while \(\mathbf{d}_{ag}\cdot\mathbf{E}_{1}=0\), and \(\boldsymbol{\mu}_{ae}\cdot\mathbf{B}_{0}^{*}=0\) while \(\boldsymbol{\mu}_{ae}\cdot\mathbf{B}_{1}^{*}\neq 0\), given the electric field satisfies \(\mathbf{E}_{0}\propto\boldsymbol{\sigma^{+}}\) and \(\mathbf{E}_{1}\propto\boldsymbol{\sigma^{-}}\). Note that if the electric field has positive circular polarization, the corresponding magnetic field has negative circular polarization and vice versa.
In experiments one would typically use a retro-reflective geometry. The circular polarization of the laser beam can then be rotated by a quarter-wave plate. The laser beam traverses the quarter-wave plate twice resulting in an effective half-wave plate. However, the intensity of the two counter-propagating laser beams stays the same, i.e., \(|\mathbf{E}_{0}|=|\mathbf{E}_{1}|\) and \(|\mathbf{B}_{0}|=|\mathbf{B}_{1}|\). We can then define the single-photon Rabi frequencies
\[\frac{\hbar\Omega_{Ei}}{2}:=-\mathrm{i}\mathbf{d}_{ag}\cdot\mathbf{E}_{i} \text{ and }\frac{\hbar\Omega_{Bi}}{2}:=-\mathrm{i}\boldsymbol{\mu}_{ae}\cdot\mathbf{B}_{i}^ {*}, \tag{38}\]
which describe the corresponding dipole transitions. With the above considerations and using the \(\sigma^{+}\)- \(\sigma^{-}\) polarization scheme, Eq. (37) reduces to (having applied the RWA)
\[\mathrm{i}\frac{\mathrm{d}}{\mathrm{d}t}\left|\psi\right\rangle=\left(\begin{array} []{cc}\frac{\hat{\mathbf{P}}^{2}}{2M\hbar}+\delta-\frac{1}{2\Delta}| \Omega_{B1}|^{2}&\frac{\Omega}{2}\\ \frac{\Omega^{*}}{2}&\frac{\hat{\mathbf{P}}^{2}}{2M\hbar}-\frac{1}{2\Delta}| \Omega_{E0}|^{2}\end{array}\right)\left|\psi\right\rangle, \tag{39}\]
where we have defined the two-photon Rabi frequency
\[\Omega=-\frac{\Omega_{Bi}^{*}\Omega_{E0}}{2\Delta}. \tag{40}\]
Evidently, Eq. (39) has no longer any dependence on the atomic COM position. Consequently, there is no effective momentum kick caused by the two-photon transition on the atom. In momentum space, with
\[\psi_{e,g}(\mathbf{P})=\left\langle\mathbf{P}|\psi_{e,g}\right\rangle, \tag{41}\]
the dynamics of the effective two-level system is described by
\[\mathrm{i}\frac{\partial}{\partial t}\left(\begin{array}{c}\psi_{e}(\mathbf{ P})\\ \psi_{g}(\mathbf{P})\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc} \tilde{\gamma}+\gamma&\Omega\\ \Omega^{*}&\tilde{\gamma}-\gamma\end{array}\right)\left(\begin{array}{c}\psi_ {e}(\mathbf{P})\\ \psi_{g}(\mathbf{P})\end{array}\right), \tag{42}\]
where
\[\tilde{\gamma} =\frac{\mathbf{P}^{2}}{M\hbar}+\delta-\frac{|\Omega_{E0}|^{2}+| \Omega_{B1}|^{2}}{2\Delta}=\frac{\mathbf{P}^{2}}{M\hbar}+\delta-\omega_{\text {AC}}^{(+)}, \tag{43a}\] \[\gamma =\delta+\frac{|\Omega_{E0}|^{2}-|\Omega_{B1}|^{2}}{2\Delta}= \delta+\omega_{\text{AC}}^{(-)} \tag{43b}\]
are the mean detuning \(\tilde{\gamma}\) and relative detuning \(\gamma\) as well as \(\omega_{\text{AC}}^{(+)}=\left(|\Omega_{E0}|^{2}+|\Omega_{B1}|^{2}\right)/(2\Delta)\) the mean AC Stark shift and \(\omega_{\text{AC}}^{(-)}=\left(|\Omega_{E0}|^{2}-|\Omega_{B1}|^{2}\right)/(2\Delta)\) the differential AC Stark shift. Note that the relative detuning \(\gamma\) does not depend on the COM momentum but on the overall detuning \(\delta\) and the AC Stark shift. Thus, the overall detuning can be set in such a way that it compensates the AC Stark shift \(\omega_{\text{AC}}^{(-)}\). After going into another interaction picture with respect to the mean detuning \(\tilde{\gamma}\), the new time evolution operator can be easily obtained by calculating the corresponding matrix exponential such that
\[\hat{\hat{U}}(t)=\cos\frac{\Omega_{\text{eff}}t}{2}\mathds{1}-\frac{\mathrm{i }}{\Omega_{\text{eff}}}\sin\frac{\Omega_{\text{eff}}t}{2}\left(\begin{array}[] {cc}\gamma&\Omega\\ \Omega^{*}&-\gamma\end{array}\right), \tag{44}\]
where we have defined the effective two-photon Rabi frequency \(\Omega_{\text{eff}}=\sqrt{|\Omega|^{2}+\gamma^{2}}\) which depends on the relative detuning \(\gamma\). Since the transformations leading to this result are unitary transformations on the diagonal of the Hamiltonian, the transformed states are physically equivalent to the old ones.
Depending on the initial state we observe the well-known Rabi oscillations between the ground and excited state. For instance if the atom is initially in the ground state, the probability to find the atom in the excited or in the ground state at time \(t\) is given, respectively, by
\[P_{e}(t)=\left(\frac{|\Omega|}{\Omega_{\text{eff}}}\right)^{2} \sin^{2}\frac{\Omega_{\text{eff}}t}{2}, \tag{45a}\] \[P_{g}(t)=\cos^{2}\frac{\Omega_{\text{eff}}t}{2}+\left(\frac{ \gamma}{\Omega_{\text{eff}}}\right)^{2}\sin^{2}\frac{\Omega_{\text{eff}}t}{2}. \tag{45b}\]
In Fig. **3** we plot the ground and excited state probabilities for different values of the relative detuning \(\gamma\). The highest amplitude is achieved for a vanishing relative detuning, i.e. when the detuning \(\delta\) compensates the AC Stark shift. Increasing the relative detuning \(\gamma\) leads to a decreasing amplitude and an increasing effective Rabi frequency \(\Omega_{\text{eff}}\). For \(\gamma>|\Omega|\), it is no longer possible to achieve a 50:50 superposition of excited and ground state.
## IV Finite pulse-time effects
Since electromagnetic fields have to satisfy Maxwell's equations, the M1 couplings are suppressed by a factor of the inverse of the speed of light \(c^{-1}\). Thus, they are much weaker than E1 transitions at typical laser intensities, and the two-photon Rabi frequency, Eq. (40), is quite small when comparing to the Rabi frequency associated with two E1 transitions. Accordingly, one needs pretty long or relatively intense laser pulses to achieve \(\pi\)- or \(\pi/2\)-pulses. That is why finite pulse-time effects become important for E1-M1 transitions. Since the transition between \(|g\rangle\) and \(|e\rangle\) is forbidden for single-photon transitions, however, we can still neglect spontaneous emission. In the idealized scenario of Sec. III we considered plane waves for the electromagnetic fields. However, a realistic laser beam has a position-dependent intensity, e.g. a Gaussian beam profile. At the same time, the Rabi frequencies from Eq. (38) depend on the amplitudes of the electric and magnetic fields, thereby also on the intensity. As a consequence, and due to the operator-valued nature of the atomic COM, the atoms might experience small, possibly state-dependent potentials due to the position dependency of the laser intensity while falling during a laser pulse. In particular, small perturbations in the already small magnetic field amplitude might have large effects. In order to find the effective time-evolution operator \(\hat{U}_{\text{eff}}\) for the atomic wave packet in weakly position-dependent pulses we replace
\[-\frac{\Omega_{B1}^{*}\Omega_{E0}}{2\Delta}\rightarrow-\frac{\Omega_{B1}^{*}( \hat{\mathbf{R}})\Omega_{E0}(\hat{\mathbf{R}})}{2\Delta}=:\Omega(\hat{\mathbf{ R}})\mathrm{e}^{\mathrm{i}\phi(\hat{\mathbf{R}})}, \tag{46a}\]
Figure 3: Time evolution of the probability (density) \(P_{j}\) with \(j=g,e\) of finding an atom in the ground state **(a)** and excited state **(b)** during the laser pulse (plane waves) driving the ideal E1-M1 transitions. Initially the atom is assumed to be prepared in the ground state and the probability density is plotted as a function of relative detuning \(\gamma\) in units of the Rabi frequency \(|\Omega|=\Omega_{\text{eff}}|_{\gamma=0}\) at vanishing detuning. Increasing the relative detuning \(\gamma\) increases the frequency of the Rabi oscillations, changes the amplitudes and thus shifts them to a different location in time at fixed relative detuning \(\gamma\).
Finite Pulse-Time Effects in Long-Baseline Quantum Clock
\[\frac{|\Omega_{E0}|^{2}}{2\Delta} \rightarrow\frac{|\Omega_{E0}(\hat{\mathbf{R}})|^{2}}{2\Delta}=: \omega_{\text{AC},0}(\hat{\mathbf{R}}), \tag{46b}\] \[\frac{|\Omega_{B1}|^{2}}{2\Delta} \rightarrow\frac{|\Omega_{B1}(\hat{\mathbf{R}})|^{2}}{2\Delta}=: \omega_{\text{AC},1}(\hat{\mathbf{R}}). \tag{46c}\]
Adding the atomic rest energy, the Hamiltonian describing the effective two-level atom via Eq. (37) becomes
\[\hat{H}=\left(\frac{\hat{\mathbf{P}}^{2}}{2M}+Mc^{2}\right)\mathds{1}+\hbar \begin{pmatrix}\delta-\omega_{\text{AC},1}(\hat{\mathbf{R}})&\frac{\Omega( \hat{\mathbf{R}})}{2}\text{e}^{\text{i}\Phi(\hat{\mathbf{R}})}\\ \frac{\Omega(\hat{\mathbf{R}})}{2}\text{e}^{-\text{i}\Phi(\hat{\mathbf{R}})} &-\omega_{\text{AC},0}(\hat{\mathbf{R}})\end{pmatrix}. \tag{47}\]
The particular effects of the position dependency of the laser intensity can be separated by unitary transformations that cancel specific operator-valued terms in the Hamiltonian. First of all, let us cancel out the phase in the off-diagonal part in the Hamiltonian. This can be achieved by the unitary displacement transformation \(\ket{\psi}\rightarrow\hat{U}_{1}^{\dagger}\ket{\psi}\), where
\[\hat{U}_{1}^{\dagger}=\left(\begin{array}{cc}\hat{D}^{\dagger}&0\\ 0&1\end{array}\right)\ \ \text{and}\ \ \hat{D}^{\dagger}=\exp\left(-\frac{\text{i}}{\hbar}\left[ \eta\hat{\mathbf{R}}-\boldsymbol{\xi}\hat{\mathbf{P}}+\alpha\right]\right) \tag{48}\]
is the displacement operator. Assuming the wave packet to be, without loss of generality, initially centered around \(\mathbf{R}=0\), the phase in Eq. (46a) can thus be expanded around the origin in the COM position via
\[\Phi(\hat{\mathbf{R}})=\Phi(0)+\hat{\mathbf{R}}\cdot\nabla\Phi|_{\mathbf{R}=0 }+\varphi(\hat{\mathbf{R}}), \tag{49}\]
where \(\varphi(\hat{\mathbf{R}})=\mathcal{O}(\hat{\mathbf{R}}^{2})\), provided that the spatial extension of the wave packet is small enough compared to characteristic scales of the laser beam, e.g. the beam waist and the Rayleigh length for a Gaussian laser beam, and the laser pulse time is sufficiently small as the atom is falling, i.e. moving away from the initial position. This phase can then easily be eliminated by choosing the specific transformation parameters
\[\alpha=\hbar\Phi(0),\ \ \boldsymbol{\xi}=0\ \ \text{and}\ \ \boldsymbol{\eta}=\hbar\nabla\Phi|_{\mathbf{R}=0}. \tag{50}\]
Thus, the unitary transformation, Eq. (48), corresponds to a small momentum kick
\[\hbar\mathbf{k}=\hbar\nabla\Phi|_{\mathbf{R}=0}. \tag{51}\]
Consequently, one can identify the recoil frequency \(\omega_{k}\), the Doppler detuning \(\nu(\hat{\mathbf{P}})\) and the (position-dependent) mean AC Stark shift \(\omega_{\text{AC}}^{(+)}(\hat{\mathbf{R}})\) (expanded around the origin) via the definitions
\[\omega_{k} =\frac{\hbar\mathbf{k}^{2}}{2M},\ \ \nu(\hat{\mathbf{P}})=\frac{ \mathbf{k}\cdot\hat{\mathbf{P}}}{M}, \tag{52}\] \[\text{and}\ \ \omega_{\text{AC}}^{(+)}(\hat{\mathbf{R}}) =\omega_{\text{AC},0}(\hat{\mathbf{R}})+\omega_{\text{AC},1}( \hat{\mathbf{R}})\] \[=\omega_{\text{AC},0}^{(+)}+\hat{\mathbf{R}}\cdot\nabla\omega_{ \text{AC}}^{(+)}|_{\mathbf{R}=0}+\mathcal{S}(\hat{\mathbf{R}}),\]
where \(\mathcal{S}(\hat{\mathbf{R}})=\mathcal{O}(\hat{\mathbf{R}}^{2})\) is the second (and higher) order part of the expansion of the mean AC Stark shift \(\omega_{\text{AC}}^{(+)}(\hat{\mathbf{R}})\). The transformed Hamiltonian reads then
\[\hat{H}^{\prime}=\left(\begin{array}{cc}\hat{H}+\frac{\Delta(\hat{\mathbf{ R}})}{2}-\hbar\mathcal{S}(\hat{\mathbf{R}})&\hat{H}_{\text{off}}\\ \hat{H}_{\text{off}}^{\dagger}&\hat{H}-\frac{\Delta(\hat{\mathbf{R}})}{2}- \hbar\mathcal{S}(\hat{\mathbf{R}})\end{array}\right), \tag{53}\]
where
\[\hat{H}= \frac{\hat{\mathbf{P}}^{2}}{2M}+Mc^{2}+\frac{\hbar}{2}\Big{[} \hat{\nu}+\omega_{k}+\delta \tag{54}\] \[-\left(\omega_{\text{AC},0}^{(+)}+\nabla\omega_{\text{AC}}^{(+)}| _{\mathbf{R}=0}\hat{\mathbf{R}}\right)+\hat{\mathbf{k}}\hat{\mathbf{R}}+\hat{ \Phi}(0)\Big{]}\]
is the mean Hamiltonian,
\[\Delta(\hat{\mathbf{R}})=\hbar\left(\hat{\nu}+\omega_{k}+\delta+\omega_{\text{AC }}^{(-)}(\hat{\mathbf{R}})+\hat{\mathbf{k}}\hat{\mathbf{R}}+\Phi(0)\right) \tag{55}\]
is the detuning operator and
\[\hat{H}_{\text{off}}=\hbar\frac{\Omega(\hat{\mathbf{R}})}{2}\text{e}^{\text{i} \varphi(\hat{\mathbf{R}})} \tag{56}\]
is the off-diagonal part of the Hamiltonian; we denoted partial derivatives in time with a dot. Let us now transform into the interaction picture where we cancel out the dynamics of the mean Hamiltonian \(\hat{H}\), i.e. a unitary transformation with
\[\hat{U}=\mathcal{T}\exp\left(-\frac{\text{i}}{\hbar}\int_{0}^{t}\hat{H}\ \text{d}t^{\prime}\right), \tag{57}\]
where \(\mathcal{T}\) is the time ordering operation. Since the Hamiltonian \(\hat{H}^{\prime}\), Eq. (53), is a function of the COM momentum \(\hat{\mathbf{P}}\) and the COM position \(\hat{\mathbf{R}}\), the remaining operators of the transformed Hamiltonian are evaluated on the Heisenberg trajectories generated by \(\hat{\hat{H}}\) via the Heisenberg equations of motion:
\[\frac{\text{d}\hat{\mathbf{R}}_{H}}{\text{d}t}=\frac{\text{i}}{\hbar}[\hat{H}, \hat{\mathbf{R}}_{H}]\ \ \text{and}\ \ \frac{\text{d}\hat{\mathbf{P}}_{H}}{\text{d}t}=\frac{\text{i}}{\hbar}[\hat{H}, \hat{\mathbf{P}}_{H}]\,. \tag{58}\]
We denote the Heisenberg picture with a subscript \(H\) and obtain the new Hamiltonian
\[\hat{H}^{\prime\prime}=\left(\begin{array}{cc}\frac{\Delta_{H}(\hat{\mathbf{R}},t)}{2}-\hbar\mathcal{S}_{H}(\hat{\mathbf{R}},t)&\hat{H}_{\text{off},H}\\ \hat{H}_{\text{off},H}^{\dagger}&-\frac{\Delta_{H}(\hat{\mathbf{R}},t)}{2}- \hbar\mathcal{S}_{H}(\hat{\mathbf{R}},t)\end{array}\right). \tag{59}\]
Note that terms of quadratic or higher order in COM position in the mean AC Stark shift \(\omega_{\text{AC}}^{(+)}(\hat{\mathbf{R}})\) are encapsulated by \(\mathcal{S}(\hat{\mathbf{R}})\) so that the mean Hamiltonian \(\hat{\hat{H}}\) of Eq. (54) is at most linear in atomic position \(\hat{\mathbf{R}}\). This will facilitate later on the back-transformation to the full unitary time evolution. From the calculations in Sec. III we know that in the idealized case without position dependency of the laser amplitude the internal dynamics of the atom is described by Rabi oscillations, see Eq. (45). These internal transitions can be canceled out of our Hamiltonian by the unitary transformation
\[\hat{U}_{\Omega}=\exp\left(-\text{i}\frac{\Omega(0)}{2}t\hat{\sigma}_{x}\right), \tag{60}\]
with \(\hat{\sigma}_{x}\) being the Pauli operator for the \(x\)-direction. After this final unitary transformation we are left with the Hamiltonian
\[\hat{H}_{3}(t)=\hat{H}_{0}(t)\,\mathds{1}+\sum_{j=\{x,y,z\}}\hat{H}_{j}(t)\hat{ \sigma}_{j}\]
\[=-\hbar\mathcal{S}_{H}(\hat{\mathbf{R}},t)\mathds{1}+\frac{\hbar}{2} \left[\Omega_{H}(\hat{\mathbf{R}},t)\cos\left(\varphi_{H}(\hat{\mathbf{R}},t) \right)-\Omega(0)\right]\hat{\sigma}_{x}\] \[\quad+\frac{1}{2}\left[\Delta_{H}(\hat{\mathbf{R}},t)\sin\left( \Omega(0)t\right)\right.\] \[\quad\left.-\hbar\Omega_{H}(\hat{\mathbf{R}},t)\sin\left(\varphi _{H}(\hat{\mathbf{R}},t)\right)\cos\left(\Omega(0)t\right)\right]\hat{\sigma}_ {y}\] \[\quad+\frac{1}{2}\left[\Delta_{H}(\hat{\mathbf{R}},t)\cos\left( \Omega(0)t\right)\right.\] \[\quad\left.+\hbar\Omega_{H}(\hat{\mathbf{R}},t)\sin\left(\varphi _{H}(\hat{\mathbf{R}},t)\right)\sin\left(\Omega(0)t\right)\right]\hat{\sigma} _{z}, \tag{61}\]
where \(\hat{\sigma}_{y}\) and \(\hat{\sigma}_{z}\) are the remaining Pauli operators. This Hamiltonian can be treated perturbatively to find the effective time evolution operator which allows us to provide the full evolution of the system after transforming back to the original picture.
### Example: Fundamental Gaussian Laser Beam
Let us continue with the simplest, most basic example for a position-dependent laser intensity profile: the Gaussian laser beam. Assuming the atom to be falling along the optical axis of the laser beam and being located near the beam waist \(w_{0}\) during the laser pulse, i.e. the ratio \(\|\hat{\varrho}/w_{0}\|\) and \(\|\hat{Z}/z_{R}\|\) is small, where
\[\hat{\varrho}=\sqrt{\hat{X}^{2}+\hat{Y}^{2}}. \tag{62}\]
is the radial position operator (\(z_{R}\) being the Rayleigh length), as well as cylindrical symmetry, we can expand the fundamental Gaussian [106] electromagnetic field in cylindrical coordinates. Thus [107]
\[\mathbf{E}_{0}(\hat{\mathbf{R}}) =\frac{\mathbf{E}_{0}w_{0}}{w(\hat{Z})}\exp\left[-\frac{\hat{ \varrho}^{2}}{w^{2}(\hat{Z})}-\mathrm{i}k_{L}\frac{\hat{\varrho}^{2}}{2R( \hat{Z})}+\mathrm{i}\zeta(\hat{Z})\right], \tag{63a}\] \[\mathbf{B}_{1}(\hat{\mathbf{R}}) =\frac{\mathbf{B}_{1}w_{0}}{w(\hat{Z})}\exp\left[-\frac{\hat{ \varrho}^{2}}{w^{2}(\hat{Z})}+\mathrm{i}k_{L}\frac{\hat{\varrho}^{2}}{2R(\hat {Z})}+\mathrm{i}\zeta(\hat{Z})\right], \tag{63b}\]
to second order in COM position, where (introducing the rescaled operator \(\hat{\mathcal{Z}}=\hat{Z}/z_{R}\))
\[w^{-1}(\hat{Z})=\left[w_{0}\sqrt{1+\hat{\mathcal{Z}}^{2}}\right]^{-1}=w_{0}^{- 1}\left[1-\frac{1}{2}\hat{\mathcal{Z}}^{2}+\mathcal{O}\left(\hat{\mathcal{Z}} ^{4}\right)\right] \tag{64}\]
is the inverse of the spot size parameter,
\[R^{-1}(\hat{Z})=\hat{Z}^{-1}\left[1+\hat{\mathcal{Z}}^{-2}\right]^{-1}=\frac{ \hat{\mathcal{Z}}}{z_{R}}\left[1-\hat{\mathcal{Z}}^{2}+\mathcal{O}\left(\hat{ \mathcal{Z}}^{4}\right)\right] \tag{65}\]
the inverse of the radius of curvature and
\[\zeta(\hat{Z})=\arctan\hat{\mathcal{Z}}=\hat{\mathcal{Z}}+\mathcal{O}\left( \hat{\mathcal{Z}}^{3}\right) \tag{66}\]
is the Gouy phase. Introducing further the dimensionless operator \(\hat{\rho}=\hat{\varrho}/w_{0}\) as well as using the definitions of the single-photon Rabi frequencies, Eq. (38), we can now determine the operators that are present in the final Hamiltonian, Eq. (61), via their definitions, Eqs. (46), (52) and (55), expanded to the second order in \(\hat{\rho}\) and \(\hat{\mathcal{Z}}\) in terms of the Heisenberg trajectories of \(\hat{H}\):
\[\Omega_{H}(\hat{\mathbf{R}},t) \approx\Omega(0)\left[1-\hat{\mathcal{Z}}_{H}^{2}(t)-2\;\rho_{H}^ {2}(t)\right], \tag{67a}\] \[\varphi_{H}(\hat{\mathbf{R}},t) \approx 0,\] (67b) \[\Delta_{H}(\hat{\mathbf{R}},t) \approx\hbar\left[\nu(\hat{\mathbf{P}}_{H}(t))+\omega_{k}+\delta\right.\] \[\quad\left.+\left(1-\hat{\mathcal{Z}}_{H}^{2}(t)-2\;\rho_{H}^{2} (t)\right)\omega_{\mathrm{AC}}^{(-)}(0)\right],\] (67c) \[\mathcal{S}_{H}(\hat{\mathbf{R}},t) \approx-\frac{1}{2}\omega_{\mathrm{AC}}^{(+)}(0)\left(\hat{ \mathcal{Z}}_{H}^{2}(t)+2\;\rho_{H}^{2}(t)\right), \tag{67d}\]
where we used \(\Omega(0)=-\Omega_{E}\Omega_{B}^{*}/(2\Delta)\) and \(\omega_{\mathrm{AC}}^{(\pm)}(0)=(|\Omega_{E}|^{2}\pm|\Omega_{B}|^{2})/(2\Delta)\). Note, that we have defined the position-independent Rabi frequencies \(\Omega_{E}\) and \(\Omega_{B}\) in total analogy to Eq. (38). Recalling Eqs. (38), (46) and (51), the effective kick due to the Gaussian beam shape is then given by
\[\hbar\mathbf{k}=\hbar\nabla\Phi|_{\mathbf{R}=0}=\frac{2\hbar}{z_{R}}\mathbf{e} _{Z}. \tag{68}\]
In App. A we show that for this setup the Hamiltonian \(\hat{H}_{3}\) of Eq. (61) is (quasi-)commuting at different times within the time scale we are interested in. Calculating the time-evolution operator,
\[\hat{U}_{3}(t)=\mathcal{T}\exp\left(-\frac{\mathrm{i}}{\hbar}\int_{0}^{t}\hat{H} _{3}(t^{\prime})\;\mathrm{d}t^{\prime}\right), \tag{69}\]
the time-ordering operation can thus be ignored. Instead, we can directly compute the integral in the exponent approximating all radial components up to the order \(\mathcal{O}(\hat{\rho}^{2})\) and all \(Z\)-components to the order \(\mathcal{O}(\hat{\mathcal{Z}}^{2})\), i.e. the wave packet is sufficiently localized with respect to \(w_{0}\) in radial and \(z_{R}\) in \(Z\)-direction. Introducing further the dimensionless time \(\tau=\Omega(0)t\), we obtain the time-evolution operator
\[\hat{U}_{3}(\tau) =\exp\Biggl{\{}\frac{-\mathrm{i}\tau}{2}\frac{\omega_{\mathrm{AC} }^{(+)}(0)}{\Omega(0)}\left(\hat{\mathcal{Z}}^{2}+2\,\hat{\rho}^{2}\right) \mathds{1}+\frac{\mathrm{i}\,\tau}{2}\left(\hat{\mathcal{Z}}^{2}+2\,\hat{\rho}^ {2}\right)\hat{\sigma}_{x}\] \[\quad-\frac{\mathrm{i}}{2\Omega(0)}\left[\nu(\hat{\mathbf{P}})- \omega_{\mathrm{AC}}^{(-)}(0)\left(\hat{\mathcal{Z}}^{2}+2\,\hat{\rho}^{2} \right)\right]\] \[\quad\times\left[\left(1-\cos(\tau)\right)\hat{\sigma}_{y}+\sin( \tau)\,\hat{\sigma}_{z}\right]\Biggr{\}}. \tag{70}\]
Furthermore, we chose the overall detuning \(\delta=-\omega_{k}-\omega_{\mathrm{AC}}^{(-)}(0)\) to compensate the differential AC Stark shift \(\omega_{\mathrm{AC}}^{(-)}(0)\) and the recoil frequency \(\omega_{k}\). Finally, we assume that the atoms are near the origin, i.e. \(\|\hat{\mathcal{Z}}^{2}\|_{|\psi}\ll 1\) and \(\|\hat{\rho}^{2}\|_{|\psi}\ll 1\) for our wave packets. Thus, all terms of the order \(\mathcal{O}(\hat{\mathcal{Z}}^{2})\) and \(\mathcal{O}(\hat{\rho}^{2})\) can be neglected and we obtain the time evolution operator
\[\hat{U}_{3}(\tau) =\exp\left\{-\frac{\mathrm{i}\nu(\hat{\mathbf{P}})}{2\Omega(0)} \left[\left(1-\cos(\tau)\right)\hat{\sigma}_{y}+\sin(\tau)\hat{\sigma}_{z} \right]\right\}\] \[=\cos\left[\frac{\nu(\hat{\mathbf{P}})}{\Omega(0)}\sin\left( \frac{\tau}{2}\right)\right]\mathds{1}+\mathrm{i}\sin\left[\frac{\nu(\hat{\mathbf{P}})}{ \Omega(0)}\sin\left(\frac{\tau}{2}\right)\right] \tag{71}\] \[\quad\times\left(\sin\left(\frac{\tau}{2}\right)\hat{\sigma}_{y}+ \cos\left(\frac{\tau}{2}\right)\hat{\sigma}_{z}\right).\]
To consider now all finite pulse-time effects for a Gaussian laser beam, the unitary transformations done in this section as well as the displacement transformation, Eq. (24), have to be reversed. Doing this subsequently and using \(\hat{U}^{\dagger}(0)=\hat{U}^{\dagger}_{\Omega}(0)=\mathds{1}\), we end up with
\[\hat{U}(\tau)=\hat{D}(\tau)\hat{U}_{1}\hat{\hat{D}}(\tau)\hat{U}_{\Omega}(\tau) \hat{U}_{3}(\tau)\hat{U}^{\dagger}_{1}\hat{D}^{\dagger}(0) \tag{72}\]
being the total time evolution in the initial picture, cf. Eq. (37). Inserting \(\tau_{\pi}=\pi\) (\(\tau_{\pi/2}=\pi/2\)) for the duration of a \(\pi\)-pulse (\(\pi/2\)-pulse) we obtain the generalized \(\pi\)-pulse and \(\pi/2\)-pulse operators:
\[\hat{U}_{\pi}=\left(\begin{array}{cc}\hat{U}_{\pi,ee}&\hat{U}_{\pi,ge}\\ \hat{U}_{\pi,eg}&\hat{U}_{\pi,gg}\end{array}\right)\;\;\text{and}\;\;\;\hat{U} _{\frac{\pi}{2}}=\left(\begin{array}{cc}\hat{U}_{\frac{\pi}{2},ee}&\hat{U}_{ \frac{\pi}{2},ge}\\ \hat{U}_{\frac{\pi}{2},eg}&\hat{U}_{\frac{\pi}{2},gg}\end{array}\right) \tag{73}\]
with
\[\hat{U}_{\pi,ee} =\mathrm{i}\hat{D}(\tau)\hat{\mathcal{D}}\hat{U}(\pi)\sin\left( \frac{\nu(\hat{\mathbf{P}})}{\Omega(0)}\right)\hat{\mathcal{D}}^{\dagger}\hat {D}^{\dagger}(0), \tag{74a}\] \[\hat{U}_{\pi,ge} =-\mathrm{i}\hat{D}(\tau)\hat{\mathcal{D}}\hat{U}(\pi)\cos\left( \frac{\nu(\hat{\mathbf{P}})}{\Omega(0)}\right)\hat{D}^{\dagger}(0),\] (74b) \[\hat{U}_{\pi,eg} =-\mathrm{i}\hat{D}(\tau)\hat{U}(\pi)\cos\left(\frac{\nu(\hat{ \mathbf{P}})}{\Omega(0)}\right)\hat{\mathcal{D}}^{\dagger}\hat{D}^{\dagger}(0),\] (74c) \[\hat{U}_{\pi,gg} =-\mathrm{i}\hat{D}(\tau)\hat{U}(\pi)\sin\left(\frac{\nu(\hat{ \mathbf{P}})}{\Omega(0)}\right)\hat{D}^{\dagger}(0), \tag{74d}\]
and
\[\hat{U}_{\frac{\pi}{2},ee} =\frac{1}{\sqrt{2}}\hat{D}(\tau)\hat{\mathcal{D}}\hat{U}\left( \frac{\pi}{2}\right)\left[\cos\left(\frac{\nu(\hat{\mathbf{P}})}{\sqrt{2} \Omega(0)}\right)\right. \tag{75a}\] \[\left.+\sqrt{2}\mathrm{i}\sin\left(\frac{\nu(\hat{\mathbf{P}})}{ \sqrt{2}\Omega(0)}\right)\right]\hat{\mathcal{D}}^{\dagger}\hat{D}^{\dagger}(0),\] \[\hat{U}_{\frac{\pi}{2},ge} =-\frac{\mathrm{i}\hat{D}(\tau)\hat{\mathcal{D}}\hat{U}\left( \frac{\pi}{2}\right)\cos\left(\frac{\nu(\hat{\mathbf{P}})}{\sqrt{2}\Omega(0)} \right)\hat{D}^{\dagger}(0)}{\sqrt{2}},\] (75b) \[\hat{U}_{\frac{\pi}{2},eg} =-\frac{\mathrm{i}\hat{D}(\tau)\hat{U}\left(\frac{\pi}{2}\right) \cos\left(\frac{\nu(\hat{\mathbf{P}})}{\sqrt{2}\Omega(0)}\right)\hat{D}^{ \dagger}\hat{D}^{\dagger}(0)}{\sqrt{2}},\] (75c) \[\hat{U}_{\frac{\pi}{2},gg} =\frac{1}{\sqrt{2}}\hat{D}(\tau)\hat{U}\left(\frac{\pi}{2}\right) \left[\cos\left(\frac{\nu(\hat{\mathbf{P}})}{\sqrt{2}\Omega(0)}\right)\right.\] (75d) \[\left.-\sqrt{2}\mathrm{i}\sin\left(\frac{\nu(\hat{\mathbf{P}})}{ \sqrt{2}\Omega(0)}\right)\right]\hat{D}^{\dagger}(0).\]
Recall the displacement operators \(\hat{D}(t)\), Eq. (24), and \(\hat{\mathcal{D}}\), Eq. (48), and the time evolution operator \(\hat{\tilde{U}}\) corresponding to the mean Hamiltonian \(\hat{H}\), Eq. (54). In the limit \(z_{R}\to\infty\) (accordingly also \(w_{0}\to\infty\) due to the relation \(z_{R}=\pi w_{0}^{2}/\lambda\), where \(\lambda\) is the wavelength of the laser beam, and \(\mathbf{k}=2\mathbf{e}_{Z}/z_{R}\to 0\)) they reduce to the well-known ideal \(\pi\) and \(\frac{\pi}{2}\)-pulse operators, respectively,
\[\hat{U}_{\pi,\mathrm{ideal}}=\left(\begin{array}{cc}0&-\mathrm{i}\\ -\mathrm{i}&0\end{array}\right),\quad\hat{U}_{\frac{\pi}{2},\mathrm{ideal}}= \frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&-\mathrm{i}\\ -\mathrm{i}&1\end{array}\right), \tag{76}\]
i.e. the plane wave solution of the previous section where the CQM momentum \(\hat{\mathbf{P}}\) has no effect on the E1-M1 transitions.
#### v.1.1 Discussion of the Generalized \(\pi\) and \(\pi/2\)-Pulse Operators
Comparing the generalized \(\pi\) and \(\pi/2\)-pulse operators, Eqs. (74) and (75), with the ideal operators, Eq. (76), one can observe some additional effects due to the finite pulse time and the position dependency of the intensity of a Gaussian laser beam:
Additional momentum kicksThe displacement operators \(\hat{\mathcal{D}}\) and \(\hat{\mathcal{D}}^{\dagger}\) correspond to a transfer of momentum \(\pm\hbar\mathbf{k}=\pm\hbar\frac{\mathbf{e}_{Z}}{2z_{R}}\), i.e. the transitions from ground to excited state and vice versa are accompanied by small additional momentum kicks. Note, that although there are displacement operators present in the terms \(\hat{U}_{\pi,ee}\) and \(\hat{U}_{\frac{\pi}{2},ee}\), i.e. the atoms remaining in the excited state during the laser pulse, the momentum of the atom is identical before and after the pulse. There is only a momentum shift occuring during the interaction with the laser.
Action of \(\hat{\hat{U}}(\tau)\)The time-evolution operator
\[\hat{\hat{U}}(\tau)=\exp\left[\frac{-\mathrm{i}\tau}{\hbar\Omega(0)}\left( \frac{\hat{\mathbf{p}}^{2}}{2M}+\frac{\hbar\nu(\hat{\mathbf{P}})}{2}+Mc^{2}- \frac{\hbar|\Omega_{E}|^{2}}{2\Delta}\right)\right] \tag{77}\]
associated with the mean Hamiltonian, Eq. (54), corresponds to a displacement \(-\hbar\tau/(Mz_{R}\Omega(0))\) in \(Z\)-direction (recall Eqs. (52) and (68)) and a laser phase. Recall that we chose the overall detuning \(\delta=-\omega_{k}-\omega_{AC}^{(-)}(0)\) so that the recoil frequency \(\omega_{k}\) and the part of the mean AC Stark shift \(\omega_{AC}^{(*)}(0)\) corresponding to the M1 transition in zeroth order, i.e. \(|\Omega_{B}|^{2}/(2\Delta)\), is compensated in the mean Hamiltonian. Furthermore, the first order of the mean AC Stark shift vanishes for the TEM\({}_{00}\) laser mode.
Additional branchesThe \(\pi\) and \(\pi/2\)-pulses further contain operators of the form \(\sin(\xi\hat{P}_{z}/\hbar)\) and \(\cos(\xi\hat{P}_{z}/\hbar)\), i.e. a splitting of the branches in opposite directions. Moreover, we see that the \(\pi\) and \(\pi/2\)-pulse operators do not transform all the atoms to the appropriate internal state (in contrast to the ideal case). Both, the \(\pi\)- and \(\pi/2\)-pulses lead therefore to a splitting into four branches.
However, the order of magnitude of the displacements in space during the pulse time, i.e. \(\mathcal{O}(\hbar/(z_{R}M\Omega(0)))\), is much smaller than the displacement due to the momentum kick which is of the order \(\mathcal{O}(\hbar T/(Mz_{R}))\), since \(\Omega(0)T\gg 1\) (cf. Table 1 in App. A), and where \(T\) is a characteristic time of the interferometer sequence, e.g. \(T=T_{4}-T_{2}\) in Fig. **4** or \(T=T_{3}-T_{2}\) in Fig. **5**. Therefore we will neglect branch splitting from now on and continue with the \(\pi\)- and \(\pi/2\)-pulse operators given by
\[\hat{U}_{\pi} =-\mathrm{i}\hat{D}(\tau)\left(\begin{array}{cc}0&\hat{ \mathcal{D}}\hat{U}^{\prime}(\pi)\\ \hat{U}^{\prime}(\pi)\hat{D}^{\dagger}&0\end{array}\right)\hat{D}^{\dagger}(0), \tag{78a}\] \[\hat{U}_{\frac{\pi}{2}} =\frac{1}{\sqrt{2}}\hat{D}(\tau)\left(\begin{array}{cc}\hat{ \hat{U}}^{\prime}(\frac{\pi}{2})&-\mathrm{i}\hat{\mathcal{D}}\hat{U}^{\prime}( \frac{\pi}{2})\\ -\mathrm{i}\hat{U}^{\prime}(\frac{\pi}{2})\hat{D}^{\dagger}&\hat{U}^{\prime}( \frac{\pi}{2})\end{array}\right)\hat{D}^{\dagger}(0), \tag{78b}\]
where
\[\hat{\hat
is the time-evolution due to the mean Hamiltonian Eq. (54) without the term inducing spatial translations since
\[\left\|\frac{\hat{P}_{z}}{Mz_{R}\Omega(0)}\right\|\ll 1. \tag{80}\]
## V Effects on the interferometer phase from additional momentum kicks
In this section we investigate the main finite pulse-time effects for E1-M1 transitions, namely the falling of the atom during the laser pulse and the additional momentum kicks, for UGR and UFF tests using the interferometer schemes (A) [49] and (B) [52] (recall Sec. II). We assume ideal momentum kick operators
\[\hat{\mathcal{D}}_{p}=\frac{1}{\sqrt{2}}\mathrm{e}^{\mathrm{i}k_{p}}\hat{Z} \tag{81}\]
for the (magic) Bragg pulses, i.e. momentum transfer pulses that do not change the internal state, as well as for the \(\pi\) and \(\pi/2\)-pulses, see Eq. (78) in Sec. IV. The evolution of the atom in the gravitational field from time \(T_{k}\) to \(T_{i}\) in between laser pulses in its ground/excited state can be described via [47; 49; 52]
\[\hat{U}(T_{i},T_{k})= \sum_{n=g,e}\mathrm{e}^{-\frac{\mathrm{i}}{2}\hat{H}_{\mathrm{an }}^{\mathrm{(MB)}}(T_{i}-T_{k})}\left|n\right\rangle\!\!\left\langle n\right|= \sum_{n=g,e}\!\hat{U}_{n}(T_{i},T_{k})\left|n\right\rangle\!\!\left\langle n \right|. \tag{82}\]
Since this Hamiltonian is diagonal in the internal states, calculating the evolution during the free fall of the atoms is particularly easy, and reduces to finding the time evolution operators \(\hat{U}_{n}(T_{i},T_{k})\) corresponding to the Hamiltonian \(\hat{H}_{n}^{\mathrm{(ME)}}=H(\hat{\mathbf{R}},\hat{\mathbf{P}};M_{n})\). On the other hand, one could also use the mass defect representation of the Hamiltonian \(\hat{H}^{\mathrm{(MD)}}\) and calculate the evolution via \(\hat{U}(T_{i},T_{k})=\mathrm{e}^{-\frac{\mathrm{i}}{2}\hat{H}^{\mathrm{(MD)}} (T_{i}-T_{k})}\) which is however much more complicated than rewriting the previous result.
We assume further that the initial COM state \(\left|\psi_{0}\right\rangle\) of the atom corresponds to a \(L^{2}\)-normalized Gaussian wave packet
\[\psi_{0}(\mathbf{P})=\frac{\exp\left(-\frac{1}{4}(\mathbf{P}-\mathbf{P}_{0})^ {T}\mathbf{\sigma}^{-1}(\mathbf{P}-\mathbf{P}_{0})\right)}{(2\pi)^{3/4}\det^{1/4} \mathbf{\sigma}}, \tag{83}\]
with covariance matrix \(\mathbf{\sigma}=\mathrm{diag}((\Delta p_{x})^{2},(\Delta p_{y})^{2},(\Delta p_{z} )^{2})\) and mean momentum \(\mathbf{P}_{0}\). We can thus describe the full initial atomic state by a product state
\[\left|\Psi_{0}\right\rangle=\left|\phi_{i}\right\rangle\otimes\left|\psi_{0} \right\rangle, \tag{84}\]
where \(\left|\phi_{i}\right\rangle\) is the initial internal state of the atom. Using the exit port projection operator
\[\hat{\Pi}_{\mathrm{exit}}=\left|\phi_{\mathrm{f}}\right\rangle\!\!\left\langle \phi_{\mathrm{f}}\right|\otimes\hat{\Pi}_{\mathrm{exit}}^{\mathrm{(COM)}}, \tag{85}\]
the measured intensity in the respective exit port (excited or ground state) is then described by
\[I_{\phi_{\mathrm{f}}}=\left\langle\psi_{0}\right|\left\{\,\langle\phi_{ \mathrm{f}}|\hat{\Pi}_{\mathrm{exit}}^{\mathrm{(COM)}}\hat{U}_{\mathrm{tot}}| \phi_{i}\rangle\right\}^{+}\left\langle\phi_{\mathrm{f}}|\hat{\Pi}_{\mathrm{ exit}}^{\mathrm{(COM)}}\hat{U}_{\mathrm{tot}}|\phi_{i}\rangle|\psi_{0}\right\rangle, \tag{86}\]
where
\[\langle\phi_{\mathrm{f}}|\hat{\Pi}_{\mathrm{exit}}^{\mathrm{(COM)}}\hat{U}_{ \mathrm{tot}}|\phi_{i}\rangle=\hat{U}_{l,\phi_{i}\phi_{\mathrm{f}}}+\hat{U}_{u,\phi_{i}\phi_{\mathrm{f}}} \tag{87}\]
is the sum of the evolution operators along the lower and the upper branches leading to the exit port of the interferometer characterized by the internal state \(\left|\phi_{\mathrm{f}}\right\rangle\) and the projector \(\hat{\Pi}_{\mathrm{exit}}^{\mathrm{(COM)}}\) on the COM degrees of freedom corresponding to the exit port.
### UGR Tests Using Superpositions of Internal States
In the interferometer scheme (A) [49] the atoms entering the interferometer in the ground state are divided into two branches, and in the middle segment a Doppler-free \(\pi/2\)-pulse is applied simultaneously on both branches to get a 50:50 superposition of excited and ground state atoms, i.e. the initialization of an atomic clock.
However, considering finite pulse-time effects, the E1-M1 transitions are not Doppler-free anymore. The modified trajectories of this interferometer are shown in Fig. **4**. Nevertheless, we can still measure the intensity in the ground state and the excited state exit ports. Describing the evolution along the lower and upper trajectories, respectively, by
\[\hat{U}_{l,gg}=\frac{1}{2}\hat{\mathcal{D}}_{p}^{\dagger}\hat{U}_{g}(T_{4},T_{3 })\hat{\mathcal{D}}_{p}\hat{U}_{g}(T_{3},T_{2}+t_{\frac{g}{2}})\hat{U}_{\frac{ g}{2},gg}\hat{U}_{g}(T_{2},T_{0}), \tag{88a}\]
Figure 4: Interferometer scheme (A) of Roura [49] in the freely falling frame modified by finite pulse-time effects during the E1-M1 \(\frac{g}{2}\)-pulse. Initially the atomic wave packet is prepared in the ground state \(\left|g\right\rangle\) (green) before being diffracted by (magic) Bragg pulses at times \(T_{0},\ T_{1},\ T_{3}\) and \(T_{4}\). During each of the Bragg diffraction light pulses (red) the momentum of the atoms changes only for parts of the atoms. At all but the first Bragg pulse some of the atoms are not diffracted along the branches of the interferometer and hence do not propagate into the exit port. We do not illustrate these additional paths for clarity. At time \(T_{2}\) an E1-M1 pulse of duration \(t_{\frac{g}{2}}\) at time \(T_{2}\) (violet pulse and shading) initialized a delocalized superposition of ground (green) and excited state (blue) on both branches. Additional momentum kicks \(\hbar\mathbf{k}\) originating from the optical potentials acting during the transition at time \(T_{2}\) lead to an additional spatial delocalization of the clock states along the interferometer branches (illustrated unrealistically large). At the end of the interferometer this kick-induced delocalization along the branches between the clock states leads to slightly displaced detection locations for the ground (green) and excited state (blue) atoms.
Finite Pulse-Time Effects in Long-Baseline Quantum Clock Interferometry
\[\hat{U}_{u,gg}=\frac{1}{2}\hat{U}_{g}(T_{4},T_{2}+t_{\frac{g}{2}})\hat{U}_{\frac{ g}{2},gg}\hat{U}_{g}(T_{2},T_{1})\hat{D}_{p}^{\dagger}\hat{U}_{g}(T_{1},T_{0}) \hat{D}_{p}, \tag{88b}\]
which leads to
\[\hat{U}_{l,gg}^{\dagger}\hat{U}_{l,gg}=\hat{U}_{u,gg}^{\dagger}\hat{U}_{u,gg}= \frac{1}{32}, \tag{89}\]
the intensity in the ground state exit port is given by
\[I_{g} =\frac{1}{16}+\langle\psi_{0}|\left(\hat{U}_{u,gg}^{\dagger}\hat{ U}_{l,gg}+\text{h.c.}\right)|\psi_{0}\rangle\] \[=\frac{1}{32}\left[2+\left(\mathcal{V}_{g}\text{e}^{\text{i}\Delta \phi_{g}}+\text{c.c.}\right)\right], \tag{90}\]
where we defined the visibility \(\mathcal{V}_{g}\) and the phase difference \(\Delta\phi_{g}\) in the ground state exit port. Analogously we obtain for the excited state exit port the lower and upper branch evolution operators,
\[\hat{U}_{l,ge} =\frac{1}{2}\hat{D}_{p}^{\dagger}\hat{U}_{e}(T_{4},T_{3})\hat{D}_ {p}\hat{U}_{e}(T_{3},T_{2}+t_{\frac{g}{2}})\hat{U}_{\frac{g}{2},ge}\hat{U}_{g} (T_{2},T_{0}), \tag{91a}\] \[\hat{U}_{u,ge} =\frac{1}{2}\hat{U}_{e}(T_{4},T_{2}+t_{\frac{g}{2}})\hat{U}_{ \frac{g}{2},ge}\hat{U}_{g}(T_{2},T_{1})\hat{D}_{p}^{\dagger}\hat{U}_{g}(T_{1},T_{0})\hat{D}_{p}, \tag{91b}\]
leading to the intensity in the excited state exit port
\[I_{e} =\frac{1}{16}+\langle\psi_{0}|\left(\hat{U}_{u,ge}^{\dagger}\hat {U}_{l,ge}+\text{h.c.}\right)|\psi_{0}\rangle\] \[=\frac{1}{32}\left[2+\left(\mathcal{V}_{e}\text{e}^{\text{i} \Delta\phi_{e}}+\text{h.c.}\right)\right] \tag{92}\]
with the visibility \(\mathcal{V}_{e}\) and the phase difference \(\Delta\phi_{e}\) in the excited state exit port. Inserting the Gaussian wave packet from Eq. (83) we can calculate the visibility and the phase
\[\mathcal{V}_{g}=1,\quad\Delta\phi_{g}=\frac{1}{2}gk_{p}\delta T \left(T_{4}+T_{3}-\delta T\right)+\frac{\epsilon}{2}gk_{p}t_{\frac{g}{2}} \delta T \tag{93}\]
in the ground state exit port and
\[\mathcal{V}_{e} =\exp\left(-\frac{k_{p}^{2}\epsilon^{2}\Delta p_{z}^{2}\delta T^ {2}}{2M^{2}}\right)=1+\mathcal{O}\left(\epsilon^{2}\right),\] \[\Delta\phi_{e} =\frac{1}{2}gk_{p}\delta T\left(T_{4}+T_{3}-\delta T\right)- \frac{\hbar kk_{p}\delta T}{M} \tag{94}\] \[\quad+\epsilon\left(\frac{\hbar(k+k_{p})k_{p}\delta T}{2M}- \frac{1}{2}gk_{p}\delta T(t_{\frac{g}{2}}+2T_{2})\right)\]
for the excited state exit port, and where we expanded everything up to the first order in \(\epsilon=\Delta M/M\). Accordingly, the visibility is one to our order in the approximations for both channels. The differential phase is then given by
\[\Delta\phi_{-}= \Delta\phi_{g}-\Delta\phi_{e}\] \[= \frac{\hbar kk_{p}\delta T}{M}-\epsilon\left(\frac{\hbar(k+k_{p}) k_{p}\delta T}{2M}-gk_{p}\delta T(t_{\frac{g}{2}}+T_{2})\right), \tag{95}\]
where we still observe finite pulse-time effects, i.e. the terms containing the additional momentum kick \(\hbar k\) and the \(\pi/2\) pulse time \(t_{\frac{g}{2}}\). However, the double differential phase, i.e. the difference of the differential phase, Eq. (95), with different initialization times of the atomic clock \(T_{2}\), is given by
\[\Delta\phi_{-}(T_{2})-\Delta\phi_{-}(T_{2}+\tau)=-\epsilon gk_{p}\delta T\tau. \tag{96}\]
Therefore, the finite pulse-time effects cancel each other between different runs of the experiment to the leading order in the expansion parameters.
### UGR and UFF Tests Without Superposition
The interferometer scheme (B) [52] does not require superpositions of different internal states but a change of the internal state in the middle segment, i.e. a recoilless \(\pi\)-pulse. Again, we need two runs of the experiment: one with the initial ground state and
Figure 5: Interferometer scheme (B) of Ufrecht et al. [52] in the freely falling frame for different initial states of the atoms. Panel **(a)** showcases the interferometer sequence with the atoms initially and at detection in the ground state \(|g\rangle\). Panel **(b)** shows the corresponding sequence with atoms initially and at detection in the excited state \(|e\rangle\). The E1-M1 \(\pi\)-pulses of duration \(t_{\pi}\) are indicated by the violet lasers and subsequent violet shading after the beginning of the pulses at times \(T_{2}\) and \(T_{3}\). We exaggerated the separation between the momentum-kick pulses and the E1-M1 pulses as well as the effect of the modified momentum kicks \(\pm\hbar\)k due to the additional optical potentials acting during the finite-duration E1-M1 transitions starting at times \(T_{2}\) and \(T_{3}\) for better visualization.
one with the initial excited state. We found in Sec. IV A that the E1-M1 transitions are not perfectly recoilless for realistic beam shapes and finite pulse times. Likewise, the interferometer scheme in this section is modified by additional momentum kicks during the \(\pi\)-pulses, see Fig. 5.
We can calculate the final intensity analogously to the previous section and set \(T_{3}-T_{2}=T_{4}-T_{1}=T\), so that we obtain the visibilities and phases
\[\mathcal{V}_{g}=1,\quad\Delta\phi_{g}=2gk_{\,p}\delta T\big{(}\delta T+T+ \epsilon T\big{)} \tag{97}\]
for an initial ground state and
\[\mathcal{V}_{e}=1,\quad\Delta\phi_{e}=2gk_{\,p}\delta T\big{(}\delta T+T- \epsilon T\big{)} \tag{98}\]
for an initial excited state. For this interferometer scheme we find that the final pulse-time effects cancel each other already in the non-differential phase due to the symmetry of the interferometer. The results [52] can thus be reproduced immediately, i.e. the differential signals read
\[\Delta\phi_{+}=\Delta\phi_{g}+\Delta\phi_{e}=4gk_{\,p}(\delta T+T) \delta T, \tag{99a}\] \[\Delta\phi_{-}=\Delta\phi_{g}-\Delta\phi_{e}=4\epsilon gk_{\,p}T \delta T, \tag{99b}\]
separating the effects of UFF and UGR.
## VI Conclusion
Very-large baseline atom interferometry is built upon exploiting the beneficial scaling of the interferometer signal in terms of the enclosed space-time area. However, due to the resulting long free-fall and interaction times, imperfections and perturbations act over much longer timescales and even small, accumulated effects can lead to a loss of visibility in the interference pattern in such devices. In case of (local) magnetic and gravitational field gradients [108; 109] these effects have to be studied in detail for upcoming large baseline setups in addition to the already available results for e.g. gravity gradients or rotations [110; 111; 112; 8; 8; 89; 100; 8;
## Author Declarations
### Conflict of Interest Statement
The authors have no conflicts to disclose.
### Author Contributions
**Gregor Janson** Conceptualization (support); Formal analysis (lead); Validation (equal); Investigation (lead); Methodology (equal); Visualization (equal); Writing - original draft (lead); Writing - review and editing (equal). **Alexander Friedrich** Conceptualization (lead); Formal analysis (support); Validation (equal); Investigation (support); Methodology (equal); Visualization (equal); Writing - review and editing (equal); Supervision (equal). **Richard Lopp** Conceptualization (support); Formal analysis (support); Validation (equal); Investigation (support); Methodology (equal); Writing - original draft (Support); Writing - review and editing (equal); Supervision (equal).
## Data Availability
The data that support the findings of this study are available within the article.
## Appendix A Different-Time Commutators of \(\hat{H}_{3}\)
The Hamiltonian derived in Sec. IV.1 is of the form
\[\hat{H}_{3}(t)=\hat{H}_{0}(t)\,\mathds{1}+\sum_{j=\{x,y,z\}}\hat{H}_{j}(t) \hat{\sigma}_{j} \tag{10}\]
with
\[\hat{H}_{0}(t)= \frac{\hbar}{2}\omega_{\text{AC}}^{(+)}(0)\left(\frac{\hat{Z}_{H}^ {2}(t)}{z_{R}^{2}}+2\frac{\hat{\varrho}_{H}^{2}(t)}{w_{0}^{2}}\right), \tag{11a}\] \[\hat{H}_{x}(t)= -\frac{\hbar\Omega(0)}{2}\left(\frac{\hat{Z}_{H}^{2}(t)}{z_{R}^{2 }}+2\frac{\hat{\varrho}_{H}^{2}(t)}{w_{0}^{2}}\right),\] (11b) \[\hat{H}_{y}(t)= \frac{\hbar}{2}\left[\frac{\mathbf{k}\cdot\hat{\mathbf{P}}_{H}(t) }{M}-\omega_{\text{AC}}^{(-)}(0)\left(\frac{\hat{Z}_{H}^{2}(t)}{z_{R}^{2}}+2 \frac{\hat{\varrho}_{H}^{2}(t)}{w_{0}^{2}}\right)\right]\] \[\times\sin\left(\Omega(0)t\right),\] (11c) \[\hat{H}_{z}(t)= \frac{\hbar}{2}\left[\frac{\mathbf{k}\cdot\hat{\mathbf{P}}_{H}(t) }{M}-\omega_{\text{AC}}^{(-)}(0)\left(\frac{\hat{Z}_{H}^{2}(t)}{z_{R}^{2}}+2 \frac{\hat{\varrho}_{H}^{2}(t)}{w_{0}^{2}}\right)\right]\] \[\times\cos\left(\Omega(0)t\right), \tag{11d}\]
where we used \(\Omega(0)=-\Omega_{E}\Omega_{B}^{*}/(2\Delta)\) and \(\omega_{\text{AC}}^{(*)}(0)=(|\Omega_{E}|^{2}\pm|\Omega_{B}|^{2})/(2\Delta)\). Furthermore, we set the overall detuning \(\delta=-\omega_{k}-\omega_{\text{AC}}^{(-)}(0)\) compensating the differential AC Stark shift \(\omega_{\text{AC}}^{(-)}(0)\) and the recoil frequency \(\omega_{k}\). Because we can neglect the time ordering in the time-evolution operator \(\hat{U}_{3}(t)\) if the Hamiltonian \(\hat{H}_{3}\) commutes at different times \(t_{1}\) and \(t_{2}\), we consider the different-time commutator
\[\left[\hat{H}_{3}(t_{1}),\hat{H}_{3}(t_{2})\right]=\left[\hat{H} _{0}(t_{1}),\hat{H}_{0}(t_{2})\right]\mathds{1}\] \[+\sum_{j=\{x,y,z\}}\left[\hat{H}_{j}(t_{1}),\hat{H}_{j}(t_{2}) \hat{\sigma}_{j}\right]\] \[+\sum_{j,k=\{x,y,z\}}\left[\hat{H}_{j}(t_{1})\hat{\sigma}_{j}, \hat{H}_{k}(t_{2})\hat{\sigma}_{k}\right]\] \[= \sum_{\mu=\{0,x,y,z\}}\left[\hat{H}_{\mu}(t_{1}),\hat{H}_{\mu}(t _{2})\right]\mathds{1} \tag{12}\] \[+\sum_{j=\{x,y,z\}}\left[\hat{H}_{0}(t_{1}),\hat{H}_{j}(t_{2}) \hat{\sigma}_{j}\right]\] \[+\sum_{j=\{x,y,z\}}\left[\hat{H}_{j}(t_{1})\hat{\sigma}_{j}, \hat{H}_{0}(t_{2})\right]\] \[+\sum_{j,k=\{x,y,z\}}\left\{\hat{H}_{j}(t_{1}),\hat{H}_{k}(t_{2} )\right\}_{+}\mathsf{i}\epsilon_{jkl}\hat{\sigma}_{l},\]
where we have used \(\hat{\sigma}_{j}\hat{\sigma}_{k}=\delta_{jk}\mathds{1}+\mathsf{i}\epsilon_{jkl} \hat{\sigma}_{l}\) and \(\epsilon_{jkl}\) is the Levi-Civita symbol. The Heisenberg trajectories for the mean Hamiltonian Eq. (54) can be calculated via the Heisenberg equations of motion
\[\frac{\mathrm{d}\hat{\mathbf{R}}_{H}}{\mathrm{d}t}=\frac{\mathrm{i}}{\hbar}[ \hat{\hat{H}},\hat{\mathbf{R}}_{H}]\ \ \ \text{and}\ \ \frac{\mathrm{d}\hat{\mathbf{P}}_{H}}{\mathrm{d}t}=\frac{\mathrm{i}}{\hbar}[ \hat{\hat{H}},\hat{\mathbf{P}}_{H}], \tag{13}\]
which leads to
\[\hat{\mathbf{R}}_{H}(t)=\left(\frac{\hat{\mathbf{P}}}{M}+\frac{\hbar\mathbf{k}}{2 M}\right)t+\hat{\mathbf{R}},\quad\hat{\mathbf{P}}_{H}(t)=\hat{\mathbf{P}}. \tag{14}\]
The COM position operators are of the form
\[\frac{\hat{X}_{H}(t)}{w_{0}} =\frac{\hat{P}_{x}}{Mw_{0}}t+\frac{\hat{X}}{w_{0}}=\frac{\hat{P}_{ x}}{Mw_{0}\Omega(0)}\tau+\frac{\hat{X}}{w_{0}}, \tag{15a}\] \[\frac{\hat{P}_{H}(t)}{w_{0}} =\frac{\hat{P}_{y}}{Mw_{0}}t+\frac{\hat{Y}}{w_{0}}=\frac{\hat{P}_{ y}}{Mw_{0}\Omega(0)}\tau+\frac{\hat{Y}}{w_{0}},\] (15b) \[\frac{\hat{Z}_{H}(t)}{z_{R}} =\left(\frac{\hat{P}_{z}}{Mz_{R}}+\frac{\hbar kz}{2Mz_{R}}\right) t+\frac{\hat{Z}}{z_{R}}\] (15c) \[=\left(\frac{\hat{P}_{z}}{Mz_{R}\Omega(0)}+\frac{\hbar}{Mz_{R}^ {2}\Omega(0)}\right)\tau+\frac{\hat{Z}}{z_{R}}\]
and the momentum operator writes
\[\frac{\mathbf{k}\cdot\hat{\mathbf{P}}_{H}(t)}{M}=\frac{2\hat{P}_{z,H}(t)}{Mz_{R }}=\frac{2\hat{P}_{z}}{Mz_{R}}, \tag{16}\]
where we have introduced the dimensionless time \(\tau=\Omega(0)t\) and used
\[\mathbf{k}=\nabla\Phi|_{\mathbf{R}=0}=\frac{2}{z_{R}}\mathbf{e}_{z}. \tag{17}\]
We can approximate all radial operators to the order \(\mathcal{O}(\hat{\varrho}^{2}/w_{0}^{2})\) and all \(Z\)-components to the order \(\mathcal{O}(\hat{Z}^{2}/z_{R}^{2})\) under the conditions
\[\begin{split}\left\|\frac{\hat{\varrho}^{2}}{w_{0}^{2}}\right\|\ll 1,&\left\|\frac{\hat{\varrho}^{2}}{z_{R}^{2}}\right\|\ll 1,\\ \left\|\frac{\hat{P}_{\rho}}{Mw_{0}(\Omega(0)}\right\|\ll 1,& \left\|\frac{\hat{P}_{\rho}}{Mz_{R}(0)}\right\|\ll 1.\end{split} \tag{21}\]
That is, the state of the atom has only non-negligible overlap with generalized COM position and momentum eigenstates (in radial and \(Z\)-direction, respectively) that correspond to scales much smaller than the characteristic scales of the laser. In other words, in position and momentum, the atomic wave function is sufficiently localized with respect to the laser beam. Furthermore, the AC Stark shifts \(\omega_{\mathcal{AC}}^{(z)}(0)\) are of the same order of magnitude as the two-photon Rabi frequency \(\Omega(0)\). Table 1 summarizes the order of magnitude of the relevant physical quantities used for the approximations in this study. Inserting the Heisenberg trajectories and omitting all terms that go beyond our approximation we finally obtain
\[\begin{split}\left[\hat{H}_{\mu}(\tau_{1}),\hat{H}_{\mu}(\tau_{2} )\right]\approx 0&\forall\mu\in\{0,x,y,z\},\\ \left[\hat{H}_{0}(\tau_{1}),\hat{H}_{j}(\tau_{2})\hat{\sigma}_{j} \right]\approx 0\approx\left[\hat{H}_{j}(\tau_{1})\hat{\sigma}_{j},\hat{H}_{0}( \tau_{2})\right]\;\forall j\in\{x,y,z\}.\end{split} \tag{22}\]
Note, that for the last sum in Eq. (20) we can use the symmetry of the anti-commutator and the anti-symmetry of the Levi-Civita symbol leading to
\[\left[\hat{H}_{3}(t_{1}),\hat{H}_{3}(t_{2})\right]\approx 0, \tag{23}\]
i.e. the Hamiltonian \(\hat{H}_{3}\) is (quasi-)commuting at different times over the time-scales we are interested in.
|
2309.14340 | Electronic properties, correlated topology and Green's function zeros | There is extensive current interest about electronic topology in correlated
settings. In strongly correlated systems, contours of Green's function zeros
may develop in frequency-momentum space, and their role in correlated topology
has increasingly been recognized. However, whether and how the zeros contribute
to electronic properties is a matter of uncertainty. Here we address the issue
in an exactly solvable model for Mott insulator. We show that the Green's
function zeros contribute to several physically measurable correlation
functions, in a way that does not run into inconsistencies. In particular, the
physical properties remain robust to chemical potential variations up to the
Mott gap as it should be based on general considerations. Our work sets the
stage for further understandings on the rich interplay among topology, symmetry
and strong correlations. | Chandan Setty, Fang Xie, Shouvik Sur, Lei Chen, Maia G. Vergniory, Qimiao Si | 2023-09-25T17:59:55Z | http://arxiv.org/abs/2309.14340v3 | # Electronic properties, correlated topology and Green's function zeros
###### Abstract
There is extensive current interest about electronic topology in correlated settings. In strongly correlated systems, contours of Green's function zeros may develop in frequency-momentum space, and their role in correlated topology has increasingly been recognized. However, whether and how the zeros contribute to electronic properties is a matter of uncertainty. Here we address the issue in an exactly solvable model for Mott insulator. We show that the Green's function zeros contribute to several physically measurable correlation functions, in a way that does not run into inconsistencies. In particular, the physical properties remain robust to chemical potential variations up to the Mott gap as it should be based on general considerations. Our work sets the stage for further understandings on the rich interplay among topology, symmetry and strong correlations.
+
Footnote †: preprint: APS/123-QED
## I Introduction
In noninteracting systems, electronic topology is formulated within band theory. Recent years have seen systematic development on how symmetries of crystalline lattices constrain topology and how they can be utilized to search for new topological materials [1; 2; 3; 4; 5; 6; 7]. In interacting settings, symmetry constraints have been considered in terms of Green's functions, either through a renormalized particle picture [8; 9; 10] in the form of a topological Hamiltonian [11; 12] or by recognizing that the eigenvectors of the exact Green's function in a many-body system form a representation of lattice space group [13]. The latter approach, which was introduced in the context of Weyl-Kondo semimetal [14; 15; 16; 17] and provided the theoretical basis for its robustness [18], has led to the realization [19] that Green's function zeros [20] of an interacting lattice system obey symmetry constraints; accordingly, the Green's function zeros participate in the formation of correlated electronic topology, just as Green's function poles do. Concurrently, the role of Green's function zeros has been studied in the context of the edge spectrum of interacting topological insulators [21].
Quasiparticles represent the low energy excitations of a Fermi liquid. The quasiparticles are conveniently described in terms of a Green's function approach, in which they appear as poles of the single particle Green's function [22]. They are characterized by the quasiparticle weight - a quantity that plays a central role in the microscopic Fermi liquid theory. When electron correlations are strong, the quasiparticle weight becomes very small, and its affect on observables is well documented; exemplary settings can be found in Refs. [23; 24; 25; 26]. In the extreme correlation limit, the quasiparticle weight may vanish leading to a breakdown of the Fermi liquid. The precise manner in which thermodynamic and transport properties are affected by interactions in this limit is a central question in the field of correlation physics.
Mott insulators (MIs) occupy a special place in the physics of strongly correlated systems [27]. In MIs, the quasiparticle weight as well as the single particle Green's function vanish to yield Green's function zeros along certain frequency-momentum contours. There has been considerable debate regarding the role of zeros on the physical charge and correlation functions [28; 29; 20; 22; 20].
Like poles across a Fermi surface, the real part of the Green's function changes sign across a zero surface; hence they contribute to the Luttinger volume [22; 20] and single particle winding numbers [33]. Similarly, zeros are key to the generalization of index theorems [28] to interacting settings, and play an essential role in understanding symmetric mass generation [37; 41]. In interacting topological insulators, it was argued that zeros allow for topological transitions to occur without closing the boundary gap [30; 31; 42].
While these properties raise the prospect of the zeros being experimentally measurable, their relationship to observables has been tenuous at best. For example, it is required that physical properties are independent of chemical potential variations up to the Mott scale at zero temperature [29] despite zeros occurring in the insulating gap. In addition, when determining the zeros' contribution to physically measurable correlations, it is crucial to keep track of conservation laws and the associated Ward identities. It is important to address these issues in order to properly assess the contributions of Green's function zeros to physical properties. Furthermore, Green's function zeros are also difficult to probe experimentally. By definition, the vanishing spectral weight makes their detection challenging. Sharpening the theoretical understanding about how the Green's function zeros affect physical correlation functions is expected to be important for probing the zeros experimentally in the future.
In this work, we argue that certain robust physical properties of electron systems can indeed capture the contributions of the Green's function zeros and, importantly, they do so in a way that is consistent with the aforementioned expectations. These properties can be exploited to indirectly infer characteristics that may otherwise be elusive to conventional probes. We demonstrate our claims by considering an exactly solvable model of a Mott insulator (MI) [43] as a prototypical example where extreme correlation effects are realized [27].
More specifically, we compute here the charge and current response functions in accordance with the Ward identities and demonstrate how zeros manifest in physical observables. We begin by reexamining the relationship between the Luttinger volume and self-energy in generic interacting settings. Using this relation, we provide a simple picture that elucidates how zeros are needed to preserve charge conservation while maintaining robustness of the total charge to changes in the chemical potential of the order of the Mott gap. In particular, the total charge contains a term associated with the Luttinger volume and a "backflow" term. We then evaluate the Hall response for a MI starting from non-interacting Chern bands and show that it likewise contains two contributions. The first is a quantized topological term proportional to the three-dimensional winding number \(N_{3}\) [44; 45] with contributions from Green's function zeros. The second is a previously unrecognized non-quantized backflow term essential to preserve charge conservation. In each case, the two terms combine to ensure that the total quantity is independent of changes to the chemical potential within the Mott gap _despite_ containing contributions from zeros.
## II Model
As an exactly solvable model where Green's function zeros occur, we consider a generic multi-band version of the Hatsugai-Kohmoto (HK) model [43]. We write the total Hamiltonian as
\[H = H_{0}+H_{I} \tag{1}\] \[H_{0} = \sum_{\mathbf{k},\alpha\sigma,\beta\sigma^{\prime}}h_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})c^{\dagger}_{\mathbf{k}\alpha\sigma}c_{\mathbf{k}\beta\sigma^{\prime}} \tag{2}\] \[H_{I} = \frac{U}{2}\sum_{\mathbf{k}\alpha}(n_{\mathbf{k}\alpha\uparrow}+n_{\mathbf{k}\alpha\downarrow}-1)^{2}\,. \tag{3}\]
Here \(c^{\dagger}_{\mathbf{k}\alpha\sigma}\) is the electron creation operator at momentum \(\mathbf{k}\), orbital \(\alpha\) and spin \(\sigma\). The hopping matrix elements between states with orbital indices \(\alpha,\beta\) and spin indices \(\sigma,\sigma^{\prime}\) are denoted by \(h_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})\), \(U\) is a four-fermion electron-electron interaction that is local in momentum space but highly non-local in real space, and \(n_{\mathbf{k}\alpha\sigma}\) is the number operator. Because the interaction is local in momentum space, the Hamiltonians at different \(\mathbf{k}\) points decouple, \(H=\sum_{\mathbf{k}}H_{\mathbf{k}}\). Later in the paper, we will also use a single band version of Eq. 1 by replacing the kinetic hopping matrix by a single dispersion \(\xi(\mathbf{k})=\epsilon(\mathbf{k})-\mu\) where \(\epsilon(\mathbf{k})\) is the band energy and \(\mu\) the chemical potential. The interaction Hamiltonian then contains a single repulsive term between opposite spins at a specific momentum point. Accordingly, the orbital indices \(\alpha,\beta\) are suppressed in the electron creation and annihilation operators in Eq. 1 for the one band model. We will explicitly specify when this is the case.

The electronic and spectral properties of the Hamiltonian Eq. 1 for single [43; 46; 43; 19; 34] and multi-band dispersions [47; 40; 19] have been studied previously, but we recall some key properties. First, irrespective of the specific tight binding Hamiltonian at hand, Eq. 1 captures a correlated metal to a fully gapped Mott insulator transition for interaction strength \(U\) comparable to or larger than the non-interacting bandwidth (\(W\)). Second, the Green's function can be obtained exactly, and in the limit of strong interactions compared to \(W\) and zero temperature, a key property of the Green's function is the existence of contours of dispersive zeros in the Mott gap. These contours are a consequence of destructive cancellation of electron addition- and removal-like transitions with equal and opposite energy transfers [19]. Further, lattice symmetries of the Hamiltonian \(H\) constrain spectral degeneracies at high symmetry points that operate on both poles and zeros of the Green's function. Hence, Eq. 1 offers a platform to explore topological properties in the presence of interactions non-perturbatively even when there is a loss of quasiparticles. Additional properties of \(H\) for the case when the non-interacting bands have a non-trivial spin-Hall Chern number are discussed in Ref. [19]. Owing to these features, we use Eq. 1 as a starting point for our analysis.
In section III and associated Fig. 2, it will suffice for us to work with a single band version of Eq. 1. We will use a quadratic band dispersion of the form \(\xi(\mathbf{k})=k^{2}-\mu\) where we denote \(k\) as the magnitude of \(\mathbf{k}\). In section IV, we will work with a multi-orbital tight-binding model with chern bands to compute the conductivity. In this case, the matrix elements \(h_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})\) are defined later in Eq. 41.
In the course of our discussion, we will also use the atomic limit of the Hubbard model to illustrate common features of certain conclusions. To establish notation, let us denote \(n_{i\sigma}\) as the number operator at real space site \(i\) and spin \(\sigma\), and \(u\) as the onsite Coulomb interaction. In this limit, we write the Hubbard Hamiltonian as
\[H_{u}=u\sum_{i}n_{i\uparrow}n_{i\downarrow}-\mu\sum_{i\sigma}c^{\dagger}_{i \sigma}c_{i\sigma} \tag{4}\]
where \(\mu\) is the onsite chemical potential and \(c^{\dagger}_{i\sigma}\) is the electron creation operator at site \(i\) and spin \(\sigma\). In this limit, one obtains an atomic Mott insulator where the single site Green's function acquires dispersionless zeros when the probe frequency equals the negative of the chemical potential [27]. Thus certain aspects and properties of the Hamiltonian Eq. 1 can be benchmarked against the physics of the Hubbard model. We now examine how physical properties are affected by the presence of zeros in the Green's function in Eqs. 1 and 4.
## III Total Charge
Before we study the specific case of Eq. 1, we begin by expressing the total particle number in terms of Green's function singularities [33]. With knowledge of the interacting Green's function \(G(z)\), the total particle number \(N\) can be determined from the following equation [48; 49]:
\[N=\frac{1}{\beta}\sum_{\omega_{n}}\mathrm{Tr}[G(i\omega_{n})]e^{ i\omega_{n}\eta}=\oint\frac{dz}{2\pi i}n_{f}(z)\,\mathrm{Tr}[G(z)]\,, \tag{5}\]
where \(\omega_{n}\) is the fermionic Matsubara frequency, \(\beta\) the inverse temperature, \(\eta\) is an infinitesimally small positive number, \(n_{f}(z)\) is the Fermi function, and the contour of integration encloses the Matsubara frequencies along the imaginary axis. To extract the topological winding characteristics of the particle number [33], we note that the total Green's function can be written in terms of the non-interacting Green's function \(G_{0}(i\omega_{n})\) and self-energy \(\Sigma(i\omega_{n})\) through the Dyson equation \(G(i\omega_{n})^{-1}=G_{0}(i\omega_{n})^{-1}-\Sigma(i\omega_{n})\). Using this, we can rewrite
\[G(z)=G(z)\frac{\partial G(z)^{-1}}{\partial z}+G(z)\frac{ \partial\Sigma(z)}{\partial z}, \tag{6}\]
and as a result the particle number \(N\) takes the form
\[N = \oint\frac{dz}{2\pi i}n_{f}(z)\left[\frac{\partial\ln\det G(z)^{-1}}{\partial z}+\mathrm{Tr}\left(G(z)\frac{\partial\Sigma(z)}{\partial z}\right)\right] \tag{7}\] \[\equiv v_{l}-\delta v.\]
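The split in Eq. 7 rests on the identity of Eq. 6, which can be checked symbolically for a scalar (single-band) Green's function. The snippet below is only a consistency check of the algebra; the placeholder level \(h\) and the unspecified self-energy \(\Sigma(z)\) are arbitrary.

```python
import sympy as sp

z = sp.symbols("z")
h = sp.symbols("h", real=True)            # a placeholder single-particle energy
Sigma = sp.Function("Sigma")(z)           # an arbitrary self-energy

G = 1 / (z - h - Sigma)                   # Dyson equation with G0(z)^{-1} = z - h
rhs = G * sp.diff(1 / G, z) + G * sp.diff(Sigma, z)   # right-hand side of Eq. 6
print(sp.simplify(G - rhs))               # 0: Eq. 6 holds identically
```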
The first term in Eq. 7 denoted \(v_{l}\) is traditionally defined as the Luttinger volume, and the second backflow term in Eq. 7 denoted \(\delta v\) is its deviation from the total particle number. The Luttinger theorem states that \(\lim_{\beta\rightarrow\infty}N=\lim_{\beta\rightarrow\infty}v_{l}\). The theorem holds when particle-hole symmetry is preserved and we will see below that away from the particle-hole symmetric filling, the Luttinger theorem can be violated and the backflow term \(\lim_{\beta\rightarrow\infty}\delta v\neq 0\). To further simplify Eq. 7, we utilize analytical properties of the Green's function. In particular, we note that the determinant of the single particle Green's function can be decomposed into products of poles and zeros [30]
\[\det G(z)=\frac{\prod_{i=1}^{n_{Z}}(z-\zeta_{i})}{\prod_{i=1}^{n_{P}}(z-\pi_{i})}, \tag{8}\]
where \(\zeta_{i}\) (\(n_{Z}\)) and \(\pi_{i}\) (\(n_{P}\)) are the locations (number) of zeros and poles of the Green's function determinant. Substituting the factorization into the Luttinger volume gives the finite-temperature expression
\[v_{l}=\sum_{i=1}^{n_{P}}n_{F}(\pi_{i})-\sum_{i=1}^{n_{Z}}n_{F}(\zeta_{i}). \tag{9}\]
At zero temperature, each of the poles (zeros) below the Fermi energy contributes one (negative one) count to the Luttinger volume, while those at the Fermi energy contribute a \(\frac{1}{2}\) \((-\frac{1}{2})\) count, since the Fermi function reduces to the Heaviside step (Kronecker delta) function for energies below (at) the Fermi energy. We can thus rewrite the Luttinger volume as
\[v_{l} = \left(\sum_{i}^{n_{P}}\Theta(-\pi_{i})-\sum_{i}^{n_{Z}}\Theta(-\zeta_{i})\right) \tag{10}\] \[+ \frac{1}{2}\left(\sum_{i}^{n_{P}}\delta_{0,\pi_{i}}-\sum_{i}^{n_{Z}}\delta_{0,\zeta_{i}}\right).\]
Figure 1: Illustration of Green’s function zeros’ contribution to the total particle count and failure of pole counting. (a) Splitting of two occupied electronic states (red and green denote spin) at the Fermi energy into upper and lower Hubbard bands (UHB and LHB respectively) due to the repulsive Coulomb interaction \(u\) for the cases of \(\mu>0\), \(\mu=0\) and \(\mu<0\) in the atomic limit of the Hubbard model. The blue (dashed) line denotes the chemical potential (Green’s function zeros). The vertical axis is frequency. (b) Tables showing the difference between the numbers of poles and zeros (# of poles - # of zeros) for the three cases of \(\mu>0\), \(\mu=0\) and \(\mu<0\). The rows \(n_{<}(f)\), \(n_{0}(f)\) label (# of poles - # of zeros) below and at the chemical potential respectively of the matrix \(f\). “#” in the third row denotes the total contribution to the formula \(N=v_{l}-\delta v\) in Eq. 7 using Eqs. 16, 11 and 15. The first two columns label the cases of non-interacting and interacting Green’s function respectively and the third denotes the ratio of their determinants \(R(z)\) (see Eq. 12). In each of the three cases, \(N=v_{l}-\delta v\) is satisfied and the particle number is conserved despite variations of the Luttinger volume \(v_{l}\) and its backflow deviation \(\delta v\).
For notational simplicity and later discussions, we will denote the difference between the number of poles and zeros below [at] the chemical potential for the determinant of the matrix \(f\) as \(n_{<}(f)\) [\(n_{0}(f)\)]. Thus \(v_{l}\) can be rewritten succinctly as
\[v_{l}=n_{<}(G)+\frac{1}{2}n_{0}(G). \tag{11}\]
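Equations 9-11 amount to straightforward bookkeeping of pole and zero locations. A small helper along the following lines makes the counting explicit; the energies are measured from the chemical potential, and the example values at the end are purely hypothetical.

```python
import numpy as np

def luttinger_volume(poles, zeros, beta=None, tol=1e-9):
    """Eq. 9 at finite temperature; Eqs. 10-11 in the beta -> infinity limit.

    `poles` and `zeros` list the singularity energies of det G measured from the
    chemical potential. Applied to the poles and zeros of R(z) instead, the same
    counting returns the backflow term delta_v of Eq. 15.
    """
    poles, zeros = np.asarray(poles, float), np.asarray(zeros, float)
    if beta is not None:                        # finite-T Fermi-function weighting (Eq. 9)
        n_F = lambda e: 1.0 / (np.exp(beta * e) + 1.0)
        return n_F(poles).sum() - n_F(zeros).sum()
    # zero temperature: Theta(-e) for energies below, 1/2 at the chemical potential (Eq. 10)
    count = lambda e: np.sum(e < -tol) + 0.5 * np.sum(np.abs(e) <= tol)
    return count(poles) - count(zeros)

# hypothetical example: two poles well below and two zeros at the chemical potential
print(luttinger_volume(poles=[-3.0, -3.0], zeros=[0.0, 0.0]))            # 2 - 1 = 1
print(luttinger_volume(poles=[-3.0, -3.0], zeros=[0.0, 0.0], beta=300))  # -> 1 as T -> 0
```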
Notice that when there are no zeros in the Green's function, the Luttinger volume reduces to its well-known expression, with poles at the Fermi energy contributing half as much as those below it, as expected. We can similarly simplify the expression for the deviation from the Luttinger volume \(\delta v\). Defining the ratio of the determinants of the non-interacting and interacting Green's functions as
\[R(z)=\frac{\det\!G_{0}(z)}{\det\!G(z)}, \tag{12}\]
we can rewrite the deviation as an expression similar to the Luttinger volume but in terms of analytical properties of \(R(z)\)[33]. Choosing a contour of integration that encloses the Matsubara frequencies along the imaginary axis, we have the backflow term [33]
\[\delta v = -\oint\frac{dz}{2\pi i}n_{f}(z)\operatorname{Tr}\left(G(z)\frac{ \partial\Sigma(z)}{\partial z}\right) \tag{13}\] \[= +\oint\frac{dz}{2\pi i}n_{f}(z)\frac{\partial\ln R(z)}{\partial z}.\]
We now define \(Z_{i}\) (\(N_{Z}\)) and \(\Pi_{i}\) (\(N_{P}\)) as the locations (number) of zeros and poles of \(R(z)\). We then obtain an expression for the backflow \(\delta v\) similar to that of \(v_{l}\) in Eq. 10 as
\[\delta v = \left(\sum_{i}^{N_{P}}\Theta(-\Pi_{i})-\sum_{i}^{N_{Z}}\Theta(-Z_ {i})\right) \tag{14}\] \[+ \frac{1}{2}\left(\sum_{i}^{N_{P}}\delta_{0,\Pi_{i}}-\sum_{i}^{N _{Z}}\delta_{0,Z_{i}}\right),\] \[= n_{<}(R)+\frac{1}{2}n_{0}(R). \tag{15}\]
In non-interacting systems, \(G_{0}(z)=G(z)\) and by definition \(R(z)=1\) leading to \(\delta v=0\). The total particle number is fixed by that of the non-interacting electron Green's function and takes the form
\[N=n_{<}(G_{0})+\frac{1}{2}n_{0}(G_{0}). \tag{16}\]
In a Fermi liquid and related phases, the determinant of the Green's function has no zeros, so there exists a one-to-one mapping between the poles of \(G_{0}(z)\) and \(G(z)\): the absence of zeros leaves the pole structure of the Green's function determinant intact. Hence the Luttinger theorem continues to be satisfied and \(\delta v=0\). We are now in a position to calculate the particle number for different cases of interest, including the Hamiltonians described in Eqs. 1 and 4.
### Insights from the atomic limit
_Failure of pole counting:_ To elucidate the role of quasiparticle loss on the particle number, we begin with the atomic limit of the Hubbard model in Eq. 4. We argue that in this limit, counting of poles is insufficient to capture the total particle count. This fact is already reflected in Eqs. 7, 10, and 15, but here we give a simplified picture to help demonstrate a key notion: Green's function zeros in the Mott gap contribute to the total particle number while also keeping it invariant under changes to the chemical potential up to the order of the gap. We argue that this holds true for any physically measurable property. Later in the paper, we will further reiterate this simple principle in the context of a topological Hall response in the Mott phase obtained from the Hamiltonian in Eq. 1.
We work with the Hubbard Hamiltonian by setting the kinetic energy and chemical potential to zero, i.e.,
\[H_{u}=u\sum_{i}n_{i\uparrow}n_{i\downarrow} \tag{17}\]
where \(i\) runs over the various sites of the lattice. In the absence of interactions, the determinant of the non-interacting Green's function per site is given by \(\det G_{0}(z)=\frac{1}{z^{2}}\), where the quadratic power in the denominator is due to the spin degree of freedom. This leads to _two_ poles located exactly at zero energy as displayed in Fig. 1 (a), with a weight of one-half each. In the presence of interactions and particle-hole symmetry, the determinant of the interacting Green's function \(G_{u}(z)\) per site is given by
\[\det G_{u}(z)=\left(\frac{4z}{4z^{2}-u^{2}}\right)^{2}, \tag{18}\]
where again the overall quadratic power is from the spin degree of freedom. Due to the pole in the self-energy \(\Sigma(z)=\frac{u^{2}}{4z}\), the interacting Green's function has _four_ poles - _two_ poles each above and below the chemical potential - and _two_ zeros at the chemical potential. This is shown below the center arrow in Fig. 1(a). As a result, there is a doubling of the number of poles _below_ the Fermi energy each with a weight of unity. Hence counting poles by themselves in the atomic limit of the Hubbard model cannot be sufficient to account for a fixed total particle number. A comparison of the determinants of \(G_{0}(z)\) and \(G_{u}(z)\) readily lays out the reason for the failure of pole counting - a singular self-energy (Green's function zero) changes the order of the Green's function pole structure. Thus, while accounting for singularities of Green's function in the Hubbard model, it is essential to count zeros for preservation of the total particle number [22].
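This pole-and-zero structure follows directly from the quoted self-energy and can be verified symbolically. The sketch below works per spin; squaring the result reproduces the determinant of Eq. 18.

```python
import sympy as sp

z = sp.symbols("z")
u = sp.symbols("u", positive=True)

Sigma = u**2 / (4 * z)                  # atomic-limit self-energy quoted above
G = sp.cancel(1 / (z - Sigma))          # Dyson equation with G0(z) = 1/z (per spin)
print(G)                                # 4*z/(4*z**2 - u**2)

num, den = sp.fraction(G)
print(sp.solve(num, z))                 # [0]          -> one zero at the chemical potential
print(sp.solve(den, z))                 # [-u/2, u/2]  -> poles split across the Mott gap

print(sp.simplify(G**2 - (4 * z / (4 * z**2 - u**2))**2))   # 0: matches Eq. 18 per site
```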
_Role of zeros:_ From dimensionality arguments, the singularity of the self-energy must naturally be involved to account for the total Luttinger count and electron number. Eqs. 7, 10, and 15 precisely capture how Green's
function zeros must be included to preserve the total particle number in the presence of interactions.
To better clarify the role of zeros, we compute the total particle number per site using Eqs. 7, 10, and 15 in the atomic limit of the Hubbard model (Eq. 4). We first work in the zero chemical potential limit and later consider scenarios where it is non-zero. In the absence of interactions, we determine \(v_{l}\) and \(\delta v\) from the determinant of the non-interacting Green's function \(\det G_{0}(z)=z^{-2}\). Since there are two poles at zero energy (Fig. 1 (a)), \(v_{l}=\frac{1}{2}(2)=1\), whereas \(\ln R(z)=0\) leading to \(\delta v=0\); hence \(N=v_{l}-\delta v=1\) per site. In the presence of interactions, there are two poles below the Fermi energy and two zeros at the Fermi energy in the determinant Eq. 18 (Fig. 1 (a)). We therefore have \(v_{l}=2-\frac{1}{2}(2)=1\). Whereas, since \(R(z)^{-1}=\det G_{u}(z)/\det G_{0}(z)=\frac{4z^{4}}{(4z^{2}-u^{2})^{2}}\), we have two poles below and four zeros at the Fermi energy giving \(\delta v=2-\frac{1}{2}(4)=0\); hence again \(N=v_{l}-\delta v=1\) per site. Thus we see that in the atomic limit of the Hubbard model, the Luttinger theorem holds when \(\mu=0\) (particle-hole symmetry), and to recover the correct particle number, Green's function zeros must contribute to the count.
Moving away from the \(\mu=0\) limit, we shift chemical potential within the Mott gap away from the particle-hole symmetric point. For the case when \(\mu<0\) in the presence of interactions, there are two poles (no singularities) below (at) the chemical potential (Fig. 1 (a)). Hence we see that the Luttinger count is \(v_{l}=2+\frac{1}{2}(0)=2\) whereas the deviation is \(\delta v=2+\frac{1}{2}(-2)=1\), hence satisfying the same particle number condition \(N=v_{l}-\delta v=1\). Similarly when \(\mu>0\), there are two poles and zeros each (no singularities) below (at) the chemical potential (Fig. 1 (a)). We therefore have \(v_{l}=0+\frac{1}{2}0=0\) and \(\delta v=0+\frac{1}{2}(-2)=-1\) so that the particle number \(N=v_{l}-\delta v=1\) continues to remain unchanged. A summary of these numerical evaluations for the three cases of \(\mu=0,\mu>0,\mu<0\) appears in Fig. 1 (b).
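These tabulations involve nothing more than the counting rule discussed above. The short script below reproduces them, with the (poles minus zeros) counts below and at the chemical potential taken directly from Fig. 1(b) and the cases just described.

```python
def count(n_below, n_at):
    """n_<(f) + (1/2) n_0(f) for a given (poles - zeros) bookkeeping."""
    return n_below + 0.5 * n_at

# (poles - zeros) below / at the chemical potential, per site with both spins,
# for the interacting Green's function (G) and for the ratio (R)
cases = {
    "mu = 0": {"G": (2, -2), "R": (2, -4)},
    "mu < 0": {"G": (2,  0), "R": (2, -2)},
    "mu > 0": {"G": (0,  0), "R": (0, -2)},
}
for name, c in cases.items():
    v_l, delta_v = count(*c["G"]), count(*c["R"])
    print(f"{name}:  v_l = {v_l:+.1f},  delta_v = {delta_v:+.1f},  N = {v_l - delta_v:.1f}")
# In all three cases N = v_l - delta_v = 1, even though v_l and delta_v vary separately.
```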
### The case of the Hatsugai-Kohmoto model
We can apply a similar analysis to the dispersive bands of the Hatsugai-Kohmoto model of Eq. 1. It is sufficient to consider a single-band version of the Hamiltonian to illustrate our results. Since the model is local in momentum space where the individual \(\mathbf{k}\) points are decoupled from each other, every such momentum point can be viewed as a single site Hubbard model with a different on-site energy. Hence the results from previous paragraphs for the single site Hubbard model in the atomic limit can be utilized in a straightforward manner.
We focus on the limit where the interaction strength \(U\) is larger than the bandwidth, with an occupation of \(N=1\) per momentum point (half filling). Fig. 2 shows a schematic of the spectral function in this limit. The solid lines are the upper and lower 'Hubbard-like' bands and the dashed line is the contour of Green's function zeros. The blue dots mark the intersection of the zero surface with the chemical potential and the red (green) arrow denotes a momentum point inside (outside) the zero surface at zero energy (Luttinger surface, LS). At the particle-hole symmetric point (solid blue dot), there exist two poles (two zeros) below (at) the chemical potential. Hence, as in the particle-hole symmetric case described earlier, we have \(N=v_{l}=1,\delta v=0\). Away from particle-hole symmetry, the chemical potential can be moved above or below the zero frequency (short black solid lines) for a given momentum corresponding to the blue arrow. Alternatively, the momenta can occur inside or outside the LS. For the case when the chemical potential is above zero, or equivalently, the momentum point lies inside the LS as marked by the red arrow, there are two poles and two zeros below the chemical potential. As a result, \(v_{l}=0\) but \(\delta v=-1\) so that the net particle count \(N=1\) continues to be preserved. Similarly, when the chemical potential is below zero, or equivalently, the momentum point lies outside the LS as marked by the green arrow, there are two poles below the chemical potential and we obtain \(v_{l}=2\) but \(\delta v=1\) so that the net particle count is again \(N=1\). Therefore, the particle number remains conserved for each momentum as expected for the Hamiltonian in Eq. 1 regardless of changes in chemical potential.
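The per-momentum bookkeeping can be made concrete with a short numerical sketch. It assumes the equal-weight two-pole structure of the half-filled HK Green's function in the Mott limit (poles at \(\xi(\mathbf{k})\mp U/2\) with a dispersive zero at \(\xi(\mathbf{k})\)), which follows from treating each momentum as an isolated site with level \(\xi(\mathbf{k})\); the values of \(U\) and \(\mu\) are illustrative.

```python
import numpy as np

U, mu = 10.0, 3.0                     # illustrative values; U much larger than the band scale
k = np.linspace(0.0, 2.0, 401)
xi = k**2 - mu                        # quadratic dispersion xi(k) = k^2 - mu used in this section

# assumed per-spin singularity structure at each k in the Mott limit (every k singly occupied)
lhb = xi - U / 2                      # lower Hubbard-like band (pole)
uhb = xi + U / 2                      # upper Hubbard-like band (pole)
zero = xi                             # dispersive Green's-function zero inside the gap

print("LHB below and UHB above the chemical potential for all k:",
      bool(np.all(lhb < 0) and np.all(uhb > 0)))
print("zero contour crosses omega = 0 at k =", round(float(np.sqrt(mu)), 3))

# zero-temperature Luttinger count per k (both spins), Eqs. 10-11: inside the Luttinger
# surface (xi < 0) the zero also lies below the chemical potential and cancels the lower
# pole; outside, only the lower pole counts.
v_l = np.where(zero < 0, 0, 2)
print("values taken by v_l across the band:", sorted(set(v_l.tolist())))   # [0, 2]
# Since N = v_l - delta_v = 1 at every k, the backflow term follows as delta_v = v_l - 1.
```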
Figure 2: Illustration of the upper Hubbard band (UHB), lower Hubbard band (LHB) and contour zero surface (dashed green line) of the model Hamiltonian in Eq. 1 with a single band. The blue dots denote the intersection of the zero surface with the Fermi energy (horizontal dashed gray line). The red (green) arrow denotes a \(\mathbf{k}\) point within (outside) the zero surface. The blue arrow denotes a \(\mathbf{k}\) point at the zero surface. The short solid lines above and below the right blue dot denote the cases when the chemical potential is moved slightly above and below the reference value.

The result above reconciles two seemingly conflicting notions: (1) that chemical potential changes less than or of the order of the Mott gap are not expected to affect physical properties, since the ground state is unchanged, and (2) that only "occupied" singularities (zeros or poles) contribute to the total particle number. From our analysis above, we can conclude that indeed Green's function zeros contribute to the total particle number while also keeping it invariant to chemical potential changes smaller than \(U\). This is made possible because any variation of a topological quantity due to chemical potential changes, in this case the Luttinger count \(v_{l}\), is offset by an opposite variation of a non-topological one, here the backflow deviation \(\delta v\) in Eq. 7. We will see below that a similar mechanism holds for the transverse Hall response function.
## IV Hall conductivity
### Current operators
In this section, we consider the electron transport properties in the presence of the Green's function zeros using the Kubo formula [50]. In particular, we are interested in whether the conductivity tensor can be solely represented by the exact Green's functions. The (optical) conductivity can be evaluated from the correlation functions of current operators, which are derived from the continuity equation \(\partial_{t}\rho+\nabla\cdot\mathbf{j}=0\). Combining the Heisenberg equation of motion with the continuity equation, one is able to obtain the expression for the current operator as follows:
\[\mathbf{q}\cdot\mathbf{j_{q}}=\left[H,\rho_{\mathbf{q}}\right], \tag{19}\]
in which the Fourier transformed density operator takes the following form:
\[\rho_{\mathbf{q}}=\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k},\alpha\sigma}c^{\dagger}_{\mathbf{k}+\frac{\mathbf{q}}{2}\alpha\sigma}c_{\mathbf{k}-\frac{\mathbf{q}}{2}\alpha\sigma}\,. \tag{20}\]
Here \(N_{L}\) stands for the size of the lattice. For a non-interacting Hamiltonian, the current operator is indeed the velocity operator of fermions:
\[\mathbf{J_{q}}=\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k},\alpha\sigma\beta\sigma^{\prime}}\mathbf{v}_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})c^{\dagger}_{\mathbf{k}+\frac{\mathbf{q}}{2}\alpha\sigma}c_{\mathbf{k}-\frac{\mathbf{q}}{2}\beta\sigma^{\prime}}\,, \tag{21}\]
in which \(\mathbf{v}_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})=\nabla_{\mathbf{k}}h_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})\) is the velocity matrix. However, when the interaction Hamiltonian contains terms which cannot be written as the products of local density operators (such as the HK Hamiltonian), it will also contribute to the total current operator, which we will denote as \(\mathbf{J^{\prime}_{q}}\):
\[\mathbf{q}\cdot\mathbf{J^{\prime}_{q}}=\left[H_{I},\rho_{\mathbf{q}}\right], \tag{22}\]
and the total current is the summation of the two terms \(\mathbf{j_{q}}=\mathbf{J_{q}}+\mathbf{J^{\prime}_{q}}\). Parameters in both the kinetic and interacting Hamiltonians will be affected by the external electromagnetic field \(\mathbf{A}_{-\mathbf{q}}\) via the Peierls substitution due to gauge invariance, because the creation and annihilation operators in the interacting Hamiltonian do not always reside on the same lattice sites in real space. As a consequence, the current operator \(\mathbf{J^{\prime}_{q}}\) originating from the interacting Hamiltonian is also coupled to the gauge field. Similarly, the diamagnetic current can also originate from the interaction.
### Conductivity tensor
With all these factors considered, the optical conductivity at imaginary frequency can be written as a summation of the current-current susceptibility and the diamagnetic response from both \(\mathbf{J_{q}}\) and \(\mathbf{J^{\prime}_{q}}\). More precisely, the conductivity tensor takes the following form:
\[\sigma_{ij}(\mathbf{q},i\Omega)=\frac{1}{\Omega}\left(\mathcal{D}_{ij}+\chi_{ij}( \mathbf{q},i\Omega)+\mathcal{D}^{\prime}_{ij}(\mathbf{q})+X_{ij}(\mathbf{q},i\Omega) \right), \tag{23}\]
in which \(\mathcal{D}_{ij}\) and \(\mathcal{D}^{\prime}_{ij}(\mathbf{q})\) are the diamagnetic response tensors obtained from expanding the kinetic Hamiltonian and interaction Hamiltonian to the second order of \(\mathbf{A_{q}}\), and the susceptibilities \(\chi_{ij}\) and \(X_{ij}\) are defined as follows:
\[\chi_{ij}(\mathbf{q},i\Omega)=-\int d\tau\,e^{i\Omega\tau}\langle T_{ \tau}j^{i}_{\mathbf{q}}(\tau)J^{j}_{-\mathbf{q}}(0)\rangle\,, \tag{24}\] \[X_{ij}(\mathbf{q},i\Omega)=-\int d\tau\,e^{i\Omega\tau}\langle T_{ \tau}j^{i}_{\mathbf{q}}(\tau)J^{\prime j}_{-\mathbf{q}}(0)\rangle\,. \tag{25}\]
Since the interaction Hamiltonian usually contains four-fermion terms, the susceptibility \(X_{ij}(\mathbf{q},i\Omega)\) could contain correlation functions with more than 6 fermionic operators. Using the Ward-Takahashi identity [51; 52], we are able to rewrite the susceptibility \(\chi_{ij}\) together with the diamagnetic term \(\mathcal{D}_{ij}\) as a charge-current susceptibility:
\[\mathcal{D}_{ij}+\lim_{\mathbf{q}\to 0}\chi_{ij}(\mathbf{q},\Omega)=-i \Omega\lim_{\mathbf{q}\to 0}\frac{\partial}{\partial q_{i}}\chi_{0j}(\mathbf{q},\Omega)\,, \tag{26}\] \[\chi_{0j}(\mathbf{q},i\Omega)=-\int d\tau e^{i\Omega\tau}\langle T_{ \tau}\rho_{\mathbf{q}}(\tau)J^{j}_{-\mathbf{q}}(0)\rangle\,. \tag{27}\]
The derivation of this relationship can be found in App. A. Thus, the conductivity tensor will contain the charge-current susceptibility as follows:
\[\sigma_{ij}(\mathbf{q}\to 0,i\Omega)\] \[= -i\lim_{\mathbf{q}\to 0}\frac{\partial}{\partial q_{i}}\chi_{0j}(\mathbf{q},i \Omega)+\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\,, \tag{28}\]
where \(\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\) stands for the contributions from \(\mathcal{D}^{\prime}_{ij}(\mathbf{q})\) and \(X_{ij}(\mathbf{q},i\Omega)\), which contain correlation functions with 6 or more fermionic operators. In order to find the connection between the conductivity tensor and the Green's functions, it is better to write the charge-current susceptibility \(\chi_{0j}\) as an integral of exact Green's functions
\(G(\mathbf{k},i\omega)\) and the exact vertex function \(\Lambda^{0}(\mathbf{q},i\Omega;\mathbf{k},i\omega)\) (the definition of which can be found in App. A):
\[\chi_{0j}(\mathbf{q},i\Omega)=\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k}}\int \frac{d\omega}{2\pi}\text{Tr}\left[G\left(\mathbf{k}+\frac{\mathbf{q}}{2},i\omega\right)\right.\] \[\cdot\left.\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega)G\left(\mathbf{ k}-\frac{\mathbf{q}}{2},i\omega+i\Omega\right)v^{j}(\mathbf{k})\right]\,. \tag{29}\]
Taking the derivative of the susceptibility \(\chi_{0j}\) with respect to the wave vector \(\mathbf{q}\), we obtain the following expression for the conductivity tensor:
\[\sigma_{ij}(i\Omega)= -i\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[G(\mathbf{k},i\omega)\frac{\partial\Lambda^{0}(\mathbf{k},i\omega;\mathbf{ q},i\Omega)}{\partial q_{i}}\Big{|}_{\mathbf{q}\to 0}G(\mathbf{k},i\omega+i\Omega)v^{j}(\mathbf{k})\right]\] \[-\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}\Lambda^{0}( \mathbf{k},i\omega;\mathbf{q}\to 0,i\Omega)G(\mathbf{k},i\omega+i\Omega)v^{j}(\mathbf{k})\right]\] \[+\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[G(\mathbf{k},i\omega)\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q}\to 0,i\Omega) \frac{\partial G(\mathbf{k},i\omega+i\Omega)}{\partial k_{i}}v^{j}(\mathbf{k})\right] +\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\,. \tag{30}\]
One can easily notice that the second and third terms only contain the vertex function \(\Lambda^{0}\) at \(\mathbf{q}\to 0\). Using the Ward-Takahashi identity for \(\Lambda^{\mu}\), we are able to solve the vertex function \(\Lambda^{0}(\mathbf{k},i\omega;0,i\Omega)\) as follows:
\[-i\Omega\cdot\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q}\to 0,i\Omega)=\frac{\left[G^{-1}( \mathbf{k},i\omega)-G^{-1}(\mathbf{k},i\omega+i\Omega)\right]}{\sqrt{N_{L}}}+\lim_{ \mathbf{q}\to 0}\sum_{i}q_{i}\Lambda^{i}(\mathbf{q},i\Omega;\mathbf{k},i\omega)\,. \tag{31}\]
Here the second term will vanish if the vertex functions \(\Lambda^{i}\) do not diverge at \(\mathbf{q}=0\). In normal condensed matter systems with short-range interactions, this condition is usually satisfied. Because the HK model has a long-range interaction, we have, for generality, retained this term. In the DC limit \(i\Omega\to 0\), it can also be written as:
\[\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q}\to 0,i\Omega\to 0)= -i\frac{1}{\sqrt{N_{L}}}\frac{\partial G^{-1}(\mathbf{k},i\omega)}{ \partial\omega}+\mathcal{F}(\mathbf{k},i\omega)\,, \tag{32}\] \[\mathcal{F}(\mathbf{k},i\omega)= \lim_{\Omega\to 0}\lim_{\mathbf{q}\to 0}\sum_{i}\frac{q_{i}}{-i \Omega}\Lambda^{i}(\mathbf{q},i\Omega;\mathbf{k},i\omega)\,. \tag{33}\]
Using Eq. 32, the conductivity at DC limit can be written as:
\[\sigma_{ij} =-i\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[G(\mathbf{k},i\omega)\frac{\partial\Lambda^{0}(\mathbf{k},i\omega;\mathbf{ q},i\Omega\to 0)}{\partial q_{i}}\Big{|}_{q_{i}\to 0}G(\mathbf{k},i\omega)v^{j}(\mathbf{k})\right]\] \[-\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}\mathcal{F}( \mathbf{k},i\omega)G(\mathbf{k},i\omega)v^{j}(\mathbf{k})\right]\] \[+\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[\partial G(\mathbf{k},i\omega)\mathcal{F}(\mathbf{k},i\omega)\frac{ \partial G(\mathbf{k},i\omega)}{\partial k_{i}}v^{j}(\mathbf{k})\right]\] \[-\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi} \text{Tr}\left[\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}\frac{ \partial G^{-1}(\mathbf{k},i\omega)}{\partial\omega}G(\mathbf{k},i\omega)v^{j}(\mathbf{k} )\right]\] \[+\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr} \left[G(\mathbf{k},i\omega)\frac{\partial G^{-1}(\mathbf{k},i\omega)}{\partial\omega} \frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}v^{j}(\mathbf{k})\right]+\sigma^ {\prime}_{ij}(\mathbf{q},i\Omega)\,. \tag{34}\]
In addition to the \(\sigma^{\prime}_{ij}\) term from higher-point correlation functions, Eq. 34 contains information about the vertex functions \(\Lambda^{\mu}\) at nonzero \(\mathbf{q}\), which are inherently multi-point correlation functions and _cannot_ be represented by Green's functions.
### \(N_{3}\) and Hall conductivity
We focus on the Hall conductivity \(\sigma_{xy}\) in this subsection. By using the identities \(\partial G=-G\partial G^{-1}G\) and \(v^{j}(\mathbf{k})=-\partial_{k_{j}}G^{-1}(\mathbf{k},i\omega)-\partial_{k_{j}}\Sigma(\mathbf{k},i\omega)\), we are able to represent the velocity matrices and the derivatives of the Green's function as derivatives of the inverse Green's function. Thus, the expression of the Hall conductivity can be rewritten as follows:
\[\sigma_{xy}= -i\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\frac{\partial\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega\to 0)}{\partial q_{x}}\Big{|}_{q_{x}\to 0}G(\mathbf{k},i\omega)v^{y}(\mathbf{k})\right]\] \[-\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}\mathcal{F}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)v^{j}(\mathbf{k})\right]\] \[+\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[\partial G(\mathbf{k},i\omega)\mathcal{F}(\mathbf{k},i\omega)\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}v^{j}(\mathbf{k})\right]\] \[+\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\partial_{\omega}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{k_{x}}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\left(\partial_{k_{y}}G^{-1}(\mathbf{k},i\omega)+\partial_{k_{y}}\Sigma(\mathbf{k},i\omega)\right)\right]\] \[-\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\partial_{k_{x}}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{\omega}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\left(\partial_{k_{y}}G^{-1}(\mathbf{k},i\omega)+\partial_{k_{y}}\Sigma(\mathbf{k},i\omega)\right)\right]\] \[+\sigma^{\prime}_{ij}(\mathbf{q},i\Omega) \tag{35}\] \[\Delta N_{3}=\sigma_{xy}-N_{3}= -i\frac{1}{\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\frac{\partial\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega\to 0)}{\partial q_{x}}\Big{|}_{q_{x}\to 0}G(\mathbf{k},i\omega)v^{y}(\mathbf{k})\right]\] \[-\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}\mathcal{F}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)v^{j}(\mathbf{k})\right]\] \[+\frac{i}{2\sqrt{N_{L}}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[\partial G(\mathbf{k},i\omega)\mathcal{F}(\mathbf{k},i\omega)\frac{\partial G(\mathbf{k},i\omega)}{\partial k_{i}}v^{j}(\mathbf{k})\right]\] \[+\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\partial_{\omega}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{k_{x}}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{k_{y}}\Sigma(\mathbf{k},i\omega)\right]\] \[-\frac{1}{2N_{L}}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}\text{Tr}\left[G(\mathbf{k},i\omega)\partial_{k_{x}}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{\omega}G^{-1}(\mathbf{k},i\omega)G(\mathbf{k},i\omega)\partial_{k_{y}}\Sigma(\mathbf{k},i\omega)\right]+\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\,. \tag{36}\]
Terms in the fourth and fifth lines in Eq. 35 can clearly be recombined into the expression of \(N_{3}\), and the difference between \(\sigma_{xy}\) and \(N_{3}\), which has been denoted as \(\Delta N_{3}\) and is an analog to \(\delta v\) in the generalized Luttinger's theorem, is shown in Eq. 36. In the non-interacting limit, the self-energy \(\Sigma(\mathbf{k},i\omega)\) vanishes, and the vertex function is a constant matrix \(\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega)=\mathds{1}/\sqrt{N_{L}}\) as a consequence of _Wick's theorem_ (rather than of the Ward-Takahashi identity). The contribution from 6-or-more-point correlation functions \(\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\) also vanishes since \(\mathbf{J}^{\prime}_{\mathbf{q}}=0\). Thus, all the terms on the right-hand side are zero, which indicates \(\sigma_{xy}=N_{3}\). However, in the presence of interactions the vertex \(\Lambda^{0}\) is no longer a constant function of \(\mathbf{q}\) and the self-energy \(\Sigma(\mathbf{k},i\omega)\neq 0\), so in general we cannot identify the topological index \(N_{3}\) with the Hall conductivity. The vertex function \(\Lambda^{0}\) inherently encodes a four-point correlation function, and the Ward-Takahashi identity is only able to relate it to _both_ the Green's functions and other vertex functions together, rather than expressing everything solely through Green's functions.
A natural question regarding the Hall conductivity is whether \(\sigma_{xy}\) will be equal to \(N_{3}\) when \(\Sigma(\mathbf{k},i\omega)=\Sigma(i\omega)\) is \(\mathbf{k}\)-independent. Indeed, such a self-energy would eliminate the two terms in the fourth and fifth lines of Eq. 36, indicating that \(\sigma_{xy}\) differs from \(N_{3}\) by only a term containing \(\Lambda^{0}\) and \(\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\), which contains 6-point or 8-point correlation functions [when the vector vertex functions are well-behaved in the long-wavelength limit, such that \(\mathcal{F}(\mathbf{k},i\omega)=0\)]. One may wonder if the derivative of the vertex function \(\Lambda^{0}\) can be represented by Green's functions via the Ward-Takahashi identity. Since this remaining term contains the derivative of \(\Lambda^{0}\) with respect to \(\mathbf{q}\), we are not able to use Eq. 32 directly. In fact, one can check that the following ansatz vertex functions \(\Lambda^{\mu}\) all satisfy the Ward-Takahashi identity:
\[\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega\to 0)\] \[= \frac{1}{\sqrt{N_{L}}}\left(\mathds{1}+i\frac{\Sigma(i\omega+i \Omega)-\Sigma(i\omega)}{\Omega}-\mathbf{q}\cdot\mathbf{\ell}\right)\,, \tag{37}\] \[\Lambda^{i}(\mathbf{k},i\omega,\mathbf{q},i\Omega\to 0)=\frac{1}{\sqrt{N_{L}}} \left(v^{i}(\mathbf{k})+i\Omega\cdot\ell_{i}\right)\,. \tag{38}\]
Here \(\mathbf{\ell}\) is a constant vector with the dimension of length. The Ward-Takahashi identity is satisfied regardless of the choice of \(\mathbf{\ell}\). Therefore, the derivative of the vertex function \(\Lambda^{0}\) in the \(\mathbf{q}\to 0,\Omega\to 0\) limit will be:
\[\frac{\partial\Lambda^{0}(\mathbf{k},i\omega;\mathbf{q},i\Omega\to 0)}{\partial q_{x}} \Big{|}_{q_{x}\to 0}=-\frac{\ell_{x}}{\sqrt{N_{L}}}\,. \tag{39}\]
The value of \(\ell_{x}\) cannot be determined from the Ward-Takahashi identity alone. Thus, we cannot further reduce Eq. 36 to an expression which only contains full Green's functions, even if the self-energy \(\Sigma\) is momentum-independent.
### HK model with Chern bands
HK models are easily solvable by numerically exact diagonalization even if the kinetic energy \(h_{\alpha\sigma,\beta\sigma^{\prime}}(\mathbf{k})\) is not diagonal, owing to the extensive set of good quantum numbers \(N_{\mathbf{k}}=\sum_{\alpha\sigma}c^{\dagger}_{\mathbf{k}\alpha\sigma}c_{\mathbf{k}\alpha\sigma}\). We choose a tight-binding lattice model whose bands carry non-zero Chern numbers. The corresponding Hamiltonian is given by:
\[H_{0}= \sum_{\mathbf{k},\alpha\sigma,\beta\sigma^{\prime}}h_{\alpha\sigma, \beta\sigma^{\prime}}(\mathbf{k})c^{\dagger}_{\mathbf{k}\alpha\sigma}c_{\mathbf{k}\beta \sigma^{\prime}}\,, \tag{40}\] \[h(\mathbf{k})= \left[t_{12}\left(\tau_{1}\sin k_{x}+\tau_{2}\sin k_{y}\right)\right.\] \[+\left.\tau_{3}(M-t\cos k_{x}-t\cos k_{y})\right]\otimes s_{0}\,,\] (41) \[H_{I}= \frac{U}{2}\sum_{\mathbf{k}}\sum_{\alpha=1}^{2}\left(n_{\mathbf{k}\alpha \uparrow}+n_{\mathbf{k}\alpha\downarrow}-1\right)^{2}\,. \tag{42}\]
Here we use \(\tau_{0,1,2,3}\) to represent the identity and Pauli matrices with sublattice indices (\(\alpha=1,2\)), and we use \(s_{0,1,2,3}\) to represent the identity and Pauli matrices with spin indices (\(\sigma=\uparrow,\downarrow\)). Since the kinetic Hamiltonian \(h(\mathbf{k})\) is proportional to \(s_{0}\), it has a spin \(SU(2)\) symmetry. When the parameters are chosen to be \(t_{12}=t=1\) and \(|M|<2\), the two energy bands of each spin will carry Chern numbers \(\nu_{C}=\pm 1\), and due to the spin \(SU(2)\) symmetry of the whole kinetic Hamiltonian, the two lower energy bands have the same Chern number. In the numerical calculation, we will choose \(M=1\).
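The Chern number quoted for the non-interacting bands can be checked independently of the interaction. The sketch below evaluates it for one spin species of \(h(\mathbf{k})\) with the stated parameters, using the standard Fukui-Hatsugai-Suzuki lattice discretization; the grid size is an arbitrary choice, and the overall sign depends on orientation conventions.

```python
import numpy as np

t12, t, M = 1.0, 1.0, 1.0                     # parameters used in the text

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_spin(kx, ky):
    """One spin block of Eq. 41 (a 2x2 matrix in the sublattice space)."""
    return (t12 * (np.sin(kx) * sx + np.sin(ky) * sy)
            + (M - t * np.cos(kx) - t * np.cos(ky)) * sz)

def chern_lower_band(n_grid=60):
    ks = 2 * np.pi * np.arange(n_grid) / n_grid
    u = np.empty((n_grid, n_grid, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vec = np.linalg.eigh(h_spin(kx, ky))
            u[i, j] = vec[:, 0]                      # lower-band eigenvector
    flux = 0.0
    for i in range(n_grid):
        for j in range(n_grid):
            u1, u2 = u[i, j], u[(i + 1) % n_grid, j]
            u3, u4 = u[(i + 1) % n_grid, (j + 1) % n_grid], u[i, (j + 1) % n_grid]
            # Berry flux through one plaquette from the four link variables
            flux += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                             * np.vdot(u3, u4) * np.vdot(u4, u1))
    return flux / (2 * np.pi)

print(round(chern_lower_band(), 3))    # close to +/-1 for |M| < 2, consistent with nu_C = +/-1
```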
The Hamiltonians \(H_{\mathbf{k}}\) for different momenta are completely decoupled from each other. Hence, the spectrum and the wavefunctions of each \(H_{\mathbf{k}}\) can easily be obtained numerically, since each is a \(16\times 16\) matrix. In Fig. 3(a), we provide the energy spectra of \(H_{\mathbf{k}}-\mu N_{\mathbf{k}}\) when the chemical potential is tuned such that the ground state is at half filling. We also note that the ground state for each \(\mathbf{k}\) will not change if the chemical potential is changed by a value \(|\Delta\mu|\ll U/2\), due to the gap between the ground states (\(N_{\mathbf{k}}=2\)) and the charge \(\pm 1\) (\(N_{\mathbf{k}}=1,3\)) excitations.
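Since each \(H_{\mathbf{k}}\) acts on only four fermionic modes (two orbitals times two spins), the Fock-space matrix can be assembled and diagonalized in a few lines. The sketch below is a minimal implementation with an assumed mode ordering; the momentum point is arbitrary and only serves to illustrate the charge gap.

```python
import numpy as np

t12, t, M, U, mu = 1.0, 1.0, 1.0, 20.0, 0.2    # parameters quoted in the text and Fig. 3

# 2x2 sublattice (tau) and spin (s) matrices for the kinetic block of Eq. 41
s0 = np.eye(2, dtype=complex)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
t3 = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bloch(kx, ky):
    """4x4 kinetic matrix; assumed mode ordering p = 2*alpha + sigma (orbital outermost)."""
    h2 = t12 * (np.sin(kx) * t1 + np.sin(ky) * t2) + (M - t * np.cos(kx) - t * np.cos(ky)) * t3
    return np.kron(h2, s0)

def cdag(p, dim=16):
    """Creation operator for mode p on the 16-dimensional Fock space (Jordan-Wigner signs)."""
    op = np.zeros((dim, dim))
    for st in range(dim):
        if not (st >> p) & 1:
            sign = (-1) ** bin(st & ((1 << p) - 1)).count("1")
            op[st | (1 << p), st] = sign
    return op

cd = [cdag(p) for p in range(4)]
c = [m.T for m in cd]
n = [cd[p] @ c[p] for p in range(4)]
N_op = sum(n)
eye = np.eye(16)

def H_k(kx, ky):
    h = h_bloch(kx, ky)
    H0 = sum(h[p, q] * cd[p] @ c[q] for p in range(4) for q in range(4))
    HI = 0.5 * U * ((n[0] + n[1] - eye) @ (n[0] + n[1] - eye)      # orbital alpha = 1
                    + (n[2] + n[3] - eye) @ (n[2] + n[3] - eye))   # orbital alpha = 2
    return H0 + HI

kx, ky = 0.3, 1.1                                    # an arbitrary momentum point
evals, evecs = np.linalg.eigh(H_k(kx, ky) - mu * N_op)
N_exp = np.real(np.einsum("ji,jk,ki->i", evecs.conj(), N_op, evecs))
for E, Nk in list(zip(evals, N_exp))[:6]:            # low-lying spectrum and its charge sector
    print(f"E = {E:7.3f}   <N_k> = {Nk:.1f}")        # ground state lies in the N_k = 2 sector
```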
The many-body wavefunctions of the whole system are simply the tensor products of the wavefunctions for each \(\mathbf{k}\). With these exact wavefunctions in hand, we are able to compute quantities, such as Green's functions and susceptibilities via spectral decomposition. The determinant of the Green's function along high symmetry lines can be found in Fig. 3(b). Dispersive poles and zeros with different dispersion relationships are clearly visible. We also find numerically the topological index \(N_{3}=-2\) for this Green's function.
The derivative of the charge-current susceptibility \(\chi_{0j}(\mathbf{q},i\Omega)\), which has been shown to be an important part of the conductivity tensor \(\sigma_{ij}\), can also be evaluated numerically using spectral decomposition, as we discuss in App. B. Figs. 3(c-d) provide the values of the \(xx\) and \(xy\) components of \(-i\lim_{\mathbf{q}\to 0}\partial_{q_{i}}\chi_{0j}(\mathbf{q},i\Omega)\) with imaginary frequencies. Clearly, the transverse component of this quantity is small but non-zero; however, it is far from the value of \(N_{3}\). Thus, we can conclude that the backflow term \(\Delta N_{3}\) in Eq. 36 is not negligible even if \(\sigma^{\prime}_{xy}\) does not contribute.
We now comment on the extra term \(\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\) in the expression of the total conductivity tensor, which describes the current \(\mathbf{j}_{\mathbf{q}}\) response from the coupling term between the external gauge field and the interaction current operator \(\mathbf{J}^{\prime}_{-\mathbf{q}}\). As we have already mentioned, this term is inherently a 6-and-more-point correlation function, and it cannot be represented by the exact Green's function. It amounts to an extra contribution to the backflow term \(\Delta N_{3}\).
Finally, we now show why the conductivity at zero temperature remains unchanged under variations in the chemical potential, provided that such changes do not alter the ground state. For the HK Hamiltonian at half filling, the ground states for each \(\mathbf{k}\) have a large gap \(\sim U/2\) to any charge excitations, and as a consequence, the many-body ground states remain in the half-filling sector over a range of chemical potentials. As we specified in Eq. 23, the conductivity tensor contains the current-current susceptibility, which can be expressed in terms of a spectral decomposition:
\[\chi_{ij}(\mathbf{q},i\Omega)+X_{ij}(\mathbf{q},i\Omega)\] \[= \frac{1}{|\mathcal{G}|}\sum_{g\in\mathcal{G},m}\left[\frac{\langle g |j^{i}_{\mathbf{q}}|m\rangle\langle m|j^{j}_{-\mathbf{q}}|g\rangle}{i\Omega+(E_{g}- \mu N_{g})-(E_{m}-\mu N_{m})}\right.\] \[\left.-\frac{\langle g|j^{j}_{-\mathbf{q}}|m\rangle\langle m|j^{i}_ {\mathbf{q}}|g\rangle}{i\Omega-(E_{g}-\mu N_{g})+(E_{m}-\mu N_{m})}\right]\,. \tag{43}\]
Here, \(\mathcal{G}\) denotes the set of degenerate ground states, and \(|\mathcal{G}|\) stands for the ground-state degeneracy. In addition, \(E_{g},E_{m}\) and \(N_{g},N_{m}\) stand for the eigenvalues of the many-body Hamiltonian and the fermion number operator of many-body eigenstates \(|g\rangle\) and \(|m\rangle\), respectively. The current operators \(\mathbf{j}_{\mathbf{q}}\) always contain the same number of fermion creation and annihilation operators. As a consequence, any excited state \(|m\rangle\) which satisfies \(\langle g|j^{i}_{\mathbf{q}}|m\rangle\neq 0\) must have the same fermion number eigenvalue (\(N_{g}=N_{m}\)). Hence, the chemical potential \(\mu\) does not show up in the denominator. If the ground states \(|g\rangle\) remain unchanged when varying \(\mu\) (which is true for a charge gapped system), Eq. 43 will also remain unchanged, regardless of whether the Green's function zeros are at the Fermi level or not. In the meantime, the diamagnetic response tensors \(\mathcal{D}_{ij}\) and \(\mathcal{D}^{\prime}_{ij}(\mathbf{q})\) are directly determined by the ground state expectation values of fermionic operator products, which are unchanged if the ground state remains the same while changing \(\mu\). As a consequence, the conductivity tensor does not change either. However, the value of \(N_{3}\) can be changed by varying the chemical potential due to the presence of zeros of the Green's functions, which can occur even if the ground state is unaffected; this is consistent with the discussion in Ref. [40].
## V Discussion and Summary
Several remarks are in order. First, as stated in the Introduction, probing Green's function zeros experimentally is not a straightforward task. A naive application of ordinary spectroscopic tools such as photoemission, tunneling or x-ray/neutron scattering does not automatically reveal their existence and a more nuanced approach is necessary. Instead, one might rely on specific probes that could extract this information more indirectly. For example, a key mechanism for the occurrence of zeros is through ground-state degeneracies, where we expect the zeros to come from a resonance scattering of electrons from singular collective spin excitations. In this regard, a Curie-like behavior of the static, long-wavelength magnetic susceptibility in the zero-field limit with a Mott gap is a strong indicator of Green's function zeros. In principle, processes connecting non-degenerate but mixed ground states to excited states can also yield zeros at zero temperature. Such scenarios must be treated on a case-by-case basis. Second, in the current paper we have not derived explicit forms of the six-and-more-point correlation functions dictated by gauge invariance and alluded to in Eq. 30. These terms simply contribute to the deviation of \(\sigma_{xy}\) from its topological invariant \(N_{3}\), in analogy with the backflow contribution to the particle number in Eq. 7. Nonetheless, we are able to reach the key conclusion that Green's function zeros contribute to measurable properties, which nonetheless remain unchanged under chemical potential shifts up to the Mott scale.
Figure 3: (a) The many-body energy spectrum of \(H_{\mathbf{k}}-\mu N_{\mathbf{k}}\) along the high symmetry lines. Here we choose \(t_{12}=t=1\), \(M=1\) and \(U=20\). States with electron numbers \(N_{\mathbf{k}}=0,1,2,3,4\) are labeled by blue, purple, red, brown and green respectively. In this figure we also added a chemical potential \(\mu=0.2\) to separate the \(N_{\mathbf{k}}=1\) and \(3\) states. (b) The Green’s function determinant \(|\det G(\mathbf{k},\omega+i0^{+})|\) along the high symmetry lines. The corresponding value of \(N_{3}\) computed from this Green’s function is \(N_{3}=-2\). (c-d) Longitudinal and transverse components of the tensor \(-i\lim_{q\to 0}\partial_{q_{i}}\chi_{0j}(\mathbf{q},i\Omega)\) as functions of imaginary frequency. The conductivity tensor \(\sigma_{ij}\) differs from this quantity by \(\sigma^{\prime}_{ij}(\mathbf{q},i\Omega)\), as shown in Eq. 28. The real part of \(-i\lim_{q\to 0}\partial_{q_{x}}\chi_{0j}(\mathbf{q},i\Omega)\) at zero frequency is clearly different from the value of \(N_{3}\), indicating that the backflow term \(\Delta N_{3}\) is nonzero.

In summary, in this work we have examined the role of quasiparticle loss on physical properties by studying an exactly solvable model of a Mott insulator. The model contains contours in momentum-frequency space where the Green's function vanishes to yield zeros within the Mott gap. We demonstrate that these zeros contribute to physical properties, such as the total particle number and conductivity tensor, in a way that is consistent with the expectations from general physical grounds. As an example of the latter, the observables are shown to be insensitive to changes in chemical potential within the Mott gap. Our results offer a conceptual framework for further analysis of topological response functions in strongly correlated systems and quantum materials where a well-defined quasiparticle picture is absent. As such, we expect our work to help further advance the understanding of the rich interplay among topology, symmetry and strong correlations.
_Note Added:_ After completing this manuscript, we became aware of a recently updated preprint and a new preprint, in which the many-body effects on the Hall conductivity are also addressed [39, 53].
###### Acknowledgements.
We thank Jennifer Cano, Elio Konig, Diana-Gabriela Oprea, Silke Paschen and Roser Valenti for useful discussions. Work at Rice has primarily been supported by the Air Force Office of Scientific Research under Grant No. FA9550-21-1-0356 (C.S. and S.S.), by the National Science Foundation under Grant No. DMR-2220603 (F.X. and Q.S.), and by the Robert A. Welch Foundation Grant No. C-1411 (L.C.). The majority of the computational calculations have been performed on the Shared University Grid at Rice funded by NSF under Grant EIA-0216467, a partnership between Rice University, Sun Microsystems, and Sigma Solutions, Inc., the Big-Data Private-Cloud Research Cyberinfrastructure MRI-award funded by NSF under Grant No. CNS-1338099, and the Extreme Science and Engineering Discovery Environment (XSEDE) by NSF under Grant No. DMR170109. M.G.V. acknowledges support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) GA 3314/1-1 - FOR 5249 (QUAST) and partial support from European Research Council (ERC) grant agreement no. 101020833. This work has also been funded by the European Union NextGenerationEU/PRTR-C17.I1, as well as by the IKUR Strategy under the collaboration agreement between Ikerbasque Foundation and DIPC on behalf of the Department of Education of the Basque Government. All authors acknowledge the hospitality of the Kavli Institute for Theoretical Physics, UCSB, supported in part by the National Science Foundation under Grant No. NSF PHY-1748958, during the program "A Quantum Universe in a Crystal: Symmetry and Topology across the Correlation Spectrum." Q.S. and S.S. also acknowledge the hospitality of the Aspen Center for Physics, which is supported by the National Science Foundation under Grant No. PHY-2210452, during the workshop "New Directions on Strange Metals in Correlated Systems."
\(\dagger\) [email protected]
\(\oplus\) [email protected]
|
2309.04153 | Mapping EEG Signals to Visual Stimuli: A Deep Learning Approach to Match
vs. Mismatch Classification | Existing approaches to modeling associations between visual stimuli and brain
responses are facing difficulties in handling between-subject variance and
model generalization. Inspired by the recent progress in modeling speech-brain
response, we propose in this work a "match-vs-mismatch" deep learning model to
classify whether a video clip induces excitatory responses in recorded EEG
signals and learn associations between the visual content and corresponding
neural recordings. Using an exclusive experimental dataset, we demonstrate that
the proposed model is able to achieve the highest accuracy on unseen subjects
as compared to other baseline models. Furthermore, we analyze the inter-subject
noise using a subject-level silhouette score in the embedding space and show
that the developed model is able to mitigate inter-subject noise and
significantly reduce the silhouette score. Moreover, we examine the Grad-CAM
activation score and show that the brain regions associated with language
processing contribute most to the model predictions, followed by regions
associated with visual processing. These results have the potential to
facilitate the development of neural recording-based video reconstruction and
its related applications. | Yiqian Yang, Zhengqiao Zhao, Qian Wang, Yan Yang, Jingdong Chen | 2023-09-08T06:37:25Z | http://arxiv.org/abs/2309.04153v2 | Mapping EEG Signals to Visual Stimuli: A Deep Learning Approach to Match vs. Mismatch Classification
###### Abstract
Existing approaches to modeling associations between visual stimuli and brain responses are facing difficulties in handling between-subject variance and model generalization. Inspired by the recent progress in modeling speech-brain response, we propose in this work a "match-vs-mismatch" deep learning model to classify whether a video clip induces excitatory responses in recorded EEG signals and learn associations between the visual content and corresponding neural recordings. Using an exclusive experimental dataset, we demonstrate that the proposed model is able to achieve the highest accuracy on unseen subjects as compared to other baseline models. Furthermore, we analyze the inter-subject noise using a subject-level silhouette score in the embedding space and show that the developed model is able to mitigate inter-subject noise and significantly reduce the silhouette score. Moreover, we examine the Grad-CAM activation score and show that the brain regions associated with language processing contribute most to the model predictions, followed by regions associated with visual processing. These results have the potential to facilitate the development of neural recording-based video reconstruction and its related applications.
Yiqian Yang\({}^{1}\), Zhengqiao Zhao\({}^{1}\), Qian Wang\({}^{2}\), Yan Yang\({}^{1}\), Jingdong Chen\({}^{1}\), FELLOW, IEEE
\({}^{1}\) Center of Intelligent Acoustics and Immersive Communications
Shaanxi Provincial Key Laboratory of Artificial Intelligence
School of Marine Science and Technology
Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
\({}^{2}\) North Electro-Optic Science & Technology Defense Co., Ltd.

EEG, deep learning, neural representation, visual content reconstruction.
## 1 Introduction
The ability to predict the brain's neural representation in response to external stimuli holds great potential for advancing fundamental neuroscience research and helping develop key components of Brain-Computer Interfaces (BCI). Thanks to recent advancements in large-scale neural recording such as electroencephalography (EEG), wide-field calcium imaging, and magnetic resonance imaging, scientists can now computationally learn the complex relationship between animal behavior and the corresponding neural activities [1]. To study neural dynamics in a controlled manner, researchers present visual stimuli such as images and videos to subjects and synchronously record their neural signals. Machine learning models are subsequently trained to learn the relationship between neural signals and stimuli, enabling the reconstruction of visual content from neural recordings.
Prior studies focused on analyzing spectral signatures of neural recordings to comprehend neural firing mechanisms [2, 3], and explored the use of traditional frequency features to reconstruct both overt and imagined speech [4]. Recent progress in deep learning has allowed researchers to delve into more complex feature spaces of neural signals and nonlinear relationships between external stimuli and their corresponding neural representations. It is demonstrated that deep generative models can be trained to recover category-level images from EEG signals [5]. Furthermore, a recent work demonstrated the possibility of reconstructing video frame order from the visual cortex recordings using a contrastive learning-based model [6]. MinD-Video was proposed to extract semantic features from time-continuous fMRI signals of the cerebral cortex and restore semantically accurate videos [7]. Nevertheless, the aforementioned reconstruction approaches still face great challenges due to two primary factors. First, the neural response latency is correlated with the subject's attention level due to "attention drift", and a large within-subject variance is often observed in repeated experiments [8]. How to extract invariant neural signal patterns that correspond to external stimuli remains a significant challenge. Second, there are large differences in neural response signals between different individuals [7]. Therefore, models obtained on training subjects may fail to generalize to holdout subjects [9]. To address these challenges, researchers have reframed the EEG-stimuli mapping problem as one of "match-vs-mismatch" classification instead of one of regression, leading to the successful reconstruction of the auditory envelope from EEG signals [8, 10].
Inspired by the ideas in [8, 10], we propose in this work an approach to learning the neural representation of visual content through a "match-vs-mismatch" framework. Briefly, we develop a deep learning-based classification model to predict whether a video segment evoked the query segment of the EEG signal. Existing "match-vs-mismatch" models typically leverage convolutional neural network (CNN) [11] and
dilated convolutional neural network (DCNN) [12] to extract the local features as well as the global patterns of neural signals. In contrast, we explore in this work the use of sequence models, such as Transformer (TRFM) [13], and Gated Recurrent Unit (GRU) [14], in addition to convolutional layers, to capture the contextual information of videos and neural recordings. We will demonstrate the benefits of using sequence models over the convolutional-layer-based baseline models. We further analyze the developed models by extracting intermediate layer output and visualizing the channel importance scores. It is shown that the proposed models can capture meaningful neural representations in the embedding space and mitigate inter-subject variance. The major contributions of this work are as follows. 1) We propose to model the visual stimuli and neural response using a "match-vs-mismatch" framework. 2) A CNN- and GRU-based classification model is developed to predict the video that evoked the input EEG signal, which produces significantly better performance than the other competing models. 3) Various visualization methods are applied to analyze the developed model, and the results show that the proposed model is capable of capturing neurologically meaningful features while effectively handling inter-subject noise.
## 2 Methods
The classification model in this work consists of two branches of inputs: 1) the EEG branch and 2) the video branch, as shown in Fig. 1 (a). The former takes a segment of the EEG signal as its input while the latter takes two video segments as its inputs. Specifically, the corresponding "matching" stimulus video that evokes the EEG signal and a "mismatching" imposter video are randomly assigned to video branch input ports 1 and 2. The classification task is to predict whether video 1 is the "matching" video. Note that this framework is inspired by previous studies on modeling brain responses to audio stimuli [8, 10], which we refer to as a two-way classification problem. To verify the benefit of this two-way classification problem formulation, we develop a one-way classification model as a baseline model, where the video branch takes only one video segment as input and determines whether this input matches with the EEG signal, as shown in Fig. 1 (b). In both models, the EEG and video signals are processed by different deep neural networks respectively (i.e., the NN blocks in Fig. 1). The cosine similarity is computed between the EEG feature and the video feature per channel along the time dimension. The similarities for different channels are then concatenated. Finally, a fully connected layer with sigmoid activation is used to predict whether the video input matches with the EEG signal. For the two-way model, the networks used to process video input 1 and input 2 share the same weights.
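To make the two-way architecture concrete, a minimal PyTorch sketch of this matching head is given below; the encoder modules, feature dimensions, and variable names are placeholders rather than the exact configuration used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoWayMatchMismatch(nn.Module):
    """Predicts whether video 1 (rather than video 2) matches the EEG segment."""
    def __init__(self, eeg_encoder: nn.Module, video_encoder: nn.Module, feat_channels: int):
        super().__init__()
        self.eeg_encoder = eeg_encoder      # NN block for the EEG branch
        self.video_encoder = video_encoder  # NN block shared by both video inputs
        self.fc = nn.Linear(2 * feat_channels, 1)

    def _channel_sim(self, eeg_feat, vid_feat):
        # cosine similarity per feature channel, taken along the time dimension
        return F.cosine_similarity(eeg_feat, vid_feat, dim=-1)  # (batch, feat_channels)

    def forward(self, eeg, video1, video2):
        e = self.eeg_encoder(eeg)        # (batch, feat_channels, time)
        v1 = self.video_encoder(video1)  # the same encoder (shared weights) ...
        v2 = self.video_encoder(video2)  # ... processes both candidate videos
        sims = torch.cat([self._channel_sim(e, v1), self._channel_sim(e, v2)], dim=-1)
        return torch.sigmoid(self.fc(sims)).squeeze(-1)  # probability that video 1 matches
```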
Although most existing works use convolutional layers in the NN blocks to extract features from the EEG and video signals, we believe that sequence models can help learn the contextual information, thereby improving the performance. Therefore, we develop several models that consist of a combination of deep neural network modules such as CNN, DCNN, GRU, TRFM, and LSTM [15], and compare them to baseline models. Note that in this study we also use an architecture similar to the convolutional networks proposed in [10] as another baseline model in addition to the aforementioned one-way baseline model. The network architecture of the one-way baseline model is the same as the best-performing architecture of the two-way models.
For models with DCNN, the upstream CNN layers simply perform point-wise convolutional operations, which can keep the same convolution channels as the input channels for downstream layers. For models without DCNN, both the kernel size and the stride size are 40 for the CNN layers. In the DCNN, there are three dilated layers with an output dimension of 256 and a kernel size of 5. The dilation factor is chosen to be \(k^{n}\), where \(n\) is the layer index of the dilated layers starting from \(0\), and the padding size is calculated as \(\lfloor((k-1)\times k^{n}+1)/2\rfloor\) in order to keep the temporal dimension divisible by the stride. A rectified linear unit (ReLU) non-linearity is applied after each dilated convolution layer. We choose the hyperparameters of the networks such that the EEG signals and the video signals are compatible in the temporal dimension. For sequence models, we adopt TRFM modules to capture the potential long-term dependencies. The TRFM module consists of 3 layers with a single attention head of 256 or 768 dimensions and a dropout rate of \(0.2\). GRU and LSTM modules consist of 256 hidden neurons.
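A sketch of such a dilated stack, following the layer count, kernel size, dilation rule, and padding formula stated above (the input channel count and function name are illustrative), could look as follows.

```python
import torch.nn as nn

def dilated_stack(in_channels: int, out_channels: int = 256,
                  kernel_size: int = 5, num_layers: int = 3) -> nn.Sequential:
    """Three dilated Conv1d layers with dilation k**n and a ReLU after each layer."""
    layers, c_in = [], in_channels
    for n in range(num_layers):
        dilation = kernel_size ** n                        # k^n for layer n = 0, 1, 2
        padding = ((kernel_size - 1) * dilation + 1) // 2  # floor(((k-1)*k^n + 1) / 2)
        layers += [nn.Conv1d(c_in, out_channels, kernel_size,
                             dilation=dilation, padding=padding),
                   nn.ReLU()]
        c_in = out_channels
    return nn.Sequential(*layers)
```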
## 3 Experiments
Figure 1: Illustration of the “match-vs-mismatch” models: (a) the two-way model and (b) the one-way model.
We have developed a comprehensive database, which consists of 64-channel EEG recordings obtained from a cohort of 100 Chinese participants while watching a 3.5-minute Chinese movie clip. The sampling frequency for the EEG signal is 1000 Hz and the video has a resolution of 1080p with a frame rate of 25 frames per second. In this study, a subset of the dataset is used to evaluate the proposed models (currently available for research use upon request). In this experimental subset, there are 56 subjects. The total length of the EEG signal from each subject is 210000 samples, and the length of the corresponding video signal is 5250 frames. We use the MNE toolbox to preprocess the EEG signals [16]. Briefly, we apply a notch filter to mitigate the impact of the power-line interference at 50 Hz. Then, a band-pass filter is applied, which has a passband ranging from \(1\) to \(200\mathrm{Hz}\). Moreover, we normalize the maximum magnitude of the EEG signal to 0.8 to ensure the model stability. For video input, we use a pretrained ViT-B/8 model, DINO [17], to preprocess the video and extract the preliminary feature matrix from the video clip.
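A rough sketch of this preprocessing with the MNE toolbox is shown below; the file name and exact filter options are assumptions rather than the script used to build the dataset.

```python
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical recording file
raw.notch_filter(freqs=50.0)             # suppress 50 Hz power-line interference
raw.filter(l_freq=1.0, h_freq=200.0)     # band-pass 1-200 Hz
eeg = raw.get_data()                     # (n_channels, n_samples), sampled at 1000 Hz
eeg = 0.8 * eeg / abs(eeg).max()         # scale the peak magnitude to 0.8
```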
Data are fed to the models on a segment basis with a segment size of 3 seconds and a shift of 1 second between consecutive segments. The start time of the imposter video segment is chosen exactly 1 second after the end (positively shifted samples) or 4 seconds before the start of the current EEG segment (negatively shifted samples), as illustrated in Fig. 2. The corresponding video segment is preprocessed by the DINO model to produce a 768-dimensional vector per frame. The video feature matrix input for a 3-second video clip is 768 by 75 as shown in Fig. 1. We randomly choose 45, 5, and 6 subjects (18630, 2070, and 2480 samples) as training, validation, and testing sets respectively. The experimental data setup is illustrated in Fig. 2. This experiment is repeated 5 times to study the statistical performance.
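The windowing and imposter selection can be summarized by the following sketch; the sampling rate follows the dataset description, while function and variable names are illustrative.

```python
FS_EEG, SEG_S, HOP_S = 1000, 3, 1   # sampling rate (Hz), window length (s), hop (s)

def window_starts(n_samples: int, fs: int = FS_EEG):
    """Start indices of 3-second windows taken every 1 second."""
    win, hop = SEG_S * fs, HOP_S * fs
    return list(range(0, n_samples - win + 1, hop))

def imposter_start(match_start: int, fs: int = FS_EEG, positive: bool = True) -> int:
    """Imposter begins 1 s after the matching window ends, or 4 s before it starts."""
    return match_start + (SEG_S + 1) * fs if positive else match_start - 4 * fs
```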
For training, we use the Adam optimizer with a learning rate of \(10^{-3}\). The batch size is set to 64. If the validation accuracy does not improve over 5 consecutive epochs, the learning rate is reduced by a factor of 10. If the validation accuracy does not improve over 10 consecutive epochs, an early stopping is triggered to terminate the training process. The model that yields the highest accuracy on the validation set is kept for testing.
## 4 Results
The results of the studied models are shown in Fig. 3 where E and V denote, respectively, EEG and video branches, and C, D, T, G, L, and O stand, respectively, for CNN, DCNN, TRFM, GRU, LSTM and the one-way baseline model. For example, "ECD3VG" denotes that the EEG branch consists of 1 convolutional layer and 3 dilated convolutional layers in sequence and the video branch has 1 GRU layer. The feature dimension is 256 for all the models except for the model with the "-768" extension, in which the dimension is 768.
From Fig. 3, one can clearly see that the models with GRU/LSTM module in the video branch outperform the CNN-based module, indicating that using sequence models helps capture the contextual information of video signals. Complex models with a large number of trainable parameters tend to exhibit suboptimal performance. The underlying reason is attributed to the scarcity of training samples. The best-performing model, which leverages the recurrent and convolutional layers, outperforms both baseline models, indicating that the proposed model can capture the association between video stimuli and corresponding neural responses.
In this set of experiments, we investigate the benefits of using both the positively and negatively shifted samples as imposter samples (balanced), in comparison with using only positively shifted samples (imbalanced) as in [10], for the EEG-video classification task. We train our model under balanced and imbalanced configurations respectively, and evaluate the accuracy of these two models in distinguishing matching videos from mismatching (imposter) ones created with different time offsets (\(t_{\mathrm{sep}}\)), as shown in Fig. 4. As expected, the accuracy drops to approximately 50% at \(t_{\mathrm{sep}}=-3\mathrm{s}\) where the matching and mismatching videos are identical. The accuracy of our proposed model (balanced) remains relatively stable at other offsets, indicating that the proposed model can distinguish matching and mismatching videos extracted at different intervals without requiring explicit training on those mismatching samples. We observe that when the training dataset is imbalanced, the accuracy on negative side imposter samples drops below 20%. This suggests that the model simply memorizes the order of training examples and predicts the relatively earlier video as the matching one.
To further study which brain region contributes more to model predictions, we compute and visualize the Gradient-weighted Class Activation Mapping (Grad-CAM) scores of the proposed model [18]. The proposed model is trained 5 times and we compute activation scores for these models. The mean and standard deviation of the scores are shown in Fig. 5. One can see from the results that the Oz area (around the visual cortex responsible for vision processing) and the Pz area (the parietal lobe related to information fusion) are highlighted in the activation map with a score of around 0.4. Furthermore, the inferior frontal gyrus area (F7 and F8), partly responsible for language processing and comprehension [19], exhibits the highest activation score (approximately 0.8), which indicates that the brain's semantic processing activities are more informative in decoding the representations of visual content in the human brain than visual processing activities [20].
Figure 2: Illustration of the experimental data setup.
Figure 3: Performance of the proposed and compared models. The number given in each pair of parentheses denotes the number of parameters in the corresponding model.
We now examine the potential benefit of our model in terms of handling the inter-subject noise, i.e., how well the model is able to generalize to the EEG signals from unseen subjects. Traditional features of the EEG signals, including Hjorth parameters, differential entropy, the asymmetry coefficient, and the fractal dimension mentioned in [21], are extracted for 6 EEG bands, namely: \(\delta\) (1-3Hz), \(\theta\) (4-7Hz), \(\alpha\) (8-13Hz), \(\beta\) (14-30Hz), \(\gamma\) (31-50Hz), and high-\(\gamma\) (51-100Hz); these features exhibit large inter-subject variance. As shown in Fig. 6(a), the EEG signals from the same subject cluster closely in the traditional feature space and are quite separable from signals from other subjects. We calculate the silhouette score over all samples using the feature vectors and the subject IDs to quantify the degree to which the undesired subject-level variance is retained. Fig. 6(b) shows the silhouette coefficient per subject for traditional features. Note that a higher score indicates that the feature preserves more inter-subject information.
We then extract the latent representation of the EEG signals from the proposed model and compute silhouette scores in the embedding space of the proposed one-way baseline model for simplicity. Briefly, we extract the output of the EEG branch of the model and flatten it into a vector as the deep representation of the input EEG signal. As shown in Fig. 6 (b,d), the resulting embedding vectors of our model do not exhibit subject-based clusters and achieve a lower silhouette score of -0.004, while the silhouette score is 0.136 for the traditional method. This result indicates that our model is able to mitigate the inter-subject noise in the EEG signal that the traditional method is not able to handle properly while preserving the relevant neurological information encoding visual stimuli (as evidenced by the high classification accuracy).
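The subject-level silhouette analysis can be reproduced with scikit-learn along the following lines; the feature matrix here is random placeholder data standing in for either the traditional features or the EEG-branch embeddings.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score

# Placeholder data: 6 test subjects, 20 segment-level feature vectors each.
rng = np.random.default_rng(0)
features = rng.normal(size=(120, 64))
subject_ids = np.repeat(np.arange(6), 20)

score = silhouette_score(features, subject_ids)  # higher = stronger subject-level clustering
coords = TSNE(n_components=2, random_state=0).fit_transform(features)  # for the 2-D plots
print(f"subject-level silhouette score: {score:.3f}")
```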
## 5 Conclusion
In this work, we developed a "match-vs-mismatch" classification framework to model the associations between visual content and brain responses. Compared with the studied baseline methods, the proposed model achieved the highest accuracy of 73.05% on the experimental dataset. To analyze the performance, we extracted deep embedding vectors and showed that deep representations of EEG signals have a lower subject-level silhouette score as compared to the traditional feature vectors. These experimental results suggest that the proposed deep learning model is able to effectively extract meaningful neurological features and improve the prediction of visual stimuli while suppressing the inter-subject noise. Visualizations of model activation scores reveal that brain regions associated with language and visual processing were important for model predictions.
Figure 4: Accuracy of the balanced and imbalanced models on differentiating matching samples from mismatching samples extracted at different time offsets. The x-axis shows the time offset \(t_{\rm sep}\) and the y-axis shows the accuracy
Figure 5: The topographic map: the mean (left) and the standard deviation (right) of the Grad-CAM activation scores for all EEG channels.
Figure 6: The t-SNE visualization in 2D (a, c) and silhouette analysis (b, d) of the traditional and proposed deep feature vectors. Each point represents a three-second segment of the EEG signal from one of the six testing subjects. |
2309.12731 | Defeasible Reasoning with Knowledge Graphs | Human knowledge is subject to uncertainties, imprecision, incompleteness and
inconsistencies. Moreover, the meaning of many everyday terms is dependent on
the context. That poses a huge challenge for the Semantic Web. This paper
introduces work on an intuitive notation and model for defeasible reasoning
with imperfect knowledge, and relates it to previous work on argumentation
theory. PKN is to N3 as defeasible reasoning is to deductive logic. Further
work is needed on an intuitive syntax for describing reasoning strategies and
tactics in declarative terms, drawing upon the AIF ontology for inspiration.
The paper closes with observations on symbolic approaches in the era of large
language models. | Dave Raggett | 2023-09-22T09:27:26Z | http://arxiv.org/abs/2309.12731v1 | # Defeasible Reasoning with Knowledge Graphs1
###### Abstract
Human knowledge is subject to uncertainties, imprecision, incompleteness and inconsistencies. Moreover, the meaning of many everyday terms is dependent on the context. That poses a huge challenge for the Semantic Web. This paper introduces work on an intuitive notation and model for defeasible reasoning with imperfect knowledge, and relates it to previous work on argumentation theory. PKN is to N3 as defeasible reasoning is to deductive logic. Further work is needed on an intuitive syntax for describing reasoning strategies and tactics in declarative terms, drawing upon the AIF ontology for inspiration. The paper closes with observations on symbolic approaches in the era of large language models.
defeasible reasoning, argumentation theory, knowledge graphs.
## 1 Defeasible Reasoning
### Introduction
The accepted wisdom for knowledge graphs presumes deductive logic as the basis for machine reasoning. In practice, application logic is usually embedded in conventional programming, exploiting scripting APIs and graph query languages, which make it costly to develop and update as application needs evolve.
Declarative approaches to reasoning hold out the promise of increased agility for applications to cope with frequent change. Notation 3 (N3) is a declarative assertion and logic language [1] that extends the RDF data model with formulae, variables, logical implication, functional predicates and a lightweight notation. N3 is based upon traditional logic, which provides mathematical proof for deductive entailments for knowledge that is certain, precise and consistent.
Unfortunately, knowledge is rarely perfect, but is nonetheless amenable to reasoning using guidelines for effective arguments. This paper introduces the Plausible Knowledge Notation (PKN) as an alternative to N3 that is based upon defeasible reasoning as a means to extend knowledge graphs to cover imperfect everyday knowledge that is typically uncertain, imprecise, incomplete and inconsistent.
_"PKN is to N3 as defeasible reasoning is to deductive logic"_
Defeasible reasoning creates a presumption in favour of the conclusions, which may need to be withdrawn in the light of new information. Reasoning develops arguments in support of, or counter to, some supposition, building upon the facts in the knowledge graph or the conclusions of previous arguments.
As an example, consider the statement: _if it is raining then it is cloudy_. This is generally true, but you can also infer that it is somewhat likely to be raining if it is cloudy. This is plausible based upon your rough knowledge of weather patterns. In place of logical proof, we have multiple lines of argument for and against the premise in question just like in courtrooms and everyday reasoning.
Figure 1 shows how properties and relations involving a class may be likely to apply to a sub-class as a specialization of the parent class. Likewise, properties and relations holding for a sub-class may be likely to apply to the parent class as a generalization. The likelihood of such inferences is influenced by the available metadata. Inferences can also be based on implication rules, and analogies between concepts with matching structural relationships. PKN [2] further supports imprecise concepts:
* fuzzy terms, e.g., cold, warm and hot, which form a scalar range with overlapping meanings.
* fuzzy modifiers, e.g., very old, where such terms are relative to the context they apply to.
* fuzzy quantifiers, e.g., few and many, for queries akin to SPARQL.
Figure 1: Illustration of how plausible inferences for properties and relations can act as generalizations or specializations of existing knowledge.
PKN represents an evolution from graph databases to cognitive databases that can more flexibly support reasoning over everyday knowledge. For a web-based demonstrator, see [3].
### Relation to Previous Work
The Stanford Encyclopedia of Philosophy entry on argument and argumentation [4] lists five types of arguments: deduction, induction, abduction, analogy and fallacies. Argumentation can be adversarial where one person tries to beat down another, or cooperative where people collaborate on seeking a better joint understanding by exploring arguments for and against a given supposition. The latter may further choose to focus on developing a consensus view, with the risk that argumentation may result in group polarization when people's views become further entrenched.
Studies of argumentation have been made by a long line of philosophers dating back to Ancient Greece, e.g., Carneades and Aristotle. More recently, logicians such as Frege, Hilbert and Russell were primarily interested in mathematical reasoning and argumentation. Stephen Toulmin subsequently criticized the presumption that arguments should be formulated in purely formal deductive terms [5]. Douglas Walton extended tools from formal logic to cover a wider range of arguments [6]. Ulrike Hahn, Mike Oaksford and others applied Bayesian techniques to reasoning and argumentation [7], whilst Alan Collins applied a more intuitive approach to plausible reasoning [8].
Formal approaches to argumentation such as ASPIC+ [9] build arguments from axioms and premises as well as strict and defeasible rules. Strict rules logically entail their conclusions, whilst defeasible rules create a presumption in favor of their conclusions, which may need to be withdrawn in the light of new information.
Arguments in support of, or counter to, some supposition, build upon the facts in the knowledge graph or the conclusions of previous arguments. Preferences between arguments are derived from preferences between rules with additional considerations in respect to consistency. Counter arguments can be classified into three groups. An argument can:
* _undermine_ another argument when the conclusions of the former contradict premises of the latter.
* _undercut_ another argument by casting doubt on the link between the premises and conclusions of the latter argument.
* _rebut_ another argument when their respective conclusions can be shown to be contradictory.
AIF [10] is an ontology intended to serve as the basis for an interlingua between different argumentation formats. It covers information (such as propositions and sentences) and schemes (general patterns of reasoning). The latter can be used to model lines of reasoning as argument graphs that reference information as justification. The ontology provides constraints on valid argument graphs, for example:
_Scheme for Argument from Expert Opinion:_
_premises_: E asserts that A is true (false), E is an expert in domain D containing A; _conclusion_: A is true (false); _presumptions_: E is a credible expert, A is based on evidence; _exceptions_: E is not reliable, A is not consistent with what other experts assert.
Conflict schemes model how one argument conflicts with another, e.g., if an expert is deemed unreliable, then we cannot rely on that expert's opinions. Preference schemes define preferences between one argument and another, e.g., that expert opinions are preferred over popular opinions. The AIF Core ontology is available in a number of standard ontology formats (RDF/XML, OWL/XML, Manchester OWL Syntax).
PKN defines a simple notation and model for imperfect knowledge. Arguments for and against a supposition are constructed as chains of plausible inferences that are used to generate explanations. PKN draws upon Alan Collins' core theory of plausible reasoning [8] in respect to statement metadata corresponding to intuitions and gut feelings based upon prior experience. This is in contrast to Bayesian techniques that rely on the availability of rich statistics, which are unavailable in many everyday situations.
Recent work on large language models (LLMs), such as GPT-4, has shown impressive capabilities in respect to reasoning and explanations. However, there is a risk of hallucinations, where the system presents convincing yet imaginary results. Symbolic approaches like PKN are expected to play an important and continuing role in supporting semantic interoperability between systems and knowledge graphs. LLMs are trained on very large datasets, and in principle, could be exploited to generate symbolic models in a way that complements traditional approaches to knowledge engineering.
## 2 Plausible Knowledge Notation (PKN)
The Plausible Knowledge Notation is an intuitive lightweight syntax designed to support defeasible reasoning. PKN documents use data types restricted to numbers (as in JSON) and names with optional prefixes.
### PKN Statements
PKN supports several kinds of statements: properties, relations, implications and analogies. These optionally include a scope and a set of parameters as metadata. The scope is one or more names that indicate the context in which the statement applies, e.g., ducks are similar to geese in that they are birds with relatively long necks when compared to other bird species. Each parameter consists of a name and a value. Parameters represent prior knowledge as an informal qualitative gut feeling based upon prior experience. Predefined parameters include:
**certainty** - the confidence in the associated statement being true.
**strength** - the confidence in the consequents being true for an implication statement, i.e., the likelihood of the consequents holding if the antecedents hold.
**inverse** - the confidence in the antecedents being true when using an implication statement in reverse, i.e., the likelihood of the antecedents holding if the consequents hold.
**typicality** - the likelihood that a given instance of a class is typical for that class, e.g., that a Robin is a typical song bird.
**similarity** - the extent to which one thing is similar to another, e.g., the extent that they have some of the same properties.
**dominance** - the relative importance of an instance of a class as compared to other instances. For a country, for instance, this could relate to the size of its population or the size of its economy.
**multiplicity** - the number of items in a given range, e.g., how many different kinds of flowers grow in England, remembering that parameters are qualitative not quantitative.
This paper is too short to provide detailed information, so here are a few examples of PKN statements, starting with properties:
flowers of Netherlands includes daffodils, tulips (certainty high)
Here "flowers" is the descriptor, "Netherlands" is the argument, "includes" is the operator, and "daffodils, tulips" is the referent. In other words, daffodils and tulips are amongst the flowers found in the Netherlands. The metadata indicates that this statement has a high certainty. Next here are two examples of relation statements:
Belgium similar-to Netherlands for latitude
Paul close:friend-of John
Next here is an implication statement with a locally scoped variable:
weather of ?place includes rainy implies weather of ?place includes cloudy (strength high, inverse low)
This example has a single antecedent and a single consequent. Note the use of "?place" as a variable, and metadata for the confidence in using the statement for forward and backward inferences. Next is a couple of examples of analogy statements:
leaf:tree::petal:flower
dog:puppy::cat:?
Next, here are some examples of queries, which are akin to SPARQL:
which ?x where ?x is-a person and age of ?x is very:old
count ?x where age of ?x greater-than 20 from ?x is-a person
few ?x where color of ?x includes yellow from ?x kind-of rose
The first query lists the people in the PKN graph who are considered to be very old. The second query counts the number of people older than 20. The third query checks whether there are few yellow roses in the PKN graph.
PKN allows statements to embed sub-graphs for statements about statements, e.g.
Mary believes {{John says {John loves Joan}} is-a lie}
which models "Mary thinks John is lying when he says he loves Joan".
### Fuzzy Knowledge
Plausible reasoning subsumes fuzzy logic as expounded by Lotfi Zadeh in his 1965 paper on fuzzy logic, see [11]. Fuzzy logic includes four parts: fuzzification, fuzzy rules, fuzzy inference and defuzzification.
Fuzzification maps a numerical value, e.g., a temperature reading, into a fuzzy set, where a given temperature could be modelled as 0% cold, 20% warm and 80% hot. This involves transfer functions for each term, and may use a linear ramp or some kind of smooth function for the upper and lower part of the term's range.
Fuzzy rules relate terms from different ranges, e.g., if it is hot, set the fan speed to fast, if it is warm, set the fan speed to slow. The rules can be applied to determine the desired fan speed as a fuzzy set, e.g., 0% stop, 20% slow and 80% fast. Defuzzification maps this back to a numeric value.
Fuzzy logic works with fuzzy sets in a way that mimics Boolean logic in respect to the values associated with the terms in the fuzzy sets. Logical AND is mapped to selecting the minimum value, logical OR is mapped to selecting the maximum value, and logical NOT to one minus the value, assuming values are between zero and one.
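As a small illustration of this pipeline (fuzzification, rule evaluation, and a simple weighted-average defuzzification), the sketch below uses invented membership breakpoints and fan speeds; it is not part of the PKN specification.

```python
def fuzzify(temp_c: float) -> dict:
    """Map a temperature to memberships for cold/warm/hot using linear ramps."""
    cold = max(0.0, min(1.0, (20.0 - temp_c) / 10.0))   # 1 below 10 °C, 0 above 20 °C
    hot = max(0.0, min(1.0, (temp_c - 25.0) / 10.0))    # 0 below 25 °C, 1 above 35 °C
    warm = max(0.0, 1.0 - cold - hot)
    return {"cold": cold, "warm": warm, "hot": hot}

def fan_speed(temp_c: float) -> float:
    m = fuzzify(temp_c)
    # rules: cold -> stop, warm -> slow, hot -> fast
    strengths = {"stop": m["cold"], "slow": m["warm"], "fast": m["hot"]}
    rep_speed = {"stop": 0.0, "slow": 400.0, "fast": 1200.0}   # representative rpm values
    total = sum(strengths.values()) or 1.0
    return sum(strengths[k] * rep_speed[k] for k in strengths) / total  # defuzzified rpm

print(fuzzify(33.0))    # roughly 0% cold, 20% warm, 80% hot
print(fan_speed(33.0))  # mostly "fast" with some "slow"
```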
Plausible reasoning expands on fuzzy logic to support a much broader range of inferences, including context dependent concepts, and the means to express fuzzy modifiers and fuzzy quantifiers.
Here is an example of a scalar range along with the definition of the constituent terms:
range of age is infant, child, adult for person
age of infant is 0, 4 for person
age of child is 5, 17 for person
age of adult is 18, age-at-death for person
The range property lists the terms used for different categories. The age property for the terms then specifies the numerical range. Additional properties can be used to define the transfer function.
PKN allows terms to be combined with one or more fuzzy modifiers, e.g., "very:old" where very acts like an adjective when applied to a noun. The meaning of modifiers can be expressed using PKN statements for relations and implications, together with scopes for context sensitivity. In respect to old, "very" could either be defined by reference to a term such as "geriatric" as part of a range for "age", or with respect to the numerical value, e.g., greater than 75 years old.
Fuzzy quantifiers have an imprecise meaning, e.g., include few, many and most. Their meaning can be defined in terms of the length of the list of query variable bindings that satisfy the conditions. _few_ signifies a small number, _many_ signifies a large number, and _most_ signifies that the number of bindings for the _where_ clause is a large proportion of the number of bindings for the _from_ clause.
### PKN and RDF
The Resource Description Framework (RDF), see [12], defines a data model for labelled directed graphs, along with exchange formats such as Turtle, query expressions with SPARQL, and schemas with RDF-S, OWL and SHACL. RDF identifiers are either globally scoped (IRIs) or locally scoped (blank nodes). RDF literals include numbers, booleans, dates and strings. String literals can be tagged with a language code or a data type IRI.
The semantics of RDF graphs is based upon Description Logics, see [13] and [14]. RDF assumes that everything that is not known to be true should be treated as unknown. This can be contrasted with closed contexts where the absence of some statement implies that it is not true.
Description Logics are based upon deductive proof, whereas, PKN is based upon defeasible reasoning which involves presumptions in favor of plausible inferences, and estimating the degree to which the conclusions hold true. As such, when PKN graphs are translated into RDF, defeasible semantics are implicit and dependent on how the resulting graphs are interpreted. Existing tools such as SPARQL don't support defeasible reasoning.
Consider PKN property statements. The descriptor, argument, operator and referent, along with any statement metadata can be mapped to a set of RDF triples where the subject of the triples is a generated blank node corresponding to the property statement. Comma separated lists for referents and scopes can be mapped to RDF collections.
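A possible translation of the earlier property statement "flowers of Netherlands includes daffodils, tulips (certainty high)" along these lines is sketched below with rdflib; the vocabulary IRIs are invented for illustration and are not part of the PKN specification.

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.collection import Collection

PKN = Namespace("http://example.org/pkn#")   # hypothetical vocabulary

g = Graph()
stmt = BNode()                               # one generated blank node per PKN statement
g.add((stmt, PKN.descriptor, Literal("flowers")))
g.add((stmt, PKN.argument, Literal("Netherlands")))
g.add((stmt, PKN.operator, Literal("includes")))

referents = BNode()                          # comma-separated referents become an RDF collection
Collection(g, referents, [Literal("daffodils"), Literal("tulips")])
g.add((stmt, PKN.referent, referents))
g.add((stmt, PKN.certainty, Literal("high")))

print(g.serialize(format="turtle"))
```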
PKN relations statements can be handled in a similar manner. It might be tempting to translate the relation's subject, relationship and object into a single RDF triple, but this won't work when the PKN relation is constrained to a scope, or is associated with statement metadata. Recent work on RDF 1.2 [15] should help.
PKN implication statements are more complex to handle as they involve a sequence of antecedents and a sequence of consequents, as well as locally scoped variables. One possible approach is to first generate a blank node for the statement, and use it as the subject for RDF collections for the variables, antecedents and consequents.
PKN analogy statements are simpler, although there is a need to be able to distinguish variables from named concepts, e.g. as in "dog:puppy::cat:?".
## 3 Plausible Reasoning and Argumentation
Following the work of Allan Collins, PKN uses qualitative metadata in place of detailed reasoning statistics, which are challenging to obtain. Heuristic algorithms are used to estimate the combined effects of different parameters on the estimated certainty of conclusions. Reasoning generally starts from the supposition in question and seeks evidence, working progressively back to established facts. Sadly, this paper is far too short to go into details and the interested reader should look at the PKN specification.
An open challenge is how to declaratively model strategies and tactics for reasoning rather than needing to hard code them as part of the reasoner's implementation. Further work is needed to clarify the requirements and to evaluate different ways to fulfil those requirements using an intuitively understandable syntax. The AIF ontology would be a useful source of inspiration.
## 4 Summary
This paper introduced PKN as a notation and model for defeasible reasoning with knowledge graphs that include knowledge that is uncertain, imprecise, incomplete and inconsistent. Deductive proof is replaced with plausible arguments for, and against, the supposition in question. This builds on thousands of years of study of effective arguments, and more recently work on argumentation theory. Further work is needed on an intuitive syntax for reasoning strategies and tactics.
Large Language Models have demonstrated impressive capabilities in respect to reasoning and explanations. This raises the question of the role of symbolic approaches such as RDF, N3 and PKN. Deep learning over large corpora has totally eclipsed traditional approaches to knowledge engineering in respect to scope and coverage. However, we are likely to continue to need symbolic approaches as the basis for databases which complement neural networks, just as humans use written records rather than relying on human memory.
|
2309.08769 | The Use of Multi-Scale Fiducial Markers To Aid Takeoff and Landing
Navigation by Rotorcraft | This paper quantifies the performance of visual SLAM that leverages
multi-scale fiducial markers (i.e., artificial landmarks that can be detected
at a wide range of distances) to show its potential for reliable takeoff and
landing navigation in rotorcraft. Prior work has shown that square markers with
a black-and-white pattern of grid cells can be used to improve the performance
of visual SLAM with color cameras. We extend this prior work to allow nested
marker layouts. We evaluate performance during semi-autonomous takeoff and
landing operations in a variety of environmental conditions by a DJI Matrice
300 RTK rotorcraft with two FLIR Blackfly color cameras, using RTK GNSS to
obtain ground truth pose estimates. Performance measures include absolute
trajectory error and the fraction of the number of estimated poses to the total
frame. We release all of our results -- our dataset and the code of the
implementation of the visual SLAM with fiducial markers -- to the public as
open-source. | Jongwon Lee, Su Yeon Choi, Timothy Bretl | 2023-09-15T21:22:51Z | http://arxiv.org/abs/2309.08769v3 | # The Use of Multi-Scale Fiducial Markers To Aid Takeoff and Landing Navigation by Rotorcraft
###### Abstract
This paper quantifies the performance of visual SLAM that leverages multi-scale fiducial markers (i.e., artificial landmarks that can be detected at a wide range of distances) to show its potential for reliable takeoff and landing navigation in rotorcraft. Prior work has shown that square markers with a black-and-white pattern of grid cells can be used to improve the performance of visual SLAM with color cameras. We extend this prior work to allow nested marker layouts. We evaluate performance during semi-autonomous takeoff and landing operations in a variety of environmental conditions by a DJI Matrice 300 RTK rotorcraft with two FLIR Blackfly color cameras, using RTK GNSS to obtain ground truth pose estimates. Performance measures include absolute trajectory error and the fraction of the number of estimated poses to the total frame. We release all of our results -- our dataset and the code of the implementation of the visual SLAM with fiducial markers -- to the public as open-source.
## 1 Introduction
Visual SLAM with fiducial markers, a variation of simultaneous localization and mapping (SLAM), utilizes easily detectable and identifiable artificial visual patterns called fiducial markers to aid in mapping and tracking. Several previous studies [1, 2, 3, 4] have shown that visual SLAM with fiducial markers offers improved performance compared to generic visual SLAM, which may enhance the reliability of navigation scenarios during takeoff and landing, adhering to visual flight rules.
Despite their advantages, the existing visual SLAM approaches with fiducial markers have potential limitations when applied to takeoff and landing navigation in rotorcraft. The most significant issue is that existing approaches assume the use of fiducial markers of the same size, restricting their detectable distance range. This constraint affects SLAM performance, especially during takeoff and landing scenarios, where the distance between markers on the ground and the camera on the rotorcraft varies significantly. Moreover, existing visual SLAM approaches with fiducial markers are primarily assessed only in indoor environments, where visibility conditions remain constant. It is crucial to investigate how such visual SLAM approaches perform under various outdoor visibility conditions, including different illumination levels and adverse weather, which are likely to be encountered during actual takeoff and landing navigation in rotorcraft.
In response to these limitations, our investigation focuses on two key contributions within the realm of visual SLAM with fiducial markers. Firstly, we introduce the utilization of multi-scale fiducial markers, derived from a set with flexible layouts [5], showcased in Fig.1. This approach enables detection across a wider range of distances, addressing the limitations highlighted in a prior work proposing the use of fiducial markers for rotorcraft navigation [6]. Secondly, we assess the performance of visual SLAM with multi-scale fiducial markers on a dataset collected outdoors with a rotorcraft. This dataset emulates semi-autonomous takeoff and landing operations performed by a DJI Matrice 300 RTK rotorcraft in various environmental conditions. The dataset includes image data captured by two FLIR Blackfly color cameras, with ground truth pose estimates obtained using RTK GNSS.
Section 2 delves into various related works, with a specific focus on introducing the concept of visual SLAM with fiducial markers. The subsequent section, Section 3, outlines the system we devised for collecting data in semi-autonomous takeoff and landing scenarios governed by visual flight rules. This section also covers details about the employed multi-scale fiducial marker on the vertiport, the flight scenario implemented, and the SLAM code utilized. The evaluations and discussions are presented comprehensively in Section 4 and 5, respectively, and the paper concludes with a summary and remarks in Section 6.
Both the code and dataset used in this paper are available online1.
## 2 Related Works
SLAM is a process through which a mobile robot constructs a map of its environment while simultaneously determining its own location within that map. A specific subset of SLAM utilizing image data from one or more cameras is known as visual SLAM. In visual SLAM, the process typically involves extracting features from the current image, associating them with features from previous images, and concurrently estimating the poses of landmarks (map) and the camera's trajectory.
The use of square markers featuring a black-and-white grid pattern, commonly known as fiducial markers, has gained widespread adoption in robotics applications. These markers serve as easily identifiable visual landmarks with a low probability of misdetection. While some works [1, 2, 3] solely rely on the detection results of these fiducial markers -- rather than utilizing feature points like corners, a commonly employed landmark information -- UcoSLAM [4], an approach based on a feature point based state-of-the-art visual SLAM approach [8], proposes the simultaneous use of marker detection results and feature points. This hybrid approach shows enhanced performance compared to solutions relying solely on either fiducial markers or feature points alone.
While UcoSLAM positions itself as a viable choice for visual SLAM with fiducial markers, it has a few limitations. Firstly, it is tied to a specific type of fiducial marker known as ArUco markers [9], precluding the use of Apriltag [5] with flexible layouts, which allows for the utilization of multi-scale fiducial markers. Secondly, UcoSLAM poses challenges when it comes to extending its functionality to incorporate other types of sensor measurements typically found on a mobile robot, such as IMU and GNSS. This limitation hinders its future potential extensions. In contrast, WOLF [10], an open-source modular SLAM framework, overcomes these constraints. It offers a visual SLAM implementation with Apriltag [5] and is easily extendable to various sensor configurations, providing the potential for diverse extensions in future development.
## 3 Experiments
### System for data collection
Fig. 2 illustrates the DJI Matrice 300 RTK rotorcraft utilized for our data collection. The rotorcraft is equipped with RTK GNSS capabilities, offering enhanced precision in measurements compared to standard GNSS. This capability is crucial for providing accurate ground truth data in the evaluation of SLAM. Two FLIR Blackfly color cameras are mounted at the bottom -- one BFS-PGE-50S5C-C with a resolution of 2448x2048 facing downward (primary) and the other BFS-PGE-122S6C-C with a resolution of 4096x3000 oriented 45\({}^{\circ}\) forward (secondary). This configuration is designed to capture image data during flight, with only a slight overlap between the cameras, ensuring a broad field of view for the easy detection of fiducial markers on the ground.
Figure 1: Multi-scale fiducial markers proposed for use. Non-nested layout with AprilTag Standard36h11 family (left) and nested layout with AprilTag Custom52h12 family (right) integrated into the touchdown and liftoff area (TLOF), adhering to FAA guidelines for vertiport design [7].
### Dataset
#### Multi-scale fiducial markers
We propose the utilization of two types of multi-scale fiducial markers, namely non-nested and nested layouts, depicted in Fig. 1. These layouts are based on the Standard36h11 and Custom52h12 Apriltag [5] families. The rationale behind incorporating fiducial markers at multiple scales is to extend the detectable distance range. For example, experimental findings presented by Krogius et al. [5] indicate that a fiducial marker with a unit side length can be consistently detected from distances ranging from 5 to 20 units. Moreover, other prior works [6, 11] underscore the restricted detectable distance range of single-scale fiducial markers. This emphasizes the need for employing multi-scale markers to extend the range, a capability not attainable with single-scale markers. Consequently, employing markers of various sizes enhances the robustness of visual SLAM systems, ensuring more reliable performance compared to using markers of a single size.
Returning to the specifics, the non-nested layout integrated into the touchdown and liftoff area (TLOF), following FAA guidelines for vertiport design [7], consists of twenty Standard36h11 Apriltag markers with three different scales (1:5:28). The nested layout integrated into the vertiport comprises three Custom52h12 Apriltag markers with three different scales (1:4:30). These markers are printed in a 1m\({}^{2}\) size to align with the control dimension of the DJI Matrice 300 RTK, the rotorcraft used for data collection.
Figure 2: Front and side views of the DJI Matrice 300 RTK rotorcraft equipped with a sensor system for data collection (top). The bottom of the rotorcraft hosts a sensor system comprising two cameras, one directed downward and the other positioned at a 45\({}^{\circ}\) forward angle (bottom).
#### Data collection under the scenario encompassing takeoff and landing of rotorcraft
We implement a trajectory encompassing both the takeoff and landing phases of the rotorcraft. Initially, the rotorcraft ascends vertically to an altitude of 5 meters above ground level (AGL). Subsequently, it traverses a horizontal distance of 40 meters at a speed of 1 m/s. After a pause, the rotorcraft returns to a location directly above the vertiport at a 5-meter altitude, followed by the landing phase. These maneuvers adhere to visual flight rules (VFR) approach/departure path requirements, maintaining an 8:1 ratio in horizontal and vertical units [7]. Throughout the entire flight, a human operator remotely controls the rotorcraft.
The dataset was collected at Champaign County R/C field (3616 W Bloomington Rd, Champaign, IL 61822, USA) as shown in Fig. 3, providing a suitable location for rotorcraft flights. Multi-scale fiducial markers, as described earlier, were positioned along the 400-foot-length runway. The rotorcraft followed the specified trajectory along the southbound runway during both takeoff and landing simulations. Data collection occurred on two distinct dates, November 30th, 2023, and December 2nd, 2023, capturing various times and weather conditions to encompass different visibility scenarios as shown in Fig. 4.
### SLAM implementation
We employed WOLF [10], an open-source modular SLAM framework that already incorporates a visual SLAM implementation with Apriltag [5]. Specifically, we set up a binocular visual SLAM system using the two types of Apriltag-based multi-scale fiducial markers we proposed earlier -- nested and non-nested layouts as shown in Fig. 1 -- with the two synchronized FLIR Blackfly color cameras. The intrinsic and extrinsic parameters of these cameras were identified using Kalibr [12], an open-source camera calibration tool.
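For readers who only want to reproduce the marker detections that feed such a SLAM front end, a standalone sketch with the pupil_apriltags bindings is shown below for the Standard36h11 family; the nested Custom52h12 layout would require a detector built for that custom family, and the intrinsics and file name here are placeholders for the values obtained from the Kalibr calibration.

```python
import cv2
from pupil_apriltags import Detector

# Placeholder intrinsics (fx, fy, cx, cy) and tag size in metres.
fx, fy, cx, cy = 1200.0, 1200.0, 1224.0, 1024.0
detector = Detector(families="tag36h11")

gray = cv2.imread("frame_000123.png", cv2.IMREAD_GRAYSCALE)   # hypothetical camera frame
detections = detector.detect(gray, estimate_tag_pose=True,
                             camera_params=(fx, fy, cx, cy), tag_size=1.0)
for det in detections:
    print(det.tag_id, det.pose_t.ravel())   # marker ID and its translation relative to the camera
```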
In what follows, we evaluate the two configurations of visual SLAM with fiducial markers provided by WOLF [10]. The first mode relies solely on marker detection results to construct visual landmarks (marker SLAM), while the second mode utilizes both marker and feature detection results for landmark construction (marker + feature SLAM). This aims to investigate which mode performs well in our rotorcraft takeoff and landing scenarios under visual flight rules in diverse conditions.
## 4 Results
Figure 3: Champaign County R/C field overlaid with a marker indicating takeoff and landing location, along with the depicted flight trajectory.
Tables 1 and 2 show the results for visual SLAM using non-nested and nested multi-scale fiducial markers, depicted in the left and right images of Fig. 1, respectively, under various conditions. The evaluation metrics include the absolute trajectory error (ATE; lower is better) and the fraction of the number of estimated poses to the total frame, which represents the percentage of time the navigation system is operational and usable by the aircraft (availability; higher is better). ATE is a standard performance-based requirement for navigation systems utilizing SLAM in the robotics community. The availability measurement aligns with the performance requirements outlined by ICAO for performance-based navigation.
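A generic sketch of the two measures (not the evaluation script used here) is given below: ATE as the RMSE after a rigid Kabsch/Umeyama-style alignment of the estimated positions to the RTK GNSS ground truth, and availability as a simple ratio of pose counts.

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) positions paired frame by frame; returns the ATE as an RMSE in metres."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # best-fit rotation (Kabsch)
    t = mu_g - R @ mu_e
    err = (est @ R.T + t) - gt                # aligned estimate minus ground truth
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

def availability(n_estimated: int, n_total: int) -> float:
    """Fraction of frames for which the SLAM system produced a pose estimate."""
    return n_estimated / n_total
```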
We also assessed the performance of ORB-SLAM3 [13], a state-of-the-art visual SLAM utilizing feature points, on our dataset. However, it consistently failed in all cases, and therefore, we have excluded the results from the tables.
Fig. 4: **Rotorcraft flights conducted for data collection under diverse weather conditions (left column), accompanied by examples of images captured from the primary camera (mid column) and the secondary camera (right column) during each flight mission.**
## 5 Discussion
One significant observation from the results presented in Section 4 is the failure of SLAM in all data collected under the lowest illumination condition (10-50 Lux). This failure is consistent across both types of multi-scale fiducial markers and whether using a marker SLAM or a marker + feature SLAM approach. The difficulty in such low-light environments is attributed to the reduced visibility of the fiducial markers, which makes detection unreliable.
In comparing marker SLAM and marker + feature SLAM, no significant differences are evident in terms of both ATE and availability. This finding contradicts the argument presented by UcoSLAM [4], which advocates for the enhanced performance of using both marker and feature point detection results.
| Date | State | Temp. | Wind | Illumination | Trial | Marker SLAM ATE (m) | Marker SLAM availability | Marker + Feature SLAM ATE (m) | Marker + Feature SLAM availability |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nov. 30, 2023 | sunny | 10 °C | 5.3 m/s NE | 6000 Lux (day) | 1 | 0.47 | 0.84 | 0.39 | 0.84 |
| Nov. 30, 2023 | sunny | 10 °C | 5.3 m/s NE | 6000 Lux (day) | 2 | 2.04 | 0.84 | 2.56 | 0.83 |
| Nov. 30, 2023 | sunny | 10 °C | 5.3 m/s NE | 6000 Lux (day) | 3 | 1.46 | 0.84 | 1.70 | 0.84 |
| Nov. 30, 2023 | drizzle | 10 °C | 6.5 m/s NE | 1200 Lux (day) | 1 | 2.19 | 0.84 | 2.58 | 0.84 |
| Nov. 30, 2023 | drizzle | 10 °C | 6.5 m/s NE | 1200 Lux (day) | 2 | 3.56 | 0.84 | 4.92 | 0.84 |
| Nov. 30, 2023 | drizzle | 10 °C | 6.5 m/s NE | 1200 Lux (day) | 3 | 1.86 | 0.85 | 1.59 | 0.84 |
| Nov. 30, 2023 | drizzle | 9 °C | 7.3 m/s N | 10-50 Lux (dusk) | 1 | - | - | - | - |
| Nov. 30, 2023 | drizzle | 9 °C | 7.3 m/s N | 10-50 Lux (dusk) | 2 | - | - | - | - |
| Dec. 2, 2023 | cloudy | 6 °C | 1.3 m/s S | 4000 Lux (day) | 1 | 2.00 | 0.84 | 1.82 | 0.84 |
| Dec. 2, 2023 | cloudy | 6 °C | 1.3 m/s S | 4000 Lux (day) | 2 | - | - | 4.95 | 0.84 |
| Dec. 2, 2023 | cloudy | 6 °C | 1.3 m/s S | 4000 Lux (day) | 3 | 0.33 | 0.84 | 0.54 | 0.84 |
| Dec. 2, 2023 | cloudy | 6 °C | 1.3 m/s S | 4000 Lux (day) | 4 | 0.49 | 0.89 | 1.61 | 0.83 |

Table 1: Results for visual SLAM with non-nested multi-scale fiducial marker. Performance measures include absolute trajectory error (ATE) and the fraction of the number of estimated poses to the total frame (availability).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Date} & \multicolumn{4}{c|}{Weather} & \multirow{2}{*}{Trial} & \multicolumn{2}{c|}{Marker SLAM} & \multicolumn{2}{c}{Marker + Feature SLAM} \\ \cline{2-3} \cline{5-10} & state & temp. & wind & illumination & ATE (m) & Availability & ATE (m) & Availability \\ \hline \multirow{4}{*}{Nov. 30, 2023} & \multirow{4}{*}{unny} & \multirow{4}{*}{10\({}^{\circ}\)C} & 5.3 m/s & 6000 Lux & 1 & 1.00 & 0.80 & 0.77 & 0.82 \\ & & & NE & (day) & 2 & 0.92 & 0.80 & 1.11 & 0.81 \\ & & & NE & & 3 & 0.69 & 0.80 & 0.78 & 0.81 \\ \cline{1-1} & & & 6.5 m/s & 1200 Lux & 1 & - & - & - & - \\ & & NE & & (day) & 2 & 0.90 & 0.81 & 0.96 & 0.83 \\ \cline{1-1} & & & 7.3 m/s & 10-50 Lux & 1 & - & - & - & - \\ & & & N & (dusk) & 2 & - & - & - & - \\ \hline \multirow{4}{*}{Dec. 2, 2023} & \multirow{4}{*}{cloudy} & \multirow{4}{*}{6\({}^{\circ}\)C} & 7.3 m/s & 10-50 Lux & 1 & - & - & - & - \\ & & & N & (dusk) & 2 & - & - & - & - \\ \cline{1-1} & & & & & 3 & - & - & - & - \\ \hline \hline \multirow{4}{*}{Dec. 2, 2023} & \multirow{4}{*}{cloudy} & \multirow{4}{*}{6\({}^{\circ}\)C} & 6.5 m/s & 1200 Lux & 1 & - & - & - & - \\ \cline{1-1} & & NE & (day) & 2 & 0.90 & 0.81 & 0.96 & 0.83 \\ \cline{1-1} & & & N & (dusk) & 2 & - & - & - & - \\ \cline{1-1} & & & N & (dusk) & 3 & - & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for visual SLAM with nested multi-scale fiducial marker. Performance measures include absolute trajectory error (ATE) and the fraction of the number of estimated poses to the total frame (availability).
The discrepancy may stem from our testing environment, in which the rotorcraft flew over a runway with limited texture, hindering the effectiveness of feature point detection. The similarity in availability measures between marker and marker + feature SLAM modes supports this hypothesis, suggesting that not only the marker SLAM mode but also the marker + feature SLAM mode struggles to locate the rotorcraft once the fiducial marker on the vertiport is out of the field of view. This occurs as the rotorcraft moves farther away during the flight mission, as illustrated in Fig. 3.
It is crucial to emphasize that RTK GNSS measurements, while used as ground truth, may not be as accurate as expected. Significantly, we observed a notable mismatch in the takeoff and landing positions, which were intended to coincide, when examining the plotted RTK GNSS measurements across various missions. Consequently, readers are advised to interpret ATEs, which are evaluated against RTK GNSS measurements as ground truth, as indicative of visual SLAM with fiducial markers providing pose estimates within a certain boundary relative to RTK GNSS measurements. This recommendation is made instead of drawing rigorous conclusions about the superiority of one mode over another or its performance in specific conditions.
## 6 Conclusion
This paper introduces the application of visual SLAM with multi-scale fiducial markers and assesses its performance using visual data captured by a pair of cameras in rotorcraft takeoff and landing scenarios across diverse weather conditions. Evaluation focuses on two metrics: absolute trajectory error and the fraction of estimated poses to the total frame.
We recognize the potential benefits of integrating additional measurements, such as inertial data and GNSS, to enhance SLAM accuracy and efficiency. Future work particularly involves incorporating inertial measurement unit (IMU) data to improve overall performance. Additionally, we acknowledge the opportunity to enhance accuracy by leveraging the known configurations of co-planar multi-scale fiducial markers through the adoption of a perspective-n-points (PnP) algorithm. Our next step includes integrating a PnP algorithm tailored to our multi-scale marker layouts and reporting its performance within the existing SLAM framework we use.
## Acknowledgments
This work is supported by Supernal, LLC.
|
2309.10126 | Charge and spin gaps of the ionic Hubbard model with density-dependent
hopping | We calculate the charge gap $\Delta E_C$ and the spin gap $\Delta E_S$ of the
ionic Hubbard chain including electron-hole symmetric density-dependent
hopping. The vanishing of $\Delta E_C$ ($\Delta E_S$) signals a quantum
critical point (QCP) in the charge (spin) sector. Between both critical points,
the system is a fully gapped spontaneously dimerized insulator (SDI). We focus
our study in this region. Including alternation in the hopping, it is possible
to perform an adiabatic Thouless pump of one charge per cycle, but with a
velocity limited by the size of the gaps. | Oscar A. Moreno Segura, Karen Hallberg, Armando A. Aligia | 2023-09-18T20:04:09Z | http://arxiv.org/abs/2309.10126v1 | # Charge and spin gaps of the ionic Hubbard model with density-dependent hopping
###### Abstract
We calculate the charge gap \(\Delta E_{C}\) and the spin gap \(\Delta E_{S}\) of the ionic Hubbard chain including electron-hole symmetric density-dependent hopping. The vanishing of \(\Delta E_{C}\) (\(\Delta E_{S}\)) signals a quantum critical point (QCP) in the charge (spin) sector. Between both critical points, the system is a fully gapped spontaneously dimerized insulator (SDI). We focus our study in this region. Including alternation in the hopping, it is possible to perform an adiabatic Thouless pump of one charge per cycle, but with a velocity limited by the size of the gaps.
## I Introduction
The ionic Hubbard model (IHM) consists of the usual Hubbard model with on-site Coulomb repulsion \(U\) supplemented by an alternating one-particle potential \(\Delta\). It has been used to study the neutral-to-ionic transition in organic charge-transfer salts [1; 2] and the ferroelectric transition [3]. More recent studies have established that the chain has three different thermodynamic phases, and different gaps, correlation functions and other properties have been studied [4; 5; 6; 7; 8; 9; 10; 11; 12]. Here we assume half filling. The unit cell consists of two sites with on-site energies \(\pm\Delta\). Chains with larger unit cells have also been studied [13; 14].
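For reference, a common way to write the IHM Hamiltonian without the density-dependent (correlated) hopping terms considered here is

$$H=-t\sum_{i,\sigma}\left(c^{\dagger}_{i\sigma}c_{i+1\sigma}+\mathrm{H.c.}\right)+U\sum_{i}n_{i\uparrow}n_{i\downarrow}+\Delta\sum_{i,\sigma}(-1)^{i}\,n_{i\sigma},$$

so that the two sites of the unit cell have on-site energies \(+\Delta\) and \(-\Delta\); the density-dependent hopping studied in this work modifies the first term so that the hopping amplitude depends on the occupations of the two sites involved.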
The model has three phases, the band insulating (BI), the Mott insulating (MI) and a narrow spontaneously dimerized insulating (SDI) phase in between. An intuitive understanding of the first two phases is provided by the zero-hopping limit, in which the occupancies of the different sites are like 2020... (BI phase) for \(\Delta>U/2\) and 1111... (MI phase) for \(\Delta<U/2\). For finite hopping the SDI phase appears in between, as first shown by bosonization [4], and described in more detail later using an approximate mapping to an SU(3) Heisenberg model [8; 9].
The phase diagram of the model has been constructed in Ref. [5] using the method of crossings of excited energy levels (MCEL) based on conformal field theory [15; 16; 17; 18; 19]. For this model (including also density-dependent hopping) the method also coincides with that of jumps of charge and spin Berry phases used in Ref. [20]. The basis of the MCEL is that in one dimension, the dominant correlations at large distances correspond to the smallest excitation energies. Thus, the crossings of excited levels in appropriate symmetry sectors correspond to phase transitions. The Lanczos method, using total wave vector, inversion symmetry [21] and time-reversal symmetry to separate the different symmetry sectors, has been used, limiting the maximum size to 16 sites. The results were obtained by extrapolating to the thermodynamic limit [5]. Open-shell boundary conditions (OSBC) were used, which correspond to periodic BC for a number of sites \(L\) that is a multiple of 4, and antiperiodic BC for even \(L\) not a multiple of 4.
For fixed \(U\), small \(\Delta\), and half-filling as assumed here, the system is in the MI phase with zero spin gap. Increasing \(\Delta\), at the point \(\Delta=\Delta_{s}\), a spin gap \(\Delta E_{S}\) opens signaling the transition to the SDI phase. The transition is of the Kosterlitz-Thouless type [4]. Although the spin gap is exponentially small near the transition, the MCEL allows us to identify it unambiguously and accurately from the crossing of the even singlet with lowest energy and the odd triplet of lowest energy (both states have higher energy than the ground state). At \(\Delta=\Delta_{s}\), also the spin Berry phase \(\gamma_{s}\)[5; 20] jumps from \(\pi\) to \(0\) mod(\(2\pi\)). Further increasing \(\Delta\), rather soon, at the point \(\Delta=\Delta_{c}\) a charge transition from the SDI to the BI phase takes place in which the charge reorders. At this point, there is a crossing of the two singlets of lowest energy with opposite parity under inversion. In the BI phase the ground state is the singlet even under inversion, while it is the odd singlet in the other two phases. All these states have wave vector 0 for \(\Delta\neq 0\). In turn, this crossing leads to a jump in the charge Berry phase \(\gamma_{c}\) from \(\pi\) to \(0\) mod(\(2\pi\)). As explained above, for \(\Delta=\Delta_{c}\) and using OSBC, the charge gap \(\Delta E_{C}\), defined as the absolute value of the difference in energy between the ground state and the first excited state at half filling (in other works called exciton gap [6] or internal gap [31]), vanishes at the charge transition.
Changes in \(\gamma_{c}\) are proportional to changes in the polarization. Actually, calculations of the charge Berry phase form the basis of the modern theory of polarization [22; 23; 24; 25; 26; 27; 28; 29; 30]. A jump in \(\pi\) in \(\gamma_{c}\) is consistent with a displacement of an electronic charge per unit cell in half a unit cell (to the next site) on average. This is the change of polarization that corresponds to the change in site occupancies from 1111.. to 2020.. The IHM in a ring has inversion symmetry with center at any site [21], and as a consequence
\(\gamma_{c}\) and \(\gamma_{s}\) can only be 0 or \(\pi\) mod(\(2\pi\)). In other words, they are \(Z_{2}\) topological numbers protected by inversion symmetry [30].
If a modulation of the hopping \(\delta\) is introduced, the inversion symmetry is lost, and \(\gamma_{c}\) can change continuously. This makes it possible to transfer one charge to the next unit cell in a Thouless pump cycle in the \((\Delta,\delta)\) plane (see Fig. 1). This can be understood as follows. Starting at a point \((\Delta_{1},0)\) with \(\Delta_{1}>\Delta_{c}\), \(\gamma_{c}=0\). Then introducing a finite \(\delta\), with the appropriate sign, \(\gamma_{c}\) increases continuously with increasing \(|\delta|\). Decreasing \(\Delta\) to a value \(\Delta_{2}<\Delta_{c}\) and returning \(\delta\) to zero, the point \((\Delta_{2},0)\) is reached where \(\gamma_{c}=\pi\). Continuing the cycle with the opposite sign of \(\delta\), \(\gamma_{c}\) continues to increase and reaches the value \(\gamma_{c}=2\pi\) at the end of the cycle at \((\Delta_{1},0)\). This corresponds to the displacement of one unit charge by one unit cell according to the modern theory of polarization. The values of the Berry phases in the cycle and time-dependent calculations of the charge transferred have been presented in Ref. [31]. Moreover, this pumping procedure has been realized experimentally recently [32], allowing the study of the effects of interactions in the field of quantized topological charge pumping in driven systems, which has attracted great interest in recent years [33; 34].
A problem with the pumping cycle mentioned above is that it usually crosses the MI segment between the points \((\Delta_{s},0)\) and \((-\Delta_{s},0)\) at which the spin gap vanishes. Since unavoidably this segment is traversed at a finite speed, spin excitations are created, leading to the loss of adiabatic quantized pumping [31; 32] (we note that introducing \(\delta\) in the MI phase, a spin gap \(\Delta E_{S}\) opens proportional to \(|\delta|^{2/3}\) for small \(\delta\)[31]). To avoid this problem, one might choose the crossing point \(\Delta_{2}\) inside the SDI phase, that is \(\Delta_{s}<\Delta_{2}<\Delta_{c}\) (as it is shown in Fig. 1), and then the system is fully gapped in the whole trajectory. However, at \(\Delta=\Delta_{2}\) both gaps \(\Delta E_{C}\) and \(\Delta E_{S}\) are small, and their magnitude is not known. Previous calculations of \(\Delta E_{S}\) were affected by strong finite-size effects in the SDI region and were limited to very large values of \(U\)[6]. On the other hand, it has been recently shown that the SDI phase is enlarged at small values of \(U\) if density dependent hopping is introduced [35]. A density-dependent hopping can be experimentally engineered by near-resonant Floquet modulation [36; 37; 38; 39; 40].
In this work, we calculate both gaps, \(\Delta E_{C}\) and \(\Delta E_{S}\), inside and near the SDI phase, and explore the optimum value of \(\Delta_{2}\) for which the smallest gap is maximum. We use the density-matrix renormalization group (DMRG) [54; 55; 56; 57; 58], as described in Section III. We find that to calculate \(\Delta E_{S}\), open BC are more convenient, while to calculate \(\Delta E_{C}\), a ring with OSBC leads to the optimum results, improving previous estimates and allowing us to calculate the charge gap within the SDI phase with unprecedented accuracy.
The paper is organized as follows. In Section II we briefly explain the model. In Section III, we describe the methods used to calculate the gaps. The results are presented in Section IV. Section V contains a summary and discussion.
## II Model
The model we study here is the IHM with density-dependent hopping (DDH). It is the version without alternation of the hopping (\(\delta=0\)) of the interacting Rice-Mele model [41] including DDH. Because of its relevance for quantized charge pumping, we describe the full Hamiltonian including also \(\delta\) below
\[H=\sum_{j\sigma}\left[-1+\delta\,(-1)^{j}\right]\left(c_{j\sigma}^{\dagger}c_{j+1\sigma}+\text{H.c.}\right)\] \[\times\left[t_{AA}(1-n_{j\bar{\sigma}})(1-n_{j+1\bar{\sigma}})+t_{BB}\,n_{j\bar{\sigma}}n_{j+1\bar{\sigma}}+t_{AB}(n_{j\bar{\sigma}}+n_{j+1\bar{\sigma}}-2n_{j\bar{\sigma}}n_{j+1\bar{\sigma}})\right]\] \[+\Delta\sum_{j\sigma}(-1)^{j}n_{j\sigma}+U\sum_{j}n_{j\uparrow}n_{j\downarrow}. \tag{1}\]
The first term is the DDH, which is alternating for \(\delta\neq 0\). The amplitudes \(t_{AA}\), \(t_{AB}\) and \(t_{BB}\) correspond to hopping of a particle with a given spin, when the total occupancy of both sites for particles with the opposite spin is 0, 1 and 2 respectively. In the following we assume the electron-hole symmetric case \(t_{BB}=t_{AA}\), which is the one implemented experimentally with cold atoms [36; 37; 38; 39; 40]. \(\Delta\) is the alternating on-site energy and \(U\) is the on-site Coulomb repulsion.
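To make the structure of the hopping term explicit, the following minimal Python sketch evaluates the coefficient multiplying \((c_{j\sigma}^{\dagger}c_{j+1\sigma}+\text{H.c.})\) in Eq. (1) for a given bond; the numerical values in the example are placeholders, not parameters used in our calculations.

```
def hopping_coefficient(j, n_opp_left, n_opp_right, t_AA, t_AB, t_BB, delta=0.0):
    """Coefficient of (c^dag_{j,s} c_{j+1,s} + H.c.) in Eq. (1): the alternating
    prefactor [-1 + delta*(-1)^j] times the density-dependent amplitude, which is
    t_AA, t_AB or t_BB for total opposite-spin occupation 0, 1 or 2 on the bond."""
    amplitude = {0: t_AA, 1: t_AB, 2: t_BB}[n_opp_left + n_opp_right]
    return (-1.0 + delta * (-1.0) ** j) * amplitude


# Example: bond j = 0 with one opposite-spin particle on the bond (placeholder values)
print(hopping_coefficient(j=0, n_opp_left=0, n_opp_right=1,
                          t_AA=0.5, t_AB=1.0, t_BB=0.5, delta=0.1))  # -> -0.9
```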
The model with \(\Delta=\delta=0\) has been derived and studied in two dimensions as an effective model for cuprate superconductors [42; 43; 44; 45]. In one dimension also superconductivity is favored for some parameters [46; 47; 48; 49; 50]. Our interest in DDH here is that for \(t_{AB}\) larger than the other two, the fully gapped SDI phase is favored [18; 20; 35; 51; 52]. This is important for fully adiabatic quantized charge pumping of one charge. So far, charge pumping has been studied
Figure 1: Schematic representation of the pump trajectory. Dashed, dotted, and solid lines indicate the BI, SDI and MI phases of the IHM (at \(\delta=0\)), respectively.
in the interacting Rice-Mele model in absence of DDH (\(t_{AA}=t_{AB}=t_{BB}\)) [31; 32; 34; 53].
## III Methods
To perform the energy level calculations, we have used the DMRG method with a code that relies on the ITensors library for Julia [59]. Conveniently setting \(S_{z}\) sectors, we have calculated ground and excited states with a fixed bond dimension of 900. The truncation error is, in the worst case, on the order of \(10^{-6}\) for periodic BC (PBC), and \(10^{-10}\) for open BC (OBC).
In general, it is convenient to use OBC rather than PBC, because the entanglement is lower in the former case, leading to more accurate results in less time. In turn, this makes it possible to reach larger systems. This is particularly important for the spin gap \(\Delta E_{S}\), because even in regions of parameters for which \(\Delta E_{S}=0\) in the thermodynamic limit, it is finite for finite systems, scaling as \(1/L\) for increasing system size \(L\)[15]. We have calculated the spin gap by extrapolating the results for different system sizes using a quadratic function in \(1/L\). The calculations were done for systems between \(L=40\) and \(L=100\), except in the case of \(U=10\), where we have used sizes up to \(L=64\).
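For illustration, the finite-size extrapolation step can be sketched in a few lines of Python; the gap values below are placeholders rather than our DMRG data.

```
import numpy as np

# Hypothetical finite-size spin gaps Delta E_S(L) (placeholder values, not DMRG output)
L_values = np.array([40, 52, 64, 76, 88, 100])
gaps = np.array([0.062, 0.051, 0.044, 0.040, 0.037, 0.035])

x = 1.0 / L_values
coeffs = np.polyfit(x, gaps, deg=2)      # quadratic fit in 1/L
gap_thermo = np.polyval(coeffs, 0.0)     # constant term = thermodynamic-limit estimate
print(f"Extrapolated spin gap for L -> infinity: {gap_thermo:.4f}")
```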
For the charge gap \(\Delta E_{C}\), which is the difference of energies between the first excited state and the ground state in the singlet sector, the situation is different. For OBC we find difficulties similar to those found before [6] for calculating the gap in the SDI phase and particularly near the transition to the BI phase, where it should vanish in the thermodynamic limit. The reason is the following. As is clear from the MCEL method mentioned in Section I, the ground state and the first excited state have opposite parity under inversion, with the even state being the one of lowest energy in the BI phase, and both states cross at the BI-SDI transition. For a chain with OBC and an integer number of unit cells, the inversion symmetry is lost, the crossing becomes an anticrossing, and extrapolation to the thermodynamic limit becomes problematic. Therefore, we change the method and use a ring with OSBC, as described below.
The Lanczos method used in the MCEL divides the Hilbert space into different symmetry sectors, but the method is limited to 16 sites at half filling [5; 18; 35]. Our method allows us to use larger system sizes, but we do not have access to the different symmetry sectors. In any case, by simply plotting the energies of the ground state and the first excited state as a function of \(\Delta\) in a ring, both energy levels and their crossing can be clearly identified. This is illustrated in Fig. 2 for a typical case. \(t_{AB}=1\) is chosen as the unit of energy. We find that by extrapolating the energies for \(L\) a multiple of 4 for a ring with PBC (which coincides with OSBC for the chosen \(L\)) between \(L=12\) and \(L=32\) using a quadratic function in \(1/L\), an accurate and reliable result for \(\Delta E_{C}\) in the thermodynamic limit is obtained. An example of the extrapolation is presented in Fig. 3.
Noting that the slopes of the gap are different on both sides of the transition, we find that the difference between the odd and even states can be well fitted by the following function with three parameters
\[E_{\rm odd}-E_{\rm even}=(\Delta-\Delta_{c})\left[A+\tanh\left(\frac{\Delta- \Delta_{c}}{B}\right)\right]. \tag{2}\]
Examples will be shown in the next Section.
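As an illustration of how Eq. (2) can be used to locate the charge transition, the following Python sketch fits the three parameters to synthetic level-splitting data generated around an assumed \(\Delta_{c}\); it is not the fitting code used for the results below.

```
import numpy as np
from scipy.optimize import curve_fit

def level_splitting(delta, delta_c, A, B):
    """Eq. (2): E_odd - E_even as a function of Delta with parameters Delta_c, A, B."""
    return (delta - delta_c) * (A + np.tanh((delta - delta_c) / B))

# Synthetic data around an assumed transition at Delta_c = 0.978 (illustration only)
rng = np.random.default_rng(0)
delta = np.linspace(0.95, 1.01, 13)
e_diff = level_splitting(delta, 0.978, 1.2, 0.01) + rng.normal(0.0, 1e-4, delta.size)

popt, _ = curve_fit(level_splitting, delta, e_diff, p0=[0.97, 1.0, 0.02])
print(f"Fitted Delta_c = {popt[0]:.3f}")
```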
Comparing with previous results using the MCEL in smaller systems [35], we have also found that using OSBC, the crossing between the first excited state in the sector with total spin projection \(S_{z}=0\) (corresponding to the even singlet [35]) and the lowest-energy state in the sector
Figure 3: Difference of energy between the even and odd states of lowest energy as a function of the inverse of the system size \(L\) for all \(L\) multiple of 4 in the range \(12\leq L\leq 32\) with PBC for several values of \(\Delta\). The transition is calculated to be at \(\Delta_{c}=0.978\).
Figure 2: (Color online) Ground state and first excited state as a function of \(\Delta\) for 32 sites and PBC.
with \(S_{z}=1\) (an odd triplet [35]) corresponds to the crossing at \(\Delta=\Delta_{s}\) that signals the opening of the spin gap \(\Delta E_{S}\), and the SDI-MI transition, as explained in Section I.
Therefore, our methods might also be used to improve the accuracy of phase diagrams calculated with the MCEL, extending the results to larger systems.
We have found empirically that, sufficiently far from \(\Delta_{s}\) and in the SDI phase, or in the BI phase near the SDI-BI transition at \(\Delta=\Delta_{c}\), the dependence on \(\Delta\) of the spin gap is well described by the expression
\[\Delta E_{S}=A_{s}{\rm exp}\left[B_{s}\left(\Delta-C_{s}\right)\right]. \tag{3}\]
## IV Results
In Fig. 4, we show the gaps for the model without DDH and two values of \(U\). The maximum difference between any excited state and the ground state in the SDI phase is denoted by \(\Delta E_{M}\). This value is obtained at the crossing between both studied gaps. The value of \(\Delta\) at this crossing is denoted as \(\Delta_{M}\). We take \(t_{AB}=1\) as the unit of energy.
From the figure, one can see that the spread of the SDI phase \(\Delta_{c}-\Delta_{s}\) and also \(\Delta E_{M}\) are larger for larger values of \(U\) than for moderate ones. The former fact is in agreement with calculations of the phase diagram using up to 16 sites [5], although \(\Delta_{c}-\Delta_{s}\) is a little bit smaller in our case. Our values should be more accurate since we have calculated \(\Delta_{c}\) and \(\Delta_{s}\) using up to 32 and 28 sites, respectively.
In Fig. 5 we analyze the effect of DDH, decreasing \(t_{AA}=t_{BB}\) to half the value of \(t_{AB}=1\), for two extreme values of \(U\), leaving the intermediate values of \(U\) for Fig. 6. For \(U=10\), the maximum value of the gap \(\Delta E_{M}\)_increases_ slightly. This effect is rather surprising, because one naively expects that reducing the average value of the hopping would decrease both \(\Delta E_{M}\) and the amplitude of the SDI phase. Therefore, the effect of introducing DDH overcomes the effect of reducing the average hopping regarding \(\Delta E_{M}\). Instead, the amplitude of the SDI phase \(\Delta_{c}-\Delta_{s}\) decreases slightly.
As discussed earlier [35], for small values of \(U\), the amplitude of the SDI phase increases strongly, since it continues to exist even for \(\Delta=0\). However, the magnitude of the maximum gap \(\Delta E_{M}\) is reduced by 25% when \(U\) is
Figure 4: (Color online) Charge gap (blue circles) and spin gap (red squares) as a function of \(\Delta\) for two values of \(U\) and \(t_{AA}=t_{BB}=t_{AB}=1\). Blue solid [red dashed] line is a fit using Eq. (2) [Eq. (3)]. Vertical lines in the top figure separate the different phases of the IHM.
reduced from 10 to 1 in units of \(t_{AB}\).
In order to look for the largest possible value of \(\Delta E_{M}\) in presence of DDH, we have calculated the gaps for intermediate values of \(U\). The result is shown in Fig. 6. While qualitatively, the results for \(U=3\), 4 and 5 are similar, the maximum gap \(\Delta E_{M}=0.0188\) is obtained for \(U=4\).
## V Summary and discussion
We have calculated the charge and spin gaps of the spontaneously dimerized insulating (SDI) phase of the ionic Hubbard model, including electron-hole symmetric density-dependent hopping. We have developed a new method using DMRG to calculate the charge gap, which presents advantages with respect to previously used ones, leading to substantially more accurate values. In addition, phase diagrams constructed by the method of crossing of energy levels might be calculated more accurately than using only Lanczos methods, if they are combined with DMRG (the former can be used to identify the symmetry sectors).
The results might be useful for present experiments with cold atoms in which quantized Thouless pumping of one charge is observed, when a pump cycle in the two-dimensional space \((\Delta,\delta)\) enclosing the point \((\Delta_{c},0)\) is performed in a realization of the interacting Rice-Mele model [Eq. (1)], where \(\Delta_{c}\) is the value of \(\Delta\) at the transition between the SDI and the band insulating (BI) phase. A fully adiabatic pump is possible if the Mott insulating (MI) phase is avoided. This phase lies on the segment between the points \((\Delta_{s},0)\) and \((-\Delta_{s},0)\), where \(\pm\Delta_{s}\) are the points of the MI-SDI phase transitions. For this purpose, the SDI phase should be traversed.
Fixing \(t_{AB}=1\) we find that the maximum gap inside the SDI phase is about 0.019. This is a rather small value, which by simple estimates seems to require a velocity about 10 times smaller than that used in available experiments [32] to guarantee adiabatic pumping in crossing the point \((\Delta_{M},0)\). However, introducing \(\delta\), the gap increases quickly (as \(|\delta|^{2/3}\) in the MI phase). A time-dependent calculation, possibly decreasing the velocity near \((\Delta_{M},0)\), would be useful to check this procedure.
The effect of density-dependent hopping, reducing \(t_{AA}=t_{BB}\) and keeping \(t_{AB}=1\) is moderate in increasing the gap, although it is important if the average hopping is kept at the same value. Its main effect is that for small \(U\), the extension of the fully gapped SDI phase is strongly increased.
###### Acknowledgements.
AAA (KH) acknowledges financial support provided by PICT 2017-2726 and PICT 2018-01546 (PICT 2018-01546) of the ANPCyT, Argentina. KH acknowledges support from ICTP through the Associates Programs.
|
2309.16543 | Jet suppression and azimuthal anisotropy at RHIC and LHC | Jets are multi-partonic systems that develop before interactions with the
quark-gluon plasma set in and lead to energy loss and modifications of their
substructure. Jet modification depends on the degree to which the medium can
resolve the internal jet structure that is dictated by the physics of coherence
governed by a critical angle $\theta_c$. Using resummed quenching weights that
incorporate the IOE framework for medium-induced radiation and embedding the
system into a realistic heavy-ion environment we compute the dependence of jet
suppression on the cone angle $R$ of the jet, both at RHIC and the LHC. At RHIC
kinematics we see a very mild cone angle dependence for the range of $R$
studied, similar to what was found at the LHC. We also present results for the
jet azimuthal anisotropy $v_2$ as a function of $R$. We observe that as
centrality is decreased, $v_2$ for moderate $R$ jets sequentially collapse
towards the result for small $R = 0.1$. The reason of this sequential grouping
is the evolution of $\theta_c$ with centrality due to its strong dependence on
the in-medium traversed length. For jets with $R > \theta_c$, traversing
shorter lengths within the medium will make a larger difference than for jets
with $R < \theta_c$, since the size of the resolved phase-space over which
quenching weights are resummed will be reduced. For this reason, $v_2(R)$ is
quite sensitive to the typical value of $\theta_c$ at a given centrality. | Yacine Mehtar-Tani, Daniel Pablos, Konrad Tywoniuk | 2023-09-28T15:54:57Z | http://arxiv.org/abs/2309.16543v1 | # Jet suppression and azimuthal anisotropy at RHIC and LHC
###### Abstract:
Jets are multi-partonic systems that develop before interactions with the quark-gluon plasma set in and lead to energy loss and modifications of their substructure. Jet modification depends on the degree to which the medium can resolve the internal jet structure that is dictated by the physics of coherence governed by a critical angle \(\theta_{c}\). Using resummed quenching weights that incorporate the IOE framework for medium-induced radiation and embedding the system into a realistic heavy-ion environment we compute the dependence of jet suppression on the cone angle \(R\) of the jet, both at RHIC and the LHC. At RHIC kinematics we see a very mild cone angle dependence for the range of \(R\) studied, similar to what was found at the LHC. We also present results for the jet azimuthal anisotropy \(v_{2}\) as a function of \(R\). We observe that as centrality is decreased, \(v_{2}\) for moderate \(R\) jets sequentially collapse towards the result for small \(R=0.1\). The reason of this sequential grouping is the evolution of \(\theta_{c}\) with centrality due to its strong dependence on the in-medium traversed length. For jets with \(R>\theta_{c}\), traversing shorter lengths within the medium will make a larger difference than for jets with \(R<\theta_{c}\), since the size of the resolved phase-space over which quenching weights are resummed will be reduced. For this reason, \(v_{2}(R)\) is quite sensitive to the typical value of \(\theta_{c}\) at a given centrality.
## 1 Introduction
Hard probes such as high-\(p_{T}\) identified hadrons or jets have been among the most prominent observables carrying the imprint of the hot and dense nuclear medium created in ultra-relativistic heavy-ion collisions. Jet yield suppression, commonly expressed in terms of the so-called nuclear modification factor \(R_{AA}(p_{T})\), represents the most paradigmatic example of the phenomenon of jet quenching, i.e. how energetic colored objects get modified due to their interaction with deconfined QCD matter. A second essential observable is the jet azimuthal anisotropy which, roughly speaking, is sensitive to path-length differences in jet suppression due to the relative orientation in the transverse plane of the jet direction \(\phi\) with respect to the event plane of the collision \(\Psi_{R}\). These observables have established a consistent picture of the quark-gluon plasma (QGP) as an opaque medium to hadronic probes and highlighted the role of the geometry of the heavy-ion collisions as an important factor that modulates the strength of the interaction.
From the point of view of perturbative QCD, there is a crucial difference between high-\(p_{T}\) hadrons and fully reconstructed jets. While the former involves the fragmentation of a leading fragment originating from the hard matrix element, jets are multi-scale probes which are sensitive to soft and collinear radiation within the permitted phase space as given by the jet total \(p_{T}\) and cone angle \(R\). The differences in radiation patterns imply that these two observables should be treated differently in a medium as well. In particular, the differences in the fragmentation process will directly affect how these observables become sensitive to the opaque nature of the QGP.
The theory of how highly energetic quarks or gluons interact with a deconfined background is well established for both elastic and inelastic processes.1 Focussing on the latter, in a sufficiently dense medium, where the size of the medium \(L\) becomes significantly bigger than the mean free path \(\lambda_{\rm mfp}\), i.e. \(L\gg\lambda_{\rm mfp}\), multiple scattering with the medium leads to the frequent emission of soft quanta that subsequently rapidly cascade further down to the medium temperature scale and spread out over large angles. This turns out to be an efficient mechanism for _energy-loss_, i.e. the transport of energy from the leading parton to soft modes at large angles [1].
Footnote 1: For a strongly coupled medium, the gauge-gravity duality offers insights about the drag forces suffered by the projectile.
The modifications of a leading fragment, which predominantly contributes to the high-\(p_{T}\) inclusive hadron spectrum, are closely related to how a single parton is affected by medium interactions. How to arrive at the modifications of a jet is a wholly different question. Being a multi-partonic object, a reconstructed jet should be sensitive to the energy-loss suffered by several of its constituents and could experience modifications to its internal structure by capturing genuine medium-induced bremsstrahlung or medium recoil within its cone angle. A new framework for calculating energy-loss of jet observables has recently been developed [2, 3, 4], see also [5] for calculations of jet substructure observables. In particular, one has identified the relevant scales where medium-modifications set in. Furthermore, the physics of QCD coherence also unambiguously identifies a minimal critical angle \(\theta_{c}\) which the medium is not able to resolve.
In a previous publication [4], we have calculated the \(R\)-dependent jet spectrum in heavy-ion collisions at LHC, obtaining excellent agreement with experimental data as a function of \(p_{T}\), the cone angle \(R\) and the centrality of the collisions. In these proceedings, we extend the analysis to the azimuthal anisotropy through the harmonic coefficient \(v_{2}(p_{T},R)\) as a function of the same
variables. The inherent sensitivity of \(v_{2}\) to path-length differences in the azimuthal plane makes it also an excellent measure of coherence physics.
## 2 Methodology for jets in heavy-ion collisions
In Figure 1 we depict the two resummation schemes involved in the calculation of jet observables in heavy-ion collisions. Initially, the hard parton emerging from the hard QCD matrix element undergoes a DGLAP evolution from large angles \(R_{0}\sim 1\) down to the cone angle \(R\). This is referred to as the \(\log 1/R\) resummation [6, 7, 8, 9]. The cross section for jet production in pp collisions therefore becomes
\[\sigma^{pp}(p_{T},R)=\sum_{k=q,g}f_{\mathrm{jet}/k}^{(n-1)}(R|p_{T},R_{0})\,\hat{\sigma}_{k}(p_{T},R_{0})\,, \tag{1}\]
where \(n\equiv n_{k}(p_{T},R_{0})\) is the power-index of the cross-section of the hard parton with flavor \(k\). This is calculated at leading order (LO) at the factorization scale \(Q_{\mathrm{fac}}^{2}\), such that \(\hat{\sigma}_{k}=f_{i/p}\otimes f_{j/p}\otimes\hat{\sigma}_{ij\to k(l)}\), and involves a convolution of parton distribution functions (PDFs) \(f_{i/p}(x,Q_{\mathrm{fac}}^{2})\) with the \(2\to 2\) QCD scattering cross section \(\hat{\sigma}_{ij\to kl}\). The moment of the fragmentation function of an initial hard parton with flavor \(k\) is \(f_{\mathrm{jet}/k}^{(n)}(R|p_{T},R_{0})=\int_{0}^{1}\mathrm{d}x\,x^{n}f_{ \mathrm{jet}/k}(R|x,R_{0})\), and receives both quark and gluon contributions, \(f_{\mathrm{jet}/k}^{(n)}=\sum_{i=q,g}f_{i/k}^{(n)}\). In heavy-ion (AA) collisions, the baseline spectrum, referred to as \(\tilde{\sigma}^{AA}(p_{T},R)\), is modified by replacing the proton PDFs by nuclear PDFs \(f_{i/p}(x,Q_{\mathrm{fac}}^{2})\to f_{i/A}(x,Q_{\mathrm{fac}}^{2})\).
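The power index of the steeply falling spectrum enters the calculation through the moment of the fragmentation function and, later, through the quenching factor. As a simple illustration, it can be extracted numerically as the local logarithmic slope of a tabulated spectrum; the toy power law used below is a placeholder, not the LO spectrum of our calculation.

```
import numpy as np

def power_index(pT, sigma):
    """Local power index n(pT) = -d ln(sigma) / d ln(pT) of a steeply falling spectrum."""
    return -np.gradient(np.log(sigma), np.log(pT))

# Toy spectrum ~ pT^-5 (placeholder, not a computed pp cross section)
pT = np.linspace(100.0, 500.0, 9)
sigma = pT ** -5.0
print(power_index(pT, sigma))  # approximately 5 for all pT
```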
The second stage of the evolution, see Fig. 1, reflects the resummation of energy loss due to medium-induced processes. The full cross section in AA collisions therefore reads,
\[\sigma^{AA}(p_{T},R)\simeq\sum_{i=q,g}Q_{i}(\nu|p_{T},R)\tilde{\sigma}_{i}^{ AA}(p_{T},R)\,, \tag{2}\]
where \(\tilde{\sigma}_{i}^{AA}\) follows from Eq. (1) with nuclear PDFs, as described above. Here, the _quenching factor_ of a jet \(Q_{i}(\nu|p_{T},R)\) accounts for the full energy loss of a jet including the partial recovery of "lost" energy within the jet cone. It depends on the Laplace variable \(\nu=n_{i}(p_{T})/p_{T}\), where \(n_{i}(p_{T})\) is the power of the steeply falling partonic
Figure 1: Depiction of the two resummation schemes needed to compute the jet spectrum in heavy-ion collisions.
spectrum, only through the initial conditions. The resummation of energy loss is calculated via a set of non-linear evolution equations [3], that read
\[\frac{\partial Q_{i}(p,\theta)}{\partial\ln\theta}=\int_{0}^{1}\mathrm{d}z\, \frac{\alpha_{s}(k_{\perp})}{2\pi}p_{ji}^{(k)}(z)\,\Theta_{\rm res}(z,\theta) \,\left[Q_{j}(zp,\theta)Q_{k}((1-z)p,\theta)-Q_{i}(p,\theta)\right]\,, \tag{3}\]
where \(k_{\perp}=z(1-z)p\theta\), \(p_{ji}^{(k)}(z)\) are the un-regularized Altarelli-Parisi splitting functions. The initial conditions are simply given by the partonic quenching factors for elastic and inelastic energy loss, i.e. \(Q_{i}(p,0)=Q_{{\rm rad},i}^{(0)}(\nu)Q_{{\rm el},i}^{(0)}(\nu)\), see [4] for further details.
The phase space constraint, encoded in \(\Theta_{\rm res}\), ensures that only sufficiently hard jet splittings, i.e. those that form with short formation times inside of the medium, contribute to the total energy loss of the jet and is given by \(\Theta_{\rm res}(z,\theta)=\theta(L-t_{\rm d})\theta(t_{\rm d}-t_{\rm f})\). The condition \(t_{\rm f}<t_{\rm d}\), where \(t_{\rm f}=2/[z(1-z)p_{T}\theta^{2}]\) and \(t_{\rm d}=[12/(\hat{q}\theta^{2})]^{1/3}\), restricts the early vacuum-like emissions to be harder than the medium scale. Finally, \(t_{\rm d}<L\), or \(\theta>\theta_{c}\equiv[12/(\hat{q}L^{3})]^{1/2}\), makes sure that the jet splittings have sufficient time to be resolved by medium interactions. This implies a minimal angle for the full resummation of jet energy loss, as depicted by the red region in Fig. 1. Moreover, in our semi-analytical treatment the medium parameters controlling elastic and inelastic energy loss, i.e. \(\hat{e}=\hat{q}/(4T)\) and \(\hat{q}\), are both sampled from a dynamically evolving hydrodynamical medium. For further details, see [4].
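The in-medium scales entering \(\Theta_{\rm res}\) are simple enough to evaluate directly. The following Python sketch checks whether a given splitting is resolved; the values of \(\hat{q}\) and \(L\) are purely illustrative numbers in natural units, not the hydrodynamically sampled medium parameters.

```
import numpy as np

def theta_c(qhat, L):
    """Critical angle theta_c = [12/(qhat L^3)]^(1/2)."""
    return np.sqrt(12.0 / (qhat * L ** 3))

def is_resolved(z, theta, pT, qhat, L):
    """Phase-space constraint Theta_res: t_f < t_d < L for a splitting (z, theta)."""
    t_f = 2.0 / (z * (1.0 - z) * pT * theta ** 2)      # formation time
    t_d = (12.0 / (qhat * theta ** 2)) ** (1.0 / 3.0)  # decoherence time
    return t_f < t_d < L

# Illustrative numbers in natural units: qhat ~ 0.3 GeV^3 (about 1.5 GeV^2/fm), L ~ 20 GeV^-1 (about 4 fm)
qhat, L = 0.3, 20.0
print(f"theta_c = {theta_c(qhat, L):.3f}")
print(is_resolved(z=0.3, theta=0.2, pT=200.0, qhat=qhat, L=L))  # True for these values
```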
## 3 Sensitivity to \(\theta_{c}\) through path-length dependence
The description of experimental data over a wide \(p_{T}\) and centrality range is remarkable [4].2 The predicted \(R\)-dependence agrees well with the most recent experimental data [11, 12, 13]. As an example, we compare in Fig. 2 our semi-analytical results with data from ATLAS and ALICE for central collisions heavy-ion collisions at \(\sqrt{s_{NN}}=5.02\) ATeV. Comparisons to data on reconstructed jets at RHIC energies are also in good agreement with recent measurements, and will be reported in a forthcoming publication.
Footnote 2: Note that after adjusting the two free parameters to mid-rapidity data [10], no further tuning of the parameters was permitted.
From a thorough analysis of the associated theoretical uncertainties we found that the details of the phase space for in-medium vacuum-like emissions played a major role in describing the correct
Figure 2: Calculations of the nuclear modification factor for reconstructed jets for \(R=0.2\), \(0.4\) and \(0.6\), see plots for details, compared to data from [12] (left) and [13] (right).
jet suppression factor from small to moderate cone angles \(R\leq 0.4\)[4]. Considering only gluon emission off a parton with color charge \(C_{R}\), the phase space integral over the resolved phase space,
\[\Omega_{\rm res}(p_{T},R)=\int_{0}^{R}\frac{{\rm d}\theta}{\theta}\int_{0}^{1}{\rm d}z\,\frac{\alpha_{s}(k_{\perp})}{2\pi}p_{gR}(z)\stackrel{\rm DLA}{\approx}\bar{\alpha}\left[\ln\frac{R}{\theta_{c}}\ln\frac{p_{T}}{\omega_{c}}+\frac{2}{3}\ln^{2}\frac{R}{\omega_{c}}\right]\,, \tag{4}\]
where \(\omega_{c}=\frac{1}{2}\hat{q}L^{2}\) is the critical energy and \(\bar{\alpha}=\alpha_{s}C_{R}/\pi\). This result presumes that \(R>\theta_{c}\) and \(p_{T}>\omega_{c}\). Clearly, the role of both the energy of the jet as well as its cone angle contribute at single-logarithmic level.
To shed further light on the role of the phase space, we have calculated the jet production cross section as a function of azimuthal angle, i.e.
\[\frac{{\rm d}N^{\rm jet}}{{\rm d}\phi\,{\rm d}p_{T}}=\frac{N^{\rm jet}}{2\pi }\left[1+2\sum_{n}v_{n}(p_{T})\cos(n(\phi-\Psi_{R}))\right]\,. \tag{5}\]
In particular, we focus on the second harmonic coefficient \(v_{2}(p_{T},R)\) which captures the path-length differences in- and out-of-plane due to the initial almond-shaped density distribution. The sensitivity to path-length becomes apparent if we approximate the flow coefficient by its linearized version, and write \(v_{2}\simeq[R_{AA}^{\rm in}-R_{AA}^{\rm out}]/[R_{AA}^{\rm in}+R_{AA}^{\rm out}]\), where \(R_{AA}^{\rm in}=R_{AA}(L)\) and \(R_{AA}^{\rm out}=R_{AA}(L+\Delta L)\). Due to the many ways the path length enters the jet spectrum in Eq. (2), the resulting behavior is a complicated interplay between a wide variety of factors such as modifications of the quark/gluon fractions, sensitivity to path-length in the resolved phase space, modulations of the recovered energy within the jet cone etc. Currently, it is hard to pinpoint one general effect that stands out across the widely varying experimental conditions for measuring the jets.
Our new calculation for these proceedings follows the same semi-analytic methodology as in the previous Section, except that we additionally bin the sampled jets in azimuthal angle to extract the \(v_{2}\) coefficient. Our calculations are in excellent agreement with experimental data for reconstructed \(R=0.2\) jets in a wide range of centralities [14], see Fig. 3. We plan to present our predictions for the fully \(R\)- and \(p_{T}\)-dependent \(v_{2}\) coefficient for jets for both RHIC and LHC kinematics in a forthcoming publication. We believe this novel observable to be promising to pin down the exact interplay between perturbative QCD evolution early in the medium, controlled by the evolution equations for the resummed quenching factors (3), and the non-perturbative physics governing the distribution of soft fragments around the jet axis, resulting in the effective energy-loss that drives both \(R_{AA}\) and \(v_{2}\) at high-\(p_{T}\).
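Schematically, the binning step amounts to estimating the Fourier coefficient in Eq. (5) from the sampled jet angles. The following Python sketch illustrates this on a toy sample with a built-in anisotropy; it is not our jet ensemble, and the value \(v_{2}=0.03\) is an arbitrary placeholder.

```
import numpy as np

def v2_estimate(phi, psi_R):
    """Second harmonic coefficient of Eq. (5), estimated as <cos(2(phi - Psi_R))>."""
    return np.mean(np.cos(2.0 * (np.asarray(phi) - psi_R)))

# Toy sample drawn from dN/dphi of Eq. (5) with v2 = 0.03 (placeholder anisotropy)
rng = np.random.default_rng(1)
psi_R, v2_true = 0.0, 0.03
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
weights = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi_R))
keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < weights  # accept-reject sampling
print(f"v2 estimate: {v2_estimate(phi[keep], psi_R):.3f}")        # close to 0.03
```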
## Acknowledgments
Y. M.-T.'s work has been supported by the U.S. Department of Energy under Contract No. DE-SC0012704. D.P. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 754496.
|
2309.10132 | Ontology-Based Feedback to Improve Runtime Control for Multi-Agent
Manufacturing Systems | Improving the overall equipment effectiveness (OEE) of machines on the shop
floor is crucial to ensure the productivity and efficiency of manufacturing
systems. To achieve the goal of increased OEE, there is a need to develop
flexible runtime control strategies for the system. Decentralized strategies,
such as multi-agent systems, have proven effective in improving system
flexibility. However, runtime multi-agent control of complex manufacturing
systems can be challenging as the agents require extensive communication and
computational efforts to coordinate agent activities. One way to improve
communication speed and cooperation capabilities between system agents is by
providing a common language between these agents to represent knowledge about
system behavior. The integration of ontology into multi-agent systems in
manufacturing provides agents with the capability to continuously update and
refine their knowledge in a global context. This paper contributes to the
design of an ontology for multi-agent systems in manufacturing, introducing an
extendable knowledge base and a methodology for continuously updating the
production data by agents during runtime. To demonstrate the effectiveness of
the proposed framework, a case study is conducted in a simulated environment,
which shows improvements in OEE during runtime. | Jonghan Lim, Leander Pfeiffer, Felix Ocker, Birgit Vogel-Heuser, Ilya Kovalenko | 2023-09-18T20:13:28Z | http://arxiv.org/abs/2309.10132v1 | # Ontology-Based Feedback to Improve Runtime Control for Multi-Agent Manufacturing Systems
###### Abstract
Improving the overall equipment effectiveness (OEE) of machines on the shop floor is crucial to ensure the productivity and efficiency of manufacturing systems. To achieve the goal of increased OEE, there is a need to develop flexible runtime control strategies for the system. Decentralized strategies, such as multi-agent systems, have proven effective in improving system flexibility. However, runtime multi-agent control of complex manufacturing systems can be challenging as the agents require extensive communication and computational efforts to coordinate agent activities. One way to improve communication speed and cooperation capabilities between system agents is by providing a common language between these agents to represent knowledge about system behavior. The integration of ontology into multi-agent systems in manufacturing provides agents with the capability to continuously update and refine their knowledge in a global context. This paper contributes to the design of an ontology for multi-agent systems in manufacturing, introducing an extendable knowledge base and a methodology for continuously updating the production data by agents during runtime. To demonstrate the effectiveness of the proposed framework, a case study is conducted in a simulated environment, which shows improvements in OEE during runtime.
## I Introduction
Manufacturing systems have become increasingly complex in recent years due to various factors, including globalization, the spread of new technology, the rising demand for customized products, and the growing concern about sustainability. As a result, manufacturers are facing the challenge of managing the performance of their manufacturing systems to ensure that they operate efficiently and effectively, which can be achieved by improving Overall Equipment Effectiveness (OEE) [1]. A key approach to improve OEE is to enable runtime control, managing the performance of the manufacturing system in real-time, including continuous monitoring and decision making [2].
Decentralized methods allow for greater flexibility and adaptability in complex manufacturing environments and enable quick responses to changes on the plant floor [3]. One such decentralized strategy is the use of a Multi-Agent System (MAS) to control various types of complex systems. MAS has been applied to various aspects of manufacturing systems, such as product design, production planning, and control [4]. These systems perform well in dynamic environments with increased uncertainty and complexity.
Despite the advantages of improved flexibility and adaptability, coordinating the activities of agents requires extensive communication and computational effort during runtime, as well as a shared understanding of the environment. To overcome these challenges, Semantic Web Technologies (SWT) have been proposed to support more efficient communication and knowledge sharing [5]. Ontologies, enabled by SWT, play a crucial role in facilitating communication and interoperability in MASs, particularly in the manufacturing domain [6]. SWT can provide improved communication and cooperation among agents by providing a common language and structure.
While online communication load and computational effort in MAS have been reduced by using SWT [5], there are still challenges when ensuring effective runtime control of the MAS in manufacturing. One challenge lies in the ability of the MAS to adapt to changes in the production system and its environment, capturing all the necessary data for the agents to accurately measure the OEE of each machine. _(C1)_. Another challenge involves the need for technical expertise for both customers and engineers to interact with agents in the MAS, as it requires a deep understanding of the system's architecture and communication protocols _(C2)_. There is also a challenge of continuously updating agents relevant to previous and current states of the resources/parts/processes in the system _(C3)_.
The main contribution of this work is the design of the ontology, i.e. Knowledge Base (KB), for MAS in manufacturing to enable agents to access and update the KB during runtime and provide continuous decision support based on a global context. The contributions of this paper over the previous work include: (1) an extendable KB that effectively accumulates the MAS's state and history of production to accurately measure the OEE, (2) an ontology-based interface, enabling customers and engineers to facilitate communication with the MAS during runtime, and (3) the implementation of a mechanism where agents within the manufacturing system continuously update the KB with their current state and the latest history of their actions.
The rest of the manuscript is organized as follows. Section II provides background regarding ontologies and MASs in manufacturing. Section III describes an ontology-based multi-agent framework to improve runtime control of manufacturing systems. Section IV explains the implementation of the proposed framework Section V presents a case study to demonstrate the proposed framework. Section VI summarizes the paper and discusses future work.
## II Background
Ontologies and multi-agent systems have been implemented in manufacturing systems to enable more flexible, adaptable, and efficient production systems.
### _Ontologies in Manufacturing_
An ontology is understood as an "explicit specification of a conceptualization" [7]. As such, ontologies can be used to represent knowledge about production systems. An ontology consists of the terminological component (TBox), i.e., the schema, and the assertional component (ABox), which includes the instance level. Ontologies can be utilized to efficiently locate, categorize, represent, and reuse knowledge that already exists inside various information resources [8].
MASON (MAnufacturing's Semantics ONtology) is one of the first examples of ontology in the manufacturing domain [9] to formalize and share data using the Web Ontology Language (OWL). Ontologies have also been applied in specific applications within the manufacturing field. For example, Dinar et al. [10] proposed an ontology for Design for Additive Manufacturing (DFAM) to model 3D printed components. In the steel manufacturing industry, Common Reference Ontology for Steelmaking (CROS) was proposed as a solution to the problem of semantic interoperability [11]. One study presented an OWL-based manufacturing resource capability ontology (MaRCO) [12], which aims to describe the capabilities of manufacturing resources in a common formal resource model. The project AutomationML ontology (AMLO) interlinks and integrates heterogeneous data in industrial systems design by providing a semantic representation of the AutomationML standard [13]. While these studies have developed ontologies that provide information about the combined capabilities of resources and integrate heterogeneous data, they do not focus on applications during runtime.
### _Multi-Agent Systems in Manufacturing_
A variety of multi-agent architectures have been developed to achieve system-level control of manufacturing systems [14, 15, 16]. These multi-agent architectures use several software agents to make high-level decisions for different manufacturing system components [3, 17]. The high-level decisions made by the agents influence the overall performance of the manufacturing system [18]. Therefore, the design of these software agents is crucial in understanding and enhancing the performance of the manufacturing system.
Recently proposed MAS architectures for manufacturing systems contain instances of product agents (PAs) and resource agents (RAs) [3, 19, 20]. A PA is an agent which makes decisions for a specific component in the production system. An RA is a high-level controller for a resource (e.g., robot, machine) on the shop floor. An example proposed in [3] presents a MAS architecture revolving around PAs that have the ability to understand their surroundings, plan their activities, and request necessary actions from the RAs. Similarly, Bi et al. [20] developed a model-based RA architecture that incorporates risk assessment and improves throughput in dynamic manufacturing environments. While these architectures provide data structures and requirements for coordinating the behavior of agents, they do not aim at utilizing historical knowledge to improve the runtime control of the manufacturing system.
### _Ontologies and Multi-Agent Systems in Manufacturing_
The introduction of ontologies in MASs allows agents to make decisions based on their shared understanding of the domain. Formalizing knowledge is key to enabling agents to dynamically adapt to system changes [15]. Thus, it has been suggested that the use of SWTs would improve the MAS of manufacturing systems [6].
ADAptive holonic COntrol aRchitecture (ADACOR) [21] is one of the early efforts for using an ontology and MAS in manufacturing systems. This project aims to develop an ontology for distributed manufacturing systems that includes components and procedures to assist in scheduling and monitoring. Vrba et al. [22] suggested using a central ontology in the form of a Resource Description Framework (RDF) database. A production plan agent provides each PA with a possible production plan and then single agents schedule a plan by collaborating with available resources. An ontology framework for automatic initialization of multi-agent production systems using SWT was proposed in prior work [5]. The ontology is effectively queried to produce an automaton for initializing the individual agents. As a result, the online communication load and the computational effort are reduced for PAs and RAs. However, in the proposed framework, runtime information from the agents is not fed back to the ontology, which would allow it to adapt and evolve over time as the environment changes.
In summary, several architectures have been proposed for using SWT and MAS for controlling manufacturing systems. However, none of these works aim to utilize an ontology for MAS in manufacturing to improve runtime control by feeding information back into the ontology. To the best of our knowledge, none of the existing works addresses the combination of utilizing an extendable KB to capture comprehensive historical information (_C1_), providing an interface for customers and engineers for agent communication during runtime via the KB (_C2_), and updating the MAS during runtime (_C3_) to provide decision support for the agents in a global context.
## III Framework for Runtime Control of Agents
In this section, we introduce an ontology-enhanced multi-agent framework for manufacturing that contains an extendable KB, an interface for engineers and customers, and runtime control through communication and coordination between agents and the ontology.
### _Design of the Framework_
The novel architecture presented in Figure 1 provides a general overview of an ontology-based multi-agent framework. An ontology is designed based on the architecture of
a MAS that includes both PAs and RAs using the Belief-Desire-Intention (BDI) paradigm. This ontology stores the knowledge about the production process, the resources, and the interactions. Section IV-B offers a detailed discussion on how the knowledge is stored within the ontology. Engineers input the initial knowledge of a manufacturing system in the spreadsheets and then export it into the ontology using the _Ontology Builder_ during the "offline" phase, depicted in Figure 1. Using the constructed ontology, agents in MAS communicate with the ontology through the Application Programming Interface (API) during runtime, i.e. the "online" phase. The agents are indicated as PAs and RAs in the box of MAS in Figure 1. Both customers and engineers also have the ability to retrieve and update the KB using the API as shown in Figure 1. Some of the essential functions provided by the API are introduced in Section III-C.
The BDI architecture includes _beliefs_, _desires_, and _intentions_, where _beliefs_ refer to the manufacturing environment, _desires_ define goals, and _intentions_ are plans derived from beliefs to fulfill desires [3]. Therefore, an ontology based on the BDI architecture can provide a structured and organized way to represent the knowledge required for both PAs and RAs to operate effectively in the MAS.
The framework introduces three essential capabilities, offering main benefits over existing approaches. First, the framework utilizes an ontology as a centralized KB to accumulate the production history of the MAS. This knowledge provides agents with holistic environment models, offering a comprehensive understanding of the system's current state and history (_C1_). The agents are able to make their respective decentralized decisions in a global context. The framework also allows engineers and customers to easily communicate with the agents in the MAS through the use of ontology (_C2_). They can retrieve and update the manufacturing environment without needing knowledge of the system architecture, thus providing flexibility and scalability. Additionally, the framework provides capabilities for communication and coordination between agents and the ontology during the runtime of the manufacturing process (_C3_). The agents can directly query the ontology for adjustments in response to global changes in resources, product features, or other factors.
### _The Basic Ontology_
The core concepts in the manufacturing system that are defined in the ontology include these classes: _feature_, _process_, _resource_, and _specification_. This is based on the Product Process Resource (PPR) model, in which _resources_ execute _processes_ on _products_ to realize _features_[23].
Fig. 1: Graphical Abstract of the Framework
Fig. 2: Overview of the Ontology
The _specification_ class refers to a detailed description of the desired product, including the specific features, requirements, and the expected deadline, as specified by the customer. The _process_ class contains information on the production processes. These include both the _processPlans_, which are sets of operations and procedures that transform inputs into finished products, including the use of resources, as well as the _processExecutions_, which represent the real-world execution of the respective process plans. The _resource_ class encompasses physical assets or capabilities used in the production process, such as robots and buffers. The _feature_ class refers to the various characteristics of a product that are defined in a given order and used to describe the desired product. The initial TBox of the ontology that includes this information is shown in Figure 2.
To ensure that the KB accurately represents not only the static MAS, but also the dynamically changing environment, further classes and properties have to be defined. To align with the BDI architecture and capture the PA objectives, we propose an objective function that considers a set of use-case-specific metrics, which is shown in Figure 3. This is represented in the ontology by the PA-specific class _objectiveFunction_. The link between the objective function and the PA is created by the _hasObjectiveFunction_ property. The _objectiveFunction_ then has several _coefficients_ linked to it by the _hasCoefficient_ property. The value of the coefficient is given by the data property _hasValue_ and the type of metric by the _coefficientFor_ datatype property. Their respective performance also needs to be evaluated to allow for an evaluation of different process plans. This is done by introducing a _performance_ class and the _hasPerformance_ property. The _hasPerformance_ property has two subclasses, _realPerformance_ and _expectedPerformance_, to further enable transparency when it comes to historic decision-making. The last metric for each product is the _deadline_. By utilizing the previously mentioned classes, we can accurately represent both the beliefs (via environment models and expected performances) and desires (via objective function and deadline) of the BDI architecture.
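For illustration, evaluating a PA's objective function then amounts to a weighted sum over the stored coefficients and the corresponding (expected or real) performance metrics. The Python sketch below shows this computation; the metric names, weights, and sign conventions are placeholders rather than values prescribed by the ontology.

```
def evaluate_objective(coefficients, performance):
    """Weighted objective for a PA: each hasCoefficient value (keyed by its
    coefficientFor metric) multiplies the matching performance metric."""
    return sum(coefficients.get(metric, 0.0) * value
               for metric, value in performance.items())

# Placeholder coefficients and an expectedPerformance entry (illustration only)
coefficients = {"duration": 0.5, "energyCost": 0.3, "quality": -0.2}
expected = {"duration": 12.0, "energyCost": 80.0, "quality": 0.95}
print(f"Objective score: {evaluate_objective(coefficients, expected):.2f}")
```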
### _Modeling the System during Runtime_
To support runtime control of the MAS utilizing historical data, runtime information is added to the KB. We developed a concept called _processExecution_, establishing its relationship with product, resource, and performance, to provide agents with information during runtime. A _processExecution_ represents a physical event in which a _processPlan_ gets executed.
An example of runtime data is shown in Figure 4. A PA adds each process execution to the ontology and displays its intention in a globally accessible format. To connect each process execution to different parts of the base ontology, a new set of object properties is introduced. The _hasProcessExecution_ property associates a product with an execution, while the _runsOnResource_ property indicates which resource is utilized. The _runsProcessPlan_ property links the process plan to the execution.
The _hasStatus_ property captures the different stages of the execution by assigning a literal value. Initially, the PA proposes the execution in the ontology and the status changes to _planned_. When the execution starts, the status changes to _running_ and then to _successful_ if completed successfully. If unexpected events occur and the execution does not finish as planned, the status changes to _errored_ with a _hasErrorMessage_ property. Other important datatype properties for the execution include _plannedStartTime_ and _plannedEndTime_, as well as _realStartTime_ and _realEndTime_. The actual performance of the execution may differ from the expected performance of the process, which is captured by the _realPerformance_ property. By utilizing this model, we can incorporate past, present, and future events into an ontology, which further contributes to the belief component of the BDI architecture. As a result, these events form the basis of intentions for the various agents within the MAS. The process execution model facilitates the integration and retrieval of a wide range of knowledge and capabilities during runtime. The following are some of the essential interactions implemented:
* **addPlannedExecutionData**: This query adds data on the product that will be processed, planned start and end times, and the resources that will be utilized.
Fig. 4: Example of Runtime Information Ontology
Fig. 3: Relationship among Products, Processes, Features, and Resources
* **updateExecutionData**: This query updates execution data such as status, real start times, and end times.
* **getProductStatus**: This query is used to retrieve product-related information such as the status, duration, and deadline.
* **getResourceHistory**: This query allows retrieval of the historical data of the resources utilized in the manufacturing processes.
* **changeResourcePerformance**: This query allows modification of the performance such as energy costs, emissions, and quality.
By utilizing queries, various interactions can be implemented to retrieve and modify information such as resource history and performance. The agents can use this information to improve runtime control and OEE of machines in the system.
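As a sketch of the update path, _addPlannedExecutionData_ can be expressed as a SPARQL INSERT DATA statement assembled by the agent. The Python helper below builds such a statement using the property names of Fig. 4, while the prefix IRI and the concrete identifiers are assumptions for illustration.

```
def add_planned_execution(execution_id, product, resource, plan, start, end):
    """Build an addPlannedExecutionData update (property names follow Fig. 4;
    the prefix IRI and identifiers are illustrative assumptions)."""
    return f"""
    PREFIX : <http://example.org/manufacturing-mas.owl#>
    INSERT DATA {{
        :{execution_id} a :processExecution ;
            :runsOnResource :{resource} ;
            :runsProcessPlan :{plan} ;
            :hasStatus "planned" ;
            :plannedStartTime "{start}" ;
            :plannedEndTime "{end}" .
        :{product} :hasProcessExecution :{execution_id} .
    }}"""

print(add_planned_execution("exec_001", "product_A", "M1", "P1",
                            "2023-01-01T08:00:00", "2023-01-01T08:20:00"))
```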
## IV Implementation
### _Dynamically Building the Knowledge Base_
The approach for dynamically building the KB is based on prior work [5]. The initial ontology is created using spreadsheets to reduce dependence on proprietary tools. The spreadsheets are converted into CSV files using a macro. Finally, the transformation of these files into a formalized ontology using OWL is implemented in Python using the package Owlready2 [24]. By leveraging this package, the ontology creation process was made more accessible.
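A minimal Python sketch of this transformation step, using Owlready2, is given below; the file names, IRI, and CSV layout are assumptions for illustration and not the actual spreadsheet structure used by the Ontology Builder.

```
import csv
from owlready2 import get_ontology, Thing

# Assumed ontology IRI; the real IRI is defined by the Ontology Builder configuration
onto = get_ontology("http://example.org/manufacturing-mas.owl")

with onto:
    class Resource(Thing): pass       # TBox classes declared programmatically
    class ProcessPlan(Thing): pass

# Assumed CSV layout: one resource per row with a "name" column
with open("resources.csv", newline="") as f:
    for row in csv.DictReader(f):
        Resource(row["name"])         # one ABox individual per row

onto.save(file="manufacturing-mas.owl", format="rdfxml")
```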
### _Agent-Ontology Communication during Runtime_
We developed an API to facilitate communication between the MAS and the ontology. This API allows interacting with the ontology, i.e., querying and updating it using HyperText Transfer Protocol (HTTP) requests. The PAs and RAs from the MAS request specific actions through this API, which then utilizes a REpresentational State Transfer (REST) interface to pass the corresponding query to the ontology. When a PA or RA from the MAS retrieves or updates information from the ontology, the agent sends a request to the REST API. The API then converts the request to SPARQL Protocol And RDF Query Language (SPARQL) queries, allowing agents to retrieve or manipulate data in the ontology. We utilize the Stardog [25] knowledge graph platform to run the SPARQL queries outlined in Section III-C. For instance, if an agent requests information about a product's status, the API will execute the appropriate SPARQL query. The response is provided to the agent in JavaScript Object Notation (JSON). The format is optimized for the agents, meaning that it is structured in a way that makes it easy for the agents to use the data. Moreover, customers and engineers can also interact with the ontology to retrieve or update information during runtime. Note that in the context of this framework, there is no communication delay between the agents and the ontology. A graphical representation of this communication between the agents and the ontology is provided in Figure 5.
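A simplified Python sketch of the API's query path is shown below; the endpoint URL follows the usual Stardog layout, authentication is omitted, and the property names are illustrative rather than the exact terms used by the _getProductStatus_ query.

```
import requests

# Assumed Stardog-style SPARQL endpoint for a database named "manufacturing"
SPARQL_ENDPOINT = "http://localhost:5820/manufacturing/query"

def get_product_status(product_iri):
    """getProductStatus: translate an agent request into a SPARQL SELECT and
    return the bindings as JSON (property names are illustrative)."""
    query = f"""
    PREFIX : <http://example.org/manufacturing-mas.owl#>
    SELECT ?status ?deadline WHERE {{
        <{product_iri}> :hasStatus ?status ;
                        :deadline ?deadline .
    }}"""
    resp = requests.post(
        SPARQL_ENDPOINT,
        data={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]
```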
### _Querying the Knowledge Base_
The ontology was designed to store information about processes, resources, relationships, and performance data. To register runtime information in the ontology based on this design, we use SPARQL to interact with it as described in Section IV-B. The overall process of querying some of the knowledge and capabilities for a PA and an RA is shown in Figure 6. The solid line represents the request flow from the agents to the ontology, while the dotted line represents the response flow from the ontology back to the agents. An INSERT operation allows new data to be added, thus extending the KB. For example, to register a new planned process execution, the data includes the properties of the execution such as its ID, part name, and planned start and end time. For UPDATE operations, INSERT and DELETE operations can be combined. When a process in the MAS is executed, we update the start time and status of the process. The SELECT operation can retrieve information during runtime. For example, an RA can use this operation to retrieve information about the resource's past activities. The SPARQL query illustrated in Listing 1 retrieves information associated with successful process executions run on a particular resource. The query selects distinct values for the execution ID, emissions, costs, quality, and start and end time. The question mark ("?") is used as a prefix to represent variables that will be assigned values during the execution of the query. It accomplishes this by searching for a resource identified by the <resource> URI and filtering for executions with a status of "successful". The query then looks for performance data including emissions, costs, and quality. This information can be used by an RA to make decisions about allocating resources to improve the manufacturing process.
Fig. 5: Communication between Agents and Ontology
Fig. 6: Querying the Knowledge Base
```
SELECT DISTINCT ?execution ?emissions ?costs
       ?quality ?startTime ?endTime
WHERE
{
  ?execution :runsOnResource <resource> ;
             :hasStatus "successful" ;
             :realPerformance [
               :emissions ?emissions ;
               :costs ?costs ;
               :quality ?quality
             ] ;
             :realStartTime ?startTime ;
             :realEndTime ?endTime .
} ORDER BY ?startTime
```
Listing 1: SPARQL Query for retrieving a resource's history.
## V Case Study
In this case study, we examine the proposed framework and demonstrate how the MAS utilizes the ontology to improve runtime control.
### _Case Study Setup and Workflow_
To evaluate the feasibility, a case study is set up in a small manufacturing system environment, which is composed of four identical machines (M1, M2, M3, and M4), two robots (R1 and R2), and three buffers (B1, B2, and B3). The system has 6 RAs representing R1, R2, M1, M2, M3, and M4. R1 transfers the part from B1 to B2. R2 transfers the part from B2 to a machine. When a manufacturing process P1 is completed, a part is transferred from a machine to B3 to exit the system. The performance of the machines is shown in Table I.
The objective of the PAs and RAs is to enhance the runtime control of the MAS in manufacturing by utilizing the knowledge in the ontology. Specifically, the goal of the PAs is to fulfill the product requirements based on the _objectiveFunction_ described in Section III-B. The goal of the RAs is to increase their OEE while limiting energy utilization. In this example, the RAs try to increase their OEE by ensuring that each resource has an uptime above 50% while limiting the total energy cost to 450 kWh. The RAs adjust each machine's performance by controlling its duration and energy cost. In this case study, we assume that increasing the energy cost by 5% of its previous value decreases the duration by one minute (and, conversely, decreasing it by 5% increases the duration by one minute). The layout of the small manufacturing system is depicted in Figure 7.
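Under this assumption, the adjustment rule can be expressed compactly. The sketch below is illustrative only, but it reproduces the adjustments used later in the scenario (M1: 20 to 18 minutes with energy rising from 100 to 110.25 kWh; M3: 15 to 16 minutes with energy falling from 120 to 114 kWh).
```python
# Hedged sketch of the case study's duration/energy trade-off rule:
# each one-minute change in duration changes the energy cost by 5% of its
# previous value, compounded per minute, as assumed in the text.
def adjust_performance(duration_min: float, energy_kwh: float,
                       minutes_delta: int) -> tuple[float, float]:
    """minutes_delta < 0 shortens the duration (energy rises by 5% per minute),
    minutes_delta > 0 lengthens it (energy drops by 5% per minute)."""
    factor = 1.05 if minutes_delta < 0 else 0.95
    new_energy = energy_kwh * factor ** abs(minutes_delta)
    return duration_min + minutes_delta, new_energy

if __name__ == "__main__":
    # Reproduces the adjustments from the scenario, using the Table I values.
    print(adjust_performance(20, 100, -2))  # M1 -> (18, 110.25)
    print(adjust_performance(15, 120, +1))  # M3 -> (16, 114.0)
```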
A new manufacturing process is initiated once a new part enters the system beginning at B1. As a part enters the system, product data are initialized in the ontology as shown in Figure 3. This data also contains information such as planned start and end time, provided in Figure 4. When R1 RA queries from the ontology that a process P1 is planned, R1 moves the part from B1 to B2. Once the part has been moved to B2, the PA updates the data for P1 by specifying the machine to be run on. It should be noted that the ontology only provides each PA with possible plans, and does not compute optimal plans. Therefore, the PA identifies the machine to be used by evaluating the _objectiveFunction_, as depicted in Figure 3. Then, the PA selects a machine that allows parts to exit the system as fast as possible, taking into account the time required for previously running parts inside the machines. When R2 RA detects from the ontology that a part is planned to be processed on a machine such as M2, R2 moves the part to M2. After the part is processed, the PA updates the ontology by setting the _hasStatus_ to "successful" and _realEndTime_ to the current time using the UPDATE Query shown in Figure 6. Finally, R2 picks up the part from M2 to B3 to exit the system.
### _Case Study Scenario_
In the case study, we created a scenario of how RAs use the ontology to adjust the performance levels of each machine. The RA of each machine queries the ontology to obtain historical data about the status, energy costs, quality, and process time of its machine. Specifically, the RAs query the ontology using the SPARQL query shown in Listing 1 to retrieve the performance of each completed process for their respective machines. The query provides the RAs with the uptime of each machine and its real performance relative to the expected performance.
Suppose that after an initial set of operations, the uptime of M1 is below the threshold of 50% (e.g., M1: 48%, M2: 60%, M3: 80%, M4: 70%) while the performance efficiency is best at M1 (e.g., M1: 95%, M2: 93%, M3: 80%, M4: 85%). To improve the uptime of M1, the M1 RA increases the energy cost and reduces the duration, while the M3 RA decreases the energy cost and increases the duration. For example, the RA of M1 reduces the duration of M1 from 20 to 18 minutes and increases the energy cost by 10.25% (from 100 to 110.25 kWh), while the M3 RA increases the duration from 15 to 16 minutes and decreases the energy cost by 5% (from 120 to 114 kWh), using the API function _changeResourcePerformance_. With these adjustments, the uptime of M1 exceeds 50%, which improves OEE, while parts continue to be processed on the highest-performing machine available, thereby maximizing productivity.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Machines & M1 & M2 & M3 & M4 \\ \hline Duration (minutes) & 20 & 18 & 15 & 17 \\ \hline Energy Costs (kWh) & 100 & 110 & 120 & 115 \\ \hline \end{tabular}
\end{table} TABLE I: Performance of the Machines
Fig. 7: Layout of a Small Manufacturing System
In this case study, we showcase how a small manufacturing system exhibits nominal behavior by utilizing ontology for monitoring the system's status in a global context. The agents involved in the system continuously monitor the ontology during runtime to make informed decisions, leading to improved runtime control of the manufacturing process.
## VI Summary and Outlook
In this paper, we presented an ontology-based approach for implementing MAS in manufacturing to enable agents and users to access and update the KB during runtime. The framework proposed for implementing an ontology for MAS in manufacturing has several advantages over existing approaches. First, the extendable KB accumulates the system's state and history of production data, enabling agents to make informed decisions based on this data to measure the OEE (_C1_). Second, providing an API enables customers and engineers to communicate with the MAS during runtime without requiring detailed technical knowledge of the system (_C2_). Lastly, the agents are continuously updating the KB with their current state and their latest actions, which lead to improvements of OEE (_C3_).
Future research will include evaluating this ontology-based framework by incorporating additional criteria that focus on improving system efficiency and effectiveness to further validate its benefits over existing approaches. Also, we plan to explore the application of this ontology-based framework to a real-world testbed, aiming to achieve improved OEE through ontology utilization during runtime with MAS. This testbed will enable us to prove the effectiveness of the framework in a practical manufacturing environment.
|
2309.03628 | OSMOSIS: Enabling Multi-Tenancy in Datacenter SmartNICs | Multi-tenancy is essential for unleashing SmartNIC's potential in
datacenters. Our systematic analysis in this work shows that existing on-path
SmartNICs have resource multiplexing limitations. For example, existing
solutions lack multi-tenancy capabilities such as performance isolation and QoS
provisioning for compute and IO resources. Compared to standard NIC data paths
with a well-defined set of offloaded functions, unpredictable execution times
of SmartNIC kernels make conventional approaches for multi-tenancy and QoS
insufficient. We fill this gap with OSMOSIS, a SmartNICs resource manager
co-design. OSMOSIS extends existing OS mechanisms to enable dynamic hardware
resource multiplexing of the on-path packet processing data plane. We integrate
OSMOSIS within an open-source RISC-V-based 400Gbit/s SmartNIC. Our performance
results demonstrate that OSMOSIS fully supports multi-tenancy and enables
broader adoption of SmartNICs in datacenters with low overhead. | Mikhail Khalilov, Marcin Chrapek, Siyuan Shen, Alessandro Vezzu, Thomas Benz, Salvatore Di Girolamo, Timo Schneider, Daniele De Sensi, Luca Benini, Torsten Hoefler | 2023-09-07T10:50:32Z | http://arxiv.org/abs/2309.03628v3 | # OSMOSIS: Enabling Multi-Tenancy in Datacenter SmartNICs
###### Abstract
Multi-tenancy is essential for unleashing SmartNIC's potential in datacenters. Our systematic analysis in this work shows that existing on-path SmartNICs have resource multiplexing limitations. For example, existing solutions lack multi-tenancy capabilities such as performance isolation and QoS provisioning for compute and IO resources. Compared to standard NIC data paths with a well-defined set of offloaded functions, unpredictable execution times of SmartNIC kernels make conventional approaches for multi-tenancy and QoS insufficient. We fill this gap with OSMOSIS, a SmartNICs resource manager co-design. OSMOSIS extends existing OS mechanisms to enable dynamic hardware resource multiplexing on top of the on-path packet processing data plane. We implement OSMOSIS within an open-source RISC-V-based 400Gbit/s SmartNIC. Our performance results demonstrate that OSMOSIS fully supports multi-tenancy and enables broader adoption of SmartNICs in datacenters with low overhead.
## 1 Introduction
Network data plane design has undergone two decades of exciting research, leading to the achievement of sub-microsecond packet processing host latency [8, 24, 26, 35, 82, 44, 45, 60, 72, 76, 82]. SmartNICs (sNICs) have further improved processing times by enabling direct in-network packet processing, thereby reducing data movement [42]. sNICs started a trend in datacenter networking acceleration [92, 47] similar to the GPU trend in high-performance computing [94].
sNICs enable running _kernels_ on programmable, energy-efficient cores tailored for packet processing and integrated within the host network interface card (NIC) System-on-Chip (SoC). These cores are attached directly (i.e., _on-path_) to the datacenter Ethernet or InfiniBand link [5, 54]. Such a design reduces the latency of some applications since the sNIC can process the packets in the network [58] and reply directly without moving the packets to/from the host OS networking stack [1, 31]. This enables the offload and acceleration of several workloads such as distributed learning gradients aggregation [94, 89], disaggregation and storage [62, 63, 49, 28], Key-Value Stores (KVS) [75, 100, 88], Remote Procedure Calls (RPCs) [75, 57, 98, 14, 97], network protocols and telemetry [99, 14, 16, 99, 85, 16, 82].
Network resources in a datacenter are multiplexed between tenants through a virtualization layer [12, 17, 51, 102, 66]. However, processing user code on sNICs brings a set of considerable resource management issues. As Figure 1 shows, NICs have three resources that must be multiplexed: compute, Direct Memory Access (DMA) bandwidth, and egress bandwidth. The traditional NIC data path only forwards packets to host memory and executes simple operations with a _predictable_ and _bounded_ complexity. Typically, the number of incoming bytes equals the number of outgoing bytes, and NICs do not run any elaborate processing on them. In contrast, sNICs can execute _unpredictably_ complex stateful offloads [74]. For example, Allreduce, which is heavily used in machine learning [9], operates on the payload and is compute-bound, while storage offloading predominantly accesses host memory and is DMA/IO bound. sNICs need to operate on _uncoordinated_, _non-deterministic_, and _concurrent_ data streams while meeting Service Level Objective (SLO) policies set by the administrator.
Achieving fair resource multiplexing for sNICs is challenging. sNICs combine characteristics of an accelerator, such as a GPU, and a traditional NIC. While this provides the aforementioned benefits, the resource management of neither is directly applicable due to the unique sNIC requirements (Section 3). Conventional RDMA NICs (rNICs) have bounded and predictable workloads (e.g., atomics, scatter-gather RDMA reads/writes) and often use link bandwidth allocation as a _"just enough"_ mechanism for resource isolation and Quality-of-Service (QoS) between tenants. Although rNICs exhibit bounded and foreseeable behavior, achieving fairness is a challenging endeavor [96] even in their context, which is simpler than that of sNICs. In contrast, accelerators fall entirely under the governance of the host OS, which oversees all active kernels [48, 50]. These accelerators neither generate nor receive events beyond instructions from accelerated applications, setting them apart from sNICs, which can execute arbitrary kernels independently of the host's involvement.
Figure 1: A predictable NIC data path versus the unpredictable sNIC kernel execution.
Furthermore, for sNICs to sustain the sub-nanosecond packet arrival intervals of a fully utilized 400Gbit/s link (Section 3, [27]), resource multiplexing must be fast. On-path sNICs have much stricter compute and buffering constraints than traditional NICs and accelerators due to the packet rate and the three multiplexed resources (compute, DMA, and egress). This issue is even more critical as network rates constantly increase and are expected to exceed one Terabit per second by 2025 [15, 23, 33, 91].
A common approach to effectively manage processing at high packet arrival rates involves implementing resource management in hardware [2, 4, 27]. This is usually accomplished through scheduling policies such as Weighted Round Robin (WRR), which divide link bandwidth among tenants [19, 20, 96]. However, because sNICs have varying application kernel requirements, incorporating WRR for compute resource allocation can lead to unfairness. For example, as we show in Section 3, if one application (e.g., Allreduce) is compute-bound and takes twice as much compute time as a non-compute-bound application (e.g., KVS), the former will be able to process twice as many bytes. Other recently proposed methods for compute isolation in sNICs are not optimal for all scenarios as they are either non-work conserving [29] or rely on the host CPU as a fallback path [57].
We tackle these issues by introducing OSMOSIS (Operating System Support for Streaming In-Network Processing) (Section 5). OSMOSIS is a lightweight sNIC management layer that supports performance-critical data-plane management in hardware and non-critical management tasks in a flexible software runtime. OSMOSIS is a fair, work-conserving sNIC resource manager that requires minimal hardware footprint and employs expressive yet simple Service Level Objective (SLO) semantics. In OSMOSIS, the sNIC is exposed to a tenant as Single-Root Input/Output Virtualization (SR-IOV) Virtual Function (VF). This allows the administrator to allocate proportionally more _compute processing units_, _egress bandwidth_, and _DMA bandwidth_ to VFs associated with high-priority tenants.
We implement (Section 6) and evaluate (Section 7) OSMOSIS on top of one of the available open-source on-path sNIC architectures, PsPIN [32, 18]. PsPIN is based on energy-efficient silicon-proven RISC-V cores. In our setup, PsPIN is the hardware backbone for packet processing using kernels written in C. Our performance evaluation focuses on typical datacenter workloads such as storage IO and in-network Allreduce, and shows that OSMOSIS provides comprehensive support for multi-tenancy without sacrificing performance.
In summary, we make the following contributions.
1. _sNIC multi-tenancy:_ We show typical multi-tenancy sNIC problems and define a set of requirements for high-performance sNICs. These requirements serve as a guideline for developing sNICs that can meet the needs of diverse workloads and tenant environments (Section 3).
2. _OSMOSIS:_ We introduce OSMOSIS, a lightweight sNIC resource manager based on fair and work-conserving scheduling policies. OSMOSIS is a minimal hardware footprint solution to the problem of fair and efficient resource sharing in multi-tenant sNICs with diverse application needs (Section 5).
3. _Evaluation:_ We implement OSMOSIS in an open-source on-path 400Gbit/s sNIC by extending it with schedulers and a control path prototype (Section 6). We use this implementation to verify and evaluate OSMOSIS. We demonstrate how it solves the defined sNIC problems and handles multi-tenant applications fairly with varying resource requirements while minimizing tail latency (Section 7).
## 2 Background and Related Work
From the system's perspective, we abstract out the sNIC as a packet processing accelerator between the network fabric and the host CPU, GPU, or FPGA.
Existing sNICs can be classified broadly into two categories: _off-path_ and _on-path_[57].
Off-path sNICs add an entire CPU complex to the network card, often running a full operating system (e.g., Linux). This design enables a management plane based on receive side scaling (RSS) to be conveniently implemented [61, 8, 76]. However, they often suffer from lower performance in terms of latency, bandwidth, and packet processing rates due to their system design, which closely resembles the CPU-centered host architecture (e.g., Broadcom Stingray and Nvidia BlueField data processing units (DPUs) both feature ARM SoCs with PCIe and DRAM).
On-path sNICs share packet input buffers with _processing units_ (PUs) tailored for packet processing (e.g., LiquidIO [59],
Netronome [68], PsPIN [18], Data Path Accelerator (DPA) introduced in Bluefield 3 DPU [69, 70]). On-path sNICs typically provide programming API for writing _kernels_ that process traffic on PUs, either on per-packet (PsPIN [18]) or per-message granularity (Bluefield-3 FlexIO API [69]). PUs typically feature three layers of the memory hierarchy, e.g., L1 single-cycle access scratchpad, L2 memory with access latency of 15-50 cycles, and host side memory (either off-path SoC or host CPU memory). L1 and L2 memories could be organized as multi-level caches (e.g., LiquidIO) or be explicitly managed by the user (e.g., PsPIN).
OSMOSIS provides a solution to a fair resource multiplexing for sNICs in a multi-tenant context and is not specific to any system. However, to showcase the identified issues and verify and evaluate the overhead of OSMOSIS, we selected one of the possible synthesizable open-source on-path sNIC implementations available in the literature, namely, PsPIN. We decided to use an on-path sNIC as our experiments (Table 1) show that only such sNICs can sustain packet processing at emerging line rates. PsPIN is open-source, based on energy-efficient silicon-proven RISC-V cores, and allows users to write packet processing kernels in C and explicitly manage sNIC memories [18]. OSMOSIS could have been equivalently implemented in any other sNIC framework [68, 59, 69].
### Challenges of Resource Isolation
We generalize on-path sNIC architecture in Figure 2. Packets arrive at the sNIC inbound engine and are initially stored at the L2 packet buffer organized as a set of per-application first-in-first-out (FIFO) queues. Next, packets are scheduled for processing on available PUs where kernel execution is initiated. Kernels execute using three resources, PUs, DMA, and Egress bandwidth. Each application uses these resources differently (e.g., compute- or IO-bound) depending on its needs. In general, these resources can be used as follows:
* PUs: computing (e.g., hashing the packet header or summing values in an Allreduce reduction);
* DMA engine: transferring data to read/write in sNIC memory (e.g., KVS cache in sNIC L2 memory) or host memory (e.g., KVS cold storage);
* Egress engine: sending packet replies (e.g., reply to a read request with a value from the KVS cache).
Metrics to measure the quality of resource multiplexing by datacenter tenants, known as Service-Level Objectives (SLOs), are typically tied to the conventional NIC path displayed in Figure 2 by considering tail latency [17] and throughput [67, 84]. However, these SLOs do not consider the sNIC data path with its unique resource multiplexing discussed in Section 3, such as PU time, tail latency of DMA over the host interconnect, and buffer space. Existing proposals have only partially addressed this issue by introducing performance isolation mechanisms, such as multi-level packet scheduling [27, 87] and static allocation [29] of shared resources (see Section 4). Yet, due to the kernels' dynamic and unpredictable nature, static assignments do not solve the problem. OSMOSIS fills this gap by _providing bounded guarantees for the sNIC resource availability to tenants using dynamic resource multiplexing_.
## 3 Multi-Tenant sNICs
Diverse application requirements create distinct resource multiplexing bottlenecks. Our quantitative analysis highlights these issues in multi-tenant setups of existing sNIC stacks [18, 69], yielding sNIC requirements. These insights directly led to the microarchitectural and software choices for OSMOSIS. We use a 400 Gbit/s link for all experiments (more details on the setup in Section 7).
**Per-packet time budget (PPB):** While studies of datacenter traffic show that only a fraction of the established connections actively exchange data at any given time [10, 81, 97], they can still saturate the link bandwidth. To analyze the implications of this for sNICs, we define the per-packet time budget (PPB) using the PU count \(N\), packet size \(P\), and link bandwidth \(B\) as \(PPB(N,P,B)=N\times(P/B)\). In this case, we model the sNIC as an \(M/M/m\) queue, where PPB defines the condition that must be satisfied for the queue to be stable [13]1. More specifically, PPB represents how long the sNIC can process a packet until the next one arrives, assuming a fully utilized link. If PPB is exceeded, the per-application ingress queue will eventually fill up during transient traffic bursts, leading to packet drops or a fallback to link flow control (e.g., PFC [103]) and a possible violation of the per-VF SLO policy.
Footnote 1: With \(1/\lambda=P/B\) and \(m=N\), queue stability (\(\rho<1\)) requires the per-packet service time \(1/\mu\) to satisfy \(1/\mu<N\cdot P/B=\mathrm{PPB}\).
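For intuition, the budget can be evaluated directly from this formula. The sketch below assumes a 400 Gbit/s link and 32 PUs (4 clusters of 8 cores at 1 GHz, as in the later evaluation setup); it is an illustration of the formula, not a reproduction of Figure 3.
```python
# Hedged sketch: evaluating PPB(N, P, B) = N * (P / B) from the text.
# N = number of PUs, P = packet size in bytes, B = link bandwidth in bit/s.
def ppb_ns(num_pus: int, packet_bytes: int, link_bps: float) -> float:
    return num_pus * (packet_bytes * 8 / link_bps) * 1e9  # nanoseconds

if __name__ == "__main__":
    LINK = 400e9   # 400 Gbit/s
    PUS = 32       # e.g., 4 clusters x 8 PUs (assumed configuration)
    for size in (64, 256, 1024, 4096):
        budget = ppb_ns(PUS, size, LINK)
        # At 1 GHz, 1 ns corresponds to one PU cycle.
        print(f"{size:5d} B -> PPB = {budget:8.1f} ns (~{budget:.0f} cycles per PU)")
```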
Figure 3 compares service times of IO- and compute-bound workloads with theoretical PPB assuming that tenant workloads fit one packet and that the sNIC has only one tenant.
Figure 2: Schematic overview of on-path sNIC architectures. Red arrows indicate the data path and blue arrows correspond to the control/management path.
We observe that all workloads with packet size \(\leq 64\) Bytes fail to fit in PPB. Compute-bound workloads (i.e., Aggregate, Reduce, Histogram), whose execution time scales linearly with packet payload length, exceed the PPB for all packet sizes, bottlenecking the PUs. Notably, IO-bound kernels above 256 Bytes (i.e., DMA writes/reads, Egress packet sends) fit PPB as they avoid PU congestion but are bottlenecked by the link bandwidth. However, as we will demonstrate, _IO-bound workloads are sensitive to DMA transfer contention on the host interconnect_.
**PU contention:** While a single tenant can cause pressure on the ingress queue and contention of PUs, multiple tenants can lead to unfairness. For example, consider two compute-bound tenants with different requirements. One of them, the _Congestor_, has twice the compute cost per packet of the other, the _Victim_, and thus needs twice as many PU cycles to finish its kernel. During a burst, the _Congestor_ and the _Victim_ push packets into their corresponding per-application (per-VF) queues at the same ingress rate. As Figure 4 shows, with conventional round-robin (RR) scheduling of per-application queues across 8 sNIC PUs, the _Congestor_ occupies 2\(\times\) the PUs used by the _Victim_.
**R1**: _sNIC manager should fairly allocate compute components (e.g., PUs, cryptographic accelerators) while serving tenants with different compute costs per packet_.
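The imbalance described above can be reproduced with a toy run-to-completion simulation, sketched below; the per-packet costs, PU count, and always-backlogged queues are illustrative assumptions rather than the traces behind Figure 4.
```python
# Hedged toy simulation: round-robin dispatch of two backlogged flows onto 8 PUs.
# The Congestor's kernel needs twice the PU cycles of the Victim's, so plain RR
# lets it occupy roughly twice as much PU time. Costs and durations are illustrative.
import itertools

def simulate_rr(total_cycles=100_000, num_pus=8, costs=(100, 200)):
    flows = ("Victim", "Congestor")
    rr = itertools.cycle(range(len(flows)))   # round-robin over flow queues
    pu_free_at = [0] * num_pus                # cycle when each PU becomes idle
    busy = {f: 0 for f in flows}              # accumulated busy cycles per flow
    for now in range(total_cycles):
        for pu in range(num_pus):
            if pu_free_at[pu] <= now:         # PU is idle: dispatch the next flow
                f = next(rr)
                pu_free_at[pu] = now + costs[f]
                busy[flows[f]] += costs[f]
    return busy

if __name__ == "__main__":
    busy = simulate_rr()
    total = sum(busy.values())
    for name, cycles in busy.items():
        print(f"{name:10s}: {100 * cycles / total:.1f}% of PU time")
```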
**Egress and DMA engines contention:** Similarly to how compute-bound kernels cause contention on PUs, IO-bound kernels can lead to contention on the DMA or egress engines. IO-bound kernels running on different PUs can simultaneously initiate IO requests through the same sNIC engines, e.g., DMA requests from a KVS application. If the underlying interconnect (e.g., PCIe or AXI [78]) is blocking and lacks support for QoS provisioning, _multiple concurrent requests may result in Head-of-Line (HoL) blocking_ [1].
For example, consider two IO-bound tenants with different IO requirements. The _Victim_ has constant 64B packets, while the _Congestor_ increases its packet size from 64B to 4096B. As Figure 5 shows, the contention on the IO engine leads to an order of magnitude higher latency of the _Victim_'s messages without considerably affecting the _Congestor_'s flow. This unfairly increases the latency of one of the tenants by 4-15\(\times\).
**R2**: _sNIC manager should fairly allocate DMA and egress bandwidth (e.g., using AXI and PCIe) between running kernels and be resilient to HoL-blocking_.
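A simplified serial-engine model illustrates both the blocking and the fragmentation remedy that OSMOSIS later applies (Section 6); the transfer sizes, chunk granularity, and one-cycle-per-byte engine are illustrative assumptions.
```python
# Hedged sketch: Head-of-Line blocking on a shared IO engine, with and without
# splitting large transfers into chunks served round-robin per tenant.
from collections import deque

def completion_time(first_req, victim_req, chunk=None):
    """Return the cycle at which the victim's request finishes.
    Each byte takes one cycle on the engine; the congestor's request arrives first."""
    def chunks(size):
        if chunk is None:
            return deque([size])
        return deque([chunk] * (size // chunk) + ([size % chunk] if size % chunk else []))
    queues = {"congestor": chunks(first_req), "victim": chunks(victim_req)}
    order = deque(["congestor", "victim"])    # round-robin between tenants
    now = 0
    while queues["victim"]:
        tenant = order[0]
        order.rotate(-1)
        if queues[tenant]:
            now += queues[tenant].popleft()
    return now

if __name__ == "__main__":
    print("no fragmentation :", completion_time(4096, 64))             # victim waits 4160
    print("512 B fragments  :", completion_time(4096, 64, chunk=512))  # victim waits 576
```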
**Memory management:** Applications have diverse runtime memory needs, and dynamic memory allocation makes their memory consumption unknown _a priori_. In extreme cases, a tenant could monopolize all sNIC memory, e.g., the L1 packet buffers, resulting in HoL-blocking for others. Introducing virtual memory (paging) semantics could lead to substantial memory access overheads, as each page fault significantly amplifies memory access latency [37].
**R3**: _sNIC manager should fairly allocate memory using lightweight allocation strategies defined in the control plane_.
**Scheduling overhead:** Existing _software_ packet processing data paths [8, 24, 76] were designed for off-path sNICs or conventional host processing. As recent studies show [44], the _effectiveness_ of kernel execution scheduling, in terms of the maximum utilization achievable on off-path sNICs running OSs like Linux, is driven by the latency of context switching [44, 26]. PU cycles are wasted during context switches to transition between kernel states. We benchmark the context switching of Linux running on the host and on an off-path sNIC (Bluefield-2 ARM SoC). We compare these to the state-of-the-art Caladan scheduler, which we ported to the ARM ISA [26]. For reference, we also show the context-switching latency of the PULP cores used in PsPIN, which we use to evaluate OSMOSIS. Notably, the context-switching latencies we report in Table 1 are higher than or of the same order of magnitude as the PPB from the analysis presented in Figure 3.
Figure 4: _Congestor_ and _Victim_ tenants' flows with equal priorities are mapped to two different SR-IOV VFs with equal shares of Ingress bandwidth. With the round-robin scheduling of per-flow queues, the _Congestor_ tenant with 2\(\times\) higher compute cost per packet occupies a proportionally larger number of cores than the _Victim_ tenant.
Figure 5: Slow-down of various IO operations (e.g., DMA and sending packets to Egress) initiated by the tenant's kernel results in HoL-blocking small requests due to underlying IO path contention.
Figure 3: sNIC core (PU) processing time needed to serve 1 packet for common sNIC kernels. Workloads with triangle markers are compute-bound, and circular markers are IO-bound. All workloads with \(\leq 64\)B packet size (including 28 bytes IPv4/UDP-header) exceed PPB showing congestion at PUs when link bandwidth is fully utilized. Note that our setup supports Ethernet payload sizes below 64B to accommodate custom interconnects [41].
**R4**: _Data path performance should not be impacted by overheads stemming from software scheduling policies; the sNIC should provide low-latency scheduling of kernel execution_.
**Control path priority:** If a tenant on the sNIC exceeds its compute or time budget, an immediate response from the host's control plane is needed; such _control traffic_ must reach the sNIC promptly. However, communication between the sNIC and the host uses the system interconnect (e.g., PCIe), typically adding an overhead of 0.5-2 usec per read/write request. Congestion in the interconnect (Figure 5, [1]) can lead to HoL-blocking of control traffic and unpredictable packet processing. Moving request execution to the host (e.g., iPipe [57]) to resolve this adds latency overheads to packet processing.
**R5**: _sNIC-accelerated packet processing should prioritize control-path traffic and not rely on the latency-introducing host CPU as a fallback path_.
**QoS API:** NIC capabilities are exposed to tenants through a virtualization layer (OS hypervisor) that provides an illusion of full resource ownership. SR-IOV is a conventional way to implement NIC virtualization. In SR-IOV, each NIC physical function (PF) (such as TX and RX capabilities) is multiplexed between several virtual functions (VFs). Each VF is exposed to the tenant through the OS hypervisor as a stand-alone PCIe NIC. To our knowledge, existing production rNICs and sNICs support only Ingress and Egress bandwidth allocation, on a coarse per-VF basis, and not compute or DMA resources.
**R6**: _sNIC management plane should support conventional QoS provisioning mechanisms for all types of resources_.
## 4 Existing Solutions are Insufficient
We summarize recent research milestones in NIC resource management in Table 2.
Justitia [102] and PicNIC [51] are rNIC virtualization layers lacking on-NIC compute management. They function as software controllers between the NIC and host application, handling RDMA read/write operations atop the RDMA API. Lynx [90] focuses on sNIC GPU data movement offloading but similarly manages traffic at a per-message granularity and lacks detailed multi-tenancy issues analysis.
Floem [73], FairNIC [29], and iPipe [57] specifically target on-path SmartNIC programmability. FairNIC aims for multi-tenant use cases by statically allocating compute and IO bandwidth to flows. Such an approach can potentially cause under-utilization or unfairness [82, 44, 76]. Notably, all three solutions lack a flow-priority implementation. Per-flow priority is found in PANIC [56] and Menshen [93], yet both solutions are specialized for the Reconfigurable Match Tables (RMT) pipeline architecture with the P4 programming model.
In contrast, OSMOSIS is a general-purpose resource manager for on-path offloading, applicable to C/C++ applications, and offers dynamic, work-conserving load balancing of sNIC PUs with adjustable tenant/flow priorities.
## 5 OSMOSIS
We present OSMOSIS in Figure 6. We start by outlining how OSMOSIS handles sNIC resources and fulfills multi-tenancy requirements. This involves a two-part design: a host-based flexible software control plane for management and a
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**PU** & **Frequency** & **ISA** & **Linux** & **Caladan** & **RTOS** \\ \hline Host Ryzen 7 5700 & 3.8GHz & x86 & 28576 & 211 & – \\ BF-2 DPU A72 & 2.5GHz & ARMv8 & 13250 & 192 & – \\ PULP cores [6] & \multirow{2}{*}{1GHz} & \multirow{2}{*}{RISC-V} & \multirow{2}{*}{–} & \multirow{2}{*}{–} & \multirow{2}{*}{121} \\ (used in PsPIN) & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average latency of context switching between 2 processes. Measurements shown in PU cycles scaled to 1 GHz (i.e., 1 ns/cycle).
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Fairness & Efficiency & Deployment \\ \hline \hline \end{tabular}
hardware data plane for SLO policy enforcement. Within this section, each part is explained in depth.
### High-level Overview
**Flow execution context creation:** To utilize sNIC packet processing, tenants create a flow _execution context_ (ECTX). ECTX encapsulates the flow processing state, such as the SLO policy and the packet processing _kernel_, a piece of code compiled for the target PU architecture and describing the actions for each packet destined for the flow.
**ECTX initialization:** After the tenant provides the basic elements of an ECTX, OSMOSIS instantiates it. It allocates a virtualized sNIC interface through the host OS hypervisor and associates it with a tenant IP address and SLO policy. It also sets up the IOMMU to allow kernel access to specific host pages, _statically_ allocates on sNIC memory and loads the kernel binary into sNIC memory.
**Matching packets to flow management queue:** The sNIC matching engine filters packets that require sNIC processing. All incoming packets are matched against the three-tuple (in case of UDP) or five-tuple (in case of TCP) of active sNIC ECTXs. Once matched, _packet descriptors_ (e.g., pointer to packets in sNIC memory) are stored at one of the _flow management queues_ (FMQs). FMQs store all information regarding an active flow ECTX on the sNIC hardware. FMQs are organized as FIFO queues of packet descriptors with an additional memory state to store running execution information (e.g., BVT metric).
**PU scheduling:** Once a PU becomes available, OSMOSIS schedules the packet at the head of one of the FMQs. To achieve fair PU allocation, OSMOSIS implements a centralized, non-preemptive scheduler inspired by the Borrowed Virtual Time (BVT) policy [21, 44]. BVT aims to allow each tenant to obtain the same amount of access time to the scheduled resource by keeping track of their past usage. OSMOSIS FMQ scheduler _allocates sNIC PUs to FMQs with the smallest priority-adjusted past PU usage measured in cycles_ while maintaining the SLO policy specified by the sNIC administrator, such as the upper per-FMQ PU limit.
**Kernel execution and IO management:** Upon loading the packet into local PU memory, the PU can process it using the relevant kernel. As seen in Section 3, parallel kernel executions on different PUs can lead to head-of-line blocking (HoL-blocking) and uncertain tail latency for DMA to sNIC/host memory and egress data transfers. For example, kernels can pipeline large storage reads by overlapping asynchronous DMA reads of packet-sized payloads with egress packet sending. OSMOSIS mitigates this by fairly arbitrating IO paths, breaking sizable DMA requests into smaller transactions, and scheduling them with a near-perfect fairness-weighted round-robin (WRR) policy. FMQs supply DMA and egress engines with tenant IO priorities for initiated IO requests. This ensures each tenant obtains a priority-based fair bandwidth chunk.
### Flexible software control plane
OSMOSIS offers a host OS API for sNIC packet processing management, encompassing ECTX creation and offloading specific flow handling to the sNIC. Tenant-initiated offloading involves the creation of a flow ECTX. ECTX facilitates tenant control using the following components.
**SLO policy:** The SLO policy sets compute, DMA, and egress priorities, per-kernel cycle budget, packet buffer size, and on-sNIC memory. OSMOSIS offers transparent SLO management via SLO knobs indicated in Table 3. By default, all tenants' FMQs share equal priority. To achieve perfect fairness in such a scenario, all flows should get the same portion of PUs and IO bandwidth at any time. Increasing the priority of the ECTX leads to _proportionally_ more resources (PUs, bandwidth) allocated to the ECTX. A per-kernel cycle limit curbs excessive PU usage, adjustable for total or individual kernel execution times. We assess priority's impact on resource fairness in Section 7.
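As a schematic only (the actual host API is the C interface the paper extends), the per-ECTX knobs from Table 3 can be grouped as follows; all field names and default values are assumptions.
```python
# Hedged schematic of the per-ECTX SLO knobs listed in Table 3; field names and
# defaults are assumptions, not the paper's C host-API definitions.
from dataclasses import dataclass

@dataclass
class SLOPolicy:
    pu_priority: int = 1               # weight used by the WLBVT PU scheduler
    dma_priority: int = 1              # weight used by the WRR DMA scheduler
    egress_priority: int = 1           # weight used by the WRR egress scheduler
    kernel_cycle_limit: int = 10_000   # per-kernel PU cycle budget
    packet_buffer_bytes: int = 16 * 1024   # per-flow packet buffer space
    snic_memory_bytes: int = 64 * 1024     # statically allocated sNIC memory

@dataclass
class ExecutionContext:
    match_rule: str        # e.g., UDP three-tuple for the matching engine
    kernel_binary: bytes   # RISC-V kernel image loaded into sNIC memory
    slo: SLOPolicy
```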
**Kernel binary:** kernel binary cross-compiled by the tenant is loaded into sNIC memory by the control plane and is later executed on the flow packets. The kernel binary can compute and schedule DMA and egress requests according to the tenant requirements.
**A virtualized sNIC device:** A virtualized device is allocated
Figure 6: Abstract model of OSMOSIS-enabled sNIC. Packets are mapped by Matching Engine to FMQs and dispatched for execution by scheduler.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **PUs** & **DMA** & **Egress** & **Memory** \\ \hline \multirow{2}{*}{Scheduler} & \multirow{2}{*}{WLBVT} & \multirow{2}{*}{WRR} & \multirow{2}{*}{WRR} & \multirow{2}{*}{Static} \\ & Priority & & & \\ \multirow{2}{*}{SLO knob} & Kernel cycle limit & Priority & Priority & Allocation size \\ \multirow{2}{*}{Fulfilled requirements} & \multirow{2}{*}{OSMOSIS resource management principles with all six fulfilled multi-tenancy requirements.} & \multirow{2}{*}{OSMOSIS} & \multirow{2}{*}{OSMOSIS} \\ \hline \hline \end{tabular}
\end{table}
Table 3: OSMOSIS resource management principles with all six fulfilled multi-tenancy requirements.
for the tenant by OSMOSIS, e.g., SR-IOV VF. OSMOSIS associates an IP address with the virtualized device and uses it later for matching. The virtualized device is connected internally with a single FMQ.
**A matching rule:** The matching rule matches packets from the sNIC inbound stream to the ECTX and manages their processing within the same FMQ. A matching rule allows the tenants to open multiple ports on the same virtualized device. The matching engine can match packets based on their UDP/TCP header contents. For example, it can match the IP address and the destination port of the application.
**sNIC memory segments:** The sNIC memory segments are allocated statically to each kernel depending on the requested memory size. The kernels can store the application state in sNIC local memory, e.g., KVS-cache or packet filter table. The minimum allocation a valid ECTX has is the size of the kernel binary loaded into the sNIC memory by the control plane. An error is returned if the tenant uses too much memory or the kernel binary is larger than the SLO policy limits.
**Host memory pages:** The ECTX specifies which host pages can be accessed from the specific kernel via DMA. The DMA engine on the sNIC interfaces the host memory with an IOMMU, translating host virtual addresses to physical addresses. The IOMMU also checks whether the sNIC is accessing an allowed memory region. The control plane initializes the IOMMU with appropriate page tables during execution context creation.
**Event queue (EQ):** An event queue allows the user application to track events like kernel execution errors. When an error occurs (e.g., illegal memory access or exceeding execution time), OSMOSIS informs the host via an event in the kernel's ECTX EQ. A host OSMOSIS API call from the application checks this queue for error messages. EQ can be realized as contiguous sNIC memory mapped to the host virtual address space, akin to RDMA Verbs API EQ [41]. EQ control path traffic shares the sNIC DMA data path (e.g., PCIe or CXL) with regular kernel execution (e.g., DMA initiated within the kernel) but gets the highest IO priority due to tenants' immediate action needs.
### Hardware data plane
OSMOSIS provides low management overhead with a minimal hardware footprint. We present two key mechanisms that help us to achieve this goal: a hardware flow abstraction (FMQs) and scheduling algorithms suitable for hardware implementation (WLBVT and DWRR).
**Flow management queues** (FMQs) generalize a packet flow similarly to how a hardware thread generalizes a process. FMQs store matched packet descriptors in a FIFO queue and monitor the flow processing performance. The scheduler then uses these measures to allocate compute resources fairly and enforce per-flow priorities. Processing the FIFO queue triggers kernel executions on sNIC PUs, resembling program instruction execution flow in traditional OS processes.
FMQs also store part of the ECTX state, such as the matching rule, pointers to the kernel binary, and the SLO policy definition. The host-side control plane manages and initializes FMQs that appear as MMIO registers in SR-IOV VF address space. FMQs are highly extensible. For example, the OSMOSIS priority model is compatible with datacenter Ethernet [40]. In case of congestion on the FMQ FIFO queue, the packets can be marked with the appropriate Ethernet ECN congestion flag or can supply the per-FMQ telemetry information [25, 103, 2, 41, 55].
**FMQ Scheduler** allocates PUs across flows with different compute, DMA, and egress costs-per-packet that are not known _a priori_. Thus, to achieve fair compute utilization, the FMQ arbitration policy needs to be _invariant to the cost-per-byte of the packet_ (see Figure 4). OSMOSIS implements a hardware scheduler as simple and scalable as the deficit-weighted round-robin (DWRR) but with a minimal additional area footprint (see Section 6).
OSMOSIS utilizes a greedy _Weight Limited Borrowed Virtual Time_ (WLBVT) policy, a hybrid of the Weighted Fair Queuing (WFQ) model of FMQ weights and Borrowed Virtual Time (BVT) scheduler. We adopt the BVT algorithm to suit sNIC hardware implementation constraints [44, 21] and present our scheduler in pseudo-code Listing 1. Intuitively, our scheduler aims to allocate each tenant the same amount of PU processing time normalized by priority while ensuring that each tenant is served fairly during PU contention.
```
 1 def pu_limit(ActiveFMQs, fmq):
 2     prio_sum = 0
 3     for fmq in FMQs:
 4         if not fmq.empty:
 5             prio_sum += fmq.prio
 6     return ceil(len(FMQs) * fmq.prio / prio_sum)
 7
 8 def update_tput(FMQs):        # called at each clock cycle
 9     for fmq in FMQs:
10         fmq.total_pu_occup += fmq.cur_pu_occup
11         if fmq.cur_pu_occup > 0:
12             fmq.bvt += 1      # update only in active state
13         fmq.tput = fmq.total_pu_occup / fmq.bvt
14
15 def get_fmq_idx():            # called once a PU core is free
16     min_tput = MAX_INT
17     for fmq in ActiveFMQs:
18         if fmq.pu_occup < pu_limit(ActiveFMQs, fmq):
19             if fmq.tput / fmq.prio < min_tput:
20                 min_tput = fmq.tput / fmq.prio
21                 fmq_idx = fmq.idx
22     return fmq_idx
```
Listing 1: WLBVT FMQ scheduler procedural pseudocode.
An FMQ is in an active state if it contains packet descriptors in the FIFO queue or if its packets are currently being processed on any PU. Flow throughput is updated (update_tput) at each sNIC clock cycle only if the corre
sponding FMQ is active. The scheduler (get_fmq_idx) returns the index of the non-empty FMQ that fits the upper limit of weighted PU occupation (pu_limit called in line 21) and has the lowest current throughput normalized by FMQ priority (lines 22, 23).
The weighted PU occupation's upper limit guarantees fair QoS for tenants based on their priority. pu_limit is calculated with a _ceil_ function to ensure fairness in case of more active FMQs than PUs or non-integer division. The lowest priority normalized throughput equalizes access to oversubscribed PUs over time, favoring lower resource usage users. Our approach can also accommodate total virtual time per tenant (i.e., line 21), which could be useful for billing purposes, thus expanding policy flexibility.
**Kernel execution** is a short-lived event as each execution only processes one packet. In OSMOSIS, we run kernels to completion [8, 76]. We avoid context-switching for several reasons. As shown in Table 1, context switching can introduce significant overhead. It also increases the complexity of the hardware data path and requires additional states per each active kernel.
If a kernel exceeds a set time limit (e.g., per-FMQ watchdog timer), it's terminated with a hardware interrupt, and the host application receives notification via the corresponding EQ. We believe that run-to-completion semantics underpins the sNIC programming model that, together with OSMOSIS fair priority adjusted schedulers, ensures predictable packet processing tail latency and also excludes compute-intensive tasks better suited for GPUs or FPGAs [8, 76].
### Discussion
**Encrypted traffic:** The sNIC handles data movement and may also require accessing the packet contents. Hence, it should be able to decrypt packets (e.g., QUIC [99]). sNICs can support either per-PU cryptographic accelerators (e.g., Intel AES-NI [34]) or a shared accelerator for efficiency (e.g., like in Marvell LiquidIO [59]) exposed via ISA extensions. In the latter case, the cryptographic accelerator arbitration resembles PUs, making WLBVT scheduling suitable for access management.
**IO security:** Host memory is protected against unauthorized DMA transfers using an IOMMU setup by OSMOSIS when the host creates the flow context. Similarly, local sNIC memory accesses need to be protected. This can be achieved, for example, by a _Physical Memory Protection_ unit (PMP) [95] as shown in Section 6.1.
**Transport protocols:** While this work does not focus on sNIC transport protocols, OSMOSIS, by design, is compatible with conventional congestion signaling (e.g., ECN) and lossless flow control mechanisms (e.g., Ethernet DCB). It can also be deployed with DCQCN [103] and DCTCP [3]. From the transport protocol perspective, the packet queueing delay within the FMQs and the corresponding execution of the packet kernel is just another source of latency. For example, the FMQ abstraction deployed with Ethernet can support RED/ECN marking [41, 25]. Another mechanism that FMQ can easily support is supplying the P4 INT-MD telemetry information [2] to enable the HPCC protocol [55].
## 6 Implementation
We implement OSMOSIS atop PsPIN [32, 18], an open-source on-path sNIC. We adopt PsPIN as a backend for performance-critical operations within OSMOSIS by extending its host-side API to support multiple ECTXs and specify tenant SLOs using 335 lines of code (LOCs) in C. We integrated functional blocks of OSMOSIS (i.e., matching engine, WLBVT scheduler, and DMA request fragmentation) written in 1216 LOCs of C++ with cycle-accurate simulation PsPIN SystemVerilog backend. In addition, we also implemented these components as synthesizable SystemVerilog IP blocks for hardware cost estimations. These blocks can serve as a future prototype for ASIC or FPGA-based implementation of OSMOSIS.
### Implementing OSMOSIS on top of PsPIN
**Packet processing units:** OSMOSIS PsPIN architecture is based on scalable silicon-proven RISC-V PULP SoC [80, 52, 18]. The PUs are RI5CY 32-bit cores organized in clusters. Each PsPIN cluster contains 8 PUs clocked at 1GHz and coupled with a 1 cycle, multi-banked local scratchpad memory (referred to as _LI_). For our experiments, we use the default configuration of the PsPIN PU cluster with 1 MiB L1 data, and 4 KiB L1 instruction caches. Clusters share a global 4 MiB L2 packet buffer and a 4 MiB L2 kernel buffer, which can be used for local data storage.
**Portable programming API:** OSMOSIS utilizes PsPIN infrastructure to offload the processing packets to the PUs. The user writes a C kernel cross-compiled on the host for the RISC-V ISA architecture. The kernels are then loaded and executed on the flow packets according to the sPIN API [32].
**Kernel IO:** The PsPIN API enables blocking and non-blocking IO calls within kernel code. Each PsPIN cluster has a 512-bit AXI DMA interconnect connecting cluster scratchpad memories to the sNIC L2 kernel buffer, host DMA engine buffer, and sNIC egress engine buffer. This setup enables read and write transfers between these buffers, with PUs accessing other cluster memories and shared L2 kernel memory in 10 to 30 cycles. This design also transparently supports sNIC egress packet send: a DMA write from kernel scratchpad memory to the NIC egress engine buffer. PU core L1 scratchpad interfaces an Ethernet egress pipeline over the AXI protocol. PsPIN IO-calls configure a DMA command
with addresses, length, and a completion handle pointer. The cluster command FIFO queues outstanding IO commands, and a WRR policy arbitrates per-cluster queues for DMA engine access.
**Memory management:** Our implementation allows to specify the size of the L2 and L1 memories allocatable to tenants. We implement memory isolation using Physical Memory Protection (PMP) PsPIN PU unit. When the kernel accesses L1 and L2 memories, the virtual memory addresses are translated to physical addresses with relocation registers. The PMP then checks that the addresses are within the valid segment range. Like the relocation registers, the PMP unit does not increase the memory access latency [18].
### OSMOSIS Schedulers
**FMQ scheduling implementation:** FMQ encompasses a FIFO queue, ECTX (detailed in Section 5), and scheduling state. The FIFO queue holds packet descriptors, each containing a 32-bit pointer to the packet. The scheduling state includes a BVT counter tracking tenant resource use and a priority. We implemented the counter as a 64-bit register to avoid overflow2. A 16-bit register stores the FMQ priority. Our SystemVerilog WLBVT implementation with 128 FMQs synthesizes at 1 GHz, making a scheduling decision in five cycles. Most latency stems from the weight-limiting requiring integer division which is challenging for fast hardware implementation. We hide this latency using pipelining, overlapping FMQ arbitration with packet DMA from L2 packet buffer to the cluster scratchpad (at least 13 cycles for a 64-byte packet).
Footnote 2: The 64-bit counter overflow with updates done every cycle at 1 GHz will happen in \(2^{64}\div 10^{-9}\)s/op \(\div\)60s \(\div\)60m \(\div\)24h \(\div\)365.25d \(\approx\) 584yrs.
**Enhanced DMA engine:** To prevent HoL-blocking, OSMOSIS applies transfer fragmentation on both the host-interfacing DMA engine and the egress engine. We implement two modes of fragmentation: a _software_ fragmentation implemented within the kernel call for a DMA transfer and a _hardware_ fragmentation within the DMA engine. The software approach wrapspspin_dma_read/write and pspin_send_packet with a function, dividing larger requests into smaller chunks. We issue multiple non-blocking DMA requests of smaller sizes while internally maintaining the state for each transfer. While this optimization mitigates HoL-blocking (as shown in Section 7), it also hinders the throughput of large DMA requests. To minimize this, we expand the functional model of AXI to enable hardware DMA fragmentation offloading. This involves managing the state for multiple outstanding AXI write requests and arbitrating them with the WRR scheduler.
### Integration with other on-path SmartNICs
OSMOSIS could be applied to the on-path sNICs designs besides PsPIN. For example, the data path accelerator (DPA) introduced in Bluefield 3 DPU [69, 70] could be extended with OSMOSIS to enable kernel execution QoS. DPA invokes user-defined kernels upon completion of RDMA operations. Thus, FMQ abstraction could be 1:1 mapped to DPA-managed RDMA Completion Queues (CQs) that are arbitrated according to OSMOSIS WLBVT policy. Further, IO operations initiated from DPA cores during kernel execution, i.e., RDMA Work Requests (WRs), could be assigned with a desired Service Level (SL) mapped to the underlying RDMA Virtual Lane (VL), i.e., SL2VL mapping mechanism [41].
## 7 Evaluation
We study how OSMOSIS allocates sNIC resources under different traffic conditions and workload requirements. We investigate the following research questions:
1. How does the area of OSMOSIS-enabled sNIC chip scale up with the ingress link rates and the number of tenants?
2. What are the overheads of OSMOSIS compared to the reference PsPIN implementation?
3. What is the maximum load that OSMOSIS can sustain?
4. How fair are OSMOSIS resource allocations?
### Hardware Scaling
We synthesized OSMOSIS and PsPIN SystemVerilog IP blocks at 1GHz in GlobalFoundries 22nm node process to estimate hardware area costs using Synopsys Design Compiler NXT in topographic mode.
Figure 7: The cost model of sNIC SoC area synthesized in 22nm GF process, compared to the theoretical per packet budget (averaged for different packet sizes at \(64-4096\) B interval) achieved with 400/800/1600 Gbit/s ingress link rates.
**sNIC area scaling with compute capacity:** PsPIN clusters utilize a hierarchical SoC-interconnect similar to Manticore scale-out study [101]. We group four clusters into a _quadrant_ sharing a local interconnect. Each quadrant connects to L2 memory, allowing all cores to access the shared packet buffer. Synthesis studies [18, 52] indicate negligible area increases and timing overheads when adding ports to L2. In Figure 7, PsPIN demonstrates linear compute capacity scaling relative to the core area. For instance, 4 PU clusters offer adequate per-packet budget (PPB) (Section 3) to sustain compute-bound Reduce workload with up to 512-byte packets.
**OSMOSIS Schedulers Scaling:** Figure 8 shows the hardware area consumption of OSMOSIS schedulers. We observe a linear scaling of the FMQ and DMA engine schedulers with the number of inputs. Compared to RR, WLBVT needs \(7\times\) more gates, yet with 128 FMQs, WLBVT area consumption takes only 1% of PsPIN cluster and L2 memory area.
### Experimental Methodology
We evaluate OSMOSIS runtime performance using cycle-accurate Verilator v4.228 SystemVerilog simulator [86]. Our experimental testbed features two setups: a _Reference (baseline) PsPIN implementation_, i.e., a conventional on-path sNIC without multi-tenant OS, and a _PsPIN implementation enhanced with OSMOSIS management_.
Both setups feature 4 PsPIN clusters of 8 1GHz cores, achieving 400 Gbit/s ingress/egress bandwidth. L2 and host memories can be accessed through 512 Gbit/s AXI interconnect. We used randomly pre-generated packet traces fully utilizing ingress link bandwidth. Packet arrival sequences follow a uniform distribution, and packet sizes are sampled from a lognormal distribution [10, 97, 81]. For fairness measurements, we use Jain's fairness metric [36]. It scales between 1 and 1/number of tenants: a metric of \(y\) implies \(y\)% fair treatment, leaving \((100-y)\)% starved. Fair treatment ensures equal priority-adjusted resource access for each tenant.
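For reference, Jain's index over per-tenant allocations \(x_i\) is \((\sum_i x_i)^2/(n\sum_i x_i^2)\); the sketch below is a direct transcription of that formula, with example values chosen for illustration.
```python
# Jain's fairness index over per-tenant resource allocations:
# J(x) = (sum x_i)^2 / (n * sum x_i^2); 1 means perfectly fair,
# 1/n means a single tenant receives everything.
def jain_index(allocations: list[float]) -> float:
    n = len(allocations)
    total = sum(allocations)
    return total * total / (n * sum(x * x for x in allocations))

if __name__ == "__main__":
    print(jain_index([1.0, 1.0]))  # 1.0  (perfectly fair)
    print(jain_index([2.0, 1.0]))  # 0.9
    print(jain_index([1.0, 0.0]))  # 0.5  (one tenant starved)
```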
### Synthetic Benchmarks
We evaluate OSMOSIS on synthetic benchmarks to assess its overheads in a low-complexity environment.
**Fair HPU allocation:** We evaluate the WLBVT scheduler and compare it to the traditional RR. We run two applications, one with a larger _compute cost per byte_, the _Congestor_, and the other with a smaller one, the _Victim_. Both spin in a for loop to simulate a compute-bound task. Figure 9 shows how RR over-allocates PUs to the _Congestor_, leading to lower fairness, as shown by Jain's metric. WLBVT consistently splits all the resources equally between tenants. When the _Victim_ has no outstanding packets, WLBVT allows the _Congestor_ to overtake more PUs. WLBVT enables fair compute resource allocation within OSMOSIS and does not cause slowdowns within the benchmarks.
**Resolving HoL-blocking:** We evaluate the scaling of the _Congestor_ throughput and the kernel completion time of the _Victim_ while conducting only Egress transfers that involve AXI writes. Figure 10 presents how OSMOSIS resolves HoL-blocking. Depending on the fragmentation method, the _Victim_'s kernel completion time can be reduced by an order of magnitude while incurring a relative slowdown of only around \(2\times\). The throughput reduction stems from control traffic overhead related to fragmentation. When accessing local sNIC memories (i.e., remote scratchpads and L2), it can be mitigated through a custom SystemVerilog implementation of the PsPIN AXI protocol, allowing for parallel transfer states as proposed in other works [11, 43, 77]. Addressing this issue for host-side traffic that crosses AXI bus boundaries would require a fine-grained QoS protocol for standardized PCIe and CXL interconnects [1].
Figure 8: WLBVT and WRR schedulers exhibit linear area scaling in GF 22nm process. Bar captions indicate gate count and relative area compared to 4 PU clusters with 4 MiB L2.
Figure 9: The fairness of WLBVT and RR with two tenants of different compute cost per byte.
We also observed two bottlenecks: ingress and egress. In the ingress bottleneck, the incoming link bandwidth is the limit, while in the egress one, the AXI bus congestion causes slowdowns. While the overheads come from the interconnect, OSMOSIS scheduling does not introduce overheads, as evident for low _Congestor_ sizes.
### Datacenter Workloads
Additionally, we evaluate a set of real datacenter workloads supplied with the PsPIN benchmarking package [18]. We study the _Aggregation_[71], _Reduction_[9] and _Histogram_[7] benchmarks as examples of compute-bound workloads with incrementally increasing inter-kernel memory synchronization requirements, i.e., from local on-PU computation with one atomic operation in _Aggregation_, to random memory accesses, each with an atomic summation in _Histogram_.
We also evaluate an IO-bound benchmarking set. Our goal is to exercise NIC DMA read/write data paths towards the host memory, the pattern typical for data path offloading of storage RPCs and TCP segment delivery [62, 65, 74, 83]. While for _IO read/write_, a target memory location is stored directly in the packet application header, in the _Filtering_ benchmark, to lookup the destination DMA memory address (e.g., KVSCache location or packet forwarding table context address), the kernel needs to compute the hash of the L7-header used as a lookup table index stored in sNIC LLC.
**Management overheads:** To assess the influence of OSMOSIS management on application performance, we start by running the applications in isolation. Figure 11 shows that OSMOSIS does not introduce considerable overheads for compute-bound workloads: these oscillate within \(\pm\)3% of the baseline PsPIN implementation and reach a maximum of 310 Mpps for the _Aggregation_ workload. For IO-bound workloads, OSMOSIS introduces overheads stemming from fragmentation, as discussed in Section 7.3. They can be resolved by extending the AXI bus protocol [77, 43]. While the overheads range from 23% down to 2% and represent the cost of introducing fair and efficient multi-tenancy, the workloads still achieve 332 Mpps in the _IO write_ case.
**Application mixtures:** Evaluating applications in isolation is not representative of real workloads which occur in multi-tenant datacenters for which OSMOSIS was designed and where multiple users contend for resources. We consider two application sets: a _compute-bound set_ and an _IO-bound_ set, each resulting in tenant resource contention.
The compute-bound set comprises the _Reduce_ and _Histogram_ workloads. Each is introduced as a _Victim_ (64 B packets for _Reduce_ and 64-128 byte packets for _Histogram_) and a _Congestor_ (4 KB packets for _Reduce_ and 3072-4096 byte packets for _Histogram_). As Figure 12 shows, these workloads saturate the PUs of the sNIC within the first couple of thousand cycles and introduce compute congestion. Using OSMOSIS WLBVT scheduling, each tenant obtains an allocation that is, on average, 47% fairer than that of the typical RR implementation as measured using Jain's metric. Such allocations ensure SLO fulfillment and result in 7-39% faster _flow completion times_ (FCT) because of lower average contention, while sacrificing only 3% of the _Histogram_ _Congestor_'s performance. OSMOSIS thus achieves a fair and efficient resource allocation.
The IO-bound set consists of _IO read_ and _write_ workloads which are again introduced as both a _Victim_ and _Congestor_ with the same packet size parameters as the _Histogram_ workload. For the IO-bound workloads, we focus on the average throughput of each workload. Figure 13 shows that, similarly to the compute case, OSMOSIS obtains a consistently fairer allocation than a traditional RR scheduler (up to 83%) as measured by the average Jain's fairness metric.
OSMOSIS also manages to reduce FCT for all tenants by up to 63%. Such a large improvement comes from addressing the HoL-blocking problem, leading to a more efficient allocation. The _IO read Congestor_ is initially suppressed to let other tenants fairly finish their workloads and then obtains full exclusive utilization, eliminating contention and allowing it to regain the lost performance. On the other hand, the other tenants are fairly allocated and, as Figure 14 shows, they do not suffer from HoL-blocking.

Figure 11: The relative packet throughput of common datacenter workloads run in a standalone mode as a function of packet size with their raw performance in million packets per second (Mpps) at the top of the bars. Up to a 3% throughput increase with OSMOSIS compared to the PsPIN baseline stems from a kernel completion time variability introduced by the compute/IO schedulers.

Figure 10: The impact on the _Congestor_ throughput and the _Victim_ kernel completion time as a function of the _Congestor_ size and various fragment sizes.
Figure 14 also displays the true cost of the aforementioned gains. While the overall FCT is reduced for all tenants, the single kernel completion time shows a different story. The HoL-blocking is resolved for the _Victim_ tenants, for which the kernel completion time is reduced more than fivefold. However, the other _Congestor_ tenants display an up to eightfold increased median kernel completion time. While OSMOSIS increases the median single-packet processing time, it also achieves overall FCT gains for the IO set by allocating the resources fairly and more efficiently, and by parallelizing the packets appropriately.
## 8 Conclusions
Enabling user-level on-NIC processing in modern multi-tenant datacenters brings resource multiplexing challenges. OSMOSIS solves sNIC multi-tenancy by distributing sNIC resources, the egress and DMA bandwidth, and processing units, across flows with different priorities, input bandwidth, and computational requirements. To achieve a fair distribution of resources, OSMOSIS relies on sNIC-specific principles, such as work-conserving allocation of compute and IO resources. The evaluation shows that OSMOSIS efficiently redistributes resources, enabling QoS, performance isolation, and prioritization between various mixtures of flows. OSMOSIS improves FCT by up to 60% and is up to 83% fairer than typical schedulers. We believe that OSMOSIS could enable wider adoption of sNICs in cloud datacenters with low overhead.
## Acknowledgments
This project received funding from EuroHPC-JU under the grant agreements RED-SEA, No. 055776 and DEEP-SEA, No. 95560, the EuroHPC-JU "The European Pilot" project under the grant agreement No. 101034126 as part of the EU Horizon 2020 research and innovation programme, and a donation from Intel.
|
2309.14354 | Quantum optics in MATLAB | We provide a MATLAB numerical guide at the beginner level to support students
starting their research careers in theoretical quantum optics and related
areas. These resources are also valuable for undergraduate and graduate
students working on semester projects in similar fields. | Nilakantha Meher | 2023-09-21T15:20:52Z | http://arxiv.org/abs/2309.14354v2 | # Quantum optics in MATLAB
###### Abstract
We provide a MATLAB numerical guide at the beginner level to support students starting their research career in theoretical quantum optics. These resources are also valuable for undergraduate and graduate students working on semester projects in the field of quantum optics.
###### Contents
* I Introduction
* II Quantum states in MATLAB
* II.1 Number States
* II.2 Superposition of Number States
* II.3 Coherent States
* II.4 Thermal States
* II.5 Squeezed Vacuum States
* II.6 Number State Filtered Coherent State
* II.7 States of a two-level atom
* III Operators in MATLAB
* III.1 Annihilation, Creation and Number operators
* III.2 Hamiltonian for the electromagnetic field
* III.3 Pauli matrices for two-level atom
* IV Properties of quantum states
* IV.1 Photon number distribution
* IV.2 Average number of photons
* IV.3 Zero time-delay second-order coherence function
* V Atom-Field Interaction
* VI Two-mode field
* VI.1 Coupled cavities (energy exchange between two modes)
* VI.2 Beam splitter transformation
* VI.3 Mach-Zehnder interferometer
* VII Dissipative atom-field dynamics
* VII.1 Lindblad master equation
* VII.2 Monte-Carlo wavefunction method
* VIII Conclusion
* A Role of dimension of the field
## I Introduction
MATLAB is a user-friendly and robust framework for numerical computing based on matrix operations. Several numerical toolboxes or open-source packages written in MATLAB [1; 2; 3; 4; 5] have been designed to address analytically intractable problems in quantum mechanics.
In this tutorial, we present various numerical codes written in MATLAB to help students understand the basics of quantum optics. These codes can be easily extended to address a wide range of research problems in quantum optics that involve high-dimensional matrix manipulations. Importantly, they can be executed in any version of MATLAB without the need for pre-installed packages.
Before we start writing codes in MATLAB, we follow a few rules:
* Pure quantum states are represented by column matrices with unit norm, while mixed states are represented by square matrices with unit trace.
* Quantum operators are represented by square matrices.
* We need to pre-decide the dimension of the matrices for the states and operators based on the specific problem at hand. The dimension refers to the number of rows or columns. If your results deviate from your expectations, you may consider increasing the dimension of the matrices. We will learn how to decide the appropriate dimensions of the matrices as we write code.
* We use 'clear' to clear memory and 'clc' to clear the output screen at the beginning of a code to prevent any numerical errors.
* To suppress output, add a semicolon ';' at the end of the respective line.
* The symbol % is used to comment a line. The sentence after % is used to explain the code to the readers. It will not be executed in the run of a program and will not be displayed at the output.
* Table I lists a few commands that we use to perform calculations.
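As a quick warm-up with a few of these commands, the short snippet below can be run directly in the command window; it is our own illustrative example, and the matrix A in it is arbitrary:

clear; % Clear memory
clc; % Clear the command window/screen
A=[0 1;1 0]; % an arbitrary 2x2 test matrix (for illustration only)
kron(A,eye(2)) % tensor product of A with a 2x2 identity matrix
expm(A) % matrix exponential of A
trace(A) % trace of A
[vec,val]=eig(A) % eigenvectors (columns of vec) and eigenvalues (diagonal of val)
prod(1:4) % 4! = 24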
## II Quantum states in Matlab
### Number States
The number states are the most fundamental quantum states of a quantized electromagnetic (EM) field. The state \(|n\rangle\) represents a field having \(n\) photons.
In MATLAB, we represent a number state \(|n\rangle\) with a column matrix in which the element at the \((n+1)\)th position is 1, while the others are 0. The vacuum state is represented by a column matrix in which the first element is 1 and the others are 0.
| Operations | MATLAB commands |
| --- | --- |
| \(\times\) | * |
| \(\otimes\) (tensor product) | kron |
| \(\sqrt{a}\) | sqrt(a) |
| \(e^{a}\) | exp(a) |
| \(x^{a}\) | x^a |
| \(n!\) | prod(1:n) |
| Identity matrix of dimension \(d\) | eye(d) |
| \(e^{A}\) (exponential of matrix A) | expm(A) |
| Trace of a matrix A | trace(A) |
| Eigenvectors and eigenvalues of A | [vec, val]=eig(A) |
| \(n\)th column of a matrix A | A(:,n) |

Table 1: Used operations and their MATLAB commands.
**Trick:** Consider each column of an identity matrix to represent a number state.
**Code for number states (vacuum state, single-photon state, and two-photon state):**
clear; % Clear memory
clc; % Clear the command window/screen
d=5; % dimension of the field
I=eye(d); % Identity matrix of dimension d
Vacuum=I(:,1) % First column of identity matrix: vacuum state
Ket1=I(:,2) % Second column of identity matrix: single-photon state |1>
Ket2=I(:,3) % Third column of identity matrix: two-photon state |2>
The outputs are:
Vacuum =

1
0
0
0
0

Ket1 =

0
1
0
0
0

Ket2 =

0
0
1
0
0
It is to be noted that the above column matrices are derived from a 5\(\times\)5 identity matrix (d=5). To create higher number states, we need to use a larger value for 'd'. For example, if we wish to create the photon number state 20, 'd' must be chosen greater than 20.
**Code for number state with photon number 20:**
clear; % Clear memory
clc; % Clear the command window/screen
d=21; % dimension of the field
I=eye(d); % Identity matrix of dimension d
Ket20=I(:,21) % number state |20>
The output is a column matrix whose 21st element is 1 and all other elements are zero; this represents the number state \(|20\rangle\).
### Superposition of Number States
In the previous subsection, we mapped the number states to columns of an identity matrix. For example, \(n\)-photon state \(|n\rangle\) is mapped to \((n+1)\)th column. Now, using this mapping, let us we write the following superposition state in MATLAB:
\[|\psi\rangle=\frac{1}{\sqrt{3}}\left|2\right\rangle+\frac{1}{\sqrt{2}}\left|5 \right\rangle-\frac{1}{\sqrt{6}}\left|6\right\rangle. \tag{1}\]
**Code for a superposed state:**
clear; % Clear memory
clc; % Clear the command window/screen
d=7; % dimension of the field
I=eye(d); % Identity matrix of dimension d
Ket2=I(:,3); % two-photon state |2>
Ket5=I(:,6); % five-photon state |5>
Ket6=I(:,7); % six-photon state |6>
Psi=1/sqrt(3)*Ket2+1/sqrt(2)*Ket5-1/sqrt(6)*Ket6 % superposition state
The output is:
Psi =

0
0
0.5774
0
0
0.7071
-0.4082

The output is a column matrix in which the 3rd, 6th and 7th elements are non-zero. These non-zero values correspond to the amplitudes of the states \(|2\rangle\), \(|5\rangle\), and \(|6\rangle\) in the superposition state, respectively. As the largest number state present in the superposition is \(|6\rangle\), the dimension (d) of the field must be at least 7. In general, if the largest number state contained in a superposition is \(|n\rangle\), then d\(\geq n+1\).
_Note: Any superposition of number states can be expressed as a linear combination of the columns of an identity matrix. The dimension (d) of the matrix must be larger than the largest number-state present in the superposition state._
### Coherent States
A coherent state \(|\alpha\rangle\) is a superposition of all the number states [6]. In the number basis, it is expressed as [Ch. 3 of Ref. [7]]
\[|\alpha\rangle=e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n! }}\left|n\right\rangle. \tag{2}\]
Therefore, to write this state in MATLAB, we need to sum all the columns of an identity matrix weighted by the coefficients \(e^{-|\alpha|^{2}/2}\frac{\alpha^{n}}{\sqrt{n!}}\). In principle, one needs to add an infinite number of such column matrices. However, for a given \(\alpha\), the coefficient \(e^{-|\alpha|^{2}/2}\frac{\alpha^{n}}{\sqrt{n!}}\) becomes negligible as \(n\) increases. Thus, we truncate the sum at a dimension for which the norm of the state remains equal to 1 up to numerical precision.
**Code for coherent state:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10; % dimension of the field
I=eye(d);
alpha=0.6; % Amplitude of the coherent state
Coh=0; % initialization
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end
Coh % it will display the coherent state
N_c=norm(Coh) % Checking norm to be 1

The outputs are:
Coh =

0.8353
0.5012
0.2126
0.0737
0.0221
0.0059
0.0015
0.0003
0.0001
0.0000

N_c =

1
Although we have truncated the above state at d=10 (i.e. up to \(n=9\)), it is important to note that the norm N_c of this state remains equal to 1. If we take a larger value of 'alpha', the state with d=10 may no longer have unit norm. In that case, we need to increase the value of 'd'.
_Note: If the norm of a state is found to be less than 1, it is necessary to increase the dimension of the field 'd'. Please refer to Appendix A for an explanation of the significance of the dimension 'd'. It illustrates the amount of error that may arise when choosing a smaller value for 'd'._
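As a quick self-check of this note, the following snippet (our own illustration, reusing the coherent-state construction above) compares the norm of a coherent state with \(\alpha=3\) for a too-small and a sufficiently large dimension 'd':

clear; % Clear memory
clc; % Clear the command window/screen
alpha=3; % Amplitude of the coherent state
for d=[5 30] % too-small versus sufficiently large dimension (illustrative check)
I=eye(d);
Coh=0; % initialization
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end
norm(Coh) % well below 1 for d=5, essentially 1 for d=30
end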
### Thermal States
The electromagnetic radiation from an object at a non-zero temperature is thermal light. A thermal state in the number basis is [Ch. 2 of Ref. [7]]
\[\rho_{th}=\frac{1}{1+n_{th}}\sum_{n=0}^{\infty}\left(\frac{n_{th}}{1+n_{th}} \right)^{n}\left|n\right\rangle\left\langle n\right|, \tag{3}\]
where \(n_{th}\) is the average number of photons in the thermal state \(\rho_{th}\).
As the thermal states are mixed, we represent them by square matrices. These matrices are sum of square matrices \(\left|n\right\rangle\left\langle n\right|\) for all \(n\) with appropriate coefficients (probabilities). The matrix form of \(\left|n\right\rangle\left\langle n\right|\), for \(n=3\), is
\[\left|3\right\rangle\left\langle 3\right|=\left(\begin{array}{c}0\\ 0\\ 0\\ 1\\ 0\\ \vdots\end{array}\right)\left(\begin{array}{cccccc}0&0&0&1&0&\cdots\end{array}\right)=\left(\begin{array}{cccccc}0&0&0&0&0&\cdots\\ 0&0&0&0&0&\cdots\\ 0&0&0&0&0&\cdots\\ 0&0&0&1&0&\cdots\\ 0&0&0&0&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right). \tag{4}\]
It is a square matrix whose 4th element of the diagonal is non-zero.
**Code for thermal state:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10;
I=eye(d);
nth=0.5; % Average number of photons in thermal state
RhoTh=0; % Initialization
for x=0:d-1
RhoTh=RhoTh+nth^(x)/(1+nth)^(x+1)*I(:,x+1)*I(:,x+1)';
end
RhoTh % it will display the thermal state
N_th=trace(RhoTh) % checking trace to be 1

The outputs are:
RhoTh is a 10\(\times\)10 diagonal matrix with diagonal entries 0.6667, 0.2222, 0.0741, 0.0247, 0.0082, 0.0027, 0.0009, 0.0003, 0.0001, 0.0000; all off-diagonal elements are zero.
N_th=
1
### Squeezed Vacuum States
A squeezed vacuum state in the number basis is [Ch. 7 of Ref. [7]]
\[\left|\xi\right\rangle=\frac{1}{\sqrt{\cosh r}}\sum_{n=0}^{\infty}(-1)^{n} \frac{\sqrt{(2n)!}}{2^{n}n!}e^{in\theta}(\tanh r)^{n}\left|2n\right\rangle, \tag{5}\]
where \(r\) and \(\theta\) are the squeeze parameters. This state is a superposition of all the even number states.
**Code for squeezed vacuum state:**
clear; % Clear memory
clc; % Clear the command window/screen
d=20;
I=eye(d);
r=0.3; % squeezing parameter
theta=pi/4; % squeezing direction
Sqz=0; % initialization
for x=0:(d/2)-1
p=(1/sqrt(cosh(r)))*sqrt(prod(1:(2*x)))/(2^x*prod(1:x));
Sqz=Sqz+p*(-1)^x*exp(i*x*theta)*(tanh(r))^x*I(:,2*x+1);
end
Sqz % squeezed vacuum state output
N_sqz=norm(Sqz) % checking norm to be 1
The outputs are:
Sqz =
0.9781 + 0.0000i 0.0000 + 0.0000i -0.1425 - 0.1425i 0.0000 + 0.0000i 0.0000 + 0.0508i 0.0000 + 0.0000i 0.0096 - 0.0096i 0.0000 + 0.0000i -0.0037 + 0.0000i 0.0000 + 0.0000i 0.0007 + 0.0007i 0.0000 + 0.0000i -0.0000 - 0.0003i 0.0000 + 0.0000i -0.0001 + 0.0001i 0.0000 + 0.0000i 0.0000 - 0.0000i -0.0000 + 0.0000i -0.0000 - 0.0000i
0.0000 + 0.0000i
N_sqz=
1
### Number State Filtered Coherent State
A number state filtered coherent state (NSFCS) in number basis is [87 ]
\[\left|\psi(\alpha,m)\right\rangle=\frac{e^{-\left|\alpha\right|^{2}/2}}{N_{m}} \sum_{n=0,n\neq m}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle. \tag{6}\]
This definition implies that the state is obtained from a coherent state by removing the number state \(\left|m\right\rangle\) from the superposition.
**Code for NSFCS:**
clear; % Clear memory
clc; % Clear the command window/screen
d=15; % dimension of the field
m=4; % number state 4 will be absent in the distribution
I=eye(d); % identity matrix
alpha=0.8; % Amplitude of the coherent state
nsfs=0; % initialization
for x=0:d-1
if x==m
nsfs=nsfs+0*I(:,x+1);
else
nsfs=nsfs+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end
end
NSFS=nsfs/norm(nsfs) % this is to normalize the state
The output is:
NSFS =
0.7275
0.5820
0.3292
0.1521
0
0.0218
0.0071
0.0021
0.0006
0.0002
0.0000
0.0000
0.0000
0.0000
0.0000
Note that the 5th element is zero, indicating the absence of the number state \(\left|4\right\rangle\).
### States of a two-level atom
Consider a two-level atom whose excited and ground states are represented by \(\left|e\right\rangle\) and \(\left|g\right\rangle\) respectively. We represent these states by column matrices of dimension 2:
\[\left|e\right\rangle=\left(\begin{array}{c}1\\ 0\end{array}\right),\left|g\right\rangle=\left(\begin{array}{c}0\\ 1\end{array}\right). \tag{7}\]
Then, any superposition of these states is
\[\left|\psi\right\rangle=\alpha\left|e\right\rangle+\beta\left|g\right\rangle, \tag{8}\]
such that \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\) (normalization condition).
**Code for atomic states:**
clear; % Clear memory
clc; % Clear the command window/screen
es=[1;0] % excited state
gs=[0;1] % ground state
alpha=sqrt(0.4); % superposition coefficient
beta=sqrt(0.6); % superposition coefficient
Psi=alpha*es+beta*gs % superposition state
The outputs are:
es =

1
0

gs =

0
1

Psi =

0.6325
0.7746

The norm of 'Psi' is \(0.6325^{2}+0.7746^{2}=1\).
## III Operators in Matlab
The operators corresponding to the quantized EM field will be written in the number basis, while the operators corresponding to the two-level atom will be written in the atomic state basis (\(\left|e\right\rangle\) and \(\left|g\right\rangle\)).
### Annihilation, Creation and Number operators
The basic operators required to study the EM field in quantum optics are the annihilation operator, the creation operator and the number operator [9]. To represent these operators in matrix form, we first need to know their action on number states.
The action of annihilation and creation operators on a number state is
\[\hat{a}\left|n\right\rangle =\sqrt{n}\left|n-1\right\rangle, \tag{9a}\] \[\hat{a}^{\dagger}\left|n\right\rangle =\sqrt{n+1}\left|n+1\right\rangle. \tag{9b}\]
The action of annihilation operator on a field state corresponds to the destruction of a photon from the field, while the action of creation operator corresponds to the creation of a photon in the field. The above two equations give the matrix form:
\[\hat{a}=\left(\begin{array}{ccccc}0&\sqrt{1}&0&0&\cdots\\ 0&0&\sqrt{2}&0&\cdots\\ 0&0&0&\sqrt{3}&\cdots\\ 0&0&0&0&\ddots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right),\qquad\hat{a}^{\dagger}=\left(\begin{array}{ccccc}0&0&0&0&\cdots\\ \sqrt{1}&0&0&0&\cdots\\ 0&\sqrt{2}&0&0&\cdots\\ 0&0&\sqrt{3}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right). \tag{10}\]
_Note: The creation operator is the adjoint of the annihilation operator._
The number operator is \(\hat{N}=\hat{a}^{\dagger}\hat{a}\), and its action on number state is
\[\hat{N}\ket{n}=n\ket{n}. \tag{11}\]
Thus, it is diagonal in number basis. The expectation value of the number operator in a number state counts the number of photons in that state, that is,
\[\bra{n}\hat{N}\ket{n}=n. \tag{12}\]
**Code for annihilation, creation and number operators:**
clear; % Clear memory
clc; % Clear the command window/screen
d=6; % dimension of the operator
A = diag(sqrt(1:d-1), 1) % Annihilation operator
Ad=A' % Creation operator (adjoint of annihilation operator)
N=Ad*A % Number operator
The outputs are:
A =

0 1.0000 0 0 0 0
0 0 1.4142 0 0 0
0 0 0 1.7321 0 0
0 0 0 0 2.0000 0
0 0 0 0 0 2.2361
0 0 0 0 0 0

Ad =

0 0 0 0 0 0
1.0000 0 0 0 0 0
0 1.4142 0 0 0 0
0 0 1.7321 0 0 0
0 0 0 2.0000 0 0
0 0 0 0 2.2361 0

N =

0 0 0 0 0 0
0 1.0000 0 0 0 0
0 0 2.0000 0 0 0
0 0 0 3.0000 0 0
0 0 0 0 4.0000 0
0 0 0 0 0 5.0000
### Hamiltonian for the electromagnetic field
The Hamiltonian for a quantized electromagnetic field is [Ch. 2 of Ref. [7], Ch. 1 of Ref. [9]]
\[\hat{H}=\left(\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\hat{I}_{f}\right)\hbar\omega, \tag{13}\]
where \(\hbar=1.055\times 10^{-34}\) Js is the reduced Planck constant and \(\omega\) is the frequency of the field. The operator \(\hat{I}_{f}\) is an identity operator. In many textbooks, the identity operator is not written explicitly.
_Note: We often take \(\hbar\omega=1\) to normalize the output quantities in the unit of energy._
**Code for EM field Hamiltonian:**
clear; % Clear memory
clc; % Clear the command window/screen
d=6; % dimension of the operator
I=eye(d); % identity matrix
hbar=1;
omega=1; % frequency of the cavity field
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator (adjoint of annihilation operator)
H=hbar*omega*(Ad*A+(1/2)*I) % Hamiltonian

The output is:

H =

0.5000 0 0 0 0 0
0 1.5000 0 0 0 0
0 0 2.5000 0 0 0
0 0 0 3.5000 0 0
0 0 0 0 4.5000 0
0 0 0 0 0 5.5000
### Pauli matrices for two-level atom
Let the excited and ground states of a two-level atom, represented by \(\left|e\right\rangle\) and \(\left|g\right\rangle\), have the energies \(\hbar\omega_{0}/2\) and \(-\hbar\omega_{0}/2\) respectively. Thus, the energy difference is \(\hbar\omega_{0}\). The Hamiltonian for a two-level atom is [10]
\[\hat{H}=\frac{\hbar\omega_{0}}{2}\hat{\sigma}_{z}, \tag{14}\]
where
\[\hat{\sigma}_{z}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right), \tag{15}\]
is a Pauli matrix. To account for the transition between the two energy levels \(\left|e\right\rangle\) and \(\left|g\right\rangle\), we consider the raising and lowering operators:
\[\hat{\sigma}_{+}=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right),\;\hat{\sigma}_{-}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right), \tag{16}\]
respectively. Their actions are
\[\hat{\sigma}_{+}\left|g\right\rangle=\left|e\right\rangle,\hat{\sigma}_{-} \left|e\right\rangle=\left|g\right\rangle. \tag{17}\]
_Note: We often take \(\hbar\omega_{0}=1\) to normalize the output quantities in the unit of energy._
**Code for atomic Hamiltonian, raising and lowering operators:**
clear; % Clear memory
clc; % Clear the command window/screen
hbar=1;
omega_0=1; % Atomic transition frequency
Sz=[1 0;0 -1] % Sigma_z operator
Splus=[0 1;0 0] % Raising operator
Sminus=[0 0;1 0] % Lowering operator
H=hbar*omega_0/2*Sz % Hamiltonian for a two-level atom
The outputs are
Sz =

1 0
0 -1

Splus =

0 1
0 0

Sminus =

0 0
1 0

H =

0.5000 0
0 -0.5000
## IV Properties of quantum states
### Photon number distribution
Upon measurement on a number state \(|n\rangle\), we detect \(n\) photons with unit probability. However, there are several nonclassical fields that do not possess a definite number of photons [11]. When we measure the photon number on such fields, we collapse the field state to a particular number state with a definite number of photons. If we perform repeated measurements on an ensemble of such field states, we encounter different photon numbers with associated probabilities. For example, the photon-number probability distribution in a coherent state is Poissonian.
The probability of finding \(n\) photons in a given field state \(|\psi\rangle\) is
\[P_{n}=|\langle n|\psi\rangle|^{2}. \tag{18}\]
In particular, the probability of detecting \(n\) photons in a coherent state \(|\alpha\rangle\) is [Ch. 3 of Ref. [7]]
\[P_{n}=|\langle n|\alpha\rangle|^{2}=e^{-|\alpha|^{2}}\frac{|\alpha|^{2n}}{n!}, \tag{19}\]
and in a thermal state is [Ch. 2 of Ref. [7]]
\[P_{n}=\langle n|\rho_{th}|n\rangle=\frac{n_{th}^{n}}{(1+n_{th})^{(1+n)}}. \tag{20}\]
**Code for photon-number probability distributions in coherent and thermal states:**
clear; % Clear memory
clc; % Clear the command window/screen
d=15; % dimension of the field
I=eye(d);

%% Coherent State
alpha=2; % Amplitude of the coherent state
Coh=0; % initialization
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end

for n=0:d-1
PnC(n+1)=norm(I(:,n+1)'*Coh)^2; % Probability of |n> in coherent state
end

%% Thermal state
nth=0.5; % Average number of photons in thermal state
RhoTh=0;
for x=0:d-1
RhoTh=RhoTh+nth^(x)/(1+nth)^(x+1)*I(:,x+1)*I(:,x+1)';
end
for n=0:d-1
PnTh(n+1)=I(:,n+1)'*RhoTh*I(:,n+1); % Probability of |n> in thermal state
end

%% Bar plots
n=0:d-1;
figure(1)
bar(n,PnC)

figure(2)
bar(n,PnTh)
The output bar plots are given in Fig. 1. The height of a bar indicates the probability of detecting a corresponding number state in coherent state (left) and thermal state (right).
### Average number of photons
The average number of photons in a given field state \(\left|\psi\right\rangle\) is
\[\left\langle a^{\dagger}a\right\rangle=\left\langle\psi\right|a^{\dagger}a \left|\psi\right\rangle. \tag{21}\]
For number state \(\left|n\right\rangle\),
\[\left\langle a^{\dagger}a\right\rangle=\left\langle n\right|a^{\dagger}a \left|n\right\rangle=n, \tag{22}\]
which is the number of photons in the state.
The average number of photons in a coherent state is [Ch. 3 of Ref. [7]]
\[\left\langle a^{\dagger}a\right\rangle=\left\langle\alpha\right|a^{\dagger}a \left|\alpha\right\rangle=\left|\alpha\right|^{2}, \tag{23}\]
Figure 1: Photon number distribution \(P_{n}\) for coherent state \(\left|\alpha\right\rangle\) (\(\alpha=2\)) and for thermal state \(\rho_{th}\) with \(n_{th}=0.5\).
and in a thermal state is
\[\langle a^{\dagger}a\rangle=\mathrm{Tr}(a^{\dagger}a\rho_{th})=n_{th}. \tag{24}\]
**Code for calculating the average number of photons:**
clear; % Clear memory
clc; % Clear the command window/screen
d=20; % dimension of the field
I=eye(d); % identity matrix

%%%% create number operator
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
AdA=Ad*A; % Number operator

%% Create a number state
Ket4=I(:,5); % four-photon state |4>

AdANumber=Ket4'*AdA*Ket4 % Average number of photons in the number state

%% Create a coherent state
alpha=sqrt(3); % Amplitude of the coherent state
Coh=0; % initialization
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end

AdACoherent=Coh'*AdA*Coh % Average number of photons in coherent state

%%% Create a thermal state
nth=0.85; % Assumed average number of photons in thermal state
RhoTh=0;
for x=0:d-1
RhoTh=RhoTh+nth^(x)/(1+nth)^(x+1)*I(:,x+1)*I(:,x+1)';
end

AdAThermal=trace(AdA*RhoTh) % Average number of photons in thermal state
The outputs are:
AdANumber =
4
AdACoherent =
3.0000
AdAThermal =
0.8500
In the above code, we selected the number state \(|4\rangle\) and calculated the average number of photons to be 4. This outcome illustrates that the number operator counts the number of photons in a number state. This is because the number states are eigenstates of the number operator. For coherent state, we take the amplitude to be \(\alpha=\sqrt{3}\) and calculated the average number of photons to be \(|\alpha|^{2}=3\). Similarly, we assumed \(n_{th}\) to be 0.85 to create a thermal state in the code and that is reflected at the output.
### Zero time-delay second-order coherence function
The zero time-delay second-order coherence function characterizes the photon statistics of a field [6; 12]. It is defined as [Ch. 5 of Ref. [7]]
\[g^{(2)}(0)=\frac{\langle a^{\dagger 2}a^{2}\rangle}{\langle a^{\dagger}a \rangle^{2}}, \tag{25}\]
where \(\langle a^{\dagger 2}a^{2}\rangle=\langle\psi|\,a^{\dagger 2}a^{2}\,|\psi\rangle\) and \(\langle a^{\dagger}a\rangle=\langle\psi|\,a^{\dagger}a\,|\psi\rangle\).
Coherent state \(|\alpha\rangle\):
\[g^{(2)}(0)=1\ \ \ \ \ (\mbox{Poissonian photon statistics}) \tag{26}\]
A field with \(g^{(2)}(0)<1\) exhibits sub-Poissonian photon statistics, and a field with \(g^{(2)}(0)>1\) possesses super-Poissonian photon statistics.
Number state \(|n\rangle\):
\[g^{(2)}(0)=1-(1/n),\ \ \ (\mbox{Less than 1 and hence, sub-Poissonian}) \tag{27}\]
Thermal state \(\rho_{th}\):
\[g^{(2)}(0)=2,\ \ \ \ (\mbox{Greater than 1 and hence, super-Poissonian}) \tag{28}\]
**Code for calculating \(g^{(2)}(0)\):**
clear; % Clear memory
clc; % Clear the command window/screen
d=25; % dimension of the field
I=eye(d); % identity matrix

A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
AdA=Ad*A; % Number operator
Ad2A2=Ad*Ad*A*A; % A-dagger-square A-square

%% Create a number state
Ket4=I(:,5); % four-photon state |4>

AdANumber=Ket4'*AdA*Ket4;
Ad2A2Number=Ket4'*Ad2A2*Ket4;

g2Number=Ad2A2Number/(AdANumber)^2 % g2(0) for number state

%% Create a coherent state
alpha=sqrt(3); % Amplitude of the coherent state
Coh=0; % initialization
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end
AdACoherent=Coh'*AdA*Coh;
Ad2A2Coherent=Coh'*Ad2A2*Coh;

g2Coherent=Ad2A2Coherent/(AdACoherent)^2 % g2(0) for coherent state

%%%% Create a thermal state
nth=0.85; % Average number of photons in thermal state
RhoTh=0;
for x=0:d-1
RhoTh=RhoTh+nth^(x)/(1+nth)^(x+1)*I(:,x+1)*I(:,x+1)';
end
AdAThermal=trace(AdA*RhoTh);
Ad2A2Thermal=trace(Ad2A2*RhoTh);

g2Thermal=Ad2A2Thermal/(AdAThermal)^2 % g2(0) for thermal state
The outputs are:
g2Number = 0.7500
g2Coherent = 1.0000
g2Thermal = 2.0000
## V Atom-field interaction
The strength of the interaction (energy exchange) between an atom and an electromagnetic field in free space is small [13; 14]. It is enhanced by confining them in a cavity [Fig. 2(a), [14; 15]]. The Hamiltonian for an atom-cavity system is [Ch. 4 of Ref. [7], Ch. 12 of Ref. [16]]
\[\hat{H}=\frac{\hbar\omega_{0}}{2}\hat{\sigma}_{z}+\hbar\omega\hat{a}^{\dagger} \hat{a}+\hbar g(\hat{\sigma}_{-}\hat{a}^{\dagger}+\hat{\sigma}_{+}\hat{a}). \tag{29}\]
This is the Jaynes-Cummings (JC) Hamiltonian [17]. The first and second terms are the energy operators for a two-level atom and the cavity field, respectively. We have omitted the constant \(1/2\hbar\omega\) from the energy operator of the cavity field since it doesn't significantly affect the time dynamics. The last term represents the interaction between the field and atom, facilitating energy exchange in the form of photons. The atom absorbs a photon from the field and subsequently re-emits it into the field.
**Important Note:** When dealing with two physical systems in quantum mechanics, we need to be careful while writing their combined operators and states. Consider two subsystems A and B. An operator for a composite system is written as a tensor product of the operators of the individual sub-systems, that is,
\[\hat{C}=\hat{A}\otimes\hat{B}, \tag{30}\]
where \(\hat{A}\) and \(\hat{B}\) are the operators corresponding to the sub-systems A and B, and \(\hat{C}\) is an operator for the composite system.
Similarly, a state of the combined system is also a tensor product of the states of the individual sub-systems, that is,
\[\left|\psi\right\rangle=\left|\psi_{A}\right\rangle\otimes\left|\psi_{B} \right\rangle. \tag{31}\]
It is important to keep the same ordering for operators and states, that is, we always place the operator and the state of sub-system A before those of sub-system B.
We may note in the last term of Eq. (29), particularly the terms \(\hat{\sigma}_{-}\hat{a}^{\dagger}\) and \(\hat{\sigma}_{+}\hat{a}\), the atomic operators are placed first and then the field operators. Are we not following this rule in the first and second terms of Eq. (29)? We are following, and to see it, we need to re-write the JC Hamiltonian as follows:
\[\hat{H}=\frac{\hbar\omega_{0}}{2}(\hat{\sigma}_{z}\otimes\hat{I}_{f})+\hbar \omega(\hat{I}_{a}\otimes\hat{a}^{\dagger}\hat{a})+\hbar g(\hat{\sigma}_{-} \otimes\hat{a}^{\dagger}+\hat{\sigma}_{+}\otimes\hat{a}), \tag{32}\]
where \(\hat{I}_{a}\) and \(\hat{I}_{f}\) are the identity operators corresponding to the two-level atom and the field, respectively. The dimension of \(\hat{I}_{a}\) is 2 and the dimension of \(\hat{I}_{f}\) depends on the dimension of the field state.
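As a small numerical check of the ordering rule in Eqs. (30) and (31), the following snippet (our own illustration) verifies that acting with \(\hat{\sigma}_{-}\otimes\hat{a}^{\dagger}\) on \(\left|e\right\rangle\otimes\left|1\right\rangle\) gives the same result as acting on each sub-system separately, provided the atomic factor is always placed first:

clear; % Clear memory
clc; % Clear the command window/screen
d=4; % small field dimension for this illustrative check
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
Sminus=[0,0;1,0]; % atomic lowering operator
es=[1;0]; % excited state
I_f=eye(d);
n1=I_f(:,2); % single-photon state |1>
lhs=kron(Sminus,Ad)*kron(es,n1); % operator and state built in the same order
rhs=kron(Sminus*es,Ad*n1); % acting on the atom and the field separately
norm(lhs-rhs) % returns 0: both constructions agree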
To study the time dynamics of the atom-cavity system, we define the unitary operator to be
\[\hat{U}=e^{-i\hat{H}t/\hbar}, \tag{33}\]
where \(\hat{H}\) is given above. Let the combined initial state of atom and field be
\[\left|\psi_{in}\right\rangle=\left|e\right\rangle\otimes\left|n\right\rangle, \tag{34}\]
where the atom is in its excited state \(\left|e\right\rangle\) and the cavity field has precisely \(n\) photons. Note that we have followed the same order for the state too (atomic state and then field state), which is necessary to have a meaningful action of the operators on the state.
For resonant case, that is, \(\omega=\omega_{0}\), the combined state evolves under the JC Hamiltonian as [Ch. 4 of ref. [7]]
\[\left|\psi(t)\right\rangle=e^{-iHt/\hbar}\left|\psi_{in}\right\rangle=\cos(gt \sqrt{n+1})\left|e,n\right\rangle-i\sin(gt\sqrt{n+1})\left|g,n+1\right\rangle, \tag{35}\]
where \(\left|e,n\right\rangle=\left|e\right\rangle\otimes\left|n\right\rangle\) and \(\left|g,n+1\right\rangle=\left|g\right\rangle\otimes\left|n+1\right\rangle\). At \(t=0\), the atom is in excited state and the cavity field is in \(n\)-photon state. At time \(t=\pi/(2g\sqrt{n+1})\), the atom goes to its ground state by emitting a photon into the field. Then, at time \(t=\pi/(g\sqrt{n+1})\), the atom comes back to its initial state \(\left|e\right\rangle\) by absorbing a photon from the field. Thus, the atom will be periodically exchanging energy with the field [see Fig. 2(b)].
The probability of finding the atom in excited state and the field in \(n\)-photon state is [Ch. 12 of [16]]
\[P_{e}(t)=\left|\left\langle e,n\right|\psi(t)\right\rangle\right|^{2}=\cos^{2} (gt\sqrt{n+1}), \tag{36}\]
and the probability of finding the atom in ground state and the field in \((n+1)\)-photon state is
\[P_{g}(t)=\left|\left\langle g,n+1\right|\psi(t)\right\rangle\right|^{2}=\sin^{ 2}(gt\sqrt{n+1}). \tag{37}\]
To study the time dynamics of the atom-cavity system using MATLAB, we set \(n=4\) (number of photons), so that \(P_{e}=\cos^{2}(\sqrt{5}gt)\) and \(P_{g}=\sin^{2}(\sqrt{5}gt)\). The following code calculates \(P_{e}\) and \(P_{g}\) for time \(t=0\) to \(T\) and plots them in Fig. 2(b). One may also study the atom-field interaction by considering other initial number states, and their superpositions.
**Code for atom-field interaction:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10; % dimension of the cavity field
hbar=1;
W0=1; % atomic frequency
Wf=1; % cavity field frequency
g=0.1; % coupling constant
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
Sz=[1,0;0,-1]; % Sigma z
Splus=[0,1;0,0]; % sigma plus
Sminus=[0,0;1,0]; % sigma minus
gs=[0;1]; % ground state
es=[1;0]; % excited state

I_a=eye(2); % Identity operator for the atom
I_f=eye(d); % Identity operator for the field

Hatom=(1/2)*hbar*W0*kron(Sz,I_f); % Atomic Hamiltonian
Hfield=hbar*Wf*kron(I_a,Ad*A); % Field Hamiltonian
Hint=hbar*g*(kron(Splus,A)+kron(Sminus,Ad)); % interaction Hamiltonian
H=Hatom+Hfield+Hint; % JC Hamiltonian (total)

n=4; % initial number of photons in the cavity

en=kron(es,I_f(:,n+1)); % Atom in excited state, field has n photons
gn=kron(gs,I_f(:,n+2)); % Atom in ground state and field has n+1 photons

Psi=en; % Initial state

dt=0.1; % time step
U=expm(-i*H*dt/hbar); % Unitary operator
T=0:dt:30; % Total evolution time

for t=1:length(T)
Pe(t)=norm(en'*Psi)^2; % Probability of |e,n>
Pg(t)=norm(gn'*Psi)^2; % Probability of |g,n+1>
Psi=U*Psi; % Time evolved state
Psi=Psi/norm(Psi); % Normalizing the state
end

plot(T,Pe,'r',T,Pg,'k') % plotting Pe in red colour and Pg in black colour

Figure 2: (a) An atom is trapped inside a cavity. (b) Time evolution of the probabilities \(P_{e}\) and \(P_{g}\) for atom-field coupling strength \(g=0.1\) and the initial photon number \(n=4\). (c) Two cavities Cavity 1 and Cavity 2 are placed near to each other (coupled) to exchange energy. (d) Time evolution of the probabilities \(P_{10}\) and \(P_{01}\) for cavity-cavity coupling strength \(J=0.1\). The photon is periodically exchanged between the cavities.
The output plot is shown in Fig. 2(b). At \(t=0\), \(P_{e}=1\) and \(P_{g}=0\).
At \(t=\pi/(2g\sqrt{n+1})=\pi/(2\times 0.1\times\sqrt{4+1})\approx 7.02\), \(P_{g}=1\) and \(P_{e}=0\).
Now, consider the initial state of the field to be in a coherent state and the atom in its excited state. We calculate the expectation value of the atomic operator \(\sigma_{z}\) to be [Ch. 4 of Ref. [7]]
\[W=\langle\hat{\sigma}_{z}(t)\rangle=e^{-|\alpha|^{2}}\sum_{n=0}^{\infty}\frac{| \alpha|^{2n}}{n!}\cos(2gt\sqrt{n+1}). \tag{38}\]
**Code for calculating expectation value of the atomic operator \(\hat{\sigma}_{z}\) when the field is in coherent state:**
clear; % Clear memory
clc; % Clear the command window/screen
d=50; % dimension of the cavity field
hbar=1;
W0=1; %atomic frequency
Wf=1; %cavity field frequency
g=0.1; %coupling constant
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; %Creation operator
Sz=[1,0;0,-1]; %Sigma z
Splus=[0,1;0,0]; %sigma plus
Sminus=[0,0;1,0]; %sigma minus
gs=[0;1]; %ground state
es=[1;0]; %excited state
I_a=eye(2); %Identity operator for the atom
I_f=eye(d); %Identity operator for the field
Hatom=(1/2)*hbar*W0*kron(Sz,I_f); % Atomic Hamiltonian
Hfield=hbar*Wf*kron(I_a,Ad*A); % Field Hamiltonian
Hint=hbar*g*(kron(Splus,A)+kron(Sminus,Ad)); % interaction Hamiltonian
H=Hatom+Hfield+Hint; % JC Hamiltonian

alpha=3; % coherent state amplitude
Coh=0;
for x=0:d-1
Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I_f(:,x+1);
end

Psi=kron(es,Coh); % Initial state: atom in |e> and field in coherent state

dt=0.1; % time step
U=expm(-i*H*dt); % Unitary operator
T=0:dt:500; % Total evolution time

for t=1:length(T)
W(t)=Psi'*kron(Sz,I_f)*Psi; % Sz average (atomic inversion)
Psi=U*Psi; % time evolution of the state
Psi=Psi/norm(Psi); % normalize the state
end

plot(T,W)

The output plot is shown in Fig. 3. As can be seen in the plot, the oscillation of \(\langle\sigma_{z}(t)\rangle\) (atomic inversion) collapses (W=0) for a duration of time and then revives. For more details on this interesting observation, specifically the collapse and revival of \(\langle\sigma_{z}(t)\rangle\) (atomic inversion), we recommend the readers to see the standard textbooks [7; 9; 16].
Figure 3: Time evolution of \(\langle\sigma_{z}(t)\rangle\) (atomic inversion) for the initial state: the atom is in excited state and field is in a coherent state. We set \(\alpha=3\) and atom-cavity coupling strength \(g=0.1\).
In addition, the readers may study this model by considering other inputs such as squeezed vacuum, NSFCS, etc.
## VI Two-mode field
If two electromagnetic fields differ in either frequency, propagation direction, or polarization, they are referred to as two-mode fields. In such cases, we use different field operators to represent them. Let the annihilation operators for a two-mode field be \(\hat{a}_{1}\) and \(\hat{a}_{2}\), and the creation operators be \(\hat{a}_{1}^{\dagger}\) and \(\hat{a}_{2}^{\dagger}\).
### Coupled cavities (energy exchange between two modes)
These two fields exchange energy if we confine them in two coupled cavities [Fig. 2(c)]. The Hamiltonian for this configuration is [15]
\[\hat{H}=\hbar\omega\hat{a}_{1}^{\dagger}\hat{a}_{1}+\hbar\omega\hat{a}_{2}^{ \dagger}\hat{a}_{2}+\hbar J(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{1}\hat{a }_{2}^{\dagger}), \tag{39}\]
where \(\omega\) is the resonance frequency of both cavities, and \(J\) is the coupling strength between them.
Let the first cavity contains \(N-n\) photons and the second cavity contains \(n\) photons, such that, the total number of photons is \(N\). Then, the time evolved state is [18]
\[\ket{\psi(t)}=e^{-i\hat{H}t/\hbar}\ket{N-n,n}=e^{-iN\omega t}\sum _{k,k^{\prime}=0}^{N-n,n}{}^{N-n}C_{k}\ ^{n}C_{k^{\prime}}(\cos Jt)^{N-(k+k^{\prime})}(-i\sin Jt)^{k+k^{\prime}}\] \[\times\sqrt{\ \frac{{}^{N}C_{n}}{{}^{N}C_{n+k-k^{\prime}}}}\ket{N-(n+k-k^{ \prime}),n+k-k^{\prime}}. \tag{40}\]
To study the cavity-cavity dynamics using MATLAB, we consider the simple case, that is, one photon is in the first cavity and the second cavity is in vacuum (no photons), such that, the total number of photons in two cavities is \(N=1\). Then, the initial state is
\[\ket{\psi_{in}}=\ket{1,0}. \tag{41}\]
The state at a later time \(t\) is
\[\ket{\psi(t)}=e^{-i\hat{H}t/\hbar}\ket{1,0}=\cos Jt\ket{1,0}-i\sin Jt\ket{0,1 }. \tag{42}\]
The probability of finding the photon in the first cavity is
\[P_{10}(t)=\ket{\bra{1,0}\psi(t)}^{2}=\cos^{2}Jt, \tag{43}\]
and the probability of finding the photon in the second cavity is
\[P_{01}(t)=\ket{\bra{0,1}\psi(t)}^{2}=\sin^{2}Jt. \tag{44}\]
**Code for coupled-cavity dynamics:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10; % dimension of the cavity field
hbar=1;
W1=1; % Resonance frequency of the first cavity
W2=1; % Resonance frequency of the second cavity
J=0.1; % inter-cavity coupling constant
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
I_f=eye(d); %Identity operator for the fields
H=hbar*W1*kron(Ad*A,I_f)+hbar*W2*kron(I_f,Ad*A)+hbar*J*(kron(Ad,A)+kron(A,Ad));
S10=kron(I_f(:,2),I_f(:,1)); % First cavity in |1> and second cavity in |0>
S01=kron(I_f(:,1),I_f(:,2)); % Second cavity in |1> and first cavity in |0>
Psi=S10; %Initial state
dt=0.1; % time step
U=expm(-i*H*dt/hbar); % Unitary operator
T=0:dt:50; % Total evolution time
for t=1:length(T)
P10(t)=norm(S10'*Psi)^2; %Prob. of finding the photon in first cavity
P01(t)=norm(S01'*Psi)^2; %Prob. of finding the photon in second cavity
Psi=U*Psi; %Time evolution
Psi=Psi/norm(Psi); %Normalization
end
plot(T,P10,'r',T,P01,'k') %Plotting in red and black
The output plot is shown in Fig. 2(d). The photon, which was initially in the first cavity, is completely transferred to the second cavity at time \(t=\pi/2J\).
### Beam splitter transformation
A beam splitter has two input ports and two output ports [see Fig. 4(a)]. When a beam of light is directed into one of the input ports, the beam splitter divides it into two separate beams. The intensity of the resulting output beams is determined by the splitting ratio of the beam splitter. In case of a 50:50 ratio beam splitter, the intensities of the output beams are equal.
If we consider a quantum field (nonclassical light), for example a number state, input to the beam splitter, then the beam splitter will distribute the photons over the two outputs with different probabilities. For instance, if the input is the number state \(\left|5\right\rangle\), the possible outputs are: \(\left|5,0\right\rangle,\left|4,1\right\rangle,\left|3,2\right\rangle,\left|2,3\right\rangle,\left|1,4\right\rangle,\left|0,5\right\rangle\). The probabilities of these outputs depend on the beam splitter ratio. However, once we do a measurement at the output, we realize one of them.
Consider a beam splitter as shown in Fig. 4(a) with its unitary transformation [Ch. 6 of Ref. [7], Ch. 5 of Ref. [16]]
\[\hat{U}_{BS}=e^{i\theta(\hat{a}^{\dagger}\hat{b}+\hat{a}\hat{b}^{\dagger})}, \tag{45}\]
where \(\hat{a}\) and \(\hat{b}\) are the annihilation operators for mode-a and mode-b, respectively. The parameter \(\theta\) decides the beam splitting ratio, that is, \(\cos^{2}\theta\) and \(\sin^{2}\theta\) are the reflectivity and transmissivity of the beam splitter respectively. We consider the beam splitter to be 50:50 by taking \(\theta=\pi/4\). As a result, it splits the beam in to two with equal intensities.
Let the input be \(\left|n,0\right\rangle\), where \(n\) photons are in mode-a and the mode-b is in vacuum. Thus, the input intensities are \(\langle\hat{a}^{\dagger}\hat{a}\rangle=n\) and \(\langle\hat{b}^{\dagger}\hat{b}\rangle=0\). The output state will be a superposition of all possible combinations generated from the distribution of \(n\) photons in two modes [Ch. 5 of Ref. [16]]:
\[\left|\psi_{out}\right\rangle=\hat{U}_{BS}\left|n,0\right\rangle=\frac{1}{ \sqrt{2^{n}}}\sum_{k=0}^{n}i^{k}\sqrt{\frac{n!}{(n-k)!k!}}\left|n-k,k\right\rangle. \tag{46}\]
The average number of photons (\(\sim\) intensity) at the output port \(a_{out}\) is
\[I_{a}=\left\langle\psi_{out}\right|(\hat{a}^{\dagger}\hat{a}\otimes\hat{I}_{f })\left|\psi_{out}\right\rangle=\frac{n}{2}, \tag{47}\]
and at the output port \(b_{out}\) is
\[I_{b}=\left\langle\psi_{out}\right|(\hat{I}_{f}\otimes\hat{b}^{\dagger}\hat{b})\left|\psi_{out}\right\rangle=\frac{n}{2}. \tag{48}\]
These results agree that, after the 50:50 beam splitter, each output gets half of the input intensity.
For the input \(\left|n,m\right\rangle\), the output state after the beam splitter is [Ch. 5 of Ref. [16]]
\[\hat{U}_{BS}\left|n,m\right\rangle=\sum_{k,k^{\prime}=0}^{n,m} \ {}^{n}C_{k}\ {}^{m}C_{k^{\prime}}(\cos\theta)^{m+k-k^{\prime}}(i\sin\theta)^{n-k+k^{ \prime}}\] \[\times\sqrt{\frac{(k+k^{\prime})!(n+m-k-k^{\prime})!}{n!m!}} \left|k+k^{\prime},n+m-k-k^{\prime}\right\rangle. \tag{49}\]
For 50:50 beam splitter, use \(\theta=\pi/4\).
As an example, for \(n=2\) and \(m=0\), the input state is \(\left|2,0\right\rangle\). The output state can be calculated using Eq. (46) to be
\[\left|\psi_{out}\right\rangle=\hat{U}_{BS}\left|2,0\right\rangle=\frac{1}{2} \left|2,0\right\rangle+\frac{i}{\sqrt{2}}\left|1,1\right\rangle-\frac{1}{2} \left|0,2\right\rangle. \tag{50}\]
Therefore, the probabilities of getting \(\left|2,0\right\rangle,\left|1,1\right\rangle\) and \(\left|0,2\right\rangle\) are \(1/4\), \(1/2\) and \(1/4\) respectively. The average number of photons (\(\sim\) intensity) at the output port \(a_{out}\) is
\[I_{a}=\left\langle\psi_{out}\right|(\hat{a}^{\dagger}\hat{a}\otimes I_{f}) \left|\psi_{out}\right\rangle=1, \tag{51}\]
and at the output port \(b_{out}\) is
\[I_{b}=\left\langle\psi_{out}\right|(I_{f}\otimes\hat{b}^{\dagger}\hat{b}) \left|\psi_{out}\right\rangle=1, \tag{52}\]
which are halves of the input intensity.
**Code for beam splitter transformation:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10; % dimension of the cavity field
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
AdA=Ad*A; % Number operator
theta=pi/4; % beam splitter parameter
UBS=expm(i*theta*(kron(Ad,A)+kron(A,Ad))); % unitary operator for the beam splitter

I_f=eye(d); % Identity operator for the fields

S20=kron(I_f(:,3),I_f(:,1)); % mode-a has two photons and mode-b is in vacuum
S11=kron(I_f(:,2),I_f(:,2)); % mode-a has one photon and mode-b has one photon
S02=kron(I_f(:,1),I_f(:,3)); % mode-b has two photons and mode-a is in vacuum

Psi=S20; % Input state

PsiOut=UBS*Psi; % beam splitter transformation

P20=norm(S20'*PsiOut)^2 % Probability of 20
P11=norm(S11'*PsiOut)^2 % Probability of 11
P02=norm(S02'*PsiOut)^2 % Probability of 02

AdAout=PsiOut'*kron(AdA,I_f)*PsiOut % Intensity at port-a
BdBout=PsiOut'*kron(I_f,AdA)*PsiOut % Intensity at port-b
The outputs are:
P20 =
0.2500
P11 =
0.5000
P02 =
0.2500
AdAout =

1

BdBout =

1

Figure 4: (a) A beam splitter: two input ports \(a_{in},b_{in}\) and two output ports \(a_{out},b_{out}\). (b) A schematic of Mach-Zehnder interferometer consisting of two beam splitters, four mirrors and a phase shifter. (c) Interference pattern: output intensity at \(a_{out}\) as a function of \(\phi\).
### Mach-Zehnder interferometer
The Mach-Zehnder interferometer (MZI) is one of the simplest devices that demonstrates interference of an electromagnetic field even at the single-photon level [Ch. 6 of [7]]. It consists of two beam splitters, mirrors, and a phase shifter [see Fig. 4(b)]. The first beam splitter divides the input beam into two. A phase shifter in one of the arms/paths introduces a phase shift to the field passing through it, and the final beam splitter recombines both beams. Each mirror contributes a phase shift of \(\pi/2\) in both paths, amounting to an irrelevant global phase that we omit.
To observe interference pattern at the output ports of the MZI [Fig. 4(b)], we consider a single-photon state input to the port \(a_{in}\) and a vacuum state to the other port \(b_{in}\). Therefore, the input state is
\[\left|\psi_{in}\right>=\left|1,0\right>. \tag{53}\]
The state after the first beam splitter is (from Eq. (46))
\[\left|\psi_{1}\right>=\hat{U}_{BS}\left|1,0\right>=\frac{1}{\sqrt{2}}(\left|1, 0\right>+i\left|0,1\right>), \tag{54}\]
The phase shifter, placed in the left arm, introduces a phase \(\phi\) if a photon passes through it. In general, it shifts a phase of \(n\phi\) if \(n\) photons pass through it. Thus, the unitary operator for the phase-shifter is \(e^{i\phi\hat{a}^{\dagger}\hat{a}}\). In our case, \(n=1\) and therefore, we have
\[\left|\psi_{2}\right>=\frac{1}{\sqrt{2}}(e^{i\phi}\left|1,0\right>+i\left|0,1 \right>). \tag{55}\]
The final beam splitter transforms the above state to
\[\left|\psi_{out}\right> =\frac{1}{\sqrt{2}}\left(e^{i\phi}\frac{1}{\sqrt{2}}(\left|1,0 \right>+i\left|0,1\right>)+i\frac{1}{\sqrt{2}}(i\left|1,0\right>+\left|0,1 \right>)\right)\] \[=\frac{1}{\sqrt{2}}\left((e^{i\phi}-1)\left|1,0\right>+i(e^{i \phi}+1)\left|0,1\right>\right), \tag{56}\]
where we applied the beam splitter transformation on each components \(\left|1,0\right>\) and \(\left|0,1\right>\).
The probability of finding the photon at \(a_{out}\) is
\[P_{10}=|\left<1,0|\,\psi_{out}\right>|^{2}=\frac{1}{2}(1-\cos\phi), \tag{57}\]
and the probability of finding the photon at \(b_{out}\) is
\[P_{01}=|\left<0,1|\,\psi_{out}\right>|^{2}=\frac{1}{2}(1+\cos\phi). \tag{58}\]
The average number of photons (\(\sim\) intensity) at the output port \(a_{out}\) is
\[I_{a}=\left<\psi_{out}\right|(a^{\dagger}a\otimes I_{f})\left|\psi_{out}\right> =\frac{1}{2}(1-\cos\phi), \tag{59}\]
and at the output port \(b_{out}\) is
\[I_{b}=\left<\psi_{out}\right|(I_{f}\otimes b^{\dagger}b)\left|\psi_{out}\right> =\frac{1}{2}(1+\cos\phi). \tag{60}\]
It is to be noted that, as the input contains one photon, the probability and the average number of photons give the same result. Importantly, all the above calculated quantities oscillate between 1 and 0 when \(\phi\) varies. The value 1 corresponds to constructive interference and the value 0 corresponds to destructive interference.
**Code for the Mach-Zehnder interferometer:**
clear; % Clear memory
clc; % Clear the command window/screen
d=10; % dimension of the cavity field
I_f=eye(d); % Identity operator for the fields
A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A'; % Creation operator
AdA=Ad*A; % Number operator

theta=pi/4; % beam splitter parameter
UBS=expm(i*theta*(kron(Ad,A)+kron(A,Ad))); % unitary operator for each beam splitter

S10=kron(I_f(:,2),I_f(:,1)); % mode a has one photon and mode b is in vacuum
S01=kron(I_f(:,1),I_f(:,2)); % mode b has one photon and mode a is in vacuum

Psi=S10; % Input state |10>

phir=0:pi/20:2*pi; % phase shift running from 0 to 2*pi

for x=1:length(phir)
phi=phir(x); % phase shift
Uphi=expm(i*phi*kron(AdA,I_f)); % phase shift unitary operator
PsiOut=UBS*Uphi*UBS*Psi; % Output state after MZI
P10(x)=norm(S10'*PsiOut)^2; % Probability of 10
P01(x)=norm(S01'*PsiOut)^2; % Probability of 01
AdAout(x)=PsiOut'*kron(AdA,I_f)*PsiOut; % Intensity at output port-a
BdBout(x)=PsiOut'*kron(I_f,AdA)*PsiOut; % Intensity at output port-b
end

plot(phir,AdAout,'k')
The plot for the output intensity at \(a_{out}\) is shown in Fig. 4(c).
## VII Dissipative atom-field dynamics
Complete isolation of a system from its surroundings is not possible. Interaction between a system and its surroundings (environment) leads to dissipative dynamics of the system. There are several approaches for studying the effect of dissipation on quantum systems. Among them, the Lindblad master equation and the Monte-Carlo wavefunction method are the most widely used approaches in quantum optics.
### Lindblad master equation
In this method, the evolution equation for the reduced density matrix of a system is obtained by tracing over the states of the environment. For an atom-cavity system, the master equation is [19]
\[\frac{d}{dt}\rho_{s}=\frac{1}{i\hbar}[\hat{H},\rho]+\mathcal{L}_{cavity}(\rho) +\mathcal{L}_{atom}(\rho), \tag{61}\]
where
\[\mathcal{L}_{cavity}(\rho) =\frac{\kappa}{2}(2\hat{a}\rho\hat{a}^{\dagger}-\rho\hat{a}^{ \dagger}\hat{a}-\hat{a}^{\dagger}\hat{a}\rho), \tag{62}\] \[\mathcal{L}_{atom}(\rho) =\frac{\gamma}{2}(2\hat{\sigma}_{-}\rho\hat{\sigma}_{+}-\rho\hat {\sigma}_{+}\hat{\sigma}_{-}-\hat{\sigma}_{+}\hat{\sigma}_{-}\rho), \tag{63}\]
are the Lindblad superoperators [20]. Here, \(\gamma\) and \(\kappa\) are the atomic and cavity decay rates respectively. The Hamiltonian \(\hat{H}\) is given in Eq. (32).
The time evolution of the expectation value of any system operator can be calculated as
\[\langle\hat{O}(t)\rangle=\text{Tr}(\hat{O}\rho_{s}(t)). \tag{64}\]
In the following code, for simplicity, we assume that only the cavity is dissipative, so we take the cavity decay rate \(\kappa\neq 0\) and the atomic decay rate \(\gamma=0\). To solve the evolution equation given in Eq. (61), we use the fourth-order Runge-Kutta method (RK4), taking the initial state to be \(\ket{e,0}\). We calculate the probability of finding the atom in the excited state through
\[P_{e}(t)=\bra{e}\rho_{s}(t)\ket{e}. \tag{65}\]
The result is shown in Fig. 5 (black continuous line). The energy of the atom is dissipated due to the cavity decay.
**Code for master equation:**
```matlab
clear;      % Clear memory
clc;        % Clear the command window/screen
d=10;       % dimension of the cavity field
hbar=1;
W0=1;       % atomic frequency
Wf=1;       % cavity field frequency
g=0.1;      % coupling constant
kappa=0.05; % cavity decay rate

A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A';                    % Creation operator

Sz=[1,0;0,-1];    % sigma z
Splus=[0,1;0,0];  % sigma plus
Sminus=[0,0;1,0]; % sigma minus

gs=[0;1]; % ground state
es=[1;0]; % excited state

I_a=eye(2); % Identity operator for the atom
I_f=eye(d); % Identity operator for the field

Hatom=(1/2)*hbar*W0*kron(Sz,I_f);            % Atomic Hamiltonian
Hfield=hbar*Wf*kron(I_a,Ad*A);               % Field Hamiltonian
Hint=hbar*g*(kron(Splus,A)+kron(Sminus,Ad)); % Interaction Hamiltonian
H=Hatom+Hfield+Hint;                         % JC Hamiltonian

Psi=kron(es,I_f(:,1)); % Atom in excited state and cavity in vacuum
Rho=Psi*Psi';          % Initial density matrix

dt=0.1;     % time step
T=0:dt:150; % Total evolution time

% Re-define the annihilation and creation operators
% to match the total dimension (dim(atom)*dim(field))
An=kron(I_a,A);
Adn=kron(I_a,Ad);

% The master equation is solved by the Runge-Kutta method (RK4)
for t=1:length(T)
    Pe(t)=kron(es,I_f(:,1))'*Rho*kron(es,I_f(:,1)); % Probability of atom in |e>
    % RK4 method
    K1=-i*(H*Rho-Rho*H)+kappa/2*(2*An*Rho*Adn-Adn*An*Rho-Rho*Adn*An);
    Rho1=Rho+1/2*dt*K1;
    K2=-i*(H*Rho1-Rho1*H)+kappa/2*(2*An*Rho1*Adn-Adn*An*Rho1-Rho1*Adn*An);
    Rho2=Rho+1/2*dt*K2;
    K3=-i*(H*Rho2-Rho2*H)+kappa/2*(2*An*Rho2*Adn-Adn*An*Rho2-Rho2*Adn*An);
    Rho3=Rho+dt*K3;
    K4=-i*(H*Rho3-Rho3*H)+kappa/2*(2*An*Rho3*Adn-Adn*An*Rho3-Rho3*Adn*An);
    Rho=Rho+1/6*(K1+2*K2+2*K3+K4)*dt;
end
plot(T,Pe)
```
### Monte-Carlo wavefunction method
When the system has a finite number of levels, the Monte-Carlo wavefunction (MCWF) approach can be used for investigating the effect of dissipation [21]. This method is also known as the 'quantum jump approach'. In this method, the quantum state is evolved under a non-Hermitian Hamiltonian and quantum jumps are introduced at random. This evolution of the quantum state forms a quantum trajectory, and the ensemble average over many realizations of such trajectories reproduces the results of the master equation.

Figure 5: The probability of finding the atom in its excited state, calculated using the master equation approach (black continuous line), as a function of time \(t\) in the presence of cavity dissipation. The result is compared with the Monte-Carlo wavefunction method (red dashed line). We set \(\kappa=0.05,\gamma=0\), and \(g=0.1\). For MCWF, the number of realizations is \(N=5000\). The inset shows a single trajectory generated using the MCWF method. A quantum jump occurs at \(t\sim 56\).
The steps of the MCWF approach for an atom-cavity system are as follows (a detailed study can be found in Ref. [21]):
* Consider the atom-cavity system to be initially in a normalized state \(\ket{\psi(0)}\). In order to determine the state at a later time \(\delta t\), the state \(\ket{\psi(0)}\) evolves under a non-Hermitian Hamiltonian \[\hat{H}_{NH}=\hat{H}-\frac{i\hbar\kappa}{2}\hat{a}^{\dagger}\hat{a},\] (66) where \(\hat{H}\) is given in Eq. (32). Here, we consider only cavity dissipation. For \(\delta t\ll 1\), the state at a time \(\delta t\) is \[\ket{\psi(\delta t)}=e^{-i\hat{H}_{NH}\delta t/\hbar}\ket{\psi(0)}=\left(\hat{I}-\frac{i\delta t}{\hbar}\hat{H}_{NH}\right)\ket{\psi(0)},\] (67) where \(\hat{I}=\hat{I}_{a}\otimes\hat{I}_{f}\) is the identity matrix for the atom-cavity system. As the Hamiltonian \(\hat{H}_{NH}\) is non-Hermitian, the evolution does not preserve the norm, and the norm will be less than 1. The missing norm is \[\delta p=1-\bra{\psi(\delta t)}\ket{\psi(\delta t)},\] (68) which will be much less than 1 for \(\delta t\ll 1\).
* Once \(\delta p\) is calculated, it will be compared with a random number \(r\). If \(r>\delta p\), no jump occurs and the state \(\ket{\psi(\delta t)}\) will be normalized (divide the state by its norm). Whereas, if \(r<\delta p\), a quantum jump occurs, and the normalized state at \(\delta t\) will be \[\ket{\psi(\delta t)}=\frac{\hat{a}\ket{\psi(0)}}{\sqrt{\bra{\psi(0)}\hat{a}^{ \dagger}\hat{a}\ket{\psi(0)}}}.\] (69)
* We can calculate the expectation value of any observable through \[\langle\hat{O}\rangle=\bra{\psi(\delta t)}\hat{O}\ket{\psi(\delta t)}.\] (70)
* One follows the same procedure to calculate the state at the next step: evolve under the non-Hermitian Hamiltonian, calculate the missing norm, and compare it with a random number to decide the state at the later time. This procedure has to be followed up to a time \(T=n\delta t\) with a step of \(\delta t\), which forms a single trajectory. One then generates many such quantum trajectories from the initial state \(\ket{\psi(0)}\), and the ensemble average over these trajectories reproduces the exact evolution of the initial quantum state.
The following code considers the dissipation of an atom through the cavity (\(\kappa\neq 0,\gamma=0\)). The probability of finding the atom in the excited state after each MCWF step is calculated through
\[P_{e}(\delta t)=|\bra{e}\psi(\delta t)\rangle|^{2}, \tag{71}\]
up to time \(T\) with a step of \(\delta t\). This forms a single trajectory. We calculate 5000 such trajectories and take the average. The result is compared with the result of the master equation in Fig. 5. The inset shows one of the MCWF trajectories, in which a quantum jump (photon loss) occurs at \(t\sim 56\). After the quantum jump, the atom and cavity reach their ground states.
```matlab
clear;      % Clear memory
clc;        % Clear the command window/screen
d=10;       % dimension of the cavity field
hbar=1;
W0=1;       % atomic frequency
Wf=1;       % cavity field frequency
g=0.1;      % atom-cavity coupling constant
kappa=0.05; % cavity decay rate

A = diag(sqrt(1:d-1), 1); % Annihilation operator
Ad=A';                    % Creation operator
AdA=Ad*A;                 % Number operator

Sz=[1,0;0,-1];    % sigma z
Splus=[0,1;0,0];  % sigma plus
Sminus=[0,0;1,0]; % sigma minus

gs=[0;1]; % ground state
es=[1;0]; % excited state

I_a=eye(2); % Identity operator for the atom
I_f=eye(d); % Identity operator for the field

Hatom=(1/2)*hbar*W0*kron(Sz,I_f);            % Atomic Hamiltonian
Hfield=hbar*Wf*kron(I_a,Ad*A);               % Field Hamiltonian
Hint=hbar*g*(kron(Splus,A)+kron(Sminus,Ad)); % Interaction Hamiltonian
H=Hatom+Hfield+Hint;                         % JC Hamiltonian

HNH=H-i*kappa/2*kron(I_a,AdA); % non-Hermitian Hamiltonian

dt=0.001;   % time step is taken to be very small
T=0:dt:150; % Total evolution time

% Re-define the annihilation and creation operators
% to match the total dimension (dim(atom)*dim(field))
An=kron(I_a,A);
Adn=kron(I_a,Ad);

N=5000;  % number of realizations
PeAvg=0; % initialization to zero
for x=1:N
    Psi=kron(es,I_f(:,1)); % Initial state: atom in |e> and cavity in |0>
    for t=1:length(T)
        Pe(t)=norm(kron(es,I_f(:,1))'*Psi)^2; % Probability of atom in |e> and field |0>
        PsiNH=(kron(I_a,I_f)-i*dt*HNH)*Psi;   % evolve under the non-Hermitian Hamiltonian
        dp=1-PsiNH'*PsiNH;                    % missing norm
        r=rand;                               % random number
        if dp<r                               % comparing with a random number
            Psi=PsiNH/norm(PsiNH);            % no jump: normalize the state
        else
            PsiJump=kron(I_a,A)*PsiNH;        % quantum jump occurs
            Psi=PsiJump/norm(PsiJump);        % normalize
        end
    end
    PeAvg=PeAvg+Pe; % adding the probability of all the realizations to find the average
    x               % display the current realization index
end
plot(T,PeAvg/N,'k') % Plot after averaging over the number of realizations
```
## VIII Conclusion
We have presented a beginner-level numerical guide written in MATLAB, which can serve as a basic toolkit for addressing research problems on quantum optics and related areas such as quantum many-body physics, quantum information processing and quantum computation. The provided codes will be highly useful for graduate students and researchers embarking on their careers in these fields. The codes can be easily extended to tackle problems involving high-dimensional matrices. Importantly, they can be executed in any version of MATLAB without requiring pre-installed packages.
**Acknowledgment:** The author acknowledges Dr. Saikat Sur, Dr. Pritam Chattopadhyay, and Dr. Binay K. Sahu for useful suggestions and discussions.
## Appendix A Role of dimension of the field
The following code gives a coherent state:
```matlab
clear;   % Clear memory
clc;     % Clear the command window/screen
d=10;    % dimension of the field
I=eye(d);
alpha=2; % Amplitude of the coherent state
Coh=0;   % initialization
for x=0:d-1
    Coh=Coh+exp(-norm(alpha)^2/2)*alpha^x/sqrt(prod(1:x))*I(:,x+1);
end
Coh           % it will display the coherent state
N_c=norm(Coh) % checking the norm, which must be 1
```
In the above code, we have taken the amplitude of the coherent state to be alpha=2 and the dimension of the field to be d=10. The outputs are
Coh = 0.1353 0.2707 0.3828 0.4420 0.4420 0.3953 0.3228 0.2440 0.1725 0.1150
N_c = 0.9959
We see that the norm of the state is not equal to 1. Therefore, any further calculation using this state will not give correct results. To fix this, the dimension of the field has to be increased: taking \(d=15\), the norm N_c becomes very close to 1. |
2310.20192 | Shaping Opinions in Social Networks with Shadow Banning | The proliferation of harmful content and misinformation on social networks
necessitates content moderation policies to maintain platform health. One such
policy is shadow banning, which limits content visibility. The danger of shadow
banning is that it can be misused by social media platforms to manipulate
opinions. Here we present an optimization based approach to shadow banning that
can shape opinions into a desired distribution and scale to large networks.
Simulations on real network topologies show that our shadow banning policies
can shift opinions and increase or decrease opinion polarization. We find that
if one shadow bans with the aim of shifting opinions in a certain direction,
the resulting shadow banning policy can appear neutral. This shows the
potential for social media platforms to misuse shadow banning without being
detected. Our results demonstrate the power and danger of shadow banning for
opinion manipulation in social networks. | Yen-Shao Chen, Tauhid Zaman | 2023-10-31T05:28:18Z | http://arxiv.org/abs/2310.20192v4 | Shaping Opinions in Social Networks with Shadow Banning Yen-Shao Chen1* and Tauhid Zaman1
## Abstract
The proliferation of harmful content and misinformation on social networks necessitates content moderation policies to maintain platform health. One such policy is _shadow banning_, which limits content visibility. The danger of shadow banning is that it can be misused by social media platforms to manipulate opinions. Here we present an optimization based approach to shadow banning that can shape opinions into a desired distribution and scale to large networks. Simulations on real network topologies show that our shadow banning policies can shift opinions and increase or decrease opinion polarization. We find that if one shadow bans with the aim of shifting opinions in a certain direction, the resulting shadow banning policy can appear neutral. This shows the potential for social media platforms to misuse shadow banning without being detected. Our results demonstrate the power and danger of shadow banning for opinion manipulation in social networks.
## 1 Introduction
The digital age has borne witness to the rapid rise of social networks which influence the dynamics of public conversation. Inherent to their structure and expansive reach, these platforms possess the potential to shape public discourse, often yielding influence that transcends geographical boundaries. However, this powerful capacity can also serve as a conduit for the propagation of harmful content or disinformation. The ramifications of this can be significant, including societies being ensnared in a web of misinformation, or becoming perilously polarized to the brink of internal conflict. This necessitates strategies designed to stymie the potential exploitation and misuse of these influential platforms.
For content that manifestly constitutes a threat, platforms have the authority to expunge the user responsible. This course of action is typically employed in scenarios involving explicit threats of violence, unequivocal disinformation posing potential danger, or other instances that breach the platform's stipulated policies. However, not all content resides within such clear-cut boundaries of propriety.
Certain types of content may straddle the periphery of policy violation without crossing the explicit threshold. Even though such content does not transgress the policies directly, its rampant dissemination can subtly skew the tenor of online discourse in ways that could engender undesirable outcomes. Consider, for instance, an ongoing political debate on the platform marked by heightened tension and polarization. In such circumstances, the platform may deem it necessary to curb the spread of emotionally-charged
content that could further inflame the situation and potentially instigate violent acts. The content in question here may not represent a clear-cut violation of the platform's policies. Nonetheless, its unchecked proliferation could exacerbate polarization at a critical juncture, thereby posing potential risks.
To address content that does not directly breach the platform's rules but still presents certain risks, the platform can employ various content moderation strategies. One of these strategies is referred to as _shadow banning_. Characterized by its clandestine nature, shadow banning operates by limiting the visibility of a user's content, effectively curtailing the user's reach without their awareness [1]. Shadow banning allows platforms to exert control over the content they host, without disrupting user engagement significantly. It can be employed at different levels of precision. The platform could limit the visibility of all content of a user, or it could be more selective and limit the visibility of the user's content to a set of specific users. In either case, the net effect is that certain content posted by a user will not be seen by others.
Shadow banning can serve as an effective strategy for maintaining the health of social media discourse. It was found that Twitter shadow banned accounts that exhibited automated or bot-like behavior, along with offensive posts [2]. While there are obvious benefits to shadow banning, it is not without potential drawbacks. Of significant concern is the potential to upset users who may perceive this practice as a form of covert censorship, infringing on their freedom of expression. Given the clandestine nature of shadow banning, users may feel betrayed or manipulated upon discovering that their content reach has been curtailed without their knowledge. These sentiments could lead to decreased user engagement, trust erosion, and potentially a mass exodus from the platform. For instance, many conservative users of Twitter accused the platform of shadow banning them as an exercise of political censorship [3]. Instagram has been accused of disproportionately shadow banning women in an attempt to limit the spread of inappropriate content, accusations which Instagram denies [4]. The negative stigma surrounding shadow banning has caused Elon Musk, owner of Twitter, to publicly state that he will not allow shadow banning on the platform [5]. Thus, while the judicious application of shadow banning policies can be effective at content moderation, it is imperative that such measures are deployed sparingly and transparently. This careful balancing act between user satisfaction and content moderation underscores the intricate challenge of managing contemporary social media platforms.
The danger of content moderation policies such as shadow banning is that they can result in the manipulation of opinions by the platform. Traditionally, opinion manipulation has been considered from the perspective of a user in the network. The goal is to select target users to receive content in order to maximize an objective, such as the reach of the content [6, 7, 8, 9, 10, 11] or the mean opinion in the network [12, 13, 14]. In contrast, content moderation is done by the platform itself and does not introduce new content into the network. Rather, it modifies the audience for existing content. However, content moderation can still manipulate opinions. For instance, one type of content moderation is recommendations, where the platform uses an algorithm to choose what content to show users. The recommendation algorithm is typically designed to show users content they are likely to prefer. Many studies have found that the bias of content recommendation algorithms creates a positive feedback loop that can lead to increased polarization [15, 16, 17, 18, 19, 20]. From these results it is clear that content moderation can manipulate opinions in a social network, even when this was not the intention.
The ability of content moderation to affect opinions is very concerning. It raises the question of whether or not a social media platform could design content moderation policies with the explicit objective of manipulating the opinions into an arbitrary target distribution. If this was possible, it could be very dangerous for a society. Furthermore, one can ask whether this opinion manipulation could be done without being
detected. For instance, can a social media platform deploy content moderation methods, like shadow banning, with a partisan intent, yet still uphold an outward semblance of political neutrality? This scenario suggests that a society could be covertly swayed by a social media platform, with the populace remaining unaware until potentially harmful consequences have firmly taken hold.
In this work, we demonstrate how a social media platform can employ a different form of content moderation, specifically shadow banning, to arbitrarily shape the opinions of its users. We frame this as an optimization problem which allows one to calculate shadow banning policies that shape opinions into a specified target distribution. The shadow banning policy is obtained by solving a simple linear program. Because of this, our approach can scale to large networks and can accommodate a variety of opinion dynamics models encompassing complex phenomena such as bounded confidence [21]. When determining shadow banning policies, we focus on two principal characteristics of the opinion distribution: the mean and the variance.
Altering the mean enables the platform to steer the prevalent sentiment regarding a topic in a designated direction. When utilized with upright intentions, this can permit the platform to curtail the spread of hazardous content and diminish the influence of misinformation. However, if employed with unethical intentions, manipulating the mean may allow the platform to forge an artificial bias either in favor or against a particular topic. This holds a potential for substantial risk, especially if, for example, implemented during an election year.
In contrast, manipulating the variance does not generate a bias towards a topic, but instead alters the overall character of the online dialogue. Reducing the variance has the potential to moderate online polarization and suppress intense sentiment. On the other hand, amplifying the variance enhances polarization and escalates the severity of sentiment. There are scarce justifiable reasons to amplify variance unless the objective is to destabilize a populace via information warfare. Nonetheless, it is an action that can be effortlessly executed through shadow banning. This shows the potential risks of shadow banning and emphasizes that it must be used with great care.
This paper is organized as follows. We begin by presenting the underlying opinion dynamics model used in our analysis. We then show how to calculate shadow banning policies by solving a linear program. Shadow banning policies are calculated for synthetic networks to provide intuition for their behavior. We then calculate shadow banning policies on two large-scale Twitter networks for multiple opinion objectives. We find that substantial manipulation of opinions can be achieved over time, even with limited shadow banning. Finally, we show that if one shadow bans with a politically biased objective in mind, such as maximizing the opinion mean, the resulting shadow banning policy appears to be politically neutral, or biased in a counter-intuitive way.
## 2 Methods
### Opinion Dynamics Model
Shadow banning can be used to control the movement of opinions. However, we must first have a model for the underlying dynamics of the opinions in order to apply shadow banning. There are a variety of such models in existence, but they can all be reduced to a set of continuous time differential equations. We now present this differential equation framework and our choice of opinion dynamics model.
We represent the social network as a directed graph \(G=(V,E)\) where \(V\) is the set of vertices, which are the users of the social network platform, and \(E\) is the set of edges which represent following relationships. This model is appropriate with social networks with a follower/following structure, such as Twitter, Instagram, or TikTok. An edge
\((i,j)\) pointing from user \(i\) to user \(j\) means that user \(j\) follows user \(i\), and subsequently will be shown content posted by \(i\). User \(i\) posts content to user \(j\) at a rate \(\lambda_{ij}\) posts per unit time. In practice this rate would only depend on \(i\) as it would correspond to his posting rate. However, it is possible that the rate could vary with \(j\), if for instance \(j\) does not check the platform often and thus does not see all of \(i\)'s content. Therefore, we can consider \(\lambda_{ij}\) as an effective posting rate from \(i\) to \(j\). Also, we will consider shadow banning policies that limit the rate at which content flows along individual edges in the social network, so separating posting rates by edge simplifies our analysis.
Each user \(i\) has a time dependent latent opinion \(\theta_{i}(t)\) which is a real number. The opinion of content posted by a user at any given time matches their latent opinion. More general models allow for the content to have a random opinion which equals \(\theta_{i}(t)\) in expectation [13]. However, we will not consider such stochastic generalizations here.
Each time a user \(i\) posts in the network, all users update their opinions. Assume \(i\) posts at time \(t\) and consider a user \(j\). If \(j\) does not follow \(i\) then there is no change in \(j\)'s opinion. However, if \(j\) does follow \(i\), then \(j\) changes his opinion by an amount given by \(f(\theta_{i}(t)-\theta_{j}(t))\), where \(f\) is the _opinion shift function_ and its argument is the difference of the opinions of \(i\) and \(j\). This form for the opinion shift function is in accordance with many popular opinion dynamics models [21, 22].
To simplify this analysis, we approximate the opinions as continuous functions. This is a good approximation for large networks. We first assume that users independently post content according to a Poisson process. Then the number of posts in the entire network is a merged Poisson process of the individual user posting processes. We define \(\delta\) as the mean time between posts in the network. First consider the case where posts on each edge are independent. In this case, \(\delta=1/\sum_{(i,j)\in E}\lambda_{ij}\). Second, consider the more realistic case where users post independently, but their posts are broadcast to all of their followers simultaneously. In this case \(\delta=1/\sum_{i\in V}\lambda_{i}\) where \(\lambda_{i}\) is the posting rate of user \(i\) (and \(\lambda_{ij}=\lambda_{i}\)). In either case, we see that as the network grows large, \(\delta\) becomes increasingly small. Therefore, for large networks, a continuous time approximation is reasonable. We assume there was a post in the network at time \(t+\delta\) and write down the update rule for user \(j\)'s opinion as
\[\theta_{j}(t+\delta)=\theta_{j}(t)+X_{ij}f(\theta_{i}(t)-\theta_{j}(t)).\]
The random variable \(X_{ij}(t)\) is one if there is a post on edge \((i,j)\) and zero otherwise. Given that a post occurred, the mean value of \(X_{ij}(t)\) is \(\lambda_{ij}\delta\) by properties of merged Poisson processes [23]. Taking the expectation over \(X_{ij}(t)\) and doing some simple manipulations, the update rule becomes
\[\frac{\theta_{j}(t+\delta)-\theta_{j}(t)}{\delta}=\lambda_{ij}f(\theta_{i}(t)- \theta_{j}(t)).\]
As the network size increases, \(\delta\) will approach zero, and the term on the left can be replaced with a time derivative \(d\theta_{j}/dt\). This then gives us our continuous time opinion dynamics model
\[\frac{d\theta_{i}}{dt}=\sum_{j\in V}\lambda_{ji}f(\theta_{j}-\theta_{i})\ \ \forall i\in V. \tag{1}\]
This differential equation model is a good approximation to the opinion dynamics on large networks. In our application, we are considering a platform shadow banning users in the entire social network, so this approximation is valid.
The last piece to specify in this model is the opinion shift function \(f\). There are several options here. The classic DeGroot model has \(f(x)=\omega x\) for some non-negative constant \(\omega\) which measures how much a single post can shift one's opinion [22]. This
term is capturing how reliable one considers the opinions of others. Hearing an opinion from someone deemed more reliable will cause one to change their opinion more than an opinion from someone unreliable. DeGroot's model leads to opinion consensus on most networks. This is one flaw of the model, as many researchers have observed persistent polarization in real social networks [24, 25, 26, 27, 28]. Another flaw of the model is the fact that the opinion shift is proportional to the difference between the opinion of the post and one's own opinion. However, it is unlikely that an opinion vastly different from one's own would be persuasive in modern online social media. Instead, these opinions may be ignored.
To allow for persistent polarization and to limit the persuasive power of posts whose opinions are vastly different from those of their audience, the bounded confidence model was proposed [29, 21]. In this model the shift function is given by
\[f(x)=\begin{cases}\omega x&\text{if }|x|\leq\epsilon\\ 0&\text{otherwise}\end{cases} \tag{2}\]
where \(\epsilon\) is the size of the confidence interval. The bounded confidence model places a limit on the range of trusted opinions. Opinions deviating too far (by more than \(\epsilon\)) from one's own opinion have no persuasive power. The bounded confidence model can result in consensus or persistent polarization depending upon the value of the confidence interval, the initial opinions, and the network structure [30, 31]. It is a more complex model that better captures behavior in real social networks. In this work we use the bounded confidence model for the opinion dynamics.
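To make the dynamics concrete, the following is a minimal simulation sketch (not the authors' implementation): it integrates Eq. (1) with the bounded confidence shift function of Eq. (2) using a simple Euler step, and the small example at the end uses the path-network settings from the experiments below.

```python
import numpy as np

def bounded_confidence_shift(x, omega=0.003, eps=0.1):
    """Opinion shift function f of Eq. (2): linear inside the confidence interval, zero outside."""
    return np.where(np.abs(x) <= eps, omega * x, 0.0)

def simulate_opinions(theta0, rates, T=365, dt=1.0, omega=0.003, eps=0.1):
    """Euler integration of Eq. (1).

    theta0 : initial opinions, shape (n,)
    rates  : matrix of effective posting rates, rates[j, i] = lambda_{ji}
             (posts per unit time on edge j -> i; zero if there is no edge)
    """
    theta = theta0.copy()
    history = [theta.copy()]
    for _ in range(int(T / dt)):
        diff = theta[:, None] - theta[None, :]       # diff[j, i] = theta_j - theta_i
        shift = bounded_confidence_shift(diff, omega, eps)
        dtheta = (rates * shift).sum(axis=0)         # sum over followees j for each user i
        theta = theta + dt * dtheta
        history.append(theta.copy())
    return np.array(history)

# Illustrative example: an 11-node path network with linearly spaced opinions and
# one post per day on every edge (the uncontrolled path-network experiment below).
n = 11
theta0 = np.linspace(0.0, 1.0, n)
rates = np.zeros((n, n))
for j in range(n - 1):
    rates[j, j + 1] = rates[j + 1, j] = 1.0
print(simulate_opinions(theta0, rates, T=365, dt=1.0, eps=0.101)[-1].round(3))
```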
### Shadow Banning Control
We can easily incorporate shadow banning into our opinion dynamics model. We define the shadow banning strength on an edge \((i,j)\) at time \(t\) as \(u_{ij}(t)\) which is a real number between zero and one. Shadow banning reduces the posting rate \(\lambda_{ij}\) by a multiplicative factor \(1-u_{ij}(t)\). At one extreme, \(u_{ij}(t)=1\) corresponds to total censorship of content from \(i\) to \(j\). At the other extreme, \(u_{ij}(t)=0\) corresponds to no shadow banning. Under shadow banning, the opinion dynamics model is slightly modified to become
\[\frac{d\theta_{i}}{dt}=\sum_{j\in V}\lambda_{ji}(1-u_{ji})f(\theta_{j}-\theta_ {i}),\ \ \forall i\in V. \tag{3}\]
where we have dropped the time arguments to simplify notation.
To determine the shadow banning policy, the social network platform must have a target distribution for the opinions of its users. This is described by an objective function, or an instantaneous reward, \(r(\theta(t))\) of the opinions, where \(\theta(t)\) refers to the opinions of each user in the network at time \(t\). The objective can be any function of the opinions, but here we will consider the important cases where the objective is the opinion mean, variance or negative variance. The negative values allow the platform to minimize the variance under our objective maximization framework.
The platform can have different types of goals with respect to the objective. One possible goal is to maximize the objective at a final or terminal time \(T\). This can be formulated as the following control theory problem:
\[\max_{u} r(\theta(T))\] subject to \[\frac{d\theta_{i}(t)}{dt}=\sum_{j\in V}\lambda_{ji}(1-u_{ji}(t))f( \theta_{j}(t)-\theta_{i}(t)),\ \ \ \forall i\in V.\]
Solving this problem is non-trivial, but could possibly be done using techniques from control theory [32]. However, there is an issue with scalability. The shadow banning control problem has one control variable for each edge in the network and one state variable for each user in the network. If one is performing shadow banning on an entire social media platform, this can result in hundreds of millions of state variables and billions of control variables. Standard control theory techniques will not work on such large problems. To avoid this issue, we use the following approximation. We can rewrite the final objective as
\[r(\theta(T))=\int_{0}^{T}\frac{dr}{dt}dt.\]
An optimal solution to this problem will choose the shadow ban controls in a manner to maximize this integral. However, the size of the problem prevents such a solution from being found. A more scalable approach is to find a greedy solution. Instead of maximizing the integral, we maximize the integrand at each time step sequentially. This means we choose the shadow banning policy to maximize \(dr/dt\) at each time \(t\). It turns out that this objective can be maximized in a manner that scales to large networks. To see why, we can rewrite it as
\[\frac{dr(\theta(t))}{dt} =\sum_{i\in V}\frac{\partial r}{\partial\theta_{i}(t)}\frac{d \theta_{i}(t)}{dt}\] \[=\sum_{i\in V}\frac{\partial r}{\partial\theta_{i}(t)}\sum_{j\in V }\lambda_{ji}(1-u_{ji}(t))f(\theta_{j}(t)-\theta_{i}(t))\] \[=\sum_{(j,i)\in E}B_{ji}(t)(1-u_{ji}(t))\]
where we have defined \(B_{ji}(t)=\frac{\partial r}{\partial\theta_{i}(t)}\lambda_{ji}f(\theta_{j}(t) -\theta_{i}(t))\). Above we have used equation (3) for \(d\theta_{i}/dt\). We see from this expression that the shadow banning appears linearly in the reward derivative through the opinion dynamics. This observation gives us an efficient method to find the shadow banning policy. At time \(t\) we maximize the time derivative of the instantaneous reward. Because the derivative is a linear function in the shadow ban controls, this maximization can be done by solving a linear program, which scales to large networks.
In addition to maximizing the objective, the platform also has constraints on the shadow banning policy. If the shadow banning is too strong, the user experience will be affected negatively. Therefore, the constraints limit the strength of the shadow banning. This limitation can be done at different levels. One can set a limit on the mean shadow banning strength in the entire network, or one can limit the shadow banning strength on individual edges. We refer to these limits as \(s_{network}\) for the network average and \(s_{edge}\) for individual edge. Combining these constraints with the greedy approximation leads to the following linear program for the shadow banning policy:
\[\max_{u(t)} \sum_{(j,i)\in E}B_{ji}(t)(1-u_{ji}(t))\] subject to \[\sum_{(j,i)\in E}u_{ji}(t)\leq s_{network}|E|\] \[\quad 0\leq u_{ji}(t)\leq s_{edge},\ \ \forall(j,i)\in E,\]
where \(u(t)\) denotes the set of \(u_{ji}(t)\) for every \((j,i)\in E\). The first inequality corresponds to the limit of the mean shadow banning strength in the network, while the second
inequality corresponds to the limit of the shadow banning strength on each individual edge.
Formulating this linear program requires the coefficients \(B_{ji}(t)\). These values are obtained from the opinions in the network \(\theta(t)\), the network structure (which is contained in the edge set \(E\)), the posting rates \(\lambda_{ji}\), the opinion shift function \(f\), and the derivative of the reward with respect to the opinions \(\frac{\partial r}{\partial\theta_{i}(t)}\). The solution of the linear program gives the shadow ban policy at time \(t\). Solving this linear program at every time step will give the complete dynamic shadow banning policy. The policy is dynamic because as time progresses, the user opinions change, leading to a potentially different shadow banning policy.
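As an illustration of how the per-step policy can be obtained, the sketch below (a minimal example, not the authors' code) assembles the coefficients \(B_{ji}(t)\) under the bounded confidence model and solves the linear program with `scipy.optimize.linprog`. The toy network, rates, and opinions at the end are assumptions, and the objective shown is maximizing the mean, whose gradient is simply \(1/|V|\).

```python
import numpy as np
from scipy.optimize import linprog

def shadow_ban_policy(edges, rates, theta, reward_grad,
                      s_network=0.05, s_edge=1.0, omega=0.003, eps=0.1):
    """Greedy shadow-banning policy for a single time step.

    edges       : list of (j, i) pairs, meaning user i follows user j
    rates       : dict mapping (j, i) -> posting rate lambda_{ji}
    theta       : current opinions, shape (n,)
    reward_grad : partial derivatives dr/dtheta_i, shape (n,)
    Returns u, an array of shadow-ban strengths aligned with `edges`.
    """
    # Coefficients B_ji(t) = (dr/dtheta_i) * lambda_{ji} * f(theta_j - theta_i)
    B = np.array([
        reward_grad[i] * rates[(j, i)]
        * (omega * (theta[j] - theta[i]) if abs(theta[j] - theta[i]) <= eps else 0.0)
        for (j, i) in edges
    ])
    m = len(edges)
    # Maximizing sum_e B_e (1 - u_e) is equivalent to minimizing sum_e B_e u_e.
    res = linprog(
        c=B,
        A_ub=np.ones((1, m)), b_ub=[s_network * m],   # mean shadow-ban budget
        bounds=[(0.0, s_edge)] * m,                   # per-edge limit
        method="highs",
    )
    return res.x

# Toy example: a 4-node path, objective = maximize mean (gradient 1/|V|).
theta = np.array([0.30, 0.35, 0.40, 0.45])
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
rates = {e: 1.0 for e in edges}
grad_mean = np.full(len(theta), 1.0 / len(theta))
u = shadow_ban_policy(edges, rates, theta, grad_mean, s_network=0.5)
print(dict(zip(edges, u.round(2))))
```

In this toy example the three edges that pull a higher-opinion user downwards receive full shadow banning, exhausting the budget, which matches the intuition discussed below for the mean objective.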
The impact of the particular choice of reward function on the resulting shadow banning policy is expressed through the partial derivative of the reward with respect to the opinions. We list the partial derivatives for the objectives we consider in Table 1. One nice feature of our approach is that the shadow banning policy can be found by solving a linear program for any objective function. This allows one to use more novel objective functions beyond those considered here.
We can gain insight to the behavior of the shadow banning policy for different objectives by examining the linear program. Consider an edge \((j,i)\) corresponding to node \(i\) following node \(j\). Because we want to maximize the time derivative of the reward, we will shadow ban edges where the coefficient \(B_{ji}(t)\) is negative (recall that \(u_{ji}(t)=1\) corresponds to maximum shadow banning). We first consider the case where the goal is to maximize the opinion mean. Using the partial derivative of the mean in Table 1 and the definition of \(B_{ji}(t)\), we find that there will be shadow banning on edge \((j,i)\) when \(f(\theta_{j}(t)-\theta_{i}(t))\) is negative. The opinion shift functions we consider have odd symmetry and their sign matches the sign of their argument. This means that there is shadow banning if \(\theta_{i}(t)>\theta_{j}(t)\). In this case node \(j\) is pulling down the opinion of node \(i\), which decreases the opinion mean. Therefore, the policy shadow bans the edge.
To understand the variance shadow banning policies, it is useful to define \(\mu(t)\) as the mean of the user opinions at time \(t\). If the goal is to minimize the opinion variance, then \(B_{ji}(t)\) is negative when the partial derivative of the reward is negative and the opinion shift is positive, or vice versa. This corresponds to \(\theta_{i}(t)>\mu(t)\) and \(\theta_{i}(t)<\theta_{j}(t)\), or \(\theta_{i}(t)<\mu(t)\) and \(\theta_{i}(t)>\theta_{j}(t)\). In the first case node \(i\)'s opinion is above the mean and it is being pulled up by node \(j\), which increases the variance. In the other case, \(i\)'s opinion is below the mean and it is being pulled down by \(j\), which also increases the variance. Therefore, under either of these conditions this edge gets shadow banned. A similar analysis for maximizing the variance shows that the edges which are shadow banned correspond to a node being above the mean and being pulled down or a node being below the mean and being pulled up.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline Objective & Objective function \(r\) & \(\frac{\partial r}{\partial\theta_{i}}\) \\
\hline Maximize mean & \(\frac{1}{|V|}\sum_{i\in V}\theta_{i}\) & \(\frac{1}{|V|}\) \\
\hline Minimize variance & \(-\frac{1}{|V|-1}\sum_{i\in V}(\theta_{i}-\mu)^{2}\) & \(-\frac{2}{|V|-1}(\theta_{i}-\mu)\) \\
\hline Maximize variance & \(\frac{1}{|V|-1}\sum_{i\in V}(\theta_{i}-\mu)^{2}\) & \(\frac{2}{|V|-1}(\theta_{i}-\mu)\) \\
\hline
\end{tabular}
\end{table}
Table 1: Table of the partial derivatives for different objective functions. We have used \(\mu\) to refer to the mean of the opinions.

Comparing the mean and variance policies, we see that the policy for the mean depends only on the opinion shift on an edge since the goal is to shift the distribution in a particular direction. The global position of an opinion is not relevant. However, the policy for the variance is more complex since the goal is to either stretch out or compress the opinion distribution around the global mean. In this case the policy takes into account the position of the opinions relative to the mean in addition to the shift direction on the edge.
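For reference, the gradients in Table 1 can be written as small helper functions (a sketch) that plug into the `reward_grad` argument of the policy sketch above.

```python
import numpy as np

def grad_max_mean(theta):
    return np.full(len(theta), 1.0 / len(theta))

def grad_min_variance(theta):
    return -2.0 * (theta - theta.mean()) / (len(theta) - 1)

def grad_max_variance(theta):
    return 2.0 * (theta - theta.mean()) / (len(theta) - 1)
```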
## 3 Results
We test our shadow banning algorithm in a variety of networks with different opinion objectives. We consider maximizing the mean, minimizing the variance, and maximizing the variance. We first calculate shadow banning policies on small synthetic networks to illustrate some of the intuition for the policies discussed earlier. We then calculate shadow banning policies on larger Twitter networks to demonstrate the scalability of the algorithm and show how it performs on real network topologies and opinion distributions. In our analysis we update the shadow banning policies daily, as this is a practical implementation scheme for social media platforms.
We use the bounded confidence model for the opinion dynamics. We must choose the parameters \(\epsilon\) and \(\omega\) to specify the opinion dynamics model. Larger values for these parameters correspond to stronger persuasion between nodes (wider confidence interval and larger shift magnitude). We are conservative and choose small values for both parameters to limit the speed of the natural opinion dynamics and to preserve the persistent polarization that is observed in real social networks. For \(\epsilon\) we choose \(0.1\), so the confidence interval is fairly narrow (the initial opinions are distributed between zero and one). We set \(\omega=0.003\), which indicates that users have much more confidence in their own opinions relative to the opinions of others. This aligns with several studies of persuasion which have found that a single message causes a very small opinion shift in a controlled environment [33, 34, 35, 36]. We use a value for \(\omega\) less than what is implied in these works as we expect there to be many factors that reduce the persuasive power of social media posts, such as users not seeing a post at all or scrolling past it without reading it. Note that we use constant values for \(\epsilon\) and \(\omega\) for all users at all time steps. In reality, users are likely to have heterogeneous and time-varying values for these parameters [37]. We do not have a good sense of how these parameters are distributed, so we instead choose to use a constant value for all users. However, if such information were available, it could easily be incorporated into our simulation framework.
### Synthetic Networks
#### 3.1.1 Path Network
We begin with the path network shown in Figure 1. The network has 11 nodes whose opinions increase linearly from zero at one end to one at the opposite end. The posting rate of each user is set to one post per day. Our simulation runs for 365 days, with the shadow banning being updated daily. We set no limit on the maximum edge shadow banning strength (\(s_{edge}=1\)), but we limit the maximum mean shadow banning strength to \(s_{network}=0.5\). To avoid issues with numerical rounding, we set \(\epsilon=0.101\) so that the confidence interval is strictly greater than the difference between neighboring opinions.
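A closed-loop version of this experiment can be sketched as follows (illustrative only, not the authors' code); it assumes the `shadow_ban_policy` and `grad_max_mean` helpers sketched earlier are in scope, recomputes the policy once per simulated day, and then advances the opinions by one Euler step of Eq. (3).

```python
import numpy as np

def run_controlled(theta0, edges, rates, reward_grad_fn, days=365,
                   s_network=0.5, s_edge=1.0, omega=0.003, eps=0.101):
    theta = theta0.copy()
    for _ in range(days):
        u = shadow_ban_policy(edges, rates, theta, reward_grad_fn(theta),
                              s_network=s_network, s_edge=s_edge,
                              omega=omega, eps=eps)
        dtheta = np.zeros_like(theta)
        for (j, i), u_ji in zip(edges, u):
            x = theta[j] - theta[i]
            if abs(x) <= eps:
                dtheta[i] += rates[(j, i)] * (1.0 - u_ji) * omega * x
        theta = theta + dtheta   # one-day Euler step of Eq. (3)
    return theta

# Path-network experiment: 11 nodes, one post per day on each edge, maximize the mean.
theta0 = np.linspace(0.0, 1.0, 11)
edges = [(j, j + 1) for j in range(10)] + [(j + 1, j) for j in range(10)]
rates = {e: 1.0 for e in edges}
print(run_controlled(theta0, edges, rates, grad_max_mean).round(3))
```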
We calculate the shadow banning policy for each objective function and show the evolution of the resulting opinions in Figure 2. With no shadow banning, the opinions do not converge and the mean is slightly above \(0.5\). When trying to maximize the mean, the opinions are pulled up to \(0.6\). For minimizing the variance, we see the opinions converge to \(0.5\), but with less polarization than with no shadow banning. For maximizing the
variance, the opinions become more polarized as the simulation progresses. For each objective we also show the mean shadow banning strength in the network. As can be seen, the shadow banning remains near the 0.5 limit set by the linear program throughout the simulations. This is because the opinions move slowly, so the shadow banning can continue to increase the objective over the simulation duration.
To understand what the different shadow banning control policies are doing for each objective, we visualize the initial decisions (\(t=0\)) of the shadow banning policy. We draw the network keeping only non-shadow banned edges to show where the shadow banning occurred. We show the resulting networks in Figure 1. The structure of these shadow banned networks reflects the intuition from the linear program. We see that to maximize the mean, the control shadow bans edges pointing from a lower opinion node to a higher opinion node. This is being done to prevent any nodes from pulling their neighbor opinions down. For minimizing the variance, we see that initially the edges pointing from more extreme opinions are shadow banned, causing the opinions to shift towards the middle. By having all opinion shifts point towards the middle, the opinions will converge to 0.5 more quickly. For maximizing the variance, the opposite edges are blocked, causing the opinions to drift towards the extremes.
Figure 1: Path network with edges untouched by the initial shadow banning policy for different objectives. The node colors indicate the opinion (lower are blue, higher are red). The direction of the edges indicates the flow of information on the network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
#### 3.1.2 Stochastic Block Model Network
Real-world networks exhibit an assortative structure where users of similar opinions exist in distinct clusters in the network, often referred to as echo chambers [24, 25, 27, 38, 39, 40, 41, 42, 43]. One popular model for this network structure is known as the stochastic block model [44]. In this model, one specifies the number of clusters and the number of nodes in each cluster. Then one specifies a \(k\times k\) probability matrix \(p\) where element \(p_{ab}\) is the probability of an edge between a node in cluster \(a\) and a node in cluster \(b\). All edges are formed independently. If all values in \(p\) are equal, then the stochastic block model reduces to the well-known Erdos-Renyi model [45]. Generally, the off-diagonal elements of \(p\) are less than the diagonal elements to make intra-cluster edges more likely than inter-cluster edges. This is how the assortativity structure is achieved.
Figure 2: Opinion distributions and mean shadow ban strength versus time under shadow banning control policies for different objective functions on a path network. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.

We utilize a stochastic block model network with ten nodes equally divided between two clusters. The intra-cluster probabilities are one and the inter-cluster probabilities are 0.05. This produces a network of two cliques connected by a small number of directed edges, as shown in Figure 3. The nodes in each cluster have the same opinion, which is 0.35 in cluster one and 0.65 in cluster two. We chose these values so that they are close enough to allow some persuasion between the clusters under our model specification. To allow for non-trivial opinion dynamics, we set \(\epsilon\) equal to 0.4. This allows persuasion to occur between the clusters under natural dynamics. Otherwise the two clusters do not interact in any meaningful way.
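A directed stochastic block model of this kind can be sampled in a few lines; the sketch below is illustrative, with the two cluster sizes and the edge probabilities chosen to match the network just described.

```python
import numpy as np

def sample_sbm(sizes, p, seed=None):
    """Sample a directed stochastic block model.

    sizes : list of cluster sizes
    p     : k x k matrix, p[a, b] = probability of an edge from cluster a to cluster b
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    prob = p[labels[:, None], labels[None, :]]   # edge probability for every ordered pair
    adj = rng.random((n, n)) < prob
    np.fill_diagonal(adj, False)                 # no self-loops
    return adj, labels

adj, labels = sample_sbm([5, 5], np.array([[1.0, 0.05], [0.05, 1.0]]), seed=0)
print(adj.sum(), "directed edges")
```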
The shadow banning is applied in the same manner as with the path network (daily update of the policy with \(s_{edge}=1\) and \(s_{network}=0.5\)). We calculate the shadow banning policy for each objective function and show the resulting evolution of the opinions in Figure 4. With no shadow banning the network slowly approaches consensus at 0.5. When maximizing the mean the shadow banning is able to push the opinions near 0.65, which is the maximum value in the initial opinions. Minimizing the variance causes the opinions to approach consensus, but faster than without any shadow banning, as can be seen by the narrower spread in the final opinion distribution. When maximizing the variance, the opinions in each cluster stay at their initial values throughout the simulation. Like with the path network, the mean shadow banning strength stays above zero for the simulation duration for each objective. However, the value is lower than for the path network because fewer edges are shadow banned, as we will discuss next.
We visualize the early shadow banning policies for the stochastic block model network in Figure 3 as was done for the path network. The networks shown correspond to policies at \(t=10\). We did not use \(t=0\) because the equality of the initial opinions within the clusters resulted in no shadow banning. As the dynamics evolve the opinions take on different values and we obtain non-trivial shadow banning policies. For maximizing the mean, the shadow banning policy blocks the inter-cluster edges pointing from the lower opinion cluster to the higher opinion cluster. This prevents the lower opinion cluster from pulling down the higher opinion cluster. Within the lower opinion cluster, edges pointing from the more extreme nodes to the boundary nodes are blocked to avoid these boundary connectors being pulled away from their higher opinion neighbors. Minimizing variance removes the edges pointing to the nodes on the cluster boundaries. These edges pull the opinions to the extremes, so when trying to minimize the variance it is expected that they will be blocked. When maximizing the variance, the policy blocks all inter-cluster edges. This is to be expected as those are the only edges that pull the opinions together. In fact, we see in Figure 4 that these edges remain blocked for the entire simulation. In contrast, the other two objective functions have the shadow banning turn off once the opinions reach consensus.
One important observation here is that shadow banning does not move the opinions beyond the maximum and minimum values in the initial condition. Shadow banning cannot drive anyone to an extreme opinion unless those opinions already exist in the network. This is in contrast to other methods to shift opinions which utilize bots that can drive opinions to arbitrary extremes [13]. The difference is that bots inject new content into the network which can have an extreme opinion. Shadow banning can only remove content produced naturally in the network, so it cannot move anyone beyond the bounds determined by the users' initial opinions.
Figure 3: Stochastic block model network with edges untouched by the shadow banning policy for different objectives. The node colors indicate the opinion (lower are blue, higher are red). The direction of the edges indicates the flow of information on the network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance. For the no shadow banning policy, the node colors correspond to opinions at time \(t=0\). For the other objectives, the node colors correspond to opinions at time \(t=10\).
### Twitter Networks
#### 3.2.1 Datasets
We now apply shadow banning to a set of Twitter networks which have been utilized in previous studies on opinion dynamics [13, 46]. These datasets are ideal for us as they provide a network structure, posting rates, and opinions for a set of social media users engaged in an online conversation on politically polarizing topics. The topics these networks cover are the 2016 United States presidential election and the Gilets Jaunes protests in France. The raw datasets include tweets and also the follower graph formed by the users posting these tweets. For each dataset, tweets were collected that contained specific keywords (a full list of keywords can be found in [46]).
Figure 4: Opinion distributions and mean shadow ban strength versus time under shadow banning control policies for different objectives on a stochastic block model network. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.

Each user's posting rate was set equal to the number of their tweets in the dataset divided by the data collection period length. The tweets' opinions on specific topics were measured using a neural network trained on a set of hand-labeled tweets. The opinions were real numbers between zero and one. For the U.S. election dataset an opinion of one represented a pro-Trump sentiment. For the Gilets Jaunes dataset an opinion of one represented a pro-Gilets Jaunes sentiment. Each user's opinion was calculated as the mean of the opinions of their tweets in the dataset. Specific details on the collection and processing of these datasets can be found in [46]. We provide some summary statistics about the datasets in Table 2.
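The per-user aggregation described above amounts to a simple group-by; the sketch below illustrates it on toy data (the column names and collection-period length are assumptions, not the actual dataset schema).

```python
import pandas as pd

tweets = pd.DataFrame({                # toy stand-in for the collected tweets
    "user": ["a", "a", "b", "b", "b"],
    "opinion": [0.9, 0.7, 0.2, 0.3, 0.1],
})
collection_days = 90.0                 # assumed length of the collection period, in days

users = tweets.groupby("user")["opinion"].agg(["mean", "size"])
users = users.rename(columns={"mean": "opinion", "size": "n_tweets"})
users["rate"] = users["n_tweets"] / collection_days   # posts per day
print(users)
```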
The U.S. election dataset consists of tweets by Twitter users who posted about the second debate of the 2016 U.S. presidential election between Hillary Clinton and Donald Trump. This dataset has 2.4 million tweets posted by 77,563 users. The resulting follower graph contained 5.4 million edges. Gilets Jaunes, or Yellow Vests, is a French populist movement that started in November 2018. Although it was initially a response to the sudden rise in fuel prices, it quickly became a generalized protest against the government of president Emmanuel Macron. The Gilets Jaunes dataset consists of tweets between January 26th, 2019 to April 29th, 2019 that contained Gilets Jaunes related keywords. The resulting dataset contained 2.3 million tweets, 40,456 users, and 4.6 million edges in the associated follower graph.
For our simulations we use the subgraph induced by a random subset of users for each dataset. Each subgraph has 15,000 users with opinions less than or equal to 0.5 and 15,000 users with opinions greater than 0.5. For the U.S. election dataset, the resulting sampled network has 30,000 users and 844,563 edges. The Gilets Jaunes sampled network has 30,000 users and 1,084,678 edges. The network sizes are chosen to resemble the size of the networks used in a field experiment and observational study concerning content moderation. The field study in [47] recruited 23,377 US-based adult Facebook users to assess the impact of modifying the polarity of content seen by users on their political polarization. The observational study in [2] audited a random sample of 25,000 Twitter accounts to identify if they were shadow banned. In addition to replicating the size of networks in these works, using a subgraph of our data also reduces the computational time of the simulations.
#### 3.2.2 Simulation Results
Our shadow banning simulations have a similar form to those for the synthetic networks. Shadow banning policies are calculated daily, with the maximum mean shadow banning strength \(s_{network}\) set to 0.05, and no limit to the shadow banning strength on each individual edge (\(s_{edge}=1\)). The maximum mean shadow banning strength of 5% is chosen based on [2], which found that 6.2% of sampled Twitter accounts were shadow banned at least once within a year of data collection. In addition, [48] estimated that between 0.5% and 2.3% of users were banned in the Twitter networks they studied. We do not limit \(s_{edge}\) as shadow banning usually lasts at least 24 hours and up to two weeks on commonly used social media platforms, and we update our control daily. Our simulations cover 365 days. The opinion dynamics model is the bounded confidence model with \(\epsilon=0.1\) and \(\omega=0.003\). We also repeat our analysis on variations of these model parameters (see Appendix).

\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|}
\hline Event & Data collection period & Number of tweets & Number of follower edges & Number of users \\
\hline U.S. presidential election & Jan. 2016 - Nov. 2016 & 2.4M & 5.4M & 78K \\
\hline Gilets Jaunes & Jan. 2019 - Apr. 2019 & 2.3M & 4.6M & 40K \\
\hline
\end{tabular}
\end{table}
Table 2: Basic information about the Twitter datasets. M is millions and K is thousands.
We plot the terminal objective value in the simulation for each dataset in Figure 5. As can be seen, the shadow banning policy is able to improve the objective value relative to no shadow banning by 7% to 60%, depending on the dataset and objective. We see that the shadow banning policy is able to shift the opinion mean, decrease the variance, and also increase the variance. Therefore, we see the variety of opinion manipulations we can achieve with shadow banning, even with limited mean shadow banning strength. Next we explore the evolution of the opinions in more detail to understand how the shadow banning is affecting the opinions.
We begin with the U.S. presidential election dataset. We show the opinion evolution under no shadow banning and with shadow banning for different objectives in Figure 6. We also show density plots of the initial and final opinions in Figure 7. Our first observation is that over the one year simulation the opinion quantiles show a very small movement. This is due to our bounded confidence model specification as we would expect social media users not to experience a major change in opinion over this time period. While the changes are small, they differ significantly depending on the shadow banning objective. With no shadow banning, the opinions show a slight movement towards the center. The density plot shows that opinions are converging around significant values present in the initial distribution, resulting in three major modes: left, center, and right. This pattern closely resembles real-world election polls. When maximizing the mean, we see that the 75th quantile is driven upwards from an initial value of 0.65 to a final value of 0.75. Looking at the density plots, we see that the increase in the upper quantiles is primarily due to the creation of a mode centered around 0.8. Minimizing the variance does not impact the median opinion, but slightly pulls in the 25th and 75th quantiles. From the density plot we see that the shadow banning has pulled the opinions towards the center at 0.5. Maximizing the variance appears to widen the 25th and 75th quantiles over time. The density plot shows that this policy has removed opinions from the center.
Figure 5: Bar plots of terminal objective values with (blue) no shadow banning versus (orange) shadow banning for the U.S. election and Gilets Jaunes datasets, with objectives being (left) maximize mean, (middle) minimize variance, and (right) maximize variance. For the variance objectives, the terminal variances are reported. The objective improvements by shadow banning compared to no shadow banning are (by U.S. election and Gilets Jaunes) 9% and 12% for maximizing mean, 7% and 23% for minimizing variance, and 40% and 60% for maximizing variance.

The mean shadow banning strength shows different behavior for the different objectives. For minimizing variance, the shadow banning strength decays to zero. For maximizing the mean and variance, it remains at maximum strength over time. The opinion dynamics are attractive, so less shadow banning is needed for minimizing variance as the natural dynamics assists in driving the variance towards zero. However, maximizing variance requires driving opinions apart, which goes against the natural opinion dynamics, and so constant shadow banning is needed. We see the same behavior for maximizing the mean, and this is due to the initial opinion distribution in the network not having a large proportion of users with high opinions. The constant shadow banning is needed so that these users are not pulled down and can continuously pull other users up.
Figure 6: Opinion distributions and mean shadow ban strength versus time under shadow banning control policies for different objectives on the 2016 U.S. presidential election Twitter network. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
We next look at the Gilets Jaunes Twitter network, with the opinion evolutions shown in Figure 8 and initial and final opinion densities shown in Figure 9. The natural dynamics of the network do not appear to move the opinion quantiles much. Maximizing the mean and variance result in similar final opinion distributions. The difference between the final 25th and 75th quantiles is large for both objectives, but slightly larger for maximizing the variance. The final median is slightly higher for maximizing the mean. Apart from these differences, we find that maximizing either objective results in a network with a slightly increased opinion median and highly polarized opinions. Looking at the final opinion densities in Figure 9 we see that they are nearly the same for the two objectives. Minimizing the variance results in the opinion becoming concentrated at the center, as can be seen by the decrease in separation between the 25th and 75th quantiles in Figure 8. From the density plot we see that the shadow banning policy is creating a mode at 0.5 with a narrow width.
The mean shadow ban strength behaves similarly for the Gilets Jaunes network as for the U.S. presidential election network. Minimizing the variance requires less shadow banning, as the natural dynamics assist in creating consensus. Maximizing the mean
Figure 7: Initial and final opinion densities under shadow banning control policies for different objectives on the 2016 U.S. presidential election Twitter network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
and variance require more shadow banning. The difference is that the shadow banning strength for maximizing the variance decreases slowly over time as the opinions reach the extreme ends of the spectrum. The reason for the shadow banning decay is that once the opinions are away from the middle, then the natural attractive opinion dynamics takes over, pulling the opinions towards the extremes.
Figure 8: Opinion distributions and mean shadow ban strength versus time under shadow banning control policies for different objectives on the Gilets Jaunes Twitter network. For the opinions, the purple region is the 25th to 75th quantiles, and the pink region is the 5th to 95th quantiles. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
#### 3.2.3 Partisan Bias in Shadow Banning
One can choose an objective with a partisan bias when shadow banning. For instance, one can make the objective function be the mean (or negative mean) if one wants to shift the opinions up (or down). This is a clearly biased objective favoring one extreme of a topic. However, the implemented policy will not appear overtly partisan. To measure the overt partisan nature of a shadow banning policy at any given point in time, we segment users into two political groups based on their current opinion. For the U.S. presidential election dataset, we label Democrats as those with opinion less than or equal to 0.5, and Republicans as those with opinion greater than 0.5. For the Gilets Jaunes dataset, Gilets Jaunes opponents have opinion less than or equal to 0.5, and Gilets Jaunes supporters have opinion greater than 0.5. We then look at the fraction of users shadow banned in each political group at a given time, which we refer to as the _shadow ban rate_. A user \(i\) is considered shadow banned at time \(t\) if the shadow ban strength \(u_{ij}(t)\) is greater than zero for at least one \(j\). This means at least one follower of \(i\) is not seeing all content posted by \(i\).
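The shadow ban rate is straightforward to compute from the edge-level shadow ban strengths. The sketch below (Python/NumPy) is only illustrative: the \(u_{ij}\) convention and the 0.5 opinion threshold follow the text, while the array shapes and all numerical values in the toy example are assumptions.

```python
import numpy as np

def shadow_ban_rates(u, opinions, threshold=0.5):
    """Fraction of shadow-banned users in each opinion group.

    u        : (n, n) array, u[i, j] > 0 means follower j does not see all
               content posted by user i (edge i -> j is shadow banned).
    opinions : (n,) array of current opinions in [0, 1].
    A user i counts as shadow banned if at least one of its outgoing
    edges has positive shadow ban strength.
    """
    banned = (u > 0).any(axis=1)          # at least one shadow-banned out-edge
    low = opinions <= threshold           # e.g. Democrats / Gilets Jaunes opponents
    high = ~low                           # e.g. Republicans / Gilets Jaunes supporters
    rate = lambda g: banned[g].mean() if g.any() else 0.0
    return rate(low), rate(high)

# toy example with hypothetical values (u should be nonzero only on real edges)
rng = np.random.default_rng(0)
n = 200
u = rng.random((n, n)) * (rng.random((n, n)) < 0.05)
x = rng.random(n)
print(shadow_ban_rates(u, x))
```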
Figure 9: Initial and final opinion densities under shadow banning control policies for different objectives on the Gilets Jaunes Twitter network. The objectives are (top left) no shadow banning, (top right) maximize mean, (bottom left) minimize variance, and (bottom right) maximize variance.
We would expect a political bias in the shadow ban rates given that the objective is to maximize the opinion mean. However, we find this is not the case. We plot the shadow ban rate at the initial time (\(t=0\)) in our simulations in Figure 10. For the U.S. presidential election dataset, the two values are nearly identical, with the Republicans being shadow banned at a slightly higher rate than the Democrats. A more extreme result is found for Gilets Jaunes. We see that the pro-Gilets Jaunes users are shadow banned at nearly three times the rate of the anti-Gilets Jaunes users. These findings are counter-intuitive as they indicate that the shadow banning policies have a bias that is opposite to the bias of the objective. However, from the opinion evolution plots in Figures 6 and 8, we see that these policies lead to opinion distributions that exhibit the bias suggested by the objective.
To understand why a shadow banning policy can appear unbiased while being very biased, it is useful to consider again the path network discussed earlier. The initial shadow banning policy for maximizing the mean is shown in Figure 1. From the figure, we see that every node is shadow banned except for the red node with the highest opinion located at the end of the path. Specifically, the edge pointing to the neighbor with higher opinion is shadow banned. The remaining edges indicate that the posts can flow from nodes with higher opinion to those with lower opinion. This has the effect of only allowing upward opinion shifts, which causes the opinion mean to increase over time. However, every node (except for the maximum opinion node) has a neighbor with higher opinion. This means that all of these nodes are shadow banned, which causes the policy to appear unbiased.
In general, for maximizing the opinion mean, the shadow banning policy blocks any edge which pulls opinions downwards. These edges can be incident on nodes of either partisan group. In this manner the policy appears unbiased, or even possibly biased in the opposite direction, depending upon the network structure and opinion distribution. The natural approach of checking which users are shadow banned would therefore allow biased shadow banning to go undetected. Our results suggest that to measure a bias
Figure 10: Bar plots of shadow ban rates by partisan group at \(t=0\) for the (left) U.S. election and (right) Gilets Jaunes datasets. The shadow banning objective is to maximize the opinion mean. For the U.S. election, this means shifting the mean towards Republicans. For Gilets Jaunes, this means shifting the opinions towards pro-Gilets Jaunes. The shadow ban rate here is the fraction of accounts, or vertices, that have at least one outgoing edge that is shadow banned. Error bars indicate the 95% confidence interval of the mean estimate.
in a shadow banning policy, one must look at the edges which are shadow banned, and not the users. In particular, one must look at the sign of the opinion shift among the shadow banned edges to identify the bias.
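A minimal way to operationalize this edge-level check is to tally shadow-banned edges by the sign of the opinion shift they would otherwise induce on the follower. The sketch below reuses the \(u_{ij}\) convention above; the adjacency-matrix representation is an assumption made for illustration.

```python
import numpy as np

def edge_shift_counts(u, opinions, adjacency):
    """Count shadow-banned edges by the sign of the blocked opinion shift.

    Edge i -> j (follower j sees poster i) would pull x_j towards x_i, i.e.
    induce a shift of sign(x_i - x_j) on j.  A policy that raises the mean
    mostly bans edges whose blocked shift is downward (negative sign).
    """
    banned = (u > 0) & adjacency
    shift = np.sign(opinions[:, None] - opinions[None, :])  # shift[i, j] acting on j
    upward_blocked = int(np.sum(banned & (shift > 0)))
    downward_blocked = int(np.sum(banned & (shift < 0)))
    return upward_blocked, downward_blocked
```

Comparing the two counts, or their rates within each partisan group, exposes the bias that the user-level shadow ban rate hides.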
Our result shows the danger of shadow banning. One would think that if a social media platform employed an overtly biased content moderation policy, this bias would be easily observed. However, we find that the platform can employ a shadow banning policy which appears to be unbiased, yet over time creates a bias in the users' opinions. The platform's efforts at shifting opinions would likely go undetected as the actual implemented policy seems unbiased, or even biased in the opposite direction. One would not realize there was a bias in the policy until after it has been employed for a long period of time.
#### 3.2.4 Sensitivity Analysis
We investigate the sensitivity of the performance of the shadow banning policies as a function of the maximum mean shadow ban strength \(s_{network}\) and edge shadow ban strength \(s_{edge}\). We investigate the sensitivity with respect to the opinion dynamics model parameters in the Appendix.
We first see how performance changes if we vary \(s_{network}\) with \(s_{edge}=1\). We consider the terminal value of the objective over the duration of the simulation. Figure 11 shows the change of the different objectives relative to no shadow banning as \(s_{network}\) is increased for the two datasets. We find that the objectives plateau for values of \(s_{network}\) greater than 10% for both networks. There appears to be no benefit to applying stronger global shadow banning beyond this value. This most likely occurs because the shadow banning is applied to a limited number of critical edges at each time step. Therefore, the opinions can be shifted without shadow banning a significant fraction of the edges.
We next investigate the impact of \(s_{edge}\) on performance. We provide plots of how the terminal objective changes with respect to both \(s_{edge}\) and \(s_{network}\) for each dataset in Figures 12 and 13. We find that for values of \(s_{edge}\) less than 0.5 there is very little change in the objective relative to no shadow banning. For larger values of \(s_{edge}\) we see the shadow banning causing a non-trivial change in the objective values. This shows that
Figure 11: The terminal objective values as a function of \(s_{network}\) for the (left) U.S. election and (right) Gilets Jaunes datasets (\(s_{edge}=1\)). The y-axis shows the relative magnitude of the objective value compared to that of no shadow banning.
strong shadow banning needs to be allowed on the targeted edges in order to produce a non-trivial shift in the opinion distribution. Therefore, while the shadow banning strength can be very low across the network, the critical edges that are targeted require a substantial amount of shadow banning.
## 4 Discussion
Our findings show the power and flexibility of shadow banning as a content moderation tool for online social media platforms. Precise shadow banning policies can be easily calculated for large networks by solving a linear program. By applying these shadow banning policies, platforms can exert subtle influence over the distribution of user opinions. While this can serve goals like reducing polarization or curbing misinformation, it also holds the potential for misuse. Shadow banning can intensify polarization within a network. Platforms might use shadow banning to steer opinions towards or away from specific topics. Additionally, platforms might employ such biased shadow banning while remaining unnoticed due to the outward appearance of political neutrality.
The danger of a social media platform engaging in biased shadow banning is significant. The effects are slow, and the bias can go undetected. Over time, this can lead to dangerous outcomes that are discovered too late to prevent. Election outcomes can potentially be changed by such manipulation. Societies can be polarized to the point of instability. Intelligent policies should be enacted to prevent such abuse by social media platforms. Conventional measures such as shadow ban rates may not reveal the bias exerted by the platforms. However, more precise measures, such as shadow ban rates for edges of different opinion shift polarity, can reveal this bias. Such measures should be employed to ensure that social media platforms use shadow banning to maintain platform health and safety and not for other malicious purposes.
Figure 12: Terminal objective values for the U.S. election dataset as a function of \(s_{network}\) and \(s_{edge}\). The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
Figure 13: Terminal objective values for the Gilets Jaunes dataset as a function of \(s_{network}\) and \(s_{edge}\). The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
## Appendix
### Robustness for Variations of the Bounded Confidence Model
We present here the performance of different shadow banning objectives on the U.S. presidential election and Gilets Jaunes Twitter networks under different specifications of the bounded confidence model. All combinations of \(\epsilon\in[0.01,0.1,0.3,0.5,1]\) and \(\omega\in[0.001,0.003,0.01]\) are simulated and the terminal objective values compared to no shadow banning are illustrated in the heat maps in Figures 14 and 15 for each dataset. Shadow banning strength limits are fixed with \(s_{network}=0.05\) and \(s_{edge}=1\).
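For readers who want to reproduce this kind of sweep, the following is a schematic discrete-time bounded-confidence update with shadow banning attenuating edge influence. The exact model specification used in the paper is given in the model section; the particular functional form, the way \(u_{ij}\) enters, and the interpretation of \(\epsilon\) as the confidence bound and \(\omega\) as the per-step persuasion strength are assumptions made only for illustration.

```python
import numpy as np

def bounded_confidence_step(x, adjacency, u, eps, omega):
    """One schematic opinion update with shadow banning (illustrative only).

    x         : (n,) opinions in [0, 1]
    adjacency : (n, n) boolean, adjacency[i, j] = follower j follows poster i
    u         : (n, n) shadow ban strengths in [0, 1] on edges i -> j
    eps       : confidence bound (poster i influences j only if |x_i - x_j| <= eps)
    omega     : persuasion strength per step
    """
    diff = x[:, None] - x[None, :]                # diff[i, j] = x_i - x_j
    within = np.abs(diff) <= eps                  # bounded-confidence window
    weight = adjacency * within * (1.0 - u)       # shadow banning attenuates influence
    pull = (weight * diff).sum(axis=0)            # total pull on each follower j
    deg = np.maximum(weight.sum(axis=0), 1e-12)   # normalize by effective in-degree
    return np.clip(x + omega * pull / deg, 0.0, 1.0)
```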
Our first observation is that regardless of the choice of \(\epsilon\) and \(\omega\) in the bounded confidence model, our policy leads to an improvement in the objective relative to no shadow banning. This shows that our shadow banning policies show some level of robustness with respect to the bounded confidence model.
For maximizing the mean, the objective smoothly increases as the persuasion strength is increased. However, the variance objectives show some more interesting behavior. In the U.S. election dataset, when minimizing the variance, larger \(\epsilon\) values do not offer as much improvement as \(\epsilon=0.1\). This is because stronger opinion dynamics play a more dominant role in determining the location of opinion consensus, overshadowing the impact of shadow banning. When maximizing the variance, the most substantial terminal variance increase occurs at \(\epsilon=0.5\) when \(\omega=0.001\) and \(0.003\), and at \(\epsilon=0.1\) when \(\omega=0.01\). However, at \(\epsilon=1\), the improvements are smaller due to the network's increased resistance to polarization under stronger attractive opinion dynamics. Similar trends are observed in the Gilets Jaunes dataset. For minimizing variance, \(\epsilon=0.1\) leads to the smallest terminal variance, while for maximizing variance, \(\epsilon=0.3\) results in the largest terminal variance.
These findings provide useful guidance when designing shadow banning policies. For objectives involving the opinion mean, the precise choice of opinion dynamics parameters is not critical. For objectives involving the opinion variance, one must decide whether the opinion dynamics exhibit strong or weak persuasion, as strong persuasion makes it harder for shadow banning to overcome the natural attractive opinion dynamics. Since real-world social networks exhibit persistent polarization, better shadow banning policies will be obtained by using opinion dynamics models with weak persuasion.
Figure 14: Terminal objective values for the U.S. election dataset as a function of \(\epsilon\) and \(\omega\). The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent.
Figure 15: Terminal objective values for the Gilets Jaunes dataset as a function of \(\epsilon\) and \(\omega\). The objectives are (top) maximize mean, (middle) minimize variance, and (bottom) maximize variance. Values in the cells are the magnitude relative to no shadow banning in percent. |
2309.12043 | Dark Matter Annihilation via Breit-Wigner Enhancement with Heavier
Mediator | We propose a new scenario that both the dark matter freeze-out in the early
Universe and its possible annihilation for indirect detection around a
supermassive black hole are enhanced by a Breit-Wigner resonance. With the
mediator mass larger than the total initial dark matter mass, this annihilation
is almost forbidden at late times. Thus, the stringent cosmic microwave
background and indirect detection constraints do not apply. However, a
supermassive black hole can accelerate the dark matter particles to reactivate
this resonant annihilation whose subsequent decay to photons leaves a unique
signal. The running Fermi-LAT and the future COSI satellites can test this
scenario. | Yu Cheng, Shao-Feng Ge, Jie Sheng, Tsutomu T. Yanagida | 2023-09-21T13:12:01Z | http://arxiv.org/abs/2309.12043v1 | # Dark Matter Annihilation via Breit-Wigner Enhancement with Heavier Mediator
###### Abstract
We propose a new scenario that both the dark matter freeze-out in the early Universe and its possible annihilation for indirect detection around a supermassive black hole are enhanced by a Breit-Wigner resonance. With the mediator mass larger than the total initial dark matter mass, this annihilation is almost forbidden at late times. Thus, the stringent cosmic microwave background and indirect detection constraints do not apply. However, a supermassive black hole can accelerate the dark matter particles to reactivate this resonant annihilation whose subsequent decay to photons leaves a unique signal. The running Fermi-LAT and the future COSI satellites can test this scenario.
**Introduction** - More than 80% of the matter in our Universe today is dark matter (DM) [1, 2]. One major hunting strategy is DM direct detection, which uses DM scattering with nuclei or electrons. Currently, the Xenon-based experiments have reached the ton scale [3, 4, 5]. However, for the non-relativistic DM in our galaxy, only particles with mass \(\gtrsim\mathcal{O}(1)\,\mathrm{GeV}\) carry enough kinetic energy for the nuclear recoil from elastic scattering to overcome the detection threshold. While DM above the GeV scale is already highly constrained, the sub-GeV region still has a large open parameter space [6]. Thus, light DM has become increasingly popular [7].
In the standard freeze-out scenario, the DM annihilation cross section for obtaining the observed relic abundance should be around \(\langle\sigma v\rangle\sim 10^{-26}\,\mathrm{cm}^{3}\mathrm{s}^{-1}\) [8]. The same annihilation into Standard Model (SM) particles can still happen at late times. Its subsequent electromagnetic energy injection into the environment modifies the ionization history of the Universe and ultimately affects the observed cosmic microwave background (CMB). Thus, the DM freeze-out scenario receives stringent constraints from the CMB [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20].
One solution is the forbidden-type DM [21, 22, 23, 24] whose annihilation is kinematically prohibited at late times. Unfortunately, this also makes it difficult to leave indirect detection signals today. The only place for the forbidden annihilation to re-open is around a supermassive black hole (SMBH) [25, 26]. The strong gravitational force can accelerate the DM particles to overcome the annihilation threshold. The resultant gamma-ray is then a unique signal that only appears around SMBH but not anywhere else.
In this paper, we propose an alternative scenario, based on the Breit-Wigner enhancement [27, 28] of the DM annihilation, that escapes the CMB constraint and leaves indirect detection signals around the SMBH. The original form of the Breit-Wigner scenario uses a light mediator (mediator mass \(m_{\phi}\) smaller than \(2m_{\chi}\)) to enhance the annihilation at late times to explain the electron-positron excess observed by the PAMELA [29], ATIC [30], and PPB-BETS [31] cosmic-ray experiments. We do the opposite with a heavy mediator, \(m_{\phi}>2m_{\chi}\). The DM annihilation during freeze-out is enhanced by the thermal energy that compensates the mass difference to reach the \(s\)-channel resonance pole. When the temperature cools down, the DM annihilation moves away from the resonance pole and becomes greatly suppressed at late times, escaping the CMB constraint.
This new Breit-Wigner scenario with a heavy mediator can reactivate around an SMBH and the subsequent decay of the final-state SM particles can leave a unique signature. In addition to the existing gamma-ray telescopes, such as Fermi-LAT [32] and H.E.S.S. [33] that can search for DM signals around the SMBH \(\mathrm{Sgr}\,A^{*}\) in the energy range of \(\mathcal{O}(100)\,\mathrm{MeV}\) to TeV, the upcoming telescope Compton Spectrometer and Image (COSI) aims at detecting the soft gamma-ray of \(0.2\sim 5\,\mathrm{MeV}\)[34]. This opens a new window for testing our new scenario.
**DM Production with Breit-Wigner Resonance** - In the Breit-Wigner mechanism, the DM \(\chi\) with mass \(m_{\chi}\) annihilates into SM particles through an \(s\)-channel \(\phi\). Although this mediator \(\phi\) can have arbitrary spins in principle, we assume a scalar mediator for simplicity. The general form of the annihilation cross section with a Breit-Wigner resonance is [27, 28],
\[\sigma=\frac{16\pi\beta_{f}}{s\bar{\beta_{i}}\bar{\beta_{f}}\beta_{i}}\frac{m_{ \phi}^{2}\Gamma_{\phi}^{2}}{\left(s-m_{\phi}^{2}\right)^{2}+m_{\phi}^{2} \Gamma_{\phi}^{2}}B_{i}B_{f}, \tag{1}\]
where \(\Gamma_{\phi}\) is the total decay rate of the mediator \(\phi\) while \(s\) is the center-of-mass energy squared \(s\equiv E_{\mathrm{cm}}^{2}\). The initial/final-state phase space factor \(\beta_{i(f)}\equiv\sqrt{1-4m_{i(f)}^{2}/E_{\mathrm{cm}}^{2}}\) is evaluated with the center-of-mass energy and \(\bar{\beta}_{i(f)}\equiv\sqrt{1-4m_{i(f)}^{2}/m_{\phi}^{2}}\) with the mediator mass. The branching ratios of the mediator \(\phi\) decaying into the initial- and final-state particles are denoted as \(B_{i(f)}\equiv\Gamma_{i(f)}/\Gamma_{\phi}\). A large boost factor to enhance the indirect detection signal can be achieved with a light mediator, \(m_{\phi}<2m_{\chi}\), in the original Breit-Wigner scenario [27, 28]. Since \(s\) is always larger than \(m_{\phi}^{2}\), it keeps
decreasing and approaching the pole when temperature cools down.
Our scenario takes instead a heavy mediator, \(m_{\phi}>2m_{\chi}\), for the resonance. In this case, the \(s\)-channel resonance can be achieved only when the DM has high enough kinetic energy. To make the resonance feature transparent, we parametrize the mediator mass as \(m_{\phi}^{2}\equiv 4m_{\chi}^{2}(1+\delta)\) with \(\delta\) denoting the mass difference and the non-relativistic \(s\equiv 4m_{\chi}^{2}+m_{\chi}^{2}\left(\vec{v}_{\rm rel}\right)^{2}\) with the relative velocity \(\vec{v}_{\rm rel}\equiv\vec{v}_{1}-\vec{v}_{2}\). Then the cross section expression Eq. (1) becomes,
\[\sigma=\frac{16\pi\beta_{f}}{s\bar{\beta}_{i}\bar{\beta}_{f}\beta_{i}}\frac{ \gamma^{2}}{\left(\frac{\vec{v}_{\rm rel}^{2}/4-\delta}{1+\delta}\right)^{2} +\gamma^{2}}B_{i}B_{f}, \tag{2}\]
with the normalized decay width \(\gamma\equiv\Gamma_{\phi}/m_{\phi}\). For a heavy mediator, \(\delta>0\). We can see that the cross section reaches the Breit-Wigner resonance pole when \(|\vec{v}_{\rm rel}|^{2}/4=\delta\). For illustration, the black curve in Fig. 1 shows that the annihilation cross section with \(\delta=0.05\) peaks at exactly \(v_{\rm rel}^{2}=0.2\) as expected.
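The velocity dependence in Eq. (2) is easy to check numerically. The sketch below evaluates only the dimensionless resonance factor (the \(\beta\) and branching-ratio prefactors are dropped); the value of \(\gamma\) is an arbitrary illustrative choice.

```python
import numpy as np

def resonance_factor(v_rel_sq, delta, gamma):
    """Dimensionless Breit-Wigner factor appearing in Eq. (2)."""
    detune = (v_rel_sq / 4.0 - delta) / (1.0 + delta)
    return gamma**2 / (detune**2 + gamma**2)

v2 = np.linspace(0.0, 0.5, 1001)
peak = v2[np.argmax(resonance_factor(v2, delta=0.05, gamma=1e-3))]
print(peak)   # ~0.2, i.e. the pole at v_rel^2 = 4*delta
```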
In the early Universe, the DM number density evolution is determined by the thermally averaged annihilation cross section,
\[\langle\sigma v_{\rm rel}\rangle\equiv\frac{x^{3/2}}{2\pi^{1/2}}\int_{0}^{ \infty}dv_{\rm rel}v_{\rm rel}^{2}\left(\sigma v_{\rm rel}\right)e^{-xv_{\rm rel }^{2}/4}, \tag{3}\]
where \(x\equiv m_{\chi}/T_{\chi}\) and the DM phase space distribution has been approximated by the Maxwell distribution. With Breit-Wigner resonance, the thermally averaged \(\langle\sigma v_{\rm rel}\rangle\) is maximized when the Maxwell distribution (blue dash-dotted) peak overlaps with the pole (black solid), \(T_{\chi}\simeq\delta\times m_{\chi}\). This happens around \(x\sim 20\) in the early Universe with the \(\delta=0.05\) adopted by Fig. 1.
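Eq. (3) can be evaluated by direct quadrature. A minimal sketch, again keeping only the dimensionless resonance factor of Eq. (2) (so the output is proportional to the full \(\langle\sigma v_{\rm rel}\rangle\) up to slowly varying prefactors), with illustrative \(\delta\) and \(\gamma\):

```python
import numpy as np
from scipy.integrate import quad

def resonance_factor(v_rel_sq, delta, gamma):
    detune = (v_rel_sq / 4.0 - delta) / (1.0 + delta)
    return gamma**2 / (detune**2 + gamma**2)

def thermal_average(x, delta=0.05, gamma=1e-3):
    """Maxwell average of Eq. (3), with sigma*v_rel replaced by the resonance factor."""
    integrand = lambda v: v**2 * resonance_factor(v**2, delta, gamma) * np.exp(-x * v**2 / 4.0)
    # integrate up to v_rel = 2 and hint quad about the narrow resonance at v = sqrt(4*delta)
    val, _ = quad(integrand, 0.0, 2.0, points=[np.sqrt(4.0 * delta)], limit=200)
    return x**1.5 / (2.0 * np.sqrt(np.pi)) * val

print(thermal_average(25))     # sizeable overlap with the pole near freeze-out
print(thermal_average(1000))   # strongly suppressed at low temperature
```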
When the temperature keeps decreasing, the velocity distribution softens. For illustration, the DM with temperature \(T_{\chi}=m_{\chi}/1000\) (green dotted) has almost no overlap with the Breit-Wigner resonance (black solid). At the time of CMB formation, the Universe temperature has already dropped to \({\cal O}(1)\,\)eV. The DM kinetic energy is then not enough to activate the Breit-Wigner enhancement with a heavy mediator. Thus, its annihilation is highly suppressed and can naturally escape the CMB constraint.
A broader Breit-Wigner peak can increase its overlap with the low-temperature DM distribution. However, the overlap is still small, since the normalized decay width \(\gamma\) (equivalently the cross section) cannot be too large in order to produce the correct relic density.
In the freeze-out scenario, the evolution of the DM yield, \(Y\equiv n_{\chi}/s\), follows the Boltzmann equation [35],
\[\frac{dY}{dx}=-\frac{\bar{\lambda}}{x^{2}}\left(Y^{2}-Y_{\rm eq}^{2}\right), \tag{4}\]
with \(Y_{\rm eq}\equiv 0.145(g/g_{s*})x^{3/2}e^{-x}\). The thermally averaged cross section has been redefined as,
\[\bar{\lambda} \equiv \sqrt{\frac{\pi}{45}}\frac{g_{s*}}{\sqrt{g_{*}}}\left[1+\frac{T} {3}\frac{\rm d}{\rm dT}\ln\left(g_{s*}\right)\right]m_{\chi}M_{\rm pl}\left\langle \sigma v\right\rangle, \tag{5}\]
where \(M_{\rm pl}=1.22\times 10^{19}\,\)GeV is the Planck mass and \(g_{*}\) (\(g_{s*}\)) the effective relativistic energy (entropy) degrees of freedom. When the DM is in thermal equilibrium, its yield \(Y\) tracks \(Y_{\rm eq}\). As temperature cools down, the DM number density drops exponentially and it begins to freeze out once the annihilation rate is comparable to the Hubble rate, \(n_{\chi}\left\langle\sigma v\right\rangle\simeq H\). We take the freeze-out criterion \(Y-Y_{\rm eq}\simeq Y_{\rm eq}\) to estimate the freeze-out time [35],
\[x_{f}\equiv\ln\frac{0.038gM_{\rm pl}m_{\chi}\langle\sigma v\rangle}{g_{*}^{1/ 2}x_{f}^{1/2}}. \tag{6}\]
Solving this equation iteratively gives a DM freeze-out point of roughly \(x_{f}\sim 25\). By solving the differential equation \(dY/dx=-\bar{\lambda}Y^{2}/x^{2}\) from the freeze-out point to infinity, we can obtain the DM yield today,
\[Y_{\chi}=\frac{Y(x_{f})}{1+Y(x_{f})\int_{x_{f}}^{\infty}\frac{\bar{\lambda}}{x ^{2}}dx}. \tag{7}\]
The DM relic density is related to \(Y_{\chi}\) as,
\[\rho_{\chi}=m_{\chi}s_{0}Y_{\chi},\quad\Omega_{\chi}h^{2}=\frac{\rho_{\chi}}{ \rho_{c}/h^{2}}, \tag{8}\]
Figure 1: The overlap between different DM velocity distributions \(f(v_{\rm rel}^{2})\) and the DM annihilation cross section \(\sigma\) as a function of the relative velocity squared \(v_{\rm rel}^{2}\). The black curve clearly shows the Breit-Wigner resonance with a positive mass difference \(\delta=0.05\). The blue dash-dotted and green dotted curves are the DM velocity distributions when the DM temperature is just \(4\%\) (\(x=25\) which is equivalently \(T_{\chi}=m_{\chi}/25\)) or \(0.1\%\) (\(x=1000\) or \(T_{\chi}=m_{\chi}/1000\)) of its mass, respectively. The red dashed curve shows the case around an SMBH at the place with the largest DM annihilation rate per unit radius.
where \(s_{0}=2891.2\) cm\({}^{-3}\) is the entropy density today and \(\rho_{c}=1.05\times 10^{-5}h^{2}\) GeV/cm\({}^{3}\) is the critical density of the Universe.
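The freeze-out estimate of Eqs. (4)-(8) can be reproduced with a short script. The sketch below makes simplifying assumptions not stated in the text: constant \(g_*\simeq g_{s*}\), the \(d\ln g_{s*}/dT\) term in Eq. (5) dropped, the freeze-out criterion implemented as \(Y(x_f)\simeq 2Y_{\rm eq}(x_f)\), and a placeholder constant \(\langle\sigma v\rangle\) (in \(\mathrm{GeV}^{-2}\)) instead of the full Breit-Wigner thermal average, which can be supplied as `sigma_v(x)`.

```python
import numpy as np
from scipy.integrate import quad

M_PL = 1.22e19          # Planck mass [GeV]
S0 = 2891.2             # entropy density today [cm^-3]
RHO_C_H2 = 1.05e-5      # critical density / h^2 [GeV cm^-3]

def relic_density(m_chi, sigma_v, g=2.0, g_star=75.0, g_s=75.0):
    """Return (x_f, Omega h^2) for DM mass m_chi [GeV]; g_star/g_s are illustrative defaults."""
    lam = lambda x: np.sqrt(np.pi / 45.0) * g_s / np.sqrt(g_star) * m_chi * M_PL * sigma_v(x)
    y_eq = lambda x: 0.145 * (g / g_s) * x**1.5 * np.exp(-x)

    # iterate Eq. (6) for the freeze-out point x_f
    x_f = 20.0
    for _ in range(50):
        x_f = np.log(0.038 * g * M_PL * m_chi * sigma_v(x_f) / (np.sqrt(g_star) * np.sqrt(x_f)))

    # freeze-out criterion Y - Y_eq ~ Y_eq  =>  Y(x_f) ~ 2 Y_eq(x_f), then Eq. (7)
    y_f = 2.0 * y_eq(x_f)
    tail, _ = quad(lambda x: lam(x) / x**2, x_f, 1e4)
    y_today = y_f / (1.0 + y_f * tail)

    omega_h2 = m_chi * S0 * y_today / RHO_C_H2     # Eq. (8)
    return x_f, omega_h2

# placeholder cross section (hypothetical constant value in GeV^-2)
x_f, oh2 = relic_density(m_chi=10.0, sigma_v=lambda x: 2.6e-9)
print(f"x_f ~ {x_f:.1f}, Omega h^2 ~ {oh2:.3f}")
```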
There are four independent parameters, \(m_{\chi}\), \(\delta\), \(\gamma\), and \(B_{i}B_{f}\). However, these parameters are not totally free since the inequality \(\Gamma_{\phi}^{2}=(\Gamma_{i}+\Gamma_{f})^{2}\geq 4\Gamma_{i}\Gamma_{f}\) requires \(B_{i}B_{f}\leq 0.25\). The parameter values to produce the correct relic density \(\Omega_{\chi}h^{2}=0.12\) are shown in Fig. 2. For illustration, we fix the product of branching ratios \(B_{i}B_{f}=0.01\) and vary the DM mass \(m_{\chi}\) as well as the normalized decay width \(\gamma\). Then, the observed DM relic density uniquely determines the mass difference \(\delta\). We can first see that a larger normalized decay width \(\gamma\) requires a larger DM mass, which is the same feature as in the standard case. Second, a larger mass difference \(\delta\) prefers a larger normalized decay width \(\gamma\). This is because a large mass difference makes it difficult for the DM to reach the Breit-Wigner resonance of the heavy mediator, which can be compensated by a larger coupling strength encoded in \(\gamma\).
**The Right-Handed Neutrino DM Model** - One interesting model to further illustrate our scenario is a right-handed neutrino (RHN) DM [36; 37; 38]. The heavy right-handed Majorana neutrinos are widely considered as a key ingredient beyond the SM. They explain not only the observed tiny neutrino masses via the seesaw mechanism [39; 40; 41; 42], but also the baryon asymmetry of our Universe through leptogenesis [43]. Normally, we assume three heavy RHNs. However, two heavy Majorana neutrinos are already sufficient to explain baryon asymmetry and the observed neutrino mass squared differences from oscillation [44; 45; 46; 47; 48]. The remaining right-handed neutrino \(N\) can serve as a DM candidate. A byproduct is that we have an anthropic argument [49] for the presence of three families of quarks and leptons [50].
We assume a \(Z_{2}\) parity acting only on this RHN DM \(N_{\chi}\) to make it stable [36]. To ensure that \(N_{\chi}\) is in equilibrium at the early Universe, we introduce a scalar mediator \(\phi\) that couples to \(N_{\chi}\) through a Majorana-type Yukawa coupling \(\phi N_{\chi}^{T}\epsilon N_{\chi}\). Notice here that \(N_{\chi}\) is a right-handed two-component Weyl fermion. Since the RHN DM \(N_{\chi}\) is assumed to carry odd parity under \(Z_{2}\), \(\phi\) is even and can couple with a pair of the SM Higgs bosons, \(H\) and \(H^{\dagger}\), via \(\phi H^{\dagger}H\). Thus, the interaction Lagrangian is,
\[\mathcal{L}_{\text{int}}=(y\phi N_{\chi}^{T}\epsilon N_{\chi}+h.c.)+\lambda m _{\phi}\phi H^{\dagger}H. \tag{9}\]
With a heavy mediator, \(m_{\phi}>2m_{\chi}\), the mediator decay width receives two contributions, \(\Gamma_{\phi}=\Gamma_{\phi\to N_{\chi}N_{\chi}}+\Gamma_{\phi\to f\bar{f}}\),
\[\Gamma_{\phi\to N_{\chi}N_{\chi}} =y^{2}\frac{m_{\phi}}{4\pi}\left(1-\frac{4m_{\chi}^{2}}{m_{\phi}^ {2}}\right)^{3/2}, \tag{10a}\] \[\Gamma_{\phi\to f\bar{f}} =(\lambda^{\prime})^{2}\frac{m_{\phi}}{8\pi}\left(1-\frac{4m_{f}^ {2}}{m_{\phi}^{2}}\right)^{3/2}, \tag{10b}\]
where \(\lambda^{\prime}\equiv m_{f}\sin\theta/v\approx\lambda m_{f}/(2m_{\phi})\) is the Yukawa coupling between the mediator \(\phi\) and the SM fermions. The parameter, \(\theta\equiv\frac{1}{2}\arctan[\lambda m_{\phi}v/(m_{\phi}^{2}-m_{h}^{2})]\), is the mixing angle between \(\phi\) and the SM Higgs particle \(h\). The mediator decay branching ratios are \(B_{i}\equiv\Gamma_{\phi\to N_{\chi}N_{\chi}}/\Gamma_{\phi}\) and \(B_{f}\equiv\Gamma_{\phi\to f\bar{f}}/\Gamma_{\phi}\) with \(\Gamma_{i}=\Gamma_{\phi\to N_{\chi}N_{\chi}}\) and \(\Gamma_{f}=\Gamma_{\phi\to f\bar{f}}\).
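For concreteness, Eqs. (10a)-(10b) translate directly into the resonance parameters \(\gamma\) and \(B_{i}B_{f}\) used earlier. The coupling and mass values below are placeholders, not fitted points.

```python
import numpy as np

def mediator_widths(y, lam_p, m_phi, m_chi, m_f):
    """Partial widths of Eqs. (10a)-(10b), in the same units as m_phi."""
    g_chi = y**2 * m_phi / (4.0 * np.pi) * (1.0 - 4.0 * m_chi**2 / m_phi**2)**1.5
    g_f = lam_p**2 * m_phi / (8.0 * np.pi) * (1.0 - 4.0 * m_f**2 / m_phi**2)**1.5
    return g_chi, g_f

# placeholder inputs: masses in GeV, phi -> b bbar assumed kinematically open
g_chi, g_f = mediator_widths(y=0.1, lam_p=1e-3, m_phi=21.0, m_chi=10.0, m_f=4.18)
gamma = (g_chi + g_f) / 21.0
print(gamma, g_chi * g_f / (g_chi + g_f)**2)   # normalized width gamma and B_i*B_f
```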
The coupling \(\lambda\) with the SM Higgs boson should maintain \(\phi\) in thermal equilibrium at the early Universe. In other words, the decay rate of \(\phi\) to the SM fermions \(\Gamma_{\phi\to f\bar{f}}\) should be larger than the Hubble rate \(H\propto T_{f}^{2}/M_{\text{pl}}\). Suppose \(\phi\) decouples around \(T_{f}\sim m_{\phi}/25\), thermal equilibrium requires \(\lambda\gtrsim(10^{-8}\sim 10^{-7})\) depending on the mediator mass \(m_{\phi}\). Further through the first Yukawa term in Eq. (9) and the resultant \(\phi\to N_{\chi}+N_{\chi}\) decay, \(N_{\chi}\) can also get in thermal equilibrium. The parameter space of this RHN DM model to give the correct relic density can be directly read off from Fig. 2.
**Reactivation around SMBH** - In the current Universe, the DM particles have already become non-relativistic, which means almost no DM particles can gain enough kinetic energy to reach the Breit-Wigner resonance pole. Thus, the DM annihilation cross section is highly suppressed and leaves almost no signal in indirect detection. However, a strong gravitational source, such as an SMBH, can accelerate the DM particles to reactivate their annihilation [25]. For illustration, we show the velocity distribution at the radius where the DM annihilation rate per unit radius, \((4\pi r^{2}\rho_{\chi}^{2}/m_{\chi}^{2})\langle\sigma v\rangle\), is maximized as the red curve in Fig. 1. We can see that the DM velocity distribution near the SMBH also has a large overlap with the Breit-Wigner resonance peak. Therefore, the DM will gradually annihilate into the SM fermions around
Figure 2: The parameter space to generate the correct relic density \(\Omega_{\chi}h^{2}=0.12\) for the Breit-Wigner resonance scenario with a heavy mediator. We fix \(B_{i}B_{f}=0.01\) while varying the DM mass \(m_{\chi}\) (horizontal axis) and the normalized decay width \(\gamma\) (vertical axis). Then, the mass difference \(\delta\) is uniquely determined by the DM relic density whose value can be read off according to the blue color bar on the right side.
SMBH and the subsequent decays produce gamma rays as an observable signal. We take the SMBH at the center of our galaxy, \(\mathrm{Sgr}\,A^{*}\), as an example for the analysis.
The differential gamma ray flux from the DM annihilation and subsequent decay is,
\[\frac{dF_{\gamma}}{dE_{\gamma}}= \int_{0}^{1}\!\!dV_{r}dV_{c}\mathcal{P}_{\mathrm{r}}\left(V_{r},V_ {c}\right)(\sigma v)_{\mathrm{cm}}\frac{dN_{\gamma}}{dE_{\gamma}}(V_{r},V_{c}), \tag{11}\]
with the joint probability distribution,
\[\mathcal{P}_{\mathrm{r}}\left(V_{r},V_{c}\right)=\frac{x^{2}\gamma_{r}^{2}( \gamma_{r}^{2}-1)(1+\gamma_{r})V_{c}^{2}}{2K_{2}^{2}(x)(1-V_{c}^{2})^{2}}e^{-x \sqrt{\frac{2+2\gamma_{r}}{1-V_{c}^{2}}}}. \tag{12}\]
The temperature parameter \(x=x(r)\) now depends on the radius \(r\) through the DM temperature \(T_{\chi}=\frac{1}{2}m_{\chi}v_{d}^{2}(r)\), where \(v_{d}\) is the DM velocity dispersion, and \(dN_{\gamma}/dE_{\gamma}\) is the boosted photon spectrum from the \(\phi\) decay. Note that the cross section \((\sigma v)_{\mathrm{cm}}\) is defined in the center-of-mass frame of the DM collision. For the mass range \(m_{\phi}>10\,\mathrm{GeV}\), the dominant channel is \(\phi\to b\bar{b}\). We use the PPPC4DMID package [51] to generate the photon spectra and further boost them to the lab frame [52; 53].
The observed gamma-ray flux is obtained by integrating over the radius \(r\) from \(r_{0}\equiv 4GM\) to \(r_{b}\),
\[\frac{d\Phi_{\gamma}}{dE_{\gamma}}=\frac{1}{4\pi D^{2}}\frac{1}{2m_{\chi}^{2} }\int_{4GM}^{r_{b}}4\pi r^{2}dr\rho^{2}(r)\frac{dF_{\gamma}}{dE_{\gamma}}(r). \tag{13}\]
Here, \(\rho(r)\) is the DM density around the SMBH [54; 55]. In the innermost capture region, \(r<r_{0}\), all particles fall into the SMBH, so there is no DM. When the gravitational influence of the SMBH no longer dominates, for \(r>r_{b}\equiv 0.2GM/v_{0}^{2}\) where \(v_{0}\) is the DM velocity dispersion, the DM halo simply follows the generalized NFW profile [56]. In between, a DM spike forms. The full DM profile around an SMBH is,
\[\rho(r)=\begin{cases}0,&r<r_{0},\ \text{(Capture Region)},\\ \frac{\rho_{\mathrm{sp}}(r)\,\rho_{\mathrm{in}}(r,t)}{\rho_{\mathrm{sp}}(r)+\rho_{\mathrm{in}}(r,t)},&r_{0}\leq r<r_{b},\ \text{(Spike)},\\ \rho_{b}(r_{b}/r)^{\gamma_{c}},&r_{b}<r<D,\ \text{(Halo)}.\end{cases} \tag{14}\]
The spike profile \(\rho_{\mathrm{sp}}(r)\equiv\rho_{b}(r_{b}/r)^{\gamma_{\mathrm{sp}}}\), with \(\rho_{b}\equiv 0.3\,\mathrm{GeV}/\mathrm{cm}^{3}\times(D/r_{b})^{\gamma_{c}}\), depends on the distance \(D=8.5\,\mathrm{kpc}\) between the Milky Way galaxy center and our solar system. The two power indices \(\gamma_{sp}\) and \(\gamma_{c}\) are adjustable parameters with \(0.9\leq\gamma_{c}\leq 1.2\) [57; 58]. The annihilation plateau density, \(\rho_{\mathrm{ann}}\equiv m_{\chi}/\left\langle\sigma v\right\rangle t\), determined by the DM annihilation cross section, is scaled by the power index \(\gamma_{\mathrm{in}}=1/2\) to give the inner profile \(\rho_{\mathrm{in}}(r,t)\equiv\rho_{\mathrm{ann}}(t)(r/r_{\mathrm{in}})^{-\gamma_{\mathrm{in}}}\).
In principle, we should include all the contributions along the line of sight. However, the annihilation outside the spike is negligible due to the lack of the Breit-Wigner enhancement. We therefore take the integration upper limit in Eq. (13) to be the outer boundary \(r_{b}\) of the spike.
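The piecewise profile of Eq. (14) is simple to code; the radial integral of Eq. (13) then weights it by \(4\pi r^{2}\rho^{2}(r)\) between \(4GM\) and \(r_{b}\). In the sketch below the scale radius \(r_{\rm in}\) is kept as a free parameter (its value is set in the original work), and any numbers passed in would be placeholders.

```python
import numpy as np

def dm_density(r, r0, rb, rho_b, gamma_sp, gamma_c, rho_ann, r_in, gamma_in=0.5):
    """Piecewise DM density of Eq. (14); all radii in a common length unit."""
    rho_sp = rho_b * (rb / r)**gamma_sp              # spike profile
    rho_in = rho_ann * (r / r_in)**(-gamma_in)       # annihilation-plateau profile
    spike = rho_sp * rho_in / (rho_sp + rho_in)      # saturated spike
    halo = rho_b * (rb / r)**gamma_c                 # generalized NFW-like outer halo
    return np.where(r < r0, 0.0, np.where(r < rb, spike, halo))
```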
Requiring the predicted gamma-ray flux to be below the Fermi-LAT data (4FGL J1745.6-2859 around \(\mathrm{Sgr}\,A^{*}\) [59; 60] from August 4, 2008 to May 26, 2023) at 95% C.L. for each bin, the allowed model parameter space is shown in Fig. 3 as colored regions. For illustration, we have taken \(\gamma_{c}=1.2\). For \(\gamma_{\mathrm{sp}}=1.8\), the mass difference \(\delta\) can be either large or small. A small \(\delta\) increases the annihilation cross section but decreases the DM spike density. On the other hand, a large \(\delta\) makes it difficult for DM particles to reach the resonance pole and annihilate. Both cases lead to a smaller photon flux. With increasing \(\gamma_{\mathrm{sp}}\) (deeper color), the DM density profile becomes sharper and the constraint becomes stronger.
**Conclusions and Discussions** - We propose a new Breit-Wigner enhancement scenario for DM annihilation with the mediator mass larger than twice the DM mass. While the DM annihilation is highly suppressed, evading the CMB constraint when the temperature cools down, the DM freeze-out in the early Universe and its reactivation around an SMBH are enhanced by the \(s\)-channel resonance once the kinetic energy is large enough. For illustration, we construct a UV-complete RHN DM model for the mass range from \(\mathcal{O}(0.1)\,\mathrm{GeV}\) to \(\mathcal{O}(100)\,\mathrm{GeV}\).
If sourced by DM annihilation, the observed positron excess requires a DM mass above \(10\,\mathrm{GeV}\). Although the DM annihilation in our scenario cannot reach the resonance pole to give a large boost factor at late stages, astrophysical sources can explain the positron excess equally well [61; 62]. So our DM scenario with a heavy mediator is consistent with all these observations.
Figure 3: The parameter space allowed by the constraints from each data bin of the Fermi-LAT gamma-ray observation at 95% C.L. With fixed \(\gamma_{c}=1.2\) and \(B_{i}B_{f}=0.01\), the three colored regions correspond to the three typical values \(\gamma_{sp}=(1.8,1.9,2.0)\), respectively.
## Acknowledgements
Yu Cheng and Jie Sheng would like to thank Prof. Shigeki Matsumoto for useful discussions and hospitality during their stay at Kavli IPMU where this paper was partially completed. SFG is supported by the National Natural Science Foundation of China (12375101, 12090060 and 12090064) and the SJTU Double First Class start-up fund (WF220442604). T. T. Y. is supported by the China Grant for Talent Scientific Start-Up Project and by Natural Science Foundation of China (NSFC) under grant No. 12175134, JSPS Grant-in-Aid for Scientific Research Grants No. 19H05810, and World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. Both SFG and T. T. Y. are affiliated members of Kavli IPMU, University of Tokyo.
|
2309.14432 | Quantum Memory: A Missing Piece in Quantum Computing Units | Memory is an indispensable component in classical computing systems. While
the development of quantum computing is still in its early stages, current
quantum processing units mainly function as quantum registers. Consequently,
the actual role of quantum memory in future advanced quantum computing
architectures remains unclear. With the rapid scaling of qubits, it is
opportune to explore the potential and feasibility of quantum memory across
different substrate device technologies and application scenarios. In this
paper, we provide a full design stack view of quantum memory. We start from the
elementary component of a quantum memory device, quantum memory cells. We
provide an abstraction to a quantum memory cell and define metrics to measure
the performance of physical platforms. Combined with addressing functionality,
we then review two types of quantum memory devices: random access quantum
memory (RAQM) and quantum random access memory (QRAM). Building on top of these
devices, quantum memory units in the computing architecture, including building
a quantum memory unit, quantum cache, quantum buffer, and using QRAM for the
quantum input-output module, are discussed. We further propose the programming
model for the quantum memory units and discuss their possible applications. By
presenting this work, we aim to attract more researchers from both the Quantum
Information Science (QIS) and classical memory communities to enter this
emerging and exciting area. | Chenxu Liu, Meng Wang, Samuel A. Stein, Yufei Ding, Ang Li | 2023-09-25T18:00:08Z | http://arxiv.org/abs/2309.14432v2 | # Quantum Memory: A Missing Piece in Quantum Computing Units
###### Abstract
Memory is an indispensable component in classical computing systems. While the development of quantum computing is still in its early stages, current quantum processing units mainly function as quantum registers. Consequently, the actual role of quantum memory in future advanced quantum computing architectures remains unclear. With the rapid scaling of qubits, it is opportune to explore the potential and feasibility of quantum memory across different substrate device technologies and application scenarios. In this paper, we provide a full design stack view of quantum memory. We start from the elementary component of a quantum memory device, quantum memory cells. We provide an abstraction to a quantum memory cell and define metrics to measure the performance of physical platforms. Combined with addressing functionality, we then review two types of quantum memory devices: random access quantum memory (RAQM) and quantum random access memory (QRAM). Building on top of these devices, quantum memory units in the computing architecture, including building a quantum memory unit, quantum cache, quantum buffer, and using QRAM for the quantum input-output module, are discussed. We further propose the programming model for the quantum memory units and discuss their possible applications. By presenting this work, we aim to attract more researchers from both the Quantum Information Science (QIS) and classical memory communities to enter this emerging and exciting area.
## I Introduction
Emergence of quantum computing and quantum networking has sparked tremendous excitement in the scientific and technological communities due to their potential to revolutionize various fields. Quantum computing harnesses the principles of quantum mechanics to perform computations exponentially faster than classical computers, offering the possibility of solving complex problems in cryptography [1; 2], optimization [3; 4], quantum chemistry [5; 6], and finance [7; 8], etc. Furthermore, quantum networks enable the transmission of quantum information across long distances [9; 10; 11; 12], facilitating secure communication [13; 14; 15; 16] and the creation of sophisticated quantum network protocols and quantum internet [17; 18; 19; 20; 21; 22]. The usefulness of quantum computing systems and their connected networks lies in their ability to tackle computational challenges that are currently intractable, paving the way for significant advancements in science, industry, and society as a whole.
One of the main goals of quantum computing research is to scale up the quantum computing systems and build a fault-tolerant large-scale quantum computer. In recent years, a lot of efforts have been demonstrated. IBM-Q demonstrates the Eagle device featuring 127 physical qubits [23] and the Osprey device with 433 qubits [24], Google's Sycamore consists of 54 qubits [25], Quantum-num has its 'H2' device with 32 trapped ion qubits [26], IonQ devices can hold more than 20 qubits [27], QuEra also demonstrated its 256-qubit quantum simulator [28; 29], etc. Although integrating thousands of qubits into a single quantum chip is possible [30] in the near future, there are still challenges in building such a large-scale fault-tolerant quantum computer along the current route.
In the current route to reach this goal, one of the main challenges comes from the physical difficulty of integrating a huge number of physical qubits as quantum registers into a single quantum device. For example, the physical size of the quantum chip limits the number of superconducting qubits on the same chip [30; 31], while the electromagnetic trap size limits the number of ions living inside a single trap [32]. Meanwhile, cross-talk also hinders fast gate operations in largely integrated quantum systems. However, using Shor's algorithm to break RSA may require thousands of logical qubits made by millions of physical qubits [33; 34; 35].
On the other hand, integrating a large number of quantum registers usually has a trade-off with fast and reliable gate operations on any pair of computing registers. In the 'noisy intermediate-scale quantum' (NISQ) era [36], where quantum registers are made by physical qubits, the limitation is mainly reflected by slow gate operations and limited communication fidelity. For example, in superconducting devices, the coupling between qubits on different chips hinders fast and precise gate operations [30]. In trapped ion systems, increasing the number of ions in a single trap prolongs the two-qubit gate operations [31; 37]. In the fault-tolerant quantum computing (FTQC) era, where logical qubits are protected by quantum error correction (QEC) codes [38; 39; 40; 41], maintaining gate speed between remote logical qubits is even more challenging compared to the NISQ devices, due to the limited number of physical qubits allowed in a single device. Furthermore, the limited connectivity of the physical platforms requires a large number of SWAP gates to perform a gate between two remote qubits. The SWAP gate number is also proportional to the size of the quantum computing device, which further prolongs the gate
operations.
To resolve these issues, we take inspiration from classical computing systems, where classical memory plays the central role [42; 43]. The key insight is the CPU-memory separation in the von Neumann architecture (see Fig. 1a). Similarly, in the quantum computing system design, the separation of the computing and memory devices is also desired (see Fig. 1b). With this separation, future quantum computing devices can contain two main units: a quantum processing unit for computing and a quantum memory unit for information storage. The quantum processing unit contains a small number of computing registers, which can support a set of fast and reliable universal gates. The quantum memory unit, in contrast, is designed for storing quantum information. It can contain a large number of quantum registers, which do not necessarily support a universal gate set.
In addition, due to the computing-storage separation, the issue of maintaining both fast gates and large-scale qubit integration in the same quantum computing device is avoided. The quantum memory unit only needs reliable communication with the QPU to store and load quantum information. By separating the requirements on computing and information storage, the QPU and the quantum memory can be realized using different physical techniques, one with fast gate operations for the QPU and one with a long coherence time for the quantum memory.
The main focus of the current development of quantum computing systems is to build a fault-tolerant and fast quantum processing unit. The discussion and demonstration of building quantum memory devices have attracted lots of attention in the physical community [52; 56; 57; 58]. However, the systematic discussion of the role of quantum memory in the quantum computing system architecture, whether a quantum memory unit should be considered, and how it should be utilized in quantum programs is still lacking. Therefore, in this paper, we attempt to follow a bottom-up manner through the design stack of quantum memory shown in Fig. 1c. We not only survey the current quantum technologies and their possible usage in building quantum memory devices, but also consider how the quantum memory modules can be utilized in higher stacks, e.g., in quantum programs and software. We hope our paper can fill the gap between the physical layer and software layer of the development of quantum memory and provide useful insights for research on both frontiers.
Specifically, at the bottom of the design stack, in Sec. II, we survey the existing quantum technologies for building quantum registers and discuss their suitability of building the most elementary units of quantum memory, which is named 'quantum memory cells' (QMCs). In order to unify the discussion across various physical platforms, we abstract the quantum memory cell concepts and explore the metrics that describe the essential properties of the QMCs. We then discuss quantum memory devices that are built on QMCs in Sec. III. We specifically introduce two quantum memory devices, a random access quantum memory (RAQM) and a quantum random access memory (QRAM). We highlight their abstract models and their possible applications. These quantum memory devices can then be utilized to construct quantum memory modules in the future design of quantum computing architectures, which is discussed in Sec. IV. We specifically give four examples, including the main quantum memory unit, quantum cache, quantum buffers, and QRAMs in quantum input-output modules. With the quantum memory modules available, we then discuss how the quantum memory can be utilized in the quantum program design. We discuss the quantum memory programming model in Sec. V. We discuss their possible application in Sec. VI. We conclude our paper in Sec. VII.
## II Quantum memory cells
A quantum memory cell (QMC) is the fundamental element in quantum memory devices, analogous to classical memory cells consisting of one or a few transistors for storing a single classical bit. QMCs can be made using a quantum register or a qubit. However, they have
Figure 1: The usage of quantum memory and the stack of our consideration of quantum memory. In (a), we show the outline of the classical quantum computing systems, while in (b), we show our vision of quantum computing systems, where quantum memory plays the centered role in the quantum computing system. The stack of our discussion on the role of quantum memory in future quantum computing is shown in (c). This paper’s structure is aligned with the design stacks.
unique requirements distinct from qubits used in quantum computing. Despite variations in physical systems for QMCs, we establish an abstract model to evaluate their performance uniformly and distinguish them from computing registers. Additionally, we introduce two metrics for quantitatively comparing different physical systems for quantum memory cells.
In Sec. II.1, we define the QMC abstract model along with the performance metrics. Subsequently, we summarize the main results and discussions comparing various physical systems in quantum computing and quantum information processing in Sec. II.2. A concise summary of key properties and metrics can be found in Table 1 and Fig. 3. For completeness of this section, we briefly survey the physical systems one by one in the remaining subsections, starting from Sec. II.3. These subsections are intended for readers with a particular interest in the specific quantum technologies and seeking an in-depth exploration of the references to the experiments included in our comparison. Readers who are not directly engaged with the specific physical realization or desire to focus on the broader context may choose to omit these subsections without compromising the overall coherence of the paper.
### Define a QMC
A QMC can be made of a single qubit or quantum register to store one bit of quantum information. Unlike quantum computing registers, entangling gates between QMCs are not necessary. Instead, the core functionality of a QMC only includes (1) storing quantum information, (2) saving quantum information into the QMC via controlled operations, and (3) loading the quantum information from the QMC.
The basic structure of a QMC and its related components are shown in Fig. 2. The QMC (blue box) has an interaction interface to transfer quantum information, which is the bus qubit (red). Due to the quantum no-cloning theorem, reading and writing (RW) processes cannot copy information and hence can only be realized using quantum operations. For example, in optical systems, RW operations can be realized by photon emission and absorption, while in gate-based systems, SWAP gates or iSWAP gates between the QMC and the bus qubit can be utilized. Using SWAP gates as an example, the RW process of a QMC can be described as
\[f_{\mathrm{QMC}}(\ket{\psi}^{\mathrm{(b)}},\ket{\phi}^{\mathrm{ (QMC)}}) =\mathrm{SWAP}_{\mathrm{b,QMC}}\ket{\psi}^{\mathrm{(b)}}\otimes \ket{\phi}^{\mathrm{(QMC)}}\] \[=\ket{\phi}^{\mathrm{(b)}}\otimes\ket{\psi}^{\mathrm{(QMC)}} \tag{1}\]
where the super-indices are for the physical qubits, 'b' stands for bus qubits and SWAP is a swap gate. Although we show both qubits in pure states, if the bus (memory) qubit is already entangled with other qubits, the entanglement will be swapped to the memory (bus) qubit after the SWAP gate.
The difference between the reading and writing processes of QMC lies in which part carries nontrivial quantum information. In the memory writing process, the bus qubit is in a useful quantum state to be stored, while in the memory reading process, the state of the QMC is important. One of the unique features of a QMC using SWAP gates as its RW operations is that reading and writing processes can be completed in a single SWAP, which is unlike the classical counterpart where an extra register and two separate operations are typically needed.
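A minimal state-vector sketch of the SWAP-based read/write of Eq. (1), using plain NumPy rather than any particular quantum SDK; the basis ordering and example amplitudes are illustrative.

```python
import numpy as np

# SWAP on (bus, memory) in the basis |bus, mem> = {|00>, |01>, |10>, |11>}
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def read_write(bus, mem):
    """Eq. (1): exchange the bus and memory qubit states (write and read use the same gate)."""
    return SWAP @ np.kron(bus, mem)

ket0 = np.array([1, 0], dtype=complex)
psi = np.array([0.6, 0.8], dtype=complex)      # arbitrary bus state to be stored

after_write = read_write(psi, ket0)            # |psi>_b |0>_m  ->  |0>_b |psi>_m
print(np.allclose(after_write, np.kron(ket0, psi)))   # True: the memory now holds |psi>
```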
A QMC should satisfy the following requirements in terms of its functionalities.
1. Quantum information storage: A QMC can store quantum information for an extended period of time. A good QMC should have a long storage time to preserve the quantum information. For NISQ devices, long storage time necessitates the physical qubit itself to have long coherence times, while in the FTQC era, it requires the logical qubit to have a small enough logical error rate.
2. Reading and Writing (RW) operations: A good QMC is required to have fast and accurate RW operations.
3. Integration capability: As QMCs are employed to construct large quantum memory devices, integrating a large number of QMCs becomes essential. Therefore, a promising candidate of QMCs should have large integration capabilities.
There are usually tradeoffs between achieving a long storage time and fast RW operations while maintaining good integration capability. For a physical qubit, improving the coherence time involves isolating it from its environment, while fast quantum operations require strong coupling to other physical degrees of freedom. A logical qubit encoded in a QEC code can achieve a smaller error rate by increasing its code distance, which may increase the number of
Figure 2: Demonstration of a QMC and its functionality. The QMC itself is shown as the blue cubic. It requires a bus qubit (red) which can be classically controlled to perform SWAP gates to the QMC. The read and write functionality is realized by applying a SWAP gate between a bus qubit to store and retrieve the quantum state into/out of the QMC.
physical qubits and complicate the coupling operations with other logical qubits.
In order to evaluate the performance of a QMC by considering all three requirements together, we define a metric, \(\alpha_{\text{in}}\), named _internal storage ratio_, as
\[\text{Internal storage ratio: }\alpha_{\text{in}}=\frac{T_{\text{storage}}}{T_{ \text{RW}}}, \tag{2}\]
where \(T_{\text{storage}}\) is the storage time of the QMC, while \(T_{\text{RW}}\) is the time for a read or write operation. To account for the imperfection, \(T_{\text{RW}}\) can be estimated by,
\[T_{\text{RW}}=\tau/F_{\text{RW}}\sim\tau/p_{\text{suc}}\sim\tau/\eta, \tag{3}\]
where \(\tau\) is the raw gate time, \(F_{\text{RW}}\) is the fidelity of the quantum gate, \(p_{\text{suc}}\) is the success probability of performing the gate, and \(\eta\) is the efficiency of the information storage or retrieval. The metric \(\alpha_{\text{in}}\) estimates the storage time of the QMC scaled by its RW speed. A large \(\alpha_{\text{in}}\) means the RW operations are fast relative to the storage time, which is preferred. It also means that the QMC only needs to be reset after a large number of RW operations.
Meanwhile, we consider another metric named _external storage ratio_ as,
\[\text{External storage ratio: }\alpha_{\text{ex}}=\frac{T_{\text{net, storage}}}{T_{\text{op}}}\eta, \tag{4}\]
where \(T_{\text{op}}\) is the time for a quantum operation on the computing devices the memory may be connected to, and \(\eta\) is the QMC RW efficiency or fidelity. This metric measures the storage time of the QMC relative to the essential external operations, taking the imperfection of the RW operations into account. Note that in Eq. (4), \(T_{\text{net, storage}}\) is the net storage time, \(T_{\text{net, storage}}=T_{\text{storage}}-2T_{\text{RW}}\). Therefore, when the internal storage ratio \(\alpha_{\text{in}}<2\), the external storage ratio is negative, which means that this QMC construction still needs to be further improved.
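As a numerical sketch of Eqs. (2)-(4), the helper functions below (ours; the function names and example numbers are illustrative and not taken from Table 1) compute both storage ratios from a raw gate time, an efficiency, and an external operation time.

```python
def internal_storage_ratio(t_storage, tau, eta):
    """Eq. (2), with T_RW estimated as tau/eta following Eq. (3)."""
    return t_storage / (tau / eta)

def external_storage_ratio(t_storage, tau, eta, t_op):
    """Eq. (4): net storage time over the external operation time, scaled by eta."""
    t_rw = tau / eta
    t_net = t_storage - 2 * t_rw        # negative whenever alpha_in < 2
    return t_net / t_op * eta

# Hypothetical QMC (illustrative numbers only): 1 ms storage, 100 ns raw RW gate,
# 0.99 RW efficiency, attached to a processor with 50 ns two-qubit gates.
print(internal_storage_ratio(1e-3, 100e-9, 0.99))         # ~9.9e3
print(external_storage_ratio(1e-3, 100e-9, 0.99, 50e-9))  # ~2.0e4
```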
### Comparing physical platforms for QMCs
The discussion of QMCs and their abstract model applies to both NISQ- and FTQC-era quantum memory devices. However, the requirements on the physical platforms differ in these two scenarios.
In NISQ memory devices, both the QMCs and the computing qubits are made of single physical qubits. The storage time can be estimated by the coherence time of a single physical qubit \(T_{\text{coh}}\). However, as the requirements on the two kinds of qubits are different, in the spirit of separating the computing and memory requirements, the quantum memory devices should consist of different types of physical qubits than the computing devices. Reflected in our abstracted QMC model shown in Fig. 2, the bus qubit can be a type of qubit different from the QMC, and the SWAP gates for RW operations are implemented between two physical platforms. Therefore, the RW time \(T_{\text{RW}}\) is not the two-qubit gate time typically used in characterizing computing qubits. The quality and the speed with which a specific physical platform exchanges quantum information with other physical platforms are essential for building QMCs in the NISQ era.
The rapid advancement of quantum technologies presents abundant opportunities for constructing QMCs across a diverse range of physical platforms. Nowadays, the most promising quantum substrate systems for quantum computing include superconducting qubits, microwave modes, trapped ion systems, neutral atom systems, defects and doped ions in solid-state systems, quantum dots, and mechanical and acoustic phonon systems. We examine their potential applications in building QMCs, focusing on the possibility and quality of coupling across different physical systems.
In Table 1, we summarize the main properties of these physical systems. We identify the bus qubits that have been experimentally demonstrated for these physical systems in existing works, and estimate their internal and external storage ratios. Most of the physical systems can couple to photonic bus qubits. Depending on their energy scales and available quantum transitions, QMCs can have optical or microwave photon interfaces.
As one of the applications of optical photonic systems is to build long-range quantum entanglement and quantum communication links, QMCs with optical interfaces can be integrated with quantum communication devices, especially quantum repeaters [59; 60]. In this scenario, the main role of quantum memory is to store quantum states in order to synchronize different quantum operations. As photonic EPR pairs are commonly utilized in quantum communication protocols, and given that their generation is usually slow compared to photon measurements [61], the photonic EPR pair generation time can be used as a time scale against which to measure the photonic QMC's storage time. On the other hand, photonic systems can also perform quantum computing using the measurement-based quantum computing (MBQC) scheme [62; 63; 64], where adaptive single-qubit measurements drive the computation on a pre-generated entangled resource state. In MBQC, resource-state generation is one of the most significant challenges and is usually the most time-consuming operation. The resource state can also be constructed using photonic EPR pairs [65]. To estimate the external storage ratio (\(\alpha_{\text{ex}}\)) of QMCs with optical-photon-based bus qubits, we use the photonic EPR pair generation time as an estimate of \(T_{\text{op}}\). Specifically, in Ref. [66], the photon-pair generation rate can reach 52.36 kHz (\(T_{\text{op}}\approx 19\)\(\mu\)s).
Due to the fast gate operations between superconducting qubits, we envision that transmons and fluxonium qubits will be utilized in quantum processing units, rather than quantum memory modules (see Sec. II.3 for details). In Table 1, we show the internal storage ratio \(\alpha_{\text{in}}\) of superconducting qubits for reference. Superconducting qubits can strongly couple to microwave fields (see Sec. II.4). Therefore, the QMCs with microwave
photon bus qubits are potentially integrable with superconducting-based QPUs. Entangling gate operations between superconducting qubits are used as the external operations when evaluating the QMC's external storage ratio. Specifically, we take the CZ gate between transmon qubits, \(T_{\rm op}=40\) ns with fidelity \(0.998\), reported in Ref. [67]. In Table 1, we report the possible usage of atomic clouds, spin ensembles in solid-state systems, and phononic systems for QMCs interacting with microwave and superconducting qubits. However, the performance of these systems still needs to be improved to realize the advantages of quantum memory.
In Fig. 3, we plot the internal and external storage ratios of the QMCs built on the physical systems discussed in the rest of the section. We plot \(\alpha_{\rm in}=\alpha_{\rm ex}\) as the black dashed line for reference. There are two regions in Fig. 3: (1) above the dashed line, and (2) below the dashed line. When a QMC point falls on the dashed line, the external operations and the RW operations take similar times; if the point is above the line, the external operations take less time than the RW operations, and vice versa.
Improving a QMC is reflected by a shift of the corresponding point in the storage ratio plot (Fig. 3). For example, extending the QMC coherence time improves both the internal and external storage ratios, which shifts the data point along the diagonal toward the upper right corner. Reducing the RW time of the QMC improves the internal storage ratio, which pushes the corresponding point to the right. On the other hand, there are two ways to improve a QMC's external storage ratio. One concentrates on the external system by accelerating the external operations. The other is to increase the coupling strength and efficiency between the QMC and the external quantum system. For example, the expected improvement of the trapped-ion QMCs listed in Table 1 (Yb ions) comes mainly from a longer storage time, while that of the atomic-cloud and rare-earth-ion-doped-crystal QMCs comes mainly from better RW operations.
Contrary to the NISQ-era quantum memory devices, in the FTQC era, although the quantum memory and quantum computing units are made of the same species of physical qubits, the QEC codes can be different to take advantage of the QPU-memory separation. As the quantum memory does not need to support a fault-tolerant universal gate set, the QEC code on QMCs can have high thresholds and yield, e.g., using the quantum LDPC codes [55]. Therefore, designing FTQC-era QMCs may focus more on QEC, e.g., designing QEC codes of quantum memory and its interfaces to QPUs. We stress that the internal and external storage ratios defined in Eqs. (2) and (4) can still quantify the performance of the FTQC QMC design. The storage time \(T_{\rm storage}\) can be estimated by the logical error rates, while the RW time \(T_{\rm RW}\) should take the gate operations between different QEC codes into consideration.

\begin{table}
\begin{tabular}{l|l|l|l|l|l|l}
QMC & \(T_{\rm storage}\) & Bus & RW speed (\(T_{\rm RW}\)) & Efficiency (\(\eta\)) & \(\alpha_{\rm in}\) & \(\alpha_{\rm ex}\) \\ \hline
Transmon & 557.00 \(\mu\)s & N.A. & 40.00 ns & 0.998 & \(1.39\times 10^{4}\) & \\ \hline
Fluxonium & 1.48 ms & N.A. & 100.00 ns & 0.999 & \(1.48\times 10^{4}\) & \\ \hline
MW (3D) & 34.00 ms & SC & 1000.00 ns & 0.994 & \(3.38\times 10^{4}\) & \(8.44\times 10^{5}\) \\ \hline
Trapped ions & 300 ms (Ca) & Optical & 29.94 ns & 0.509 & \(5.10\times 10^{6}\) & \(8.00\times 10^{3}\) \\
 & 5500.00 s (Yb) & Optical (herald) & 16.13 ms & 0.901 & \(3.07\times 10^{5}\) & \(2.59\times 10^{8}\) \\ \hline
Neutral atoms & 800.00 \(\mu\)s (Rb) & Optical & 8.00 \(\mu\)s & 0.510 & \(51.0\) & \(20.5\) \\
 & 7.90 s (Yb) & Optical & & & \(5.04\times 10^{5}\) & \(2.11\times 10^{5}\) \\
 & & Optical (expect) & 182.48 ns & & \(2.21\times 10^{7}\) & \(2.11\times 10^{5}\) \\ \hline
Atomic cloud & 16.00 s & Optical & 1.04 \(\mu\)s & 0.510 & \(7.84\times 10^{6}\) & \(4.27\times 10^{5}\) \\
 & & Optical (expect) & & 0.9518 & \(1.46\times 10^{7}\) & \(7.97\times 10^{5}\) \\
 & 800.00 \(\mu\)s & MW/SC & 25.00 \(\mu\)s & 0.6 & 19.2 & \(1.06\times 10^{4}\) \\ \hline
REIDC & 52.9 min & Optical & 400.00 ms & 0.0608 & \(4.83\times 10^{2}\) & \(1.01\times 10^{7}\) \\
 & & Optical (expect) & & 0.5187 & \(4.12\times 10^{3}\) & \(8.62\times 10^{7}\) \\ \hline
NV Nuclear spin & 12.90 s (C) & NV (e) & 419.00 \(\mu\)s & 0.99 & \(3.05\times 10^{4}\) & \\
 & 63.00 s (N) & & 389.00 \(\mu\)s & 0.94 & \(1.52\times 10^{5}\) & \\ \hline
NV ensemble & 200 ns & MW/SC & 58.0 ns & 0.3742 & \(1.29\) & \(<0\) \\
 & 1.80 ms & MW (expect) & & & \(1.16\times 10^{4}\) & \(1.67\times 10^{4}\) \\ \hline
QD & 58.95 ns & SC & 23.81 ns & 0.80 & \(1.98\) & \(<0\) \\
 & 102.00 \(\mu\)s & SC (expect) & & & \(3.43\times 10^{3}\) & \(2.03\times 10^{3}\) \\ \hline
Phonons & 130.00 \(\mu\)s (GHz) & MW/SC & 25.00 ns & 0.95 & \(4.94\times 10^{3}\) & \(3.02\times 10^{3}\) \\
 & & Optical & 714.29 ns & 1 (assume) & \(1.82\times 10^{2}\) & \(6.73\) \\
 & 100 ms (MHz) & Optical & 714.29 ns & 1 (assume) & \(1.40\times 10^{5}\) & \(5.24\times 10^{3}\) \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of physical techniques that can be utilized to implement quantum memory cells. We consider the physical platforms that have been surveyed in the main text. Details about our calculation and the corresponding references can be found in the main text. We consider the QMC storage time \(T_{\rm storage}\), possible bus qubits, the RW time \(T_{\rm RW}\) and its efficiency \(\eta\), the internal storage ratio \(\alpha_{\rm in}\) [Eq. (2)], and the external storage ratio \(\alpha_{\rm ex}\) [Eq. (4)]. The transmon and fluxonium systems are used for homogeneous computing systems, and hence \(\alpha_{\rm ex}=\alpha_{\rm in}\). ‘MW’: microwave photons, ‘SC’: superconducting qubits. We use the transmon to estimate the external storage ratio when the QMC can connect to superconducting qubits.
With the focus on building QMCs, to enable fast QEC cycles on memory registers, the physical qubits should support fast and accurate measurements and gate operations between them. In addition, the physical qubits should support the topology required by the QEC codes. These requirements coincide with those for NISQ-era quantum computing qubits, and hence the discussion of the physical qubits used in FTQC-era quantum memory is beyond the scope of this paper.
### Superconducting qubits
Superconducting circuit systems are one of the most promising systems for quantum computing [68, 69]. They feature strong nonlinearity provided by Josephson junctions. This nonlinearity allows for constructing quasi-atom structures capable of quantum state manipulation, rapid initialization, quantum gate operations, and readout of the quasi-atoms' quantum states. However, the coherence time of superconducting qubits is relatively short. For example, the coherence time of transmon qubits [70, 71, 72] can reach \(T_{2}^{*}\approx 0.3\) ms [73, 74], and can be further improved to 0.557 ms with dynamical decoupling. A fluxonium qubit [75] with coherence time \(T_{2}^{*}\) reaching 1.48 ms has been reported [76]. In contrast, taking the spin states of atoms and ions as an example, the coherence time can reach several seconds or even tens of minutes [77, 78, 79, 80, 81, 82].
On the other hand, superconducting qubits can perform fast and high-fidelity single- and two-qubit gate operations. For example, transmon qubits can perform CZ gates in 40 ns with a fidelity of 99.8% [83, 84, 67], while a microwave-activated CZ gate between fluxonium qubits only takes \(\sim 100\) ns with a fidelity of 99.9% [85, 86]. Due to these features, superconducting qubits can be leveraged as computing registers in quantum computing systems. In fact, utilizing the fast gate operations provided by superconducting qubits has been discussed in the context of hybrid quantum computing [87, 88, 89]. Therefore, in the NISQ-era quantum computing system design, we skip the discussion of using superconducting qubits as quantum memory cells. In the FTQC era, however, where QEC is needed on quantum memory cells, the fast gate operations enable fast syndrome checking and error-correction cycles, making superconducting qubits a good candidate for the physical qubits of logical quantum memory cells.
In order to compare the performance of QMCs made from other systems with a homogeneous superconducting-qubit-based quantum computing system, we use the transmon and fluxonium coherence times and gate times to estimate their storage ratios as a reference. Note that in this case, the internal and external storage ratios are essentially identical, so we only report the internal storage ratio. The internal storage ratio of transmon qubits can reach \(\alpha_{\text{in}}\approx 1.39\times 10^{4}\), where we take \(T_{\text{RW}}=40\) ns and \(F=0.998\). The internal storage ratio of fluxonium qubits can reach \(\alpha_{\text{in}}\approx 1.48\times 10^{4}\), where we take \(T_{\text{RW}}=100\) ns and \(F=0.999\).
### Microwave cavities and resonators
With the development of superconducting qubits, quantum manipulation of microwave photonic states has become available, which has attracted a lot of attention recently [90]. Due to improved microwave cavity fabrication techniques, microwave photon lifetimes inside a cavity keep increasing, making microwave cavities potential QMCs for quantum information storage. There are two approaches to utilizing microwave resonators as QMCs: (1) using the Fock states of physical microwave photons, or (2) using bosonic-QEC-code-encoded microwave modes as qubits to store quantum information.
In the first approach, a QMC is encoded in the presence of a single photon in a cavity mode. The storage time of the QMC is largely determined by the lifetime of a photon inside the cavity [91]. The lifetime of 3D microwave cavities can reach 10.4 ms [92] to 2 s [93]. In this case, one choice of the bus qubit of the microwave QMCs is a propagating microwave photon. The QMC read process is the photon emission from the microwave cavity, while the write process is the microwave photon absorption. The QMC, i.e., the microwave cavity or resonator, couples to a microwave waveguide or coaxial cable that holds the itinerant microwave photon qubit. The coupling needs to be well controlled for a high storage fidelity and RW operation efficiency [94; 95; 96]. The absorption efficiency can reach 99.4% [95].

Figure 3: The internal (\(\alpha_{\text{in}}\)) and the external storage ratios (\(\alpha_{\text{ex}}\)) of QMCs built on different physical platforms. We include superconducting qubits (blue circles), microwave modes (orange squares), trapped ions (labeled as ’TI’, shown as green diamonds), Rubidium and Ytterbium trapped neutral atoms (reddish-orange triangles), neutral atomic clouds (labeled as ’AC’, purple inverted triangles), rare-earth-ion-doped crystals (REIDC) (hollow brown circles), NV centers (hollow pink squares), quantum dots (hollow yellow diamonds), and phononic modes (hollow lavender triangles). ‘O’ stands for connecting to optical systems, while ‘MW’ is for connecting to microwave systems. The data points labeled as ‘expect’ are estimated by the best performance parameters in the specific system. The detailed numbers are discussed in the corresponding subsections and summarized in Table 1. For external storage ratios \(\alpha_{\text{ex}}<0\) and the external storage ratio of NV-nuclear-spin-based QMCs, we set them to 0.5 to be able to present them in the plot.
A more promising approach is to couple the microwave systems with superconducting qubits for fast computing operations. However, compared to the microwave cavity, the superconducting qubit has a shorter coherence time, which can degrade the microwave-cavity-based QMCs. With the superconducting control qubit built in, the lifetime of the cavity photon can still reach 2 ms [97] to 25.6 ms [98], while the photon coherence time can reach 34 ms [98], which is still significantly longer than that of superconducting qubits. Moreover, a superconducting bus qubit can maintain fast bus-QMC RW operations (1 \(\mu\)s as pointed out in Ref. [97]), which gives microwave QMCs an internal storage ratio \(\alpha_{\rm in}\approx 3.4\times 10^{4}\), where we estimate the RW fidelity as 0.994 (from the efficiency reported in Ref. [95]) since it is not explicitly reported in Refs. [97; 99]. In addition, if the microwave-based QMC is connected with superconducting quantum computing devices, the computing operation time can be estimated by the two-qubit gate time between two transmon qubits, \(T_{\rm op}=40\) ns (\(F=99.8\%\)), and the external storage ratio can reach \(\alpha_{\rm ex}\approx 8.4\times 10^{5}\).
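As a quick consistency check (ours, not part of the original analysis), plugging the numbers quoted in this paragraph into Eqs. (2)-(4) reproduces the Table 1 entry for the 3D microwave cavity:

```python
# Consistency check (ours) of the 3D-cavity entry in Table 1: 34 ms photon coherence,
# 1 us bus-QMC RW time, eta ~ 0.994, and a 40 ns transmon CZ gate as T_op.
t_storage, tau, eta, t_op = 34e-3, 1e-6, 0.994, 40e-9
t_rw = tau / eta
alpha_in = t_storage / t_rw                     # ~3.4e4
alpha_ex = (t_storage - 2 * t_rw) / t_op * eta  # ~8.4e5
print(f"alpha_in ~ {alpha_in:.2e}, alpha_ex ~ {alpha_ex:.2e}")
```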
The other approach uses the quantum error correction code encoded microwave state as the QMC qubits. Recently, promising results have been shown in the microwave qubits encoded in GKP, cat, and other bosonic codes [100; 101; 102; 103; 104; 105; 106]. In this case, the bus qubit needs to be a superconducting qubit, where the QMC read and write processes are equivalent to decoding and encoding the quantum information of the QMC qubit. Evaluating the performance of the logically encoded QMCs requires a detailed design of the error correction code used in the QMC and the bus qubits, which is beyond the scope of this paper.
With their multimode feature, microwave cavities are well-suited for large integration of QMCs. However, accessing different QMCs inside a single cavity is limited by the number of transmon qubits that couple to the cavity mode, typically kept low for high cavity coherence. Therefore, integrating multiple multi-mode cavities can be a promising approach [107]. However, finding the best strategy for microwave-cavity-based QMCs requires a comprehensive consideration of the memory device requirements, the cavity design, and the connectivity of the cavities, etc.
### Trapped ions
The trapped ion system is one of the popular systems not only for quantum memory but also for universal quantum computing. For a thorough review of trapped ion systems, we suggest referring to Refs. [108; 109; 110; 37]. In trapped ion systems, the ions are trapped using radio-frequency Paul traps [111] and other types of electromagnetic traps [112; 113; 114; 115]. The quantum information can be stored in the spin states of the electrons or the nuclei, which can have a long coherence time, making them suitable for quantum memory. Specifically, quantum information can be encoded into the hyperfine levels [116; 117; 81; 82], Zeeman sublevels of the same orbital [119; 120; 121], or other quantum states in the specific ion level structures [122; 123; 124]. Depending on the species of the ions and the type of encoding, the coherence time of the qubits can vary significantly. For instance, in Zeeman qubits, the coherence time can reach 300 ms [120], while the hyperfine states are more isolated and the coherence time can reach several minutes to an hour [116; 81; 82] (5500 s reported in Ref. [82]).
Besides fast gate operations between trapped ions [125; 126; 127; 26], trapped ions can have strong coupling with optical cavity modes [128; 129; 130; 131; 132]. In addition, they can also be used as single-photon emitters, which can be pumped to generate entangled ion-photon pairs [133; 134; 135]. This enables trapped ions to serve as QMCs for quantum computing systems with optical interfaces, photonic quantum computing systems, and quantum communications. In terms of using stationary photonic qubits living in the optical cavity mode as the bus qubit, a coupling strength \(g/2\pi=16.7\) MHz (\(T_{\rm RW}\approx 30\) ns) has been demonstrated [132]. With the 300 ms long coherence time achieved in the Zeeman qubit of Ca\({}^{+}\) ions [120], the internal storage ratio can reach \(\alpha_{\rm in}\approx 5.1\times 10^{6}\), where the RW efficiency is approximated using the photon linewidth and the ion-photon coupling as \(\eta\approx 1-g/\Delta\omega\approx 50.9\%\).
On the other hand, the generated entangled ion-photon pairs can be used in heralded entanglement generation schemes to create entanglement between ions and other photon emitters [136; 137; 138; 139]. In this scenario, for a trapped-ion-based QMC, the bus qubit is another trapped ion, which is used as a single-photon emitter. Using the heralded entanglement generation scheme, the bus ion is entangled with another computing qubit and forms a Bell state [140; 141; 142]. The bus ion can then be entangled with the QMC ion and transfer quantum information to the computing qubit. In this scheme, the RW time is determined by the bus-computing entanglement generation time, \(T_{\rm e\text{-g}}\), and a two-qubit gate time between ions \(T_{\rm Gate}\). The entanglement generation is probabilistic, with a success probability determined by the photon loss and photon detection efficiency. However, the generation process can be performed in parallel to speed up the generation time [143]. Therefore, the internal storage ratio,
in this case, depends heavily on the specific setup.
Here we give an approximate estimate, solely to provide a qualitative understanding of the performance of ion-based QMCs in this case. In Ref. [135], entangled Yb ion-photon pairs can be generated with fidelity 90.1% and rate 62 Hz (\(T_{\text{e-g}}\approx 16.1\) ms). As the two-qubit gates between ions can be implemented in 10 to 600 \(\mu\)s with fidelity \(>99\%\) [117, 118, 127, 27], the entanglement generation time dominates. If the Yb ions can maintain a long coherence time (5500 s reported in Ref. [82]), the internal storage ratio can reach \(\alpha_{\text{in}}\approx 3.1\times 10^{5}\), where we only account for the contribution of \(T_{\text{e-g}}\) to the RW time.
Using trapped-ion-based QMCs for optical quantum computing and quantum communication, with the 300 ms coherence time of the Zeeman qubits of Ca\({}^{+}\) [120], the external storage ratio can reach \(\alpha_{\text{ex}}\approx 8.0\times 10^{3}\). We estimate the RW operations based on Ref. [132] as above, and use the photonic EPR pair generation time as the external operation time (19 \(\mu\)s, see Sec. II.2). For a Yb-ion-based QMC with the RW scheme mentioned above, the external storage ratio can reach \(\alpha_{\text{ex}}\approx 2.6\times 10^{8}\) with a 5500 s storage time.
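A rough back-of-the-envelope check (ours) of the Yb-ion estimates above, treating the heralded entanglement generation as the dominant RW step and its fidelity as the RW efficiency:

```python
# Back-of-the-envelope check (ours) of the Yb-ion numbers: heralded ion-photon
# entanglement at 62 Hz dominates the RW time, the 0.901 fidelity is used as the
# RW efficiency, storage time is 5500 s, and T_op is the ~19 us photonic EPR time.
rate_eg, eta, t_storage = 62.0, 0.901, 5500.0
t_rw = (1.0 / rate_eg) / eta                    # ~17.9 ms effective RW time
alpha_in = t_storage / t_rw                     # ~3.1e5
t_op = 1.0 / 52.36e3                            # ~19.1 us
alpha_ex = (t_storage - 2 * t_rw) / t_op * eta  # ~2.6e8
```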
In addition, the spin degree of freedom of the ions can couple to electromagnetic fields in the MHz to GHz range, which enables superconducting qubits as bus qubits to couple to superconducting quantum computing devices. Direct coupling between single microwave photons and single trapped ions is possible, but the coupling strength is estimated to be on the order of tens of Hz [144]. This slow coupling hinders using single microwave photons as bus qubits to couple to superconducting qubits. Instead, another approach using an oscillating electric field to drive sideband transitions of the ions can provide \(\sim 60\) kHz coupling between an ion and superconducting qubits. Even though this coupling strength gives RW times much smaller than the coherence time of trapped ions, the RW time can be challenging as it is slow compared to the superconducting qubit coherence time (see the more detailed discussion in Sec. II.3).
Furthermore, trapped-ion-based QMCs are suitable for large-scale integration. Since ions are naturally identical, there are no fabrication imperfections that limit the performance of individual QMCs. Specifically, commercial companies have already demonstrated and made publicly accessible 20- to 32-qubit trapped-ion quantum computing units [26, 27]. In addition, hundreds of ions can be trapped in a single 1D or 2D ion trap, which shows the capability to construct quantum devices with large numbers of trapped ions [145, 146, 147]. Unlike trapped-ion quantum computing devices, which discourage large numbers of trapped ions in a single trap due to the difficulty of driving two-qubit gates by selectively driving a single phonon mode of the trapped-ion array, trapped-ion-based quantum memory devices do not require two-qubit gates between the QMCs, which relaxes the requirements on integrating trapped-ion QMCs. However, to reduce the RW latency, designing the structure of quantum memories consisting of many QMCs to minimize the transport time is an important question.
### Neutral atoms
Neutral atoms share several advantages with ion systems, including their intrinsic long coherence time brought by the spin degrees of freedom for information storage, the capability of precise control, and the ease of integration. However, neutral atoms have their own features, which distinguish themselves from ion-based QMCs. There are several strategies to encode quantum information into states of single atoms. Other than the hyperfine ground states of the Rydberg atoms [148, 77, 78, 79] (named as GG qubits in Ref. [150]), the ground state of an atom and its Rydberg excited state, i.e., a highly excited electronic state, can also be used to encode the qubit \(|0\rangle\) and \(|1\rangle\) states [151, 152] (GR qubits mentioned in Ref. [153, 150]). The coherence time of the qubits varies according to the species of qubits and the trapped atoms, ranging from a few microseconds to several seconds [154, 155, 156, 80, 149, 80, 149].
Besides coupling to another neutral atom leveraging the "Rydberg blockade" effect [156, 157, 158], neutral atoms also have strong coupling to optical light, which makes optical interfaces (photonic bus qubits) possible. In Ref. [159], two trapped Rb atoms interacting with photonic qubits with RW efficiency \(\eta^{2}\approx 26\%\) have been demonstrated. With the coherence time of the Rb atoms reported in the same experiment (800 \(\mu\)s) and the address pulse durations (8 \(\mu\)s) for the RW time, the internal storage ratio is \(\alpha_{\text{in}}\approx 51.0\). Using the QMC for optical quantum computing, where the operation time is estimated as 19 \(\mu\)s (see Sec. II.2 for a detailed discussion), the external storage ratio reaches \(\alpha_{\text{ex}}\approx 20.5\). Note that other atomic species can have a much longer coherence time. If Yb atoms can achieve the same RW operations, the corresponding internal and external storage ratios can reach \(\alpha_{\text{in}}\approx 5.0\times 10^{5}\) and \(\alpha_{\text{ex}}\approx 2.1\times 10^{5}\), respectively (\(T_{\text{coh}}=7.9\) s demonstrated in Ref. [80]). Strong coupling between a single atom and optical modes has also been demonstrated in experiments, where \(g/2\pi=3.2\) MHz is shown in Ref. [160]. Although in Ref. [160] it is Cs atoms that couple strongly to the optical modes, the RW operations of general atom-based QMCs are expected to be further improved in the future. With the coupling strength \(g/2\pi=2.74\) MHz, the RW operations take \(T_{\text{RW}}=\pi/g\approx 182.5\) ns with efficiency \(\eta^{2}\approx 26\%\). The internal and external storage ratios can reach \(\alpha_{\text{in}}\approx 2.2\times 10^{7}\) and \(\alpha_{\text{ex}}\approx 2.1\times 10^{5}\), respectively, if the coherence time of the QMC is 7.9 s.
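A short sketch (ours) of the 'expected' neutral-atom estimate above, using the text's convention \(T_{\rm RW}=\pi/g\) and \(\eta=\sqrt{0.26}\), with the \(\approx 19\) \(\mu\)s photonic EPR-pair generation time as \(T_{\rm op}\):

```python
import math

# Sketch (ours) of the 'expected' neutral-atom estimate: T_RW = pi/g with
# g/2pi = 2.74 MHz, eta = sqrt(0.26), 7.9 s storage, and T_op ~ 19 us.
g = 2 * math.pi * 2.74e6                                # coupling strength in rad/s
t_rw = (math.pi / g) / math.sqrt(0.26)                  # effective RW time, ~358 ns
alpha_in = 7.9 / t_rw                                   # ~2.2e7
alpha_ex = (7.9 - 2 * t_rw) / 19e-6 * math.sqrt(0.26)   # ~2.1e5
```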
In addition, a cloud of neutral atoms can also be optically controlled to serve as a QMC for optical photons. When a cloud of atoms coherently couples to the same field, quantum interference can boost the coupling strength between the field and the collective mode of the atoms [91]. To control the photon absorption and emission,
electromagnetically induced transparency (EIT) is one of the most commonly adopted ways to controllably absorb and emit bus optical photons, i.e., to achieve RW operations. EIT-based atomic-cloud photonic quantum memories have been demonstrated in both cold and warm atomic ensembles [161; 162; 163; 164; 165; 166]. An EIT-based room-temperature atomic ensemble can have a \(\sim 0.9\) ms storage \(1/e\) lifetime [174; 175] and in total a 1 s storage time is possible [169], while cold atom clouds with dynamical decoupling can extend the lifetime to 16 s [162]. For classical light pulse storage, the retrieval efficiency (equivalent to \(\eta^{2}\) used in Eq. (4)) of the atomic quantum memory can reach 92% [170], while light storage down to the single-photon level has also been demonstrated [165; 166; 177; 178].
In the EIT-based QMCs made of atomic clouds, the RW operation is achieved by absorbing and emitting light pulses carrying the quantum information. Whenever the quantum information in the QMC is read, i.e., the light pulses are re-emitted from the optical medium, the QMC gets reset. In the writing process, the writing time is determined by the speed of turning the control light pulse off. This time is comparable to the signal light pulse duration. In the cold-atom cloud system, the storage time can be estimated by the \(1/e\) lifetime, where we adopt 16 s from Ref. [162]. We estimate the internal storage ratio using the read and write full-process efficiency \(\eta^{2}=0.26\) and the control light FWHM duration of 1040 ns reported in Ref. [162] as the RW time, which results in \(\alpha_{\rm in}\approx 7.8\times 10^{6}\). Leveraging the QMCs for optical quantum applications (\(T_{\rm op}\approx 19\)\(\mu\)s), the external storage ratio is \(\alpha_{\rm ex}\approx 4.3\times 10^{5}\). If the full-process efficiency can be improved to 0.906 as reported in Ref. [172], the internal and external storage ratios can be improved to \(\alpha_{\rm in}\approx 1.5\times 10^{7}\) and \(\alpha_{\rm ex}\approx 8.0\times 10^{5}\), respectively.
Furthermore, similar to the ions, the spin levels of atoms can also couple to microwave photons. However, the coupling between a single atom and a single microwave photon is weak [178; 144]. Therefore, a cloud of atoms is leveraged to enhance the coupling strength [178; 179; 180; 177; 179; 181]. Specifically, in Ref. [178], an ensemble of Rb atoms coherently couples to a microwave field, enabling an atomic Rabi frequency of 20 kHz. Based on the Rabi oscillation, we estimate the RW time as 25 \(\mu\)s with efficiency 0.6. Assuming the storage time can still reach 800 \(\mu\)s as demonstrated in Ref. [159], the internal storage ratio is \(\alpha_{\rm in}\approx 19.2\) [182]. Supposing the microwave field can couple to transmon qubits for computing, which takes another 1 \(\mu\)s for the microwave-transmon coupling with efficiency \(\eta\approx 0.994\) (see Sec. II.4), the external storage ratio can be \(\alpha_{\rm ex}\approx 1.1\times 10^{4}\). Other attempts to construct coherent coupling between microwave fields in superconducting coplanar waveguides and a beam of Rydberg helium atoms have been demonstrated in experiments [183; 184; 185]. However, limited by the coherence time and the coupling strength, high-fidelity single-microwave-photon-level operations still need to be demonstrated; these are needed to connect the QMCs with superconducting quantum computing devices.
Similar to trapped-ion systems, neutral-atom-based QMCs allow large-scale integration and share similar benefits. In addition, as neutral atoms are trapped in optical lattices, which readily enables higher-dimensional neutral atom arrays [186; 187; 188; 189], higher-dimensional integration of neutral-atom-based QMCs into quantum memory devices is viable. Moreover, the controlled removal of missing sites in the optical lattices [190; 191; 28; 155] and the coherent transport of trapped atoms have been demonstrated [78], which enables more compact quantum memory devices.
### Rare-earth-ion-doped solid-state systems
Similar to the trapped ion systems, the quantum states of ions doped into solid-state systems can also be precisely addressed and quantum manipulated. Among different species of ions, the rare-earth-ion-doped (REID) solid-state system is another attractive system for building optical quantum memory [192; 193; 194; 195; 196; 197; 198; 199], where an ensemble of doped ions is collectively manipulated. Recently, atomic-frequency-comb (AFC) based methods have been widely adopted for building long-storage-time on-demand REID-based optical quantum memories [200; 201; 202; 203; 204; 205; 206]. In the AFC-based photon absorption, a sequence of narrow control pulses is sent to a broadband optical medium to carve out an equally spaced absorption spectrum, named a 'frequency comb' [207]. The incident light can then be absorbed into the medium, exciting the medium atoms, and be re-emitted after a period of time. The storage time is determined by the spectral spacing between the teeth of the frequency comb. In order to make the quantum memory on-demand, the optical excitation stored in the optical medium is then converted into another excitation, e.g., a spin-wave excitation of the medium [200; 201; 202; 205; 206]. This scheme also takes advantage of the long coherence time of the spin states compared to the optically excited states. A long storage time of up to 52.9 min with dynamical decoupling has been demonstrated [206]. An efficiency of 26.9% has been reached for the AFC storage and retrieval. Converting the excitations to spin waves reduces the overall efficiency, causing the full-process efficiency to drop to \(\approx 7\%\) in experiments [208]. The conditional storage fidelity can reach 99% [196; 208]. Further improving the retrieval efficiency remains a challenge for rare-earth-ion-doped solid-state optical memories.
To estimate the REID QMCs' internal and external storage ratios, we notice that the AFC technique requires a preparation step before the RW operations [206; 208], i.e., a control light pulse is needed to prepare the medium's absorption spectrum into a frequency comb. This preparation time differs from the RW time in the definition of the internal storage ratio \(\alpha_{\rm in}\) (see Eq. (2)). Nevertheless, we can still treat it as a time overhead to
estimate the bare RW time \(\tau\). Based on the experiment reported in Ref. [206], the internal storage ratio can be estimated as \(\alpha_{\rm in}\approx 4.8\times 10^{2}\), where we adopt the \(52.9\) min coherence time as the storage time, and the storage efficiency with dynamical decoupling as the RW efficiency \(\eta^{2}\approx 0.37\%\). As the exact time for the preparation pulses is not explicitly reported in Ref. [206], we estimate it by the \(400\) ms reported in Ref. [208]. The corresponding external storage ratio is estimated as \(\alpha_{\rm ex}\approx 1.0\times 10^{7}\), with a photonic EPR pair generation time \(T_{\rm op}=19\)\(\mu\)s. This shows that the QMC has a long storage time compared to a fast EPR generation process, but a relatively short storage time compared to its own RW operations (storage preparation time overhead). One limitation is the low efficiency. If the storage efficiency can be improved to \(26.9\%\), which is the AFC efficiency reported in Ref. [208], the internal storage ratio can be improved to \(\alpha_{\rm in}\approx 4.1\times 10^{3}\). Accordingly, the external storage ratio can reach \(\alpha_{\rm ex}\approx 8.7\times 10^{7}\).
Similar to the trapped ions, the spin levels of the ions can couple to microwave fields, which makes a microwave-based bus qubit possible for quantum memory. There have been several experimental attempts to use the spin states and hyperfine spin states of the ions to store microwave fields [209, 210, 211]. However, coherent storage and readout of a quantum microwave state have yet to be demonstrated, which is required to fully establish the feasibility of using REID crystals as a robust and reliable microwave memory.
### Solid-state defect centers
Solid-state defect centers have emerged as promising candidates for quantum memory due to their long coherence times and controllable electronic and nuclear spin states. Among all solid-state defect centers, the nitrogen-vacancy (NV) centers in diamond are among the most promising ones due to their long spin coherence time even at room temperature [212, 213]. The negatively charged NV centers are spin-1 systems, where the electronic spin states can be used to encode quantum information. The spin states can have a \(1.8\) ms lifetime in isotopically pure diamond samples without dynamical decoupling [214]. Although the nearby nuclei with nonzero spin create a spin bath and decohere the spin states, the coherence time can be greatly extended with dynamical decoupling [215, 216, 217], to \(1.58\) s [216] for electronic spin states.
One natural choice of bus qubits for QMCs built on NV electronic spin states can be optical photons, as NV centers can be used as single-photon emitters. The entanglement between NV center spin states and the emitted photons has been demonstrated [218]. The heralded entanglement generation between remote NV centers has been realized in experiments [219, 220, 221, 222]. One disadvantage of this scheme is that the NV centers have a broad phonon sideband [212, 213], which largely reduces the success probability. Another factor that decreases the success probability is the photon collection efficiency. In our analysis of using heralded entanglement generation schemes, the low success probability results in slow RW operations. To solve this problem, nano-photonic crystal structures have been utilized to Purcell-enhance the photon emission into the zero-phonon line and increase the photon collection efficiency [223, 224].
The nearby carbon and nitrogen nuclear spin levels (hyperfine levels) have extra-long coherence times compared to electronic spin states [225]. With dynamical decoupling, the coherence time can reach \(63\) s [217, 226]. The manipulation of these nuclear levels can be achieved using NV electronic states, which enables using the nuclear spin states as QMCs [221, 222, 225, 226, 227, 228] with the NV electronic state as the bus qubit. The gate time between nuclear and electronic states ranges from \(389\)\(\mu\)s to \(1556\)\(\mu\)s [217, 226]. The nearby nuclear spin ensemble also has a long coherence time and can store quantum information as well. Its coherence time can reach \(3.5\) ms with dynamical decoupling [229].
With the experimental realizations of NV-center-based quantum systems shown in Ref. [217], we can estimate the QMC performance. If carbon nuclear spins are used as QMCs, the storage time can be estimated by the coherence time \(T=12.9\) s with dynamical decoupling. The quantum gate between carbon nuclear spin states and the NV electronic states takes \(419\)\(\mu\)s with fidelity \(F_{\rm gate}=0.99\), which is used to estimate the properties of the RW operations. The internal storage ratio can reach \(\alpha_{\rm in}\approx 3.05\times 10^{4}\). Note that in this work, there are five carbon nuclear spins used; we choose to report the largest internal storage ratio. Although the nitrogen nuclear spin states are not used as quantum memory in Ref. [217], a QMC based on them could benefit from the faster gate speed (\(389\)\(\mu\)s) and the long coherence time with dynamical decoupling (\(63\) s). With the estimated gate fidelity of \(0.94\), the internal storage ratio can be \(\alpha_{\rm in}\approx 1.5\times 10^{5}\). However, one caveat of this approach is how to connect the bus qubit, i.e., the electronic state of the NV center, to other computing registers. Although universal computing can be performed by controlling the bus qubit (NV center electronic state) and a nearby nuclear spin (nitrogen in Ref. [217]), how to scale up such systems is still an interesting question to explore.
Besides using the quantum state of a single defect center as a QMC, an ensemble of color centers can be treated as a spin ensemble, which can coherently couple to microwave fields. This enables microwave photons as bus qubits [230, 231, 232, 233, 234, 235]. Similar to REID crystals and atomic clouds, optical photon storage techniques, e.g., AFC and EIT methods, can also be applied in principle. Using microwave photons as bus qubits makes coherent coupling to microwave-connected systems possible, e.g., to superconducting qubits [232, 233]. Direct coupling of NV center ensembles with a flux qubit has also been reported in Ref. [231]. Specifically, in the experimental demonstration in Ref. [232], the Ramsey measurement gives an estimate of the storage time of the NV ensembles,
\(T_{\rm storage}\approx 200\) ns. The RW of the stored state by superconducting qubits takes 58 ns with fidelity \(\eta=\sqrt{0.14}\). The internal storage ratio is \(\alpha_{\rm in}\approx 1.3\). Limited by the coherence of the NV ensembles, the net storage time is smaller than twice the rescaled RW time \(T_{\rm RW}/\eta\), so the external storage ratio is \(\alpha_{\rm ex}<0\). If the electronic states can be transferred to nuclear spin ensembles, where a coherence time of 1.8 ms can be achieved, the internal and external storage ratios can be improved to \(\alpha_{\rm in}\approx 1.2\times 10^{4}\) and \(\alpha_{\rm ex}\approx 1.7\times 10^{4}\), which is comparable to homogeneous superconducting devices. To further benefit from NV-ensemble-based QMCs, their storage time needs to be extended and the RW operations improved.
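The NV-ensemble numbers above also illustrate the negative-\(\alpha_{\rm ex}\) regime discussed after Eq. (4); a minimal check (ours):

```python
import math

# Why the NV-ensemble entry has alpha_ex < 0 (check ours): with alpha_in < 2,
# the net storage time T_storage - 2*T_RW is already negative.
t_storage, tau, eta = 200e-9, 58e-9, math.sqrt(0.14)
t_rw = tau / eta                    # ~155 ns effective RW time
alpha_in = t_storage / t_rw         # ~1.3 (< 2)
t_net = t_storage - 2 * t_rw        # negative, so alpha_ex < 0 for any T_op
```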
In general, solid-state defect centers can be compactly integrated into a single crystal. The defect color centers inside the solid state systems can be nicely fabricated and implanted inside the solid crystal, which makes large-scale integration possible. In addition, other types of defect centers, e.g., SiV [236; 237; 238], GeV [239], SnV [240] color centers, are also under investigation to improve their coherence properties and develop new quantum manipulation techniques.
### Quantum dots
Semiconductor-based quantum dot (QD) systems have been attracting much attention in quantum computing and quantum information processing in recent years. Compared with other quantum systems, quantum dots can be fabricated by the well-developed deposition and lithography techniques used in the semiconductor industry. The small size of the QDs (\(\sim 100\) nm) makes them easy to integrate at large scale [241; 242; 243; 244; 245; 246]. With the available high-fidelity quantum gates, semiconductor-based quantum dots are a potential candidate for quantum memory cells.
Quantum dot-based spin qubits offer multiple ways to encode quantum information into the physical systems. As each quantum dot can confine an electron, which is a spin-1/2 particle, one natural way to encode quantum information is to use the spin state of the confined electron. The corresponding spin qubit is called the 'Loss-DiVincenzo' (LD) qubit [244; 245; 246; 247; 248; 249; 250; 251]. Long coherence times up to 20 \(\mu\)s have been demonstrated in experiments [249]. With Hahn echo techniques, the coherence time can be extended to 100 \(\mu\)s [245; 249; 250]. When there are multiple quantum dots available, especially when the tunnel barriers between the quantum dots are relatively low, the electrons confined in the nearby quantum dots can couple with each other and form entangled states. The quantum information can also be encoded into the state of multiple electrons. For example, the 'singlet-triplet' (ST) qubit utilizes two entangled electrons confined in two nearby quantum dots [252; 253; 254]. ST qubits also show a promising long spin coherence time (\(\sim 2\)\(\mu\)s) [253]. The quantum information is encoded into the singlet state and one of the three triplet states. The total spin 1/2 states of three electrons can also be leveraged as a manifold to define a spin qubit [255]. As quantum dot spin qubits of this type can realize universal quantum computing only by controlling the exchange couplings between different dots, this type of qubit is named an 'exchange-only' (EO) qubit [255; 256; 257; 246]. For a comprehensive review of the recent development of semiconductor quantum dot qubits, we suggest Ref. [246].
QD qubits can strongly couple to microwave fields in a superconducting resonator. The microwave field can have a stronger coupling to the charge degrees of freedom [258; 259; 260], while the spin degrees of freedom of QDs can have longer coherence times [261; 262; 263; 264]. From spectral measurements, QD charge qubits can have a strong coupling of up to \(\sim 119\) MHz [258], while the spin qubits can have a coupling to the microwave field of 52 MHz [263]. Therefore, using microwave photons as bus qubits for a QD-based QMC is possible. Moreover, as superconducting qubits can couple strongly to resonators, using the microwave field to couple superconducting qubits with QDs has also been demonstrated in experiments [260; 263]. Therefore, a transmon qubit can also be used as a bus qubit for QD-based QMCs. Specifically, the Rabi oscillation between a transmon qubit and a QD charge qubit has been experimentally demonstrated [260].
Based on Ref. [260], the coherence time of the QD is estimated as 59 ns (the FWHM linewidth is 2.7 MHz). The RW operation quality can be estimated from the Rabi oscillation, where the RW time is estimated as \(T_{\rm RW}\approx 23.8\) ns, and the efficiency is estimated as 0.8. Therefore, the internal storage ratio reaches \(\alpha_{\rm in}\approx 1.98\). This means further improving the coherence time of QD qubits while maintaining the strong coupling to microwave resonators and superconducting qubits is needed. Note that since the rescaled single RW operation time is comparable to the storage time, the external storage ratio is negative, which means the current QD-based QMCs still need improvement to gain advantages. Supposing the coupling can be tuned such that the QMCs can still maintain a good coherence time (\(T_{2}\approx 102\)\(\mu\)s in Ref. [245]), while the RW operations are still as good as demonstrated in Ref. [260], the internal storage ratio can reach \(\alpha_{\rm in}\approx 3.4\times 10^{3}\). Considering using this QD-based QMC for superconducting quantum computing devices, where the operation time is \(T_{\rm op}=40\) ns (\(F\approx 99.8\%\)), the external storage ratio can be improved to \(\alpha_{\rm ex}\approx 2.0\times 10^{3}\).
### Phononic systems
Phononic systems are also widely considered within the context of hybrid quantum systems [87; 88; 89] because they can interact with circuit-QED systems through the piezoelectric effect, as well as with optical systems through the optomechanical effect. This makes phononic systems an intermediary system for microwave-to-optical transduction, which has been a key focus in recent efforts
to achieve long-range quantum communication between circuit-QED devices [265; 266; 267; 268; 269; 270; 271]. For a review of recent progress on nano-phononic systems, we refer to Refs. [272; 273; 274].
Unlike electromagnetic waves, acoustic waves require a medium to propagate, making phononic modes well isolated and beneficial for maintaining a long lifetime. For example, in nano-mechanical resonators which hold phonon modes with frequency \(\sim 1.4\) MHz, the single-phonon lifetime can reach 100 s to 1000 s [275; 276]. The coherence of the phonon modes can reach 100 ms [275]. The lifetime of phonon modes with GHz frequencies in nano-acoustic resonators can reach 1.43 s, while the coherence time can reach 130 \(\mu\)s [277]. Therefore, phononic modes can serve as QMCs where the quantum information is stored in the oscillations of these phonon modes. Fast coherent couplings between microwave modes (superconducting qubits) and the phononic modes provide the necessary tools to manipulate the phononic modes in the quantum regime and show their quantum features [271; 278; 279]. An iSWAP gate between the microwave modes and a phononic mode can take only 25 ns with fidelity 0.95 [279]. Strong dispersive couplings between the microwave and phononic modes have also been realized in experiments [279; 280], which can be used to couple phononic modes with microwave and optical photons as well. This makes using microwave and optical photons as the bus qubits possible. In addition, phononic QMCs can be integrated with superconducting qubits for fast quantum gate operations with the help of microwave bus qubits.
Using the phonon modes as QMCs and microwave photons as bus qubits, the internal storage ratio of phononic QMCs can be estimated as \(\alpha_{\rm in}\approx 4.9\times 10^{3}\), where we consider the GHz-frequency phonon modes as the QMCs, as they naturally couple to the microwave bus modes and can be integrated with superconducting qubits. As coupling the microwave bus qubit with the superconducting qubit extends the RW time, to estimate the external storage ratio, we include the coupling between the superconducting qubits and the microwave photon bus qubits in the phonon-based QMC RW operations (\(\approx 1.04\)\(\mu\)s in total [97], with efficiency \(\eta\approx 0.95\times 0.994\)). We then take the time for a two-qubit gate between superconducting qubits as the quantum computing operation time (40 ns). The external storage ratio can reach \(\alpha_{\rm ex}\approx 3.02\times 10^{3}\).
Furthermore, the phononic modes can also couple to photonic modes via the optomechanical effect, where a coherent coupling strength \(g/(2\pi)\approx 700\) kHz has been demonstrated [281]. Therefore, using optical photons as the bus qubits of a phononic-mode-based QMC is possible. In this case, the RW time of the QMC can be estimated as 0.71 \(\mu\)s, where we assume a unit-fidelity RW operation for the estimation. The internal storage ratio can be estimated as \(\alpha_{\rm in}\approx 1.8\times 10^{2}\) with a storage time of 130 \(\mu\)s [277]. Using GHz phonon mode QMCs to store photonic qubits in optical quantum computing and quantum communication is also possible. With an EPR photon-pair generation rate of 52.36 kHz, the external storage ratio is \(\alpha_{\rm ex}\approx 6.7\). If the MHz phonon QMCs can achieve a similar RW speed with optical bus qubits, the internal and external storage ratios can reach \(\alpha_{\rm in}\approx 1.4\times 10^{5}\) and \(\alpha_{\rm ex}\approx 5.2\times 10^{3}\), which means the phonon-based QMC needs to improve its coherence time and QMC-bus coupling speed to further enhance its performance.
Similar to microwave-cavity-based QMCs, as a mechanical membrane or acoustic resonator can support multiple phonon modes, these modes can all be used as QMCs. Therefore, the phononic QMCs can be easily integrated to form a quantum memory device. In addition, the small physical scale of the phononic systems makes them easily fit into a single dilution refrigerator, which can be integrated with superconducting quantum computing chips.
### Other platforms
In addition to the physical systems we discussed above, there has been significant interest in using topologically protected states for quantum computing. One effort uses topological error correction codes to encode physical qubits into logical qubits, whose quantum information is topologically protected. One example is the surface code [282; 40; 283]. Using topological error correction codes for quantum memory has been discussed in the seminal reviews, Refs. [52; 284]. This approach is not limited to any specific physical platform. Another approach is to use physical topological states as the basic physical qubits. In this scenario, since the physical qubits are robust to local imperfections, they can outperform other physical qubits and reduce the QEC overhead. For example, the Majorana zero modes (MZMs) localized on superconducting nanowires can be used as physical systems to encode quantum information. By braiding the MZMs, gate operations between qubits can be applied [285]. A complementary review of the theory and experimental realization of MZMs in solid-state systems can be found in Ref. [286], while achieving couplings between a Majorana qubit and a superconducting qubit has also been proposed [287]. Reviews of topological quantum computing can be found in Refs. [288; 289]. Although the experimental realization of MZMs is still under some debate, they have the potential to become a promising technology to realize topologically protected quantum memory.
## III Building quantum memory devices
With the development of single QMCs, how to integrate individual QMCs into a quantum memory device (QMD) is the next question to explore. Similar to a classical memory device, in order to address QMCs efficiently, it is necessary to assign addresses to the QMCs such that each QMC can be accessed by its address. In Fig. 4, we show an abstraction of a quantum memory device. A quantum memory device should have at least three interfaces: input, output, and address ports. In the memory loading phase, the address is given, and the quantum information carried by the QMC with the given address is exported to the output port. In the writing phase, the quantum information is fetched into the QMD through the input port and saved to the QMC with the given address. In contrast to a classical memory, all the quantum memory ports can carry either classical or quantum information. The unitary nature of quantum operations requires an additional address port for the address information.
In order to compare different quantum memory designs and physical implementations, we define metrics to quantitatively discuss the device performance. A good quantum memory device should have a fast RW speed, a long storage time, and, ideally, a large scale of integration of QMCs. First of all, the internal storage ratio \(\alpha_{\text{in}}\) defined in Eq. (2) can be extended to
\[\text{Storage ratio: }\alpha_{\text{QMD}}=\frac{T_{\text{storage}}}{T_{ \text{RW}}}, \tag{5}\]
where \(T_{\text{RW}}\) is the read and write time of the quantum memory device. Compared to the single-QMC metric, addressing the proper QMC also takes time and may dominate the RW process. We define the QMD RW time as \(T_{\text{RW}}=T_{\text{addr}}+T_{\text{RW,QMC}}\), where \(T_{\text{addr}}\) is the addressing time and \(T_{\text{RW,QMC}}\) is the RW time of the QMCs. The numerator \(T_{\text{storage}}\) is the storage time of the quantum device. For near-term implementations, where the quantum memory is not error corrected, the coherence time of the QMC qubits inside the memory device is a good measure, and hence \(T_{\text{storage}}=T_{\text{coh}}-T_{\text{RW}}\). In the FTQC regime, the storage time can instead be estimated by the inverse of the logical error rate.
On the other hand, the external storage ratio \(\alpha_{\text{ex}}\) can also be similarly extended to describe the performance of a QMD by using QMD RW time \(T_{\text{RW}}\) in Eq. (4). However, to better quantify QMD's performance, especially to quantify its operation time relative to the computing operation, we consider a new metric,
\[\text{Memory latency: }\beta=\frac{T_{\text{RW}}/\eta}{T_{\text{op}}}, \tag{6}\]
where \(T_{\text{op}}\) is the time for a quantum operation on the quantum computing module. This parameter \(\beta\) effectively describes the latency of the QMD measured by the quantum computing speed, and hence is called the _memory latency_. An ideal quantum memory device should have a small latency.
In addition to these two metrics, we define another quantity named _addressability_,
\[\text{Addressability: }\gamma=\frac{1}{\alpha_{\text{QMD}}}\cdot\frac{N}{n}= \frac{T_{\text{RW}}N}{T_{\text{storage}}n}, \tag{7}\]
where \(N\) is the memory capacity, i.e., the total number of QMCs integrated inside the quantum memory device, and \(n\) is the number of QMCs that can be read or written in parallel. The addressability \(\gamma\) is the fraction of the storage time needed to address every QMC once, i.e., it compares a full memory cycle over all QMCs to the storage time of the quantum device. Ideally, we want a quantum memory device to have \(\gamma<1\). If \(\gamma\gg 1\), the quantum memory device integrates too many QMCs to be fully utilized, and the RW of the QMCs is the bottleneck.
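A minimal sketch (ours) of Eqs. (5)-(7); the function name and the device parameters below are hypothetical and only illustrate how the three QMD metrics relate:

```python
def qmd_metrics(t_storage, t_addr, t_rw_qmc, eta, t_op, n_cells, n_parallel):
    """Sketch (ours) of Eqs. (5)-(7); all arguments are hypothetical device parameters."""
    t_rw = t_addr + t_rw_qmc                             # device-level RW time
    alpha_qmd = t_storage / t_rw                         # Eq. (5): storage ratio
    beta = (t_rw / eta) / t_op                           # Eq. (6): memory latency
    gamma = (t_rw * n_cells) / (t_storage * n_parallel)  # Eq. (7): addressability
    return alpha_qmd, beta, gamma

# Illustrative device: 1 s storage, 1 us addressing, 100 ns cell RW, eta = 0.99,
# 50 ns processor gates, 10^6 cells with 100 addressable in parallel.
print(qmd_metrics(1.0, 1e-6, 100e-9, 0.99, 50e-9, 1_000_000, 100))
```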
### Comparison between RAQM and QRAM
We focus on the two major types of QMDs: (1) Random access quantum memory (RAQM), and (2) Quantum random access memory (QRAM). In this subsection, we briefly compare RAQM and QRAM in terms of their differences and applications. A more detailed discussion, including reviews of previous efforts to build these two devices, can be found in the following subsections.
A Random Access Quantum Memory (RAQM) is a quantum analog of classical random access memory, more specifically, of a dynamic RAM in classical computer architecture. In a RAQM, many QMCs are integrated into a QMC array to store quantum information. The QMCs can be addressed individually according to their addresses. A RAQM only allows classical address information, which means the QMCs can only be addressed individually. Addressing QMCs can be realized by classical controls on the QMC array. Mapped onto the model of quantum memory shown in Fig. 4, the input address information is purely classical and remains classical during the memory query. The structure of a RAQM and its functionality will be discussed in detail in Sec. III.2.
On the other hand, a Quantum Random Access Memory (QRAM) distinguishes itself from classical RAM and RAQM by enabling coherent addressing of multiple
\begin{table}
\begin{tabular}{c|l|l} \hline \hline & RAQM & QRAM \\ \hline Address Info & Classical & Quantum (encoded in the states of address qubits) \\ Addressing & Classical & Coherently routing bus qubits \\ Stored Info & Quantum & Classical or Quantum \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between the RAQM and QRAM. We highlight their difference and the key features of each device.
Figure 4: The abstraction of a quantum memory device. A general quantum memory device should have four ports: input, output, and two address ports. All these ports can carry either classical or quantum information.
QMCs. Coherent addressing requires a significant modification of the classical addressing techniques. Specifically, the quantum address information must be represented as quantum states of address qubits. In the abstract QMD models (Fig. 4), a QRAM can take both quantum input and quantum address information. The structure of a QRAM, especially its quantum addressing components, and its functionalities are discussed in Sec. III.3.
In Table 2, we highlight the differences between RAQMs and QRAMs. We notice that both RAQM and QRAM are analogous to classical random access memory, albeit in two distinct directions. As discussed in Sec. III.2, the RAQM stresses the quantum nature of the memory cells, where the quantum information can be stored and retrieved, while the random access feature is purely classical. Therefore, quantum error correction and mitigation techniques need to be implemented on the quantum memory cells to improve the information storage fidelity. On the other hand, as discussed in Sec. III.3, the QRAM focuses more on quantum routing to coherently address the information stored in the memory array. To improve the noise resilience, quantum error correction needs to be implemented on the quantum routing structure.
Although a QRAM, in principle, has all the functionalities of a RAQM, we still believe RAQMs are an indispensable part of future quantum computing architectures. As we discuss in Sec. III.3, if the address qubit is in a classical state, which corresponds to a single classical address, the quantum routing module in the QRAM guides the bus qubit to the corresponding QMC deterministically. Therefore, using SWAP gates as the RW operations, a QRAM can read from and write to the QMCs just like a RAQM. However, the classical addressing in RAQMs does not need to be protected by quantum error correction. Therefore, if classical addressing is sufficient for a quantum computing task, using RAQMs can greatly reduce the overhead of QEC on routing.
In Table 3, according to the requirements on the QMD ports in the loading and writing processes, we briefly summarize the possible realizations of the QMD. If a memory device only stores classical information in memory cells with classical addresses, and it is expected to read the memory cells according to a given classical address and output classical data, the memory device can be a classical memory (the first row of Table 3). If the QMD needs to store quantum information according to classical address information in the writing process, and it loads the quantum information according to classical addresses, the QMD can be made by a RAQM (the 'Quantum/Classical, Quantum/Classical' row in Table 3). If a QMD is required to process quantum address information in the reading process, the QMD has to be a QRAM (see the 'Quantum output, quantum address' and 'Classical output, quantum address' rows in Table 3). If the output port needs to be connected with classical information processing modules after the memory reading process, the output information needs to be classical. As both RAQM and QRAM output quantum states in general, the QMD is then required to measure the output quantum states and extract classical information from them. Therefore, it can be constructed by a RAQM or QRAM followed by measurements. On the contrary, in the data writing process, coherent addressing of multiple QMCs inside the memory device is possible through the QRAM; however, its practical application is still unclear, to the best of our knowledge [290].
In the rest of this section, we provide a more detailed discussion on RAQMs in Sec. III.2 and QRAMs in Sec. III.3.
### Random Access Quantum Memory (RAQM)
The architecture of the RAQM is illustrated in Fig. 5. The classical addressing is accomplished through the classical control unit and the classical address decoding schemes. The control unit guides the bus qubit to interact with different QMCs to perform RW operations. The accessing mechanism shares similarities to a classical RAM. Other quantum registers can interact with the bus qubit to communicate with the RAQM.
During the writing process, the bus qubit is loaded with the quantum state that needs to be stored. With the classical address given, the QMC with this address
\begin{table}
\begin{tabular}{c|c|c|c|l}
\hline \hline
\multicolumn{2}{c|}{Reading requirement} & \multicolumn{2}{c|}{Writing requirement} & Physical \\
Output & Address & Input & Address & Realization \\
\hline
Classical & Classical & Classical & Classical & Classical Memory \\
Classical & Quantum & Classical & Classical & QRAM + M \\
Quantum & Classical & Classical & Classical & RAQM \\
Quantum & Quantum & Classical & Classical & QRAM \\
\hline
Classical & Classical & Quantum & Classical & RAQM + M \\
Classical & Quantum & Quantum & Classical & QRAM \\
Quantum & Classical & Quantum & Classical & QRAM \\
\hline
Any & & Classical & Quantum & N.A. \\
Any & & Quantum & Quantum & N.A. \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Quantum memory devices with different requirements on the reading and writing process. In the memory reading process, the output and address information can be classical or quantum, while in the memory writing process, the input data and address data can be classical or quantum, too. When one bit of classical data needs to be stored in a QMC, the bus qubit needs to be prepared into \(|0\rangle\) or \(|1\rangle\) state and then perform the QMC writing process. When quantum data needs to be converted into a classical output, projective measurements on the state of the QMC qubit are required, which is labeled as ‘M’ in the table. N.A. specifies the situations where the application is not clear. The blue-shaded case is purely classical, whereas the red-shaded cases attract lots of attention in the current quantum computing research (see main text for more detailed discussion).
interacts with the bus qubit and performs a SWAP gate to store the quantum state into the corresponding QMC. The writing process can be formally expressed as
\[W(addr,\ket{\phi})\ket{0}_{ind=addr}\otimes\ket{\psi}_{ind\neq addr}\rightarrow\ket{\phi}_{ind=addr}\otimes\ket{\psi}_{ind\neq addr} \tag{8}\]
where \(addr\) is the classical address information and \(ind\) is the classical index of the QMCs in the QMC array. We assume that the QMC with address \(addr\) is in the state \(\ket{0}\) before the writing process.
During the reading process, a classical address is given. The QMC with this address interacts with the bus qubit via a SWAP gate, which swaps the stored quantum state onto the bus qubit. The reading process can be formally expressed as
\[R(addr)\ket{\phi}_{ind=addr}\otimes\ket{\psi}_{ind\neq addr}\rightarrow\ket{\phi}^{\text{(b)}}\otimes\left(\ket{0}_{ind=addr}\otimes\ket{\psi}_{ind\neq addr}\right) \tag{9}\]
where we assume that the QMC with the required address \(addr\) is initially in the state \(\ket{\phi}\), and for simplicity it is disentangled from the other qubits. After the reading process, the bus qubit is in the state \(\ket{\phi}\).
Combining both reading and writing processes, operations on RAQM can be expressed as
\[F_{\text{RAQM}}(addr,\ket{\phi}^{\text{(b)}},\ket{\Psi}^{\text{( QM)}})\] \[=\sum_{j}\alpha_{j}f_{\text{QMC}}(\ket{\phi}^{\text{(b)}},\ket{ \lambda_{j}}_{ind=addr}^{\text{(QM)}})\otimes\ket{\psi_{j}}^{\text{(QM)}},\] \[=\sum_{j}\alpha_{j}\ket{\lambda_{j}}^{\text{(b)}}\ket{\phi}_{ind=addr }^{\text{(QM)}}\otimes\ket{\psi_{j}}^{\text{(QM)}}, \tag{10}\]
where the operation on a single QMC \(f_{\text{QMC}}\) is shown in Eq. (1). Here, we express the quantum state of the QMC arrays in a Schmidt decomposition form
\[\ket{\Psi}^{\text{(QM)}}=\sum_{j}\alpha_{j}\ket{\lambda_{j}}_{ind=addr}^{\text{( QM)}}\ket{\psi_{j}}^{\text{(QM)}}, \tag{11}\]
where the states \(\ket{\lambda_{j}}\) are the basis states of the QMC qubit with index \(ind=addr\) appearing in the Schmidt decomposition. However, as we mentioned in Sec. II.1, the RW process involves the exchange of entanglement between the bus qubit and the memory qubits. Generalizing to a more complicated scenario, where these qubits are entangled with additional qubits, is straightforward.
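As a sanity check of the SWAP-based RW operations in Eqs. (8)-(10), the following toy sketch (our own illustration, not tied to any particular hardware) writes a bus-qubit state into a classically addressed QMC and reads it back; the state is stored as one tensor axis per qubit, so the addressed SWAP is just an axis swap.

```python
import numpy as np

def swap_bus_with_qmc(state, addr):
    """SWAP the bus qubit (axis 0) with the QMC selected by the classical address
    (axis 1 + addr); this is the RW primitive of Eqs. (8) and (9)."""
    return np.swapaxes(state, 0, 1 + addr)

# Bus qubit carries |phi> = (|0> + i|1>)/sqrt(2); two QMCs start in |0>.
phi = np.array([1, 1j]) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
state = np.einsum('a,b,c->abc', phi, ket0, ket0)   # axes: (bus, QMC 0, QMC 1)

state = swap_bus_with_qmc(state, addr=1)  # write: |phi> now sits in QMC 1, bus in |0>
state = swap_bus_with_qmc(state, addr=1)  # read:  |phi> is returned to the bus qubit

print(np.allclose(state[:, 0, 0], phi))   # True: QMCs are back in |0>|0>
```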
According to the current implementation of RAQMs in photonic and microwave systems, high-quality QMCs for the memory array are critical. In addition, the following requirements also have to be fulfilled:
1. Independent classical addressing of individual QMCs: The capability for independent addressing enables parallel addressing of different QMCs.
2. Independent quantum information storage: It is necessary that the quantum state within one QMC remains decoupled from the states of other QMCs when operating separately. Ideally, the cross-talk between different QMCs should be avoided.
The primary application of the RAQM resides in its utilization as an integrated memory device for quantum computing, a topic that will be explored further in the subsequent sections. The integration of QMCs into an array enables the storage of quantum states of a large number of qubits. Additionally, the classical random access functionality enables the storage and retrieval of quantum states in various QMCs at different times. More applications of RAQM in various quantum memory function units will be discussed in Sec. IV.
#### iii.2.1 Experimental demonstration of RAQMs
There have been experimental demonstrations of RAQMs, ranging from devices that store quantum information carried by matter qubits to devices that store photonic pulses. A comparison between different implementations of RAQMs is summarized in Table 4. In the rest of this subsection, we briefly survey a few experimental demonstrations of RAQMs and discuss their performance.
In 2017, Naik _et al._ experimentally demonstrated a circuit-QED system, which can be used as a small RAQM [99]. The QMCs are made of 11 strongly coupled resonators. The modes in the individual resonators are coupled to form collective modes, 9 of which are used as the QMCs in the RAQM. The bus qubit is a transmon qubit, which can be classically controlled through microwave parametric driving to perform iSWAP gates between the transmon mode and the selected resonator
Figure 5: The structure of a RAQM. The classical addressing is achieved using the classical control and classical address decoding scheme, similar to the classical RAM construction. The memory cells are QMCs, which can store quantum information. The quantum information in the bus qubits can be addressed to the QMC with the correct address and SWAP quantum information between them. The quantum computing registers can couple with the bus qubits to perform further information processing.
mode. The iSWAP gate fidelity ranges from 95% to 98.6%. The address is encoded in the collective mode frequencies. When a specific mode with frequency \(\omega_{j}\) needs to be addressed, a flux modulation with frequency \(|\omega_{j}-\omega_{t}|\) is applied, where \(\omega_{t}\) is the transmon frequency. The flux modulation activates a sideband transition to implement the iSWAP gate.
The coherence time of the cavity modes ranges from 1 to 10 \(\mu\)s, while the RW via the transmon–resonator mode iSWAP gate takes 20 to 100 ns. Based on the device parameters reported in Ref. [99], and limited by the relatively short coherence time, the internal storage ratio \(\alpha\) can vary from 7.5 (estimated from \(T_{\text{RW}}=100\) ns with fidelity \(F\approx 0.95\), \(T_{\text{storage}}\approx 1\)\(\mu\)s) to \(\approx 415\) (\(T_{\text{RW}}=20\) ns with fidelity \(F\approx 0.986\), \(T_{\text{storage}}\approx 8.46\)\(\mu\)s). The memory latency is \(\beta\approx 2.63\) to 0.508, where we take the quantum operation to be a two-qubit gate on transmon qubits, which takes 40 ns [83, 67, 84]. The large memory latency is due to the fast gate operations on the quantum processor relative to the RW operations. The addressability of the QM is \(\gamma\approx 2.16\times 10^{-2}\) to 1.20, which means more QMCs can be integrated into the RAQM before the RW process becomes the bottleneck. Here, we consider that the demonstrated device cannot address different QMCs in parallel, so we take \(n=1\). Note that in the worst-case scenario, the addressability is greater than unity, which means the RAQM needs to reduce the RW time further to fully utilize the QMCs. On the other hand, introducing more RW ports while maintaining similar RW fidelity and time, which increases the parallel RW number \(n\), can also reduce the addressability, e.g., to \(\approx 0.6\) with \(n=2\).
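As a quick check of the latency numbers quoted above (an illustration on our part, taking the RW efficiency \(\eta\) to be the reported iSWAP fidelity), Eq. (6) with a 40 ns two-qubit transmon gate as the computing operation reproduces the quoted range of \(\beta\):

```python
# Memory latency, Eq. (6), for the slow (100 ns, F ~ 0.95) and fast (20 ns, F ~ 0.986)
# iSWAP settings of the circuit-QED RAQM discussed above; T_op = 40 ns two-qubit gate.
t_op = 40e-9
for t_rw, eta in [(100e-9, 0.95), (20e-9, 0.986)]:
    print((t_rw / eta) / t_op)   # ~2.63 and ~0.51
```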
Following this work, Chakram _et al._ used the flute method to fabricate 3D microwave cavities, which greatly increased the coherence time of the cavity modes to 2 ms [97]. Similarly, the address information is also encoded in the mode frequencies. However, the transmon–cavity mode iSWAP gate is activated using an applied microwave tone with the right difference frequency. Although the transmon–cavity mode SWAP gate time extends to 0.5 - 1 \(\mu\)s, the number of RW operations that fit within the storage time increases significantly [97], which can be seen from a significant increase of the internal storage ratio \(\alpha\) to \(2.0\times 10^{3}\) to \(5.9\times 10^{3}\) (\(T_{\text{RW}}=0.5\) to 1.0 \(\mu\)s with fidelity \(F\approx 0.99\), \(T_{\text{storage}}\approx 2\) ms and 3 ms). However, because the RW operation becomes slower, the memory latency is \(\beta\approx 12.6\) to 25.3. Due to the increase of the internal storage ratio \(\alpha\), integrating 9 cavity modes as QMCs and addressing the QMCs sequentially is still acceptable, with \(\gamma\approx 1.5\times 10^{-3}\) to \(4.5\times 10^{-3}\). Even with 1000 modes integrated, the addressability is \(\gamma\approx 0.17\) to 0.5, which means the RAQM design is still in a good regime (\(\gamma<1\)).
In addition to the circuit-QED system, RAQMs have also been experimentally realized in atomic cloud systems to store the quantum information carried by photonic qubits. Jiang _et al._ demonstrated using Rb atom clouds as QMCs to store optical photons [171]. They demonstrated the capability of storing 105 dual-rail encoded photonic qubits in 210 memory cells. The storage is achieved through electromagnetically induced transparency, while the random access feature is achieved by deflecting the control pulses to the cloud ensemble with the right address. The beam deflection is achieved by acousto-optical deflectors (AODs) driven by microwave tones.
In Ref. [171], the QMC coherence time is about 27.8 \(\mu\)s. The read and write efficiency is lower than 20% for all memory cells (2% to 18% reported). Although the RW control pulse extends to 500 ns, the retrieved photon pulse is emitted almost as soon as the control pulse is applied. We take the stored photon pulse duration to estimate the RW speed of a single QMC, \(T_{\text{RW}}\approx 100\) ns. Note that the RW time of a quantum memory device should include the time cost of setting the classical address and sending the control pulse to address the QMC. Since the AOD setting time is not reported in Refs. [171, 291], we ignore its contribution to the RW time. However, the AOD switching time is reported as 40 \(\mu\)s in Ref. [159]. If the addressing time is \(T_{\text{addr}}\approx 40\) \(\mu\)s, it will clearly dominate the RW time of the QMCs, and even the storage time of the QMCs.
If the addressing time is negligible, the RW time of the device is \(T_{\text{RW}}\approx 100\) ns, and the internal storage ratio of the device can reach \(\alpha\approx 39\) (RW efficiency \(\eta\approx\sqrt{0.02}\)) to 118 (\(\eta\approx\sqrt{0.18}\)). When using the RAQM for quantum communication, as we discussed in Sec. II.2, the key operation is the EPR pair generation, which is estimated to take \(T_{\text{op}}\approx 19\) \(\mu\)s. The memory latency of the RAQM is relatively small, \(\beta\approx 0.012\) to 0.037. If all QMCs inside the RAQM are addressed individually, the addressability is \(\gamma\approx 1.78\) to 5.3, where we consider using the 210 cells as individual QMCs. In order to efficiently use all the QMCs, addressing 2 to 6 QMCs in parallel via multiplexing is needed. This experimental setup is improved in Ref. [291], where the control pulse is further shortened to improve the RW time and a smaller number of QMCs is integrated (49 QMCs), which brings the addressability to 0.31 when addressing the QMCs sequentially. In addition, in Ref. [173], optical communication between two such memories has been demonstrated.
On the other hand, Langenfeld _et al._ demonstrated using single Rb atoms as QMCs in a RAQM [159]. Although only two atoms (QMCs) are included in the RAQM device, the combined read and write efficiency reaches 26%. Similar to the atomic-cloud-based RAQMs, the random access feature is realized by guiding the control pulses to the correct atom using AODs, where the addressing time is \(T_{\text{addr}}=40\) \(\mu\)s. The coherence time of the QMCs can reach 800 \(\mu\)s. Although the addressing time is long, thanks to the relatively long coherence time, the internal storage ratio is still decent, \(\alpha\approx 8.2\). However, using the RAQM to interact with a fast EPR generator makes the addressing latency non-negligible, with a memory latency \(\beta\approx 4.1\). The addressability of this RAQM is \(\gamma\approx 0.20\), which is reasonable with only two QMCs. However, if more QMCs need to be integrated into this RAQM, the slow addressing time will soon become a bottleneck. This scenario requires either suppressing the total RW time or introducing parallel addressing techniques into the RAQM RW processes. As pointed out by the authors, the addressing time could be reduced to 2 \(\mu\)s by using electro-optical deflectors. Suppressing the addressing time to 2 \(\mu\)s would greatly improve the RAQM performance, with \(\alpha\approx 2.0\times 10^{2}\), memory latency \(\beta\approx 0.21\), and addressability \(\gamma\approx 0.010\).
In addition, O'Sullivan _et al._ demonstrated an echo-based scheme to store quantum information carried by photons in an ensemble of two-level atoms [292]. The quantum information can be written into the RAQM using chirped pulses that imprint a phase pattern, and the same chirped pulse can be used to unwind the phase to read the information out. Multiple chirped pulses are used to realize the random access feature. The classical address of the memory is 'labeled' by the chirped pulse: to read from or write to the correct QMC inside the RAQM, the corresponding chirped pulse needs to be generated and sent into the quantum memory medium. The RW time of each QMC is mainly determined by the chirped pulse duration (100 \(\mu\)s), while the addressing time is determined by the speed of setting the chirped pulse parameters. The read or write process efficiency is 17%. As the addressing time is not explicitly reported in Ref. [292], and it is negligible in the time sequence, we ignore the addressing time in evaluating the RAQM performance. Unlike the other RAQM experiments reported in this section, the capacity of this RAQM is not determined by the number of physical systems making up individual QMCs; instead, the quantum information is stored in the collective excitations of a single optical medium. Therefore, the capacity of the RAQM is determined by the number of distinct chirped pulses used to access these distinct collective excitations. In the experiments, 16 modes are accessed, which gives 16 QMCs inside the RAQM. The coherence time of the QMCs is extended to 2 ms using dynamical decoupling [292].
In terms of the performance of this RAQM, the RW time is relatively short compared to the coherence time of the quantum memory, as shown by its internal storage ratio \(\alpha\approx 6.2\). With the 16 QMCs, the addressability is \(\gamma\approx 2.56\), which means that in order to address all the QMCs, multiplexing to address \(n\approx 2\) QMCs in parallel is needed. However, as the bus qubit of the RAQM is a microwave pulse, integrating this RAQM with either quantum communication protocols or superconducting quantum computing processors is possible. When connecting this RAQM with superconducting quantum computing processors, the superconducting qubits need to interact with the microwave bus qubit, which takes another 1 \(\mu\)s; this operation is part of the RW time. However, the fast gate operations between superconducting qubits make the RAQM latency a disadvantage, with \(\beta\approx 61\). On the other hand, using the RAQM for quantum communication protocols requires a detailed discussion of microwave quantum communication protocols to evaluate the performance of the RAQM, which is beyond the scope of this paper. To give a rough estimate of its performance, we point out that microwave photon generation from superconducting qubits commonly takes less than 1 \(\mu\)s [293, 294]. Comparing it with
the re-scaled RW time of the RAQM, \(T_{\text{RW}}\approx 0.24\) ms, further improving the memory latency seems necessary. On the other hand, if microwave-to-optical transduction is involved in order to convert the microwave photon to the optical domain for long-range quantum communication, then, due to the limited transduction efficiency, the effective microwave EPR pair generation rate is on the order of 1 Hz to 1 kHz [295, 296, 297, 298], which enables a latency \(\beta<1\). However, the overall performance of the RAQM and the transduction device need to be further improved to meet the other requirements of efficient quantum computing and quantum information processing [298].
### Quantum Random Access Memory (QRAM)
Quantum random access memory (QRAM) distinguishes itself from classical RAM and RAQM by achieving coherent addressing of the QMCs. In Fig. 6a, we sketch the architecture of a QRAM. The key ingredient of the QRAM is the quantum routing module (see Fig. 6b as an example). Based on the state of the address qubits, the routes of the bus qubits become coherent superpositions after interacting with the quantum routing module, making the bus qubits coherently visit the corresponding QMCs before being returned as the output.
QRAMs were first proposed and designed by Giovannetti, Lloyd, and Maccone [299, 300]. Their design uses quantum routers in the quantum routing module, whose states can be set by the address qubits. After a router's state is set, it coherently routes the next incoming qubit onto the two different paths (see Fig. 6b). Sending all the address qubits into the routing module carves out paths that guide the bus qubits to the corresponding QMCs, which read out the information coherently. The bus qubit is then sent out from the QRAM, and the routers are unset to return to their starting state for the next memory call. The authors named this seminal design the 'bucket-brigade' (BB) architecture. Compared to the 'fan-out' architecture, the direct analog of the classical addressing structure, they pointed out that the BB architecture is more efficient and noise-resilient. Following this seminal work, Hong _et al._ proposed a variation of the bucket-brigade architecture by modifying the quantum switch qubits [301].
As demonstrated in Ref. [302], where Harrow _et al._ proposed the famous quantum linear algebra algorithm (the HHL algorithm, named after the authors) to efficiently solve linear systems defined by a large matrix, the construction of a QRAM that coherently addresses the memory content efficiently becomes indispensable for the speedup. Therefore, there is growing interest in building QRAMs on different physical platforms. Hann _et al._ performed a realistic analysis of how to implement a QRAM based on a superconducting system and acoustic quantum memory [303], while Chen _et al._ considered using solid-state systems and photons instead [304]. Recently, Weiss _et al._ proposed a QRAM design using a superconducting microwave system [305]. In addition, based on QRAM development, the concept and the architecture of a quantum data center have been proposed in Ref. [290]. Furthermore, QRAMs based on quantum random walks have also been studied [306, 307, 308]. In addition to the bucket-brigade architecture, there are proposals to construct QRAMs with other architectures, e.g., the flip-flop QRAM [309], hybrid QRAM [310, 311], etc. However, constructing a QRAM and demonstrating its performance have not been achieved in experiments yet, to the best of our knowledge.
On the other hand, the efficiency of QRAMs is another interesting question. In Ref. [312], Arunachalam _et al._ considered the noise resilience of the coherent addressing in a bucket-brigade QRAM. They claimed that the bucket-brigade architecture is not as noise-resilient as claimed in Refs. [299, 300], and that the error scales exponentially with the number of address qubits. Contrary to this work, Hann _et al._ gave a further analysis of the noise resilience of coherent addressing in different QRAMs. They pointed out that errors occurring on quantum switches controlled by an inactive quantum switch do not degrade the final states returned from the QRAM. Therefore, the QRAM can still be noise-resilient [313]. In addition, how to improve the efficiency of the QRAM operations in various scenarios is also discussed in Refs. [314, 310, 311]. The architecture for the large-scale integration of QRAM routing qubits and memory qubits, and its possible realization using an H-tree architecture, are also discussed in Ref. [315]. For recent reviews on the topic of QRAM, we refer to Refs. [316, 317, 318].
Nowadays, one of the main motivations for developing a QRAM is to realize a quantum oracle for coherently accessing the memory data,
\[\hat{O}_{\mathbf{x}}\sum_{j}c_{j}\ket{j}^{\text{(addr)}}\ket{0}^{\text{(b)}}= \sum_{j}c_{j}\ket{j}^{\text{(addr)}}\ket{x_{j}}^{\text{(b)}}, \tag{12}\]
where \(\mathbf{x}\) represents some data that needs to be accessed and \(\hat{O}\) is the oracle operation. After the oracle call, the bus qubit state contains the information of \(\mathbf{x}\), and it is entangled with the address qubits. Compared to a classical oracle, which only allows accessing each data element according to its classical address, this quantum oracle enables several algorithms to be implemented efficiently with fewer oracle calls, giving a quantum speedup. Examples include the Grover search algorithm [319], the quantum Fourier transform algorithm [38], and the HHL algorithm for linear algebra [108].
This quantum oracle can be realized by a QRAM. The classical data \(\mathbf{x}\) are preloaded into the QMC array according to their classical addresses, such that the QMCs can be described by the state \(\ket{\Psi^{\text{(QM)}}}=\otimes_{k}\ket{x_{k}}\). We then prepare the address state, \(\ket{\psi^{\text{(addr)}}}=\sum_{j}c_{j}\ket{j}\), and initialize the bus qubit in the state \(\ket{0^{\text{(b)}}}\). The bus qubit and the address qubits are sent to the quantum routing module in the QRAM, which allows the bus qubit to be coherently coupled with the QMCs with address \(addr=j\). To generate the output state of the quantum oracle in Eq. (12), a sequence of controlled-NOT gates is applied to the QMC qubits and the bus qubit, which leaves the state of the QMCs unchanged and not entangled with the bus qubit or the address qubits. Lastly, the bus qubit and the address qubits are returned by the quantum routing module.
We should stress that the above process is only one way to operate a QRAM, in which the information stored in each QMC is purely classical. The process disentangles the QMC qubits from the bus qubit after the CNOT gates. Furthermore, to implement the quantum oracle in Eq. (12), only the reading process of the QRAM is necessary. In addition, this data reading process does not rely on SWAP gates between the bus qubit and the QMC qubits. Next, we survey how a QRAM can be operated and what the outcomes are.
When operating a QRAM, a series of address qubits, together with the bus qubits, are input into the QRAM, and after the QRAM operation these qubits are returned. When the input address qubits are in a classical state, i.e., a state in the computational basis, the quantum routing module will guide the bus qubit to the single QMC with the corresponding address. So, when we need to perform RW operations on the QMCs, regardless of whether the QMC state is classical or quantum, the RW operation can be achieved by a SWAP gate. With classical address information, a RAQM can achieve the same functionality with less overhead in the quantum routing module design and operations. Therefore, if the address information is purely classical, using a QRAM does not seem to be necessary.
When the address information is quantum, we have briefly discussed how to read classical data out using a classical 'copy' operation, which can be implemented using CNOT gates between the bus qubit and the QMC qubits (or classically controlled Pauli-X gates on the bus qubit [303, 316]). However, a few questions remain unclear, e.g., what happens when the QRAM is in the writing mode, what the QRAM outputs are when the data is quantum, etc. In Refs. [303, 316] and Ref. [290], the reading and writing processes of QRAMs are briefly discussed. For the completeness of our discussion and to help answer these questions, we briefly go over the reading and writing processes of QRAMs and discuss the states of the address qubits, the bus qubits, and the QMC qubits. We further assume that a single bus qubit is enough (the word length is one qubit in each QMC), which can be easily generalized to cases with more bus qubits.
#### iii.3.1 QRAM reading classical data
When the QMCs store classical information, i.e., the state of each QMC is either \(\ket{0}\) or \(\ket{1}\), the data stored in the QMCs can be viewed as a binary vector \(\mathbf{x}\), where the \(j\)-th element is \(x_{j}\), which is stored in the state of the QMC qubit with address \(j\), labeled by \(\ket{x_{j}^{(j)}}\). The address qubits are in the state \(\ket{\phi^{\text{(addr)}}}=\sum_{j}c_{j}\ket{j}\), where \(c_{j}\) is the complex coefficient when the state is written in the computational basis. The bus qubit is initialized to \(\ket{0^{\text{(b)}}}\) state. The initial
Figure 6: The architecture of a QRAM is shown in (a). The key element of a QRAM is its quantum routing structure. A sketch of the quantum routing structure (up to three levels) is shown in (b). The address qubits and the bus qubits are sent through the routing systems from the top. The address qubits set the states of quantum switches. When the quantum switch is set to state \(\ket{L}\) (\(\ket{R}\)), the next incoming qubit is routed to the left (right) child switch.
state of the bus qubit, address qubits, and the QMCs is
\[\left|\phi^{\rm(addr)}\right\rangle\left|0^{\rm(b)}\right\rangle\left(\bigotimes \limits_{k}\left|x_{k}^{\rm(k)}\right\rangle\right)=\sum_{j}c_{j}\left|j\right\rangle \left|0^{\rm(b)}\right\rangle\left(\bigotimes\limits_{k}\left|x_{k}^{\rm(k)} \right\rangle\right) \tag{13}\]
After sending the address qubits and the bus qubit into the QRAM, according to the BB architecture, coherent paths that guide the bus qubit to the corresponding QMCs with address \(j\) are activated. Mathematically, in each term of Eq. (13), the bus qubit can interact with the QMC qubit with address \(j\). Because \(\text{CNOT}\left|x\right\rangle\left|0\right\rangle=\left|x\right\rangle\left|0\oplus x\right\rangle=\left|x\right\rangle\left|x\right\rangle\), where \(x=0,1\), a CNOT gate with the QMC qubit as the control can be applied to copy the classical data onto the bus qubit. After the CNOT gate, the state is transformed to
\[\sum_{j}c_{j}\left|j\right\rangle\left|0^{\rm(b)}\right\rangle\left(\bigotimes \limits_{k}\left|x_{k}^{\rm(k)}\right\rangle\right)\rightarrow\sum_{j}c_{j} \left|j\right\rangle\left|x_{j}^{\rm(b)}\right\rangle\left(\bigotimes\limits_ {k}\left|x_{k}^{\rm(k)}\right\rangle\right), \tag{14}\]
where the QMCs are still disentangled from the bus qubit and the address qubits, while the bus and the address qubits are entangled. The quantum oracle in Eq. (12) has thus been realized.
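The following minimal sketch (our own illustration, not a simulation of any specific QRAM hardware) reproduces this CNOT-based read at the amplitude level: for each address branch \(j\), the classical bit \(x_{j}\) is copied onto the bus qubit, yielding the entangled address–bus state of Eq. (14) while the QMC register factors out.

```python
import numpy as np

x = np.array([0, 1, 1, 0])                     # classical data in N = 4 QMCs
N = len(x)
c = np.full(N, 1 / np.sqrt(N), dtype=complex)  # address register sum_j c_j |j>

state = np.zeros((N, 2), dtype=complex)        # joint (address, bus) amplitudes
state[:, 0] = c                                # bus qubit initialized to |0>

out = np.zeros_like(state)
for j in range(N):                             # branch j: CNOT controlled by QMC j
    out[j, x[j]] = state[j, 0]

# out[j, b] is nonzero only for b = x_j, i.e. sum_j c_j |j>|x_j>, the oracle of Eq. (12);
# the QMC register remains |x_0 x_1 x_2 x_3> and factors out, cf. Eq. (14).
print(np.round(out, 3))
```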
To be more consistent with the quantum memory we have discussed so far, we consider using a SWAP gate between the connected QMCs and the bus qubit. In this case, the QMC qubit with address \(j\) is swapped with the bus qubit state, which gives
\[\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\text{(b)}}}\left(\ket{0^{(j)}}\bigotimes_{k\neq j}\ket{x_{k}^{(k)}}\right)=\left[\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\text{(b)}}}\left(\ket{0^{(j)}}\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{x_{k}^{(k)}}\right)\right]\otimes\left(\bigotimes_{k\notin\{addr\}}\ket{x_{k}^{(k)}}\right), \tag{15}\]
#### iii.3.2 QRAM reading quantum data

When the QMCs store quantum data, i.e., the QMC with address \(k\) holds a quantum state \(\ket{\psi_{k}^{(k)}}\), and a series of SWAP gates is applied to the QMCs and the bus qubit, rather than the CNOT gates used in the reading process above, the state is
\[\sum_{j}c_{j}\ket{j}\ket{\psi_{j}^{\text{(b)}}}\left(\ket{0^{(j)}}\bigotimes_{k\neq j}\ket{\psi_{k}^{(k)}}\right)=\left[\sum_{j}c_{j}\ket{j}\ket{\psi_{j}^{\text{(b)}}}\left(\ket{0^{(j)}}\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{\psi_{k}^{(k)}}\right)\right]\otimes\left(\bigotimes_{k\notin\{addr\}}\ket{\psi_{k}^{(k)}}\right), \tag{17}\]
where, again, the reading process leaves the bus and the address qubits entangled with the QMC qubits. If the state stored in the QMCs is entangled, then, depending on the entanglement that pre-exists in the QMCs, more QMC qubits can become entangled with the bus and address qubits, regardless of the quantum gates used in the reading process.
#### iii.3.3 Writing classical data into QRAMs
In the memory writing process, the bus qubit is prepared in an unknown quantum state and then sent into the memory module together with the address information. The memory module saves the information inside the memory medium according to the address information. However, QRAMs can take coherent address information, and the bus qubit can be entangled with the address qubits. Therefore, we assume the bus and the address qubits are initialized in the state \(\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\text{(b)}}}\), where \(x_{j}\in\{0,1\}\), and then sent into the QRAM for the writing process.
Provided the QRAM memory is initialized to the state \(\bigotimes_{k}\ket{0^{(k)}}\), because the information contained in the bus qubit is classical, we can use a CNOT gate controlled by the bus qubit to copy it to the QMC qubit. However, with coherent address information, the resulting state becomes
\[\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\text{(b)}}}\left(\ket{x_{j}^{(j)}}\bigotimes_{k\neq j}\ket{0^{(k)}}\right)=\left[\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\text{(b)}}}\left(\ket{x_{j}^{(j)}}\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{0^{(k)}}\right)\right]\otimes\left(\bigotimes_{k\notin\{addr\}}\ket{0^{(k)}}\right). \tag{18}\]
To understand this process, let us consider the case where the bus qubit is disentangled from the address qubits at the beginning, i.e., \(\ket{x_{j}^{\text{(b)}}}=\ket{x^{\text{(b)}}}\) is independent of the address \(j\). The state in Eq. (18) then becomes
\[\ket{x^{\text{(b)}}}\otimes\sum_{j}c_{j}\ket{j}\otimes\ket{x^{(j)}}\bigotimes _{k\neq j}\ket{0^{(k)}}, \tag{19}\]
which means the bus qubit is still disentangled from the rest of the system, and the bus qubit state is coherently saved to the QMCs with the addresses \(j\in\{addr\}\). However, this is different from keeping multiple copies of \(\ket{x}\) in QMCs with addresses \(j\in\{addr\}\), which results in
\[\ket{x^{\text{(b)}}}\sum_{j}c_{j}\ket{j}\left(\bigotimes_{k\in\{addr\}}\ket{x^ {(k)}}\right)\left(\bigotimes_{k\notin\{addr\}}\ket{0^{(k)}}\right),\]
where the QMC qubits are disentangled from the rest of the system. This state is different from Eq. (19). Similarly, in the general case shown in Eq. (18), the classical data is saved to the corresponding QMC coherently, which leaves all the qubits involved in this process entangled. This process is different from writing classical data one by one into the corresponding QMCs deterministically.
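To make this distinction explicit, the following small sketch (our own illustration) compares the coherently written state of Eq. (19) with the naive state in which \(\ket{x}\) is copied into every addressed QMC, for two QMCs, \(x=1\), and a uniform address superposition; the bus qubit, which factors out of Eq. (19), is omitted.

```python
import numpy as np

def basis(k):
    v = np.zeros(2, dtype=complex); v[k] = 1.0; return v

def addr_qmc_state(addr_bit, qmc0_bit, qmc1_bit):
    # joint state |addr> |QMC 0> |QMC 1>
    return np.kron(basis(addr_bit), np.kron(basis(qmc0_bit), basis(qmc1_bit)))

c = np.array([1, 1], dtype=complex) / np.sqrt(2)   # uniform address superposition
x = 1                                              # classical bit to write

# Eq. (19): the address register records *which* QMC received x
coherent_write = c[0] * addr_qmc_state(0, x, 0) + c[1] * addr_qmc_state(1, 0, x)

# naive "copies" state: both addressed QMCs hold x, address factors out
copies = c[0] * addr_qmc_state(0, x, x) + c[1] * addr_qmc_state(1, x, x)

print(abs(np.vdot(coherent_write, copies))**2)     # 0.0: the two states are distinct
```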
If the write operation is a SWAP gate, similar to using SWAP gates to read classical data out of a QRAM (see Sec. III.3.1), the writing process leaves the involved QMC qubits entangled with the address qubits. The outcome state is
\[\ket{0^{\text{(b)}}}\sum_{j}c_{j}\ket{j}\left(\ket{x_{j}^{(j)}}\bigotimes_{k \neq j}\ket{0^{(k)}}\right)=\ket{0^{\text{(b)}}}\left[\sum_{j}c_{j}\ket{j} \left(\ket{x_{j}^{(j)}}\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{0^{(k)}}\right)\right]\otimes\left(\bigotimes_{k \notin\{addr\}}\ket{0^{(k)}}\right). \tag{20}\]
#### iii.3.4 Writing quantum data into QRAMs
We now consider the case where the bus qubit is prepared in a quantum state and entangled with the address qubits, i.e., the state of the bus and the address qubits is
\[\sum_{j}c_{j}\ket{j}\ket{x_{j}^{\rm(b)}}=\sum_{j}c_{j}\ket{j}\left(b_{j,0}\ket{0^{ \rm(b)}}+b_{j,1}\ket{1^{\rm(b)}}\right),\]
where the coefficients \(b_{j,0}\) and \(b_{j,1}\) are in general nonzero. In this case, the writing result is similar to the reading process shown in Sec. III.3.2. If the writing operation is a CNOT gate controlled by the bus qubit, the outcome state is
\[\sum_{j}c_{j}\ket{j}\left(b_{j,0}\ket{0^{\text{(b)}}0^{(j)}}+b_{j,1}\ket{1^{\text{(b)}}1^{(j)}}\right)\left(\bigotimes_{k\neq j}\ket{0^{(k)}}\right)=\left[\sum_{j}c_{j}\ket{j}\left(b_{j,0}\ket{0^{\text{(b)}}0^{(j)}}+b_{j,1}\ket{1^{\text{(b)}}1^{(j)}}\right)\left(\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{0^{(k)}}\right)\right]\otimes\left(\bigotimes_{k\notin\{addr\}}\ket{0^{(k)}}\right), \tag{21}\]
where all the qubits involved in the process are entangled. While the writing process is performed using SWAP gates, the state is
\[\ket{0^{\text{(b)}}}\sum_{j}c_{j}\ket{j}\left(\left(b_{j,0}\ket{0^{(j)}}+b_{j,1}\ket{1^{(j)}}\right)\bigotimes_{k\neq j}\ket{0^{(k)}}\right)=\ket{0^{\text{(b)}}}\left[\sum_{j}c_{j}\ket{j}\left(\left(b_{j,0}\ket{0^{(j)}}+b_{j,1}\ket{1^{(j)}}\right)\bigotimes_{\begin{subarray}{c}k\in\{addr\}\\ k\neq j\end{subarray}}\ket{0^{(k)}}\right)\right]\otimes\left(\bigotimes_{k\notin\{addr\}}\ket{0^{(k)}}\right), \tag{22}\]
where the bus qubit is disentangled while the other qubits involved are entangled together.
#### iii.3.5 Comparison between different QRAM operation modes
In Table 5, we compare the four operation modes of a QRAM, where we specifically show the entanglement structure of the outcome state. We notice that, because of the coherent addressing, the address and the bus qubits are entangled with the quantum memory after the reading and writing queries, except in the case of reading classical data using CNOT gates (or any other method that copies the classical data to the bus qubit). The feature of generating entanglement between the address qubits and the QMC qubits inside the memory is unique to QRAM operations and is caused by the coherent addressing of the bus qubits to the quantum memory array. This feature can be useful for generating large-scale quantum entanglement. However, at the current stage of quantum memory research, to the best of our knowledge, there is no specific usage for these operations. Instead, reading classical data coherently out of the quantum memory using a CNOT-type classical copying operation realizes the quantum oracle in Eq. (12) while leaving the memory disentangled from the rest of the system, which has become the main application of QRAMs. In this sense, QRAMs can be used as quantum encoders of classical data in an I/O unit, or as an implementation of the quantum oracle in Eq. (12), rather than as quantum memory devices in the traditional sense.
When a QRAM is used in this mode, its operation has two stages: (1) classical data loading and (2) coherent addressing. In stage (1), the classical data (\(\mathbf{x}\)) is loaded into the quantum memory (or even into a classical memory module, as long as classically controlled gates on the bus qubits are available [303]), which can be represented as
\[W(\mathbf{x})\ket{0}^{\rm(QM)}\rightarrow\ket{\mathbf{x}}^{\rm(QM)}. \tag{23}\]
In stage (2), the bus qubit is initialized at \(\ket{0}\) state, and the address qubits are prepared. The bus and the address qubits are sent into the QRAM to coherently address the
\begin{table}
\begin{tabular}{l|l|l|l}
\hline \hline
 & & \multicolumn{2}{c}{RW operation} \\
 & & CNOT & SWAP \\
\hline
\multirow{2}{*}{Read} & Classical & addr, b & all \\
 & Quantum & all & all \\
\hline
\multirow{2}{*}{Write} & Classical & all & addr, QMC \\
 & Quantum & all & addr, QMC \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Comparison of different operation modes of QRAM. We consider the entanglement inside the output state. Specifically, we list the entangled qubits: ‘addr’ stands for the address qubits, ‘b’ for the bus qubit, ‘QMC’ for the QMC qubits in the memory, and ‘all’ means that all the qubits involved are entangled.
memory. This can be expressed by
\[R(\left|addr\right\rangle,\left|0\right\rangle)\left|\mathbf{x}\right\rangle^{ \left(\text{QM}\right)}\rightarrow\left|addr\,\&\,\mathbf{x}\right\rangle \otimes\left|\mathbf{x}\right\rangle^{\left(\text{QM}\right)}, \tag{24}\]
where the output state \(\left|addr\,\&\,\mathbf{x}\right\rangle\) represents the joint state of the address and the bus qubits, which realizes the quantum oracle in Eq. (12).
#### iii.3.6 Device requirements
As there is no experimental demonstration of a working QRAM yet, how to build a QRAM, and even how to lay out the different components in a QRAM, is still an open question and requires a lot of effort in material fabrication, control techniques, and device optimization. Instead of analyzing the performance of available quantum memory devices, we highlight a few desired properties that a QRAM should have to better fulfill its main usage, i.e., coherently addressing classical data.
When the QRAM is used to encode classical data into quantum states, which are then processed by quantum algorithms, e.g., in linear algebra operations and quantum machine learning algorithms, the classical data set can be enormous. In order to address the classical data efficiently, the corresponding quantum routing system also needs to be massive. This requires the QRAM to integrate a large number of quantum switches and memory qubits inside the quantum routing module. Secondly, in order to achieve the quantum advantage of the algorithms using the quantum oracle, it is necessary to have a fast oracle call. Therefore, each call of the QRAM needs to be fast, such that the quantum algorithms can still outperform their classical counterparts.
In addition, depending on the specific usage of the QRAM in different quantum algorithms, the latency requirement of the QRAM can differ. For example, if the QRAM is used to encode classical data into a quantum state, which is then processed by a deep quantum circuit, the latency of the QRAM is acceptable as long as the QRAM call is short compared to the circuit implementation time. On the other hand, if the quantum circuit is short, the QRAM calls need to have a small latency.
## IV Quantum memory functional units in quantum processing unit architecture
Nowadays, due to the rapid development of new fabrication technology, quantum manipulation, and quantum error correction and mitigation techniques, the field of quantum computing and quantum storage has been progressing rapidly and maturing. However, current quantum computing research, especially in the realm of designing and fabricating quantum computing devices, primarily focuses on integrating more quantum registers to demonstrate their performance. Simultaneously, due to limitations imposed by physical and practical conditions, it is extremely challenging to place millions or even billions of quantum registers within the same device and maintain high connectivity. On the other hand, as we have seen from our previous sections, quantum memory techniques are increasingly mature. Therefore, it becomes possible to consider how to utilize quantum memory as an essential component within future quantum processing units (QPU).
Drawing inspiration from the architecture of classical computers, in Fig. 7 we show the main components that can be contained in a future design of the QPU architecture. A detailed design of the QPU architecture is beyond the scope of our paper, and hence we present our vision of future QPUs without discussing the detailed design of each functional unit. We believe a future QPU will include the following functional units: (1) a quantum functional unit (QFU) for implementing quantum gate operations, (2) a quantum memory unit (QM) that can contain a quantum cache (Q-Cache) and a larger main quantum memory, (3) a quantum bus (Q-Bus) for quantum communication within a QPU, (4) a quantum input-output interface (QIO) for classical data loading, and (5) quantum network interface components (QNIC) for quantum data loading and communication.
The QFU is the central unit that implements quantum algorithms. It can contain a small number of quantum registers that support universal gate sets. When a quantum algorithm is performed, the quantum registers in the QFUs implement the quantum gates required by the algorithm. Because the size of the QFU is limited, quantum memory is needed. The main QM and the Q-Cache are two different memory modules that can store the quantum
Figure 7: The proposed architecture of a quantum processing unit (QPU). We envision that a QPU should include a quantum functional unit (QFU), quantum memory (QM), a quantum bus (Q-Bus) for communication within the QPU, a quantum input-output interface (QIO) unit for interfacing with classical data, and quantum network interconnect components (QNIC) for quantum data and quantum communication. A classical control unit is also necessary to control the different quantum modules in the QPU. The main memory components are colored blue.
information for later usage in the QFUs, which we will discuss in Sec. IV.1 and Sec. IV.2, respectively. As RW operations on the quantum memory devices require bus qubits, the Q-Bus is for quantum communication between different modules inside the QPU.
The QIO and the QNIC modules are responsible for communication between the QPU and other quantum and classical devices. The QIO module provides classical interfaces with other devices. Specifically, when classical data is prepared and needs to be processed by the quantum computer, the QIO unit is responsible for encoding the classical data into quantum states. In particular, we focus on the memory device in the QIO unit, where efficient loading of classical data into the QPU can leverage QRAMs, as discussed in Sec. III.3. The functionality of QRAMs inside the QPU architecture will be further discussed in Sec. IV.4.
In the QNIC components, we consider including a quantum communication interface with other quantum devices. For example, it can enable quantum communication with other quantum sensors, which can generate quantum data for the QPU to process. It can also enable coupling with other QPUs for distributed quantum computing tasks [298, 320] and communication with a quantum internet for long-range quantum communication with other quantum devices. One of the key components to ensure reliable quantum communication with other devices is quantum buffers, which we will discuss further in Sec. IV.3.
In the rest of the section, we direct our attention to the modules consisting of quantum memory. Particularly, we focus on the quantum memory units colored blue in Fig. 7, which include main quantum memory, quantum cache (Q-Cache), quantum buffer (Q-Buffer), and QRAM in the QIO. We deliberate on their utilities at the architecture level and their design requirements. Specifically, we evaluate their memory qubit coherence (M.Q.C), addressing coherence (A.C), qubit integration (Q.I.), read and write parallelization (RW Para.), and operation speed (O.S.). We will discuss each of the quantum memory functional units in the following sub-sections. In Table 6, we give a brief comparison between different types of quantum memory units in terms of these metrics.
### Quantum Memory Unit
Analogous to the role of memory in current classical computing architectures, quantum memory is one of the central components in our architecture design. In order to expand the capability of the QFU, it is necessary to use quantum memory to store the quantum information carried by qubits that are not immediately involved in the quantum operations.
Similar to classical memory, a quantum memory unit should support the operation of reading from and writing to a QMC with a given address. Specifically, coherently addressing multiple QMCs is not necessary. As we discussed in Sec. III.2, RAQM can be used to realize a quantum memory unit in the QPU architecture.
The quantum memory unit should also satisfy a few design requirements. The quantum memory unit is similar to a classical computer's main memory. As a quantum state may need to be stored in the quantum memory for an extended period of time, the quantum memory should have a low error rate to ensure the stored information remains faithful. Therefore, (1) the quantum memory unit should have a storage time that is much longer than the computing operation time, i.e., the ratio of the storage time to the operation time should be large. This is the most important requirement to fulfill. (2) The quantum memory unit should have a large number of QMCs integrated into the device while maintaining low cross-talk errors, as the quantum memory unit needs to store all the quantum information required in a quantum algorithm. (3) Ideally, the RW operation time of the quantum memory unit should be small. Furthermore, (4) enabling parallel addressing of QMCs would be beneficial for increasing the bandwidth of the RW operations while simultaneously keeping the addressability in the good regime (\(\gamma<1\)).
However, requirement (1) demands a long coherence time of the QMCs. In the NISQ era, where the QMCs are not error corrected, a QMC with a longer coherence time couples more weakly to the environment, which usually leads to a longer reading and writing time. In the FTQC era, where QEC is used, suppressing the error rate to increase the storage time necessitates a large code distance and more physical qubits, which can slow down the logical SWAP gate operations. In addition, a large integration of QMCs can occupy a relatively large physical space, which makes the quantum memory unit spatially separated from the QFU. All these factors tend to extend the RW time of the quantum memory. Therefore, to ensure the first two requirements, the latency requirement (\(\beta\)) can be slightly relaxed.
With the design requirements of the quantum memory unit in place, we then consider how to utilize this device in the upper layers of the stack. The reading and writing functions are similar to those of the RAQM discussed in Sec. III.2. Specifically,
\begin{table}
\begin{tabular}{l|l|l|l|l|l} & M.Q.C & A.C. & Q.I. & RW & O.S. \\ & & & & Para. & \\ \hline Main QM & 3 & Classical & 3 & 3 & 1 \\ Q-Cache & 2 & Classical & 1 & 2 & 3 \\ Q Buffer & 2 to 3 & Classical & 1 & 1 & 1 to 2 \\ QRAM & Classical & 3 & 3 & Seq. & 2 to 3 \\ \end{tabular}
\end{table}
Table 6: Comparison between the requirements of different quantum memory modules in the architecture of QPUs. Here we focus on: memory qubit coherence (M.Q.C), addressing coherence (A.C), large qubit integration (Q.I.), read and write parallelization (RW Para.), and read and write operation speed (O.S.). We rate them on a scale with a maximum score of 3, where 1 means a low requirement on the property or a low capability to achieve it. Seq. stands for sequential operations.
the quantum memory unit should have a classical input to take in the address information. The input port can take another quantum register, which carries the quantum information to be stored in the writing process, and which is loaded with the quantum information from the memory in the reading process. We stress that, unlike classical memory, where the reading and writing processes are unidirectional, i.e., the information is copied from and to the memory, accessing the QMCs is bidirectional in the quantum case. Therefore, there is no hard distinction between the reading and writing processes, and both can be represented by the model in Fig. 8. However, in order to better organize the programming and to highlight where the nontrivial quantum state is stored, it would be useful to have both read and write functions enabled, even though the underlying physical operations are essentially the same.
### Quantum Cache (Q-Cache)
The quantum cache (Q-Cache) is another quantum memory functional unit inside the architecture of QPUs. According to Sec. IV.1, the quantum memory unit can have a relatively long latency. In order to speed up quantum computation, analogous to classical computing systems, a quantum cache (Q-Cache) can be utilized. Specifically, if the quantum information carried by a certain qubit is visited relatively frequently, instead of swapping it to and from the main quantum memory every time an operation is done, the information can be stored inside a Q-Cache, which provides faster RW operations.
Therefore, for the purpose of speeding up quantum computation, the Q-Cache has a few design requirements. (1) The latency of the RW operations on a Q-Cache needs to be small, as required by the operation speed of a Q-Cache. In order to achieve this low RW latency, a Q-Cache can use bare qubits with shorter coherence times in a NISQ device, or QEC codes with smaller code distances in a fault-tolerant device, compared to those used in the quantum memory unit. Consequently, (2) a Q-Cache may have only moderately long storage times. We claim that the Q-Cache should satisfy
\[\alpha_{\text{ex, QC}}/\beta_{\text{QM}}=\frac{T_{\text{storage, QC}}\eta_{\text{QC}}}{T_{\text{RW, QM}}/\eta_{\text{QM}}}>r_{\text{threshold}}\sim 2, \tag{25}\]
where 'QC' labels the properties of the Q-Cache, and 'QM' stands for the quantum memory unit. This means that the storage time of the Q-Cache should be at least as long as the time needed to store and retrieve the quantum information from the quantum memory unit. If the quantum information carried by a qubit idles for longer than roughly twice the RW time of the main quantum memory, it is better stored in the main memory, as it will experience fewer errors there. In the actual design of a QPU, the threshold value \(r_{\text{threshold}}\) can be further optimized. In addition, to further improve the latency of the Q-Cache, the Q-Cache is expected to be located on the same chip as the QFUs or nearby. Due to this spatial limitation, (3) the number of QMCs can be small. On the other hand, to increase the communication bandwidth, (4) it is desirable to operate the RW of the Q-Cache in parallel through multiple banks, following classical memory designs.
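As an illustrative translation of this rule of thumb into code (our own sketch; the threshold and parameter values are hypothetical), a scheduler could decide per qubit whether to keep its state in the Q-Cache or park it in the main quantum memory:

```python
# Heuristic inspired by Eq. (25): cache a state only if its expected idle time is
# shorter than about r_threshold round trips to the main quantum memory.
def keep_in_cache(idle_time, t_rw_main_qm, eta_main_qm, r_threshold=2.0):
    return idle_time < r_threshold * (t_rw_main_qm / eta_main_qm)

print(keep_in_cache(idle_time=1e-6, t_rw_main_qm=5e-6, eta_main_qm=0.95))   # True
print(keep_in_cache(idle_time=50e-6, t_rw_main_qm=5e-6, eta_main_qm=0.95))  # False
```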
As the Q-Cache can also be implemented by RAQMs, the interface of a Q-Cache is similar to the quantum memory unit, which is discussed in Sec. IV.1. The interface of a Q-Cache can also be represented by Fig. 8.
### Quantum Buffer
The quantum buffer is another quantum memory functional unit inside the QPU architecture. In the process of implementing a quantum algorithm or quantum operation, some resource states are generated probabilistically. Therefore, it is necessary to include quantum buffers to store these states and retrieve them when they are requested. The quantum buffer can be widely used in quantum communication components, especially in the QNIC shown in Fig. 7. There are two possible applications: interfacing with quantum sensors and with quantum networks for quantum communication. Specifically, quantum sensors can prepare quantum states that encode the sensing information. The quantum states can be imported into the QPU for further processing. However, the quantum sensing process can be slow compared to the quantum computing cycles in the QFU, and various quantum sensors operate at different speeds, which necessitates quantum buffers to receive the quantum states and make them ready to be processed by the QFU.
Another application is to use a quantum buffer to buffer information from a quantum internet. Long-range quantum communication usually relies on entanglement generation and state teleportation [59; 61]. However, the remote entanglement generation, involving state purification and photon measurements, is probabilistic in nature. Therefore, when a qubit is successfully entangled with the remote quantum system, it can be stored in the quantum buffer for later communication use [320]. Notably, the usage of quantum buffers is not limited to QNIC units.
Figure 8: The interface of a quantum memory unit or a Q-Cache. The classical address information (\(addr\)) and a quantum bus register (labeled \(b_{\text{in}}\)) are input to the quantum memory; after interacting with the QMC at address \(addr\), the bus register state is output, labeled \(b_{\text{out}}\).
Indeed, whenever there is a need to store the probabilistically generated resource states, a quantum buffer can be utilized. One of the examples would be in the magic state distillation process, where the quantum buffer can store the generated high-fidelity magic states for later use in implementing surface code Toffoli gates.
To fulfill these implementations, quantum buffers are required to work between two quantum systems. Without loss of generality, one of the quantum systems can be viewed as the information saver, which generates the quantum information and saves it to the quantum buffer, while the other one is the information loader, which loads the quantum information depending on its processing needs. Therefore, there are two characteristic time periods: the time for generating the quantum state (\(T_{\text{g}}\)), set by the information saver, and the duration of state consumption (\(T_{\text{c}}\)), set by the information loader. It is therefore required that (1) the external storage ratio of the quantum buffer is large compared to the slower process, i.e.,
\[\alpha_{\text{ex, QB}}=\frac{T_{\text{storage}}\eta}{\text{max}(T_{\text{g}},T_{ \text{c}})}, \tag{26}\]
is a metric for a quantum buffer design and \(\alpha_{\text{ex, QB}}\gg 1\) ideally. On the other hand, (2) the latency of the quantum buffer needs to be small compared to the faster process, i.e., ideally,
\[\beta_{\text{QB}}=\frac{T_{\text{RW}}/\eta}{\text{min}(T_{\text{g}},T_{\text{ c}})}\ll 1. \tag{27}\]
(3) The integration of the quantum buffer does not need to be large: it is unnecessary to buffer a huge number of quantum states, as the oldest copies can be discarded. Ideally, wasting quantum states should be avoided, and hence (4) we require \(N\sim T_{\text{g}}/T_{\text{c}}\). The parallelization feature may not be required and depends on the requirements on the consumption side. If multiple states can be consumed simultaneously, parallelizing the reading process is needed. However, simultaneously reading from and writing to the quantum buffer should not be allowed.
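To make the two figures of merit concrete, the short Python sketch below evaluates \(\alpha_{\text{ex, QB}}\) and \(\beta_{\text{QB}}\) of Eqs. (26) and (27) for a candidate buffer design; all numerical values are illustrative assumptions rather than device data, and the function name is our own.

```
def buffer_metrics(t_storage, t_rw, eta, t_gen, t_cons):
    """External storage ratio and latency metric of a quantum buffer.

    t_storage : storage (coherence) time of the buffer QMCs
    t_rw      : read/write latency of the buffer
    eta       : storage/RW fidelity
    t_gen     : state generation period T_g (information saver)
    t_cons    : state consumption period T_c (information loader)
    """
    alpha_ex = (t_storage * eta) / max(t_gen, t_cons)   # Eq. (26), want >> 1
    beta = (t_rw / eta) / min(t_gen, t_cons)            # Eq. (27), want << 1
    n_cells = max(1, round(t_gen / t_cons))             # requirement (4): N ~ T_g/T_c
    return alpha_ex, beta, n_cells

# Illustrative numbers (arbitrary units): slow probabilistic generation,
# fast consumption by the QFU.
alpha_ex, beta, n_cells = buffer_metrics(t_storage=1e4, t_rw=1.0, eta=0.95,
                                         t_gen=100.0, t_cons=10.0)
print(f"alpha_ex = {alpha_ex:.1f}, beta = {beta:.3f}, suggested N = {n_cells}")
```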
Since the metrics of quantum buffers include generation and consumption times, quantum buffers can be designed for quantum tasks differently. Notably, even for the same task, both the generation time \(T_{\text{g}}\) and the consumption time \(T_{\text{c}}\) can depend on the algorithms and protocols. For example, in the task of quantum communication between the current QPU and a remote quantum device, the entangled state purification protocols can have different yield and time [298], which can affect \(T_{\text{g}}\). Therefore, given quantum buffer properties, optimizing the algorithms on both generation and consumption sides to satisfy the above requirement, as well as co-design the quantum hardware and algorithms, can be interesting directions to proceed [320].
The quantum buffer is slightly different from the memory. In most cases, a single quantum buffer stores a certain type of quantum state, while each state may have different state fidelity. The state can be constantly generated with different time intervals and can be requested from other quantum units in the other quantum function units. The random access feature is not necessary for a quantum buffer. Instead, a quantum buffer can be constructed by an array of QMCs with, for example, the first-in-first-out (FIFO) policy. In Fig. 9a, we show the interface of a quantum buffer. The quantum buffer can take in a single bit of classical data for the instruction of reading or writing operations. It also takes a quantum register to interact with the quantum buffer to store or retrieve the quantum information. The quantum buffer can return the bus register along with a single bit of classical data to show whether the query operation is successful. The state of the returned bus register can depend on whether the query is successful or not.
In Fig. 9b, we show the reading process, which is indicated by the RW bit being set to \(0\). In the reading process, the state stored inside the quantum buffer is requested by other QPU modules, and the bus register is set to \(|0\rangle\). If there are quantum states stored in the quantum buffer that are available to be retrieved, the reading query is successful, returning the value \(1\) in the output 'S/F' bit. The bus register swaps the stored state out, labeled as \(|S\rangle\). On the other hand, if there are no available quantum states inside the quantum buffer, the reading query fails, with S/F returning \(0\). This is similar to an underflow situation in a classical buffer. If this happens, the bus register is returned in state \(|0\rangle\) without interacting with the buffer QMCs.
The writing process of a quantum buffer is shown in Fig. 9c. In the writing mode, a quantum state carried by the bus register must be stored in the quantum buffer. When the quantum buffer is not full, i.e., not all quantum memory registers are used in the quantum buffer, the control unit of the quantum buffer will locate the unused memory registers, and the bus register state can be successfully stored by swapping its state to this memory
Figure 9: The interfaces of the quantum buffer are shown in (a), while the reading and write processes are shown in (b) and (c), respectively.
register. The S/F bit will output \(0\), while the bus register is set back to state \(|0\rangle\), which corresponds to the original state of the memory register. On the other hand, if the quantum buffer is already full, to be consistent with the reading process, the quantum buffer will return \(1\) in the S/F output bit, while the bus register is held and does not interact with the memory registers. This corresponds to a classical overflow situation.
We note that although in our design shown in Fig. 9, we consider sequentially reading and writing quantum states of a single bus register, reading multiple registers in the same query of quantum buffer should be supported. In this case, the quantum buffer can take multiple bus registers as input, depending on its specific implementation. When the underflow or overflow situation happens, the quantum buffer should hold the reading and writing queries until all the states or storage quantum memory registers are ready.
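The read/write semantics described above can be summarized by a purely classical bookkeeping model. The Python sketch below tracks only which QMCs are occupied and what the interface returns; the state labels stand in for quantum states, no quantum dynamics is simulated, and the class is our own illustrative construction.

```
from collections import deque

class QuantumBufferModel:
    """Classical bookkeeping model of the FIFO quantum buffer interface (Fig. 9).

    Following the text, the returned S/F bit is 1 whenever the bus register
    leaves the buffer carrying a nontrivial state (successful read, or a write
    rejected because the buffer is full), and 0 otherwise.
    """

    def __init__(self, n_cells):
        self.n_cells = n_cells
        self.cells = deque()                  # FIFO of stored state labels

    def query(self, rw_bit, bus_state="|0>"):
        if rw_bit == 0:                       # reading mode
            if self.cells:                    # state available: swap it out
                return 1, self.cells.popleft()
            return 0, "|0>"                   # underflow: bus returned in |0>
        else:                                 # writing mode
            if len(self.cells) < self.n_cells:
                self.cells.append(bus_state)  # swap bus state into a free QMC
                return 0, "|0>"               # bus reset to |0>
            return 1, bus_state               # overflow: bus keeps its state

buf = QuantumBufferModel(n_cells=2)
print(buf.query(1, "|S1>"))   # (0, '|0>')  write succeeds
print(buf.query(1, "|S2>"))   # (0, '|0>')
print(buf.query(1, "|S3>"))   # (1, '|S3>') overflow, state not stored
print(buf.query(0))           # (1, '|S1>') FIFO read returns the oldest state
```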
### QRAM in QIO
As mentioned in Sec. III.3, QRAMs work distinctly from the other quantum memory units. In our architecture design, QRAMs can be used in the QIO unit, where the classical data is loaded to the QRAM while, according to the algorithms, a set of address qubits is prepared and sent into the QRAM as the address information, which coherently addresses the classical data and prepares a bus-address entangled state as the output. As discussed in Sec. III.3.6, a few design requirements must be met in order to achieve this goal efficiently.
In the architecture stack, the QRAM can be built in as a general-purpose device for encoding classical data into quantum states and quantum compression of classical data. In addition, QRAMs can also be built within special function units that can perform algorithms with quantum speedups. For example, a QRAM can be built into the special function units for the Grover search algorithm [319], which can speed up the database search. The main design requirement of QRAMs lies in having fast and reliable queries. In the framework of quantum memory devices, it is equivalent to small reading latency, where
\[\beta_{\text{QRAM}}=\frac{T_{\text{R}}/\eta}{T_{\text{Circ}}}\ll 1, \tag{28}\]
where \(T_{\text{Circ}}\) is the time for implementing the quantum operations between two QRAM queries. On the other hand, the QRAM is designed to interact with classical data, which is usually in a large size. Therefore, to have a compact integration is greatly important.
When a QRAM is used as an interface between classical data and quantum devices, in Fig. 10, we show an abstracted model for a QRAM. The interface of a QRAM is shown in Fig. 10a. A QRAM can have two modes: one mode is to load classical data into the memory, while the other mode is to coherently address the classical data. A QRAM can take a single bit of classical data as the mode specification (labeled as 'R/W'). In addition, a QRAM can also take classical data as an input, or take a set of quantum address registers and a bus register as inputs.
In the classical data loading mode, where the R/W classical register is set to \(1\), classical data \(\vec{x}\) is sent to the QRAM. The QRAM loads the classical data \(\vec{x}\) into its memory components. In order to use the QRAM to quantum encode the classical data, or as a quantum oracle as shown in Eq. (12) in quantum algorithms, the R/W classical register is set to \(0\), which indicates that the QRAM is in the quantum query mode. In this mode, the QRAM takes a set of quantum registers, which include address and bus registers. After a QRAM read query, the address and bus registers are returned. Their states are now changed according to the classical data \(\vec{x}\) that has been stored inside the memory, following Eq. (12).
## V Quantum memory programming model
With quantum memory units available, it should be possible to utilize quantum memory modules in future quantum programs, which enables quantum-memory-aware program design. Currently, quantum programs at the assembly level are described as quantum circuits using OpenQASM [321, 322]. QASM focuses on representing quantum circuits, including initializing and resetting quantum registers, applying quantum gates, performing measurements, and classically controlled gate operations. However, present quantum assembly languages such as QASM are all centered around quantum registers, overlooking quantum memory. Therefore, we investigate possible changes in the current gate-based quantum programming languages and APIs to incorporate quantum memory.
Figure 10: The model of a QRAM module. The interface of a QRAM is shown in (a). The QRAM classical data loading process is shown in (b), while the QRAM query process is in (c).
In order to adapt quantum assembly languages, such as QASM, for quantum memory utilities, we propose to introduce four quantum-memory-related primitives:
1. \(\mathtt{mem}\ size\). It declares the quantum memory requirements used in the corresponding program, specifying the total number of quantum memory cells (\(size\)). It should be used at the beginning of an assembly code. When the quantum program is executed on a quantum device, the required memory size will be passed to the quantum hardware controller to check if the hardware can support the required memory size.
2. \(\mathtt{ld}\ q=[addr]\). It loads the quantum state stored in the quantum memory to the corresponding quantum register or qubits. When the argument \(q\) is a qubit, the state of the QMC with address \(addr\) will be loaded to the qubit, while the QMC is reset. If the argument \(q\) is a quantum register with \(n\) qubits, the states of QMCs starting from address \(addr\) are loaded to the corresponding register, while these QMCs are reset.
3. \(\mathtt{st}\ [addr]=q\). It stores the quantum state carried by the quantum register or qubit to the quantum memory cells with address \(addr\). Similar to the primitive \(\mathtt{ld}\), the argument \(q\) can be either a qubit or a quantum register.
4. \(\mathtt{mreset}\ addr\). It resets the quantum memory cell with address \(addr\). If the \(addr\) is missing, it will reset all the quantum memory cells declared.
With the four basic primitives to operate on quantum memory, more complex quantum memory management strategies can be implemented. The interfaces of the quantum buffer and quantum cache can be implemented as APIs, contained in quantum libraries for operating the specific quantum devices.
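As an illustration of how these primitives could be tracked at the software level, the following Python sketch models the swap semantics of \(\mathtt{ld}\) and \(\mathtt{st}\) purely as bookkeeping of which location currently holds which (opaque) state label; it is not a quantum simulator, and the class and method names are our own.

```
class RAQMModel:
    """Toy bookkeeping model of the RAQM primitives mem/ld/st/mreset."""

    def __init__(self):
        self.cells = []                        # quantum memory cells

    def mem(self, size):                       # mem size
        self.cells = ["|0>"] * size

    def ld(self, register, addr):              # ld q = [addr]
        # Load: swap the stored states into the register, resetting the QMCs.
        n = len(register)
        register[:], self.cells[addr:addr + n] = (
            self.cells[addr:addr + n], ["|0>"] * n)

    def st(self, addr, register):              # st [addr] = q
        # Store: swap the register states into the QMCs, resetting the register.
        n = len(register)
        self.cells[addr:addr + n], register[:] = (
            list(register), ["|0>"] * n)

    def mreset(self, addr=None):               # mreset addr
        if addr is None:
            self.cells = ["|0>"] * len(self.cells)
        else:
            self.cells[addr] = "|0>"

m = RAQMModel()
m.mem(4)
q = ["|psi0>", "|psi1>"]      # a 2-qubit register holding some state
m.st(0, q)                    # st [0] = q
print(m.cells, q)             # ['|psi0>', '|psi1>', '|0>', '|0>'] ['|0>', '|0>']
m.ld(q, 0)                    # ld q = [0]
print(m.cells, q)             # ['|0>', '|0>', '|0>', '|0>'] ['|psi0>', '|psi1>']
```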
On the other hand, QRAM is a different type of quantum function unit distinct from quantum memory. In order to operate QRAMs in the assembly language, we proposed to include the following three primitives.
1. \(\mathtt{qram}\ \mathtt{name}[addr\_len,word\_len]\). It declares the QRAM needed in the program, and gives it a name name. The declaration requires two arguments: \(addr\_len\) specifies the number of address qubits needed to address all the memory registers, and \(word\_len\) specifies the word length of each memory register, i.e., the number of memory cells consisting of a single register.
2. \(\mathtt{qinit}\ \mathtt{name}\ [x]\). It writes classical data into the QRAM. The classical data needs to be coherently addressed via the QRAM. The argument \(x\) should be a classical array of registers. The size should be compatible with the QRAM declared in the code.
3. \(\mathtt{qld}\ \mathtt{name}(q\_b)[q\_addr]\). It performs coherent addressing of the content in the QRAM with the quantum address that is represented by the state of the address qubits \(q\_addr\). The quantum register \(q\_b\) is also given as the bus qubit.
These three primitives provide the essential functionalities of QRAM operations, which can be utilized to construct more sophisticated quantum programs and libraries, including realizing Grover search, quantum datalookup, etc. In Table 7, we summarize the primitives we proposed here.
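At the level of state vectors, the intended effect of a qld query can be written out explicitly: an address superposition \(\sum_{i}\alpha_{i}|i\rangle|0\rangle\) is mapped to \(\sum_{i}\alpha_{i}|i\rangle|x_{i}\rangle\). The NumPy sketch below constructs this output state classically for a one-bit word length; it is a sanity check of the target state, not a simulation of any QRAM hardware, and the function name is our own.

```
import numpy as np

def qram_query_state(address_amplitudes, data_bits):
    """Return the bus-address state sum_i alpha_i |i>|x_i> as a state vector.

    address_amplitudes : complex amplitudes alpha_i over the addresses
    data_bits          : classical bits x_i stored in the QRAM (word length 1)
    """
    alpha = np.asarray(address_amplitudes, dtype=complex)
    alpha = alpha / np.linalg.norm(alpha)          # normalize the address register
    n_addr = len(alpha)
    out = np.zeros(2 * n_addr, dtype=complex)      # ordering: |address> (x) |bus>
    for i, bit in enumerate(data_bits):
        out[2 * i + bit] = alpha[i]                # bus qubit set to x_i
    return out

# Uniform superposition over 4 addresses querying the data x = (0, 1, 1, 0).
state = qram_query_state(np.ones(4), [0, 1, 1, 0])
print(np.round(state, 3))
```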
In addition to the primitives to initiate and operate the QRAM, as a QRAM can load classical data into its memory media, we feel it is necessary to include array operations on classical data, which is recently supported by OpenQASM 3.0 [322]. With the quantum memory enabled, array operations on qubits and quantum registers are also necessary. Therefore, we propose to include slice operations on quantum registers and quantum arrays. For example,
```
qubit[4] q;
mem 4;
st [0] = q[1:];
```
Listing 1: Code example for Fig. 11
```
OPENQASM 3;
gate cr(n) c, t {
    angle θ = 2π/power(2, n);
    ctrl @ U(0, 0, θ) c, t;
}

qubit[4] q;
qubit[1] b;
qubit[1] aux;
bit[2] caux;

// an example vector with binary values
bit[16] vec = [0,1,1,0,0,1,1,0,1,0,1];

mem 4;           // specify the memory requirement
qram qr[4,1];    // specify the QRAM requirement
qinit qr [vec];  // load the data into QRAM

h q;
ld qram qr q b;  // use QRAM
// save q into memory waiting for the measurement on aux qubit
st [0] = q;

cx b aux;
measure aux -> caux[0];

if (caux[0] == 1) {
    ld q = [0];
    ld qr(b)[q];  // use QRAM

    measure b -> caux[1];
    st [0] = q[1:];

    int j = 0;
    for i in [0:2] {
        h q[i];
        j = i + 1;
        while (j < 4) {
            if (i == 0) ld q[j] = [i];
            cr(j+i+1) q[j], q[i];
            j = j + 1;
        }
        st [i] = q[i];
    }

    h q[3];
    st [3] = q[3];
}
```
To describe the functionalities of the quantum memory and QRAM primitives, we include an example of the modified QASM code for amplitude encoding the classical data and then performing the quantum Fourier transform. The input data is a classical binary vector vec. The quantum circuit is shown in Fig. 11. We consider a probabilistic amplitude encoding process [316]. The encoding succeeds when the measurement of the auxiliary qubit yields \(|1\rangle\). The quantum Fourier transform (QFT) sequence uses controlled-\(R_{k}\) gates, where
\[R_{k}=\left[\begin{array}{cc}1&0\\ 0&e^{i2\pi/2^{k}}\end{array}\right], \tag{29}\]
where \(k\) is an integer [38]. The outcome state is saved in the quantum memory with addresses 0 to 4.
In addition to the primitives we proposed to include, we stress that including quantum memory devices can also impact the design of quantum transpilers used at higher programming levels. For example, in the current stage, a quantum algorithm is written in terms of its quantum circuit, which needs to be transpiled into a QASM code based on standardized gate operations by gate decomposition and simplification. The OpenQASM code can then be converted into instructions (like the control pulses to control the physical qubits) that can directly interact with the quantum hardware. When quantum memory is available in the architecture of quantum computing systems, a memory-aware quantum transpiler is needed to manage the quantum memory usage and optimize the performance of quantum algorithms.
Higher-level quantum programming languages are also expected to coordinate with the presence of quantum memory devices. For example, quantum software development toolkits, such as Qiskit [323], should include libraries that support RAQM and QRAM operations, quantum memory management, as well as using QRAMs to speed up quantum algorithms, such as the Grover search, quantum Fourier Transform, etc. With the memory-device-aware transpilers available, these libraries can be transpiled to QASM codes that can operate quantum memory units to leverage their functionalities. On top of that, a more sophisticated quantum algebra library taking advantage of quantum-memory-aware optimization and QRAMs can be fully built for general-purpose quantum computers.
Furthermore, as addresses are introduced to operate on the quantum memory, arithmetic calculations on the quantum memory addresses should also be supported, while more sophisticated quantum data structures can be introduced into the quantum programming languages,
Figure 11: The amplitude encoding and QFT on the encoded state. The red arrows show the loading quantum states from and saving to the quantum memory.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline Device & Code Format & Meaning \\ \hline RAQM & mem \(size\) & declare memory usage \\ & ld \(q\) = [\(addr\)] & load quantum memory to q-register \\ & st [\(addr\)] = \(q\) & store q-register to memory \\ & mreset \(addr\) & reset the memory \\ \hline QRAM & qram name[\(addr\_len,word\_len\)] & declare the usage of a QRAM \\ & qinit name [\(x\)] & load classical data to QRAM \\ & qld name(\(q\_b\))[\(q\_addr\)] & coherently address the QRAM \\ \hline \hline \end{tabular}
\end{table}
Table 7: A summary of quantum memory primitives to be included in QASM.
e.g., quantum array. In addition, it is necessary to include pointers for quantum memory to enable dynamical memory allocation and efficient data access.
## VI Quantum memory applications
Quantum memory is a crucial component in quantum computing systems. We explore the use of quantum memory in the architecture of a single quantum processing unit (QPU), but its potential extends far beyond that. Quantum memory can find various applications in quantum computing and quantum information processing tasks.
In our discussion, quantum memory is usually used for saving quantum information. One example is to use the QMCs in quantum memory modules to store a multi-qubit quantum state. With a large integration of QMCs, a quantum database can be built. A huge number of QMCs can be utilized to store a large quantum state or numerous copies of intermediate-size quantum states. However, a quantum database that only consists of quantum memory for quantum information storage is not completely feasible. One limitation is from the quantum no-cloning theorem. The quantum state stored in the quantum database cannot be copied. When the quantum information is requested and measured, the information is destroyed and no longer stored in the database. Conversely, in a classical database, when the information is requested, it will not destroy the information stored in the database. Therefore, we envision that a pure quantum database is not suitable for permanent information storage. Instead, a quantum data center [290] in the future can consist of both classical database and quantum memory, such that the everlasting information can be stored by a classical method in the classical memory, while the quantum memory can be used to store quantum information, either encoded from the classical information for further processing, or received from other quantum parties to be processed with the classical information.
However, encoding classical information into quantum states can compress the data for quantum processing. For example, using amplitude encoding, a classical binary data string of length \(2^{N}\) can be encoded into a quantum state of \(N\) qubits. Fortunately, this encoding can be obtained efficiently with QRAMs [316]. After encoding the classical data into quantum states, it can be efficiently utilized in quantum algorithms, such as quantum machine learning [324; 325]. Furthermore, the quantum encoding of classical information can also make remote information processing more efficient. For example, if a client only needs to compare data of size \(2^{N}\) on their end with a chunk of data stored in the data center, classically the full data needs to be sent for comparison. With the help of the swap test [326], encoding the classical data into quantum states and sending a copy of the encoded quantum states for comparison only requires sending \(N\) qubits. If the loss of the quantum network and the imperfections can be well controlled, then for a given required accuracy only polynomially many copies of the quantum state are needed.
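The communication saving can be quantified with elementary counting. The snippet below computes, for a hypothetical data size, the number of qubits needed for amplitude encoding, together with a standard Hoeffding-type estimate of how many swap-test repetitions are needed to estimate the overlap to a given additive precision; the chosen numbers are purely illustrative and the function is our own sketch.

```
import math

def swap_test_resources(data_length, precision=0.05, confidence=0.99):
    """Qubits for amplitude encoding and swap-test repetitions for the overlap.

    The swap test accepts with probability 1/2 + |<psi|phi>|^2 / 2, so the
    overlap is estimated from a Bernoulli mean; Hoeffding's inequality gives
    the number of repetitions for additive error `precision` on that mean.
    """
    n_qubits = math.ceil(math.log2(data_length))        # amplitude encoding
    delta = 1.0 - confidence
    repetitions = math.ceil(math.log(2.0 / delta) / (2.0 * precision ** 2))
    return n_qubits, repetitions

n_qubits, reps = swap_test_resources(data_length=2 ** 30)
print(f"classical bits: {2**30}, qubits per copy: {n_qubits}, "
      f"swap-test repetitions: {reps}")
```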
Furthermore, as pointed out in Ref. [290], with the help of quantum communication protocols, a quantum data center can also help improve communication privacy. In Ref. [290], Liu _et al._ proposed a protocol for multi-party private quantum communications, which ensures the privacy of the communication between multiple untrusted parties.
Other than quantum data centers, quantum buffers can be used in quantum networks and quantum internet. A quantum buffer can be combined with quantum repeaters in the communication nodes, while the entangled states can be stored for quantum communication and purification. The initial application of quantum buffer in quantum networks has been discussed in Ref. [320].
Therefore, we believe that a quantum data center combined with quantum internet and quantum communication can be an excellent application example for quantum memory and QRAM devices. With these applications, we believe quantum memory research will be increasingly attractive to research in related fields.
In addition, an interesting question with the development of quantum memory devices for quantum communication is how to verify the device is authentic. In Ref. [327], Rosset _et al._ proposed a protocol to verify the quantum storage of the quantum memory devices, developed on top of the protocol in Ref. [328]. A resource theory for quantum memory is also constructed in Ref. [327]. This protocol has been demonstrated in experiments [329; 330; 331]. This protocol is also recently generalized to continuous-variable quantum memories [332]. With the large-scale quantum memory devices available, how to verify the quantum memory and perform verification protocols more efficiently on all quantum memory devices on a quantum internet is still a valuable question for quantum memory research.
## VII Conclusion and outlook
In the rapidly evolving landscape of quantum computing and quantum information science, building efficient and reliable quantum memory devices stands as a cornerstone for unlocking the full potential of quantum technologies. In this paper, we survey different aspects of quantum memory, from the physical systems for quantum memory materials to quantum memory devices, e.g., the RAQM and QRAM. We envision that the rapid development of quantum computing systems and the improvement of various physical systems enable more complicated and high-performance memory devices, which provide the opportunity to investigate the higher-stack design of quantum computing systems, especially focusing on the pivotal role of quantum memory in quantum processing units. We then discuss the quantum memory architecture design and the corresponding programming model, as well as their possible applications. Specifically, we point out the possible memory modules inside the QPU architecture, and define their software-oriented functionalities. We hope this article can present not only the status of the research in the hardware and device stacks of quantum memory to higher-stack researchers but also a view of quantum memory devices from the top stacks and show the hardware researchers the possible directions to improve their devices. We believe that it is time to start considering quantum memory and its role in quantum computing systems. We hope that our article helps to motivate people to enter this exciting field.
## VIII Acknowledgement
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704, (Basic Energy Sciences, PNNL FWP 76274). The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
|
2309.07328 | Electronic and spin transport in Bismuthene with magnetic impurities | Topological insulators have remained as candidates for future electronic
devices since their first experimental realization in the past decade. The
existence of topologically protected edge states could be exploited to generate
a robust platform and develop quantum computers. In this work we explore the
role of magnetic impurities in the transport properties of topological
insulators, in particular, we study the effect on the edge states conductivity.
By means of realistic $\it{ab}$ $\it{initio}$ calculations we simulate the
interaction between magnetic adatoms and topological insulators, furthermore,
our main goal is to obtain the transport properties for large samples as it
would be possible to localize edge states at large scales. | Armando Pezo, Felipe Crasto de Lima, Adalberto Fazzio | 2023-09-13T21:53:38Z | http://arxiv.org/abs/2309.07328v1 | # Electronic and spin transport in Bismuthene with magnetic impurities
###### Abstract
Topological insulators have remained candidates for future electronic devices since their first experimental realization in the past decade. The existence of topologically protected edge states could be exploited to generate a robust platform and develop quantum computers. In this work we explore the role of magnetic impurities in the transport properties of topological insulators; in particular, we study their effect on the edge-state conductivity. By means of realistic _ab initio_ calculations we simulate the interaction between magnetic adatoms and topological insulators; furthermore, our main goal is to obtain the transport properties of large samples, in which localization of the edge states becomes possible at large scales.
## I Introduction
Topological materials have brought new possibilities for the development of new technologies, particularly spin-based electronic devices. After the theoretical predictions [1; 2], they were experimentally observed by means of transport measurements [3]. Subsequent experimental characterization was mainly based on angle-resolved photoemission spectroscopy (ARPES) techniques [4]. However, performing electronic transport experiments faces several difficulties related to the (i) substrate, (ii) temperature effects and (iii) topological energy gap. Surpassing these three difficulties, one of the main candidates to realize the quantum spin Hall effect (QSHE) at room temperature is bismuthene [5]. Due to its large spin-orbit coupling (SOC), bismuthene and Bi-based materials serve as good candidates for spintronic and valleytronic applications [6; 7; 8; 9; 10].
A great deal of interest in topological insulators arises from the bulk-boundary correspondence [11; 12; 13; 14], which relates the non-trivial bulk band topology to the existence of metallic surface states through a topological invariant [15]. Within the class of different topological phases, the quantum spin Hall (QSH) effect, arising in a finite 2D system, presents a spin-momentum locking for the edge states [16; 17; 18]. However, impurities and adatoms in the system have been shown to modify this edge-state character [19], while retaining the topological protection against backscattering for impurities that do not mix the edge-state spins. The spin-momentum locking dictates that the edge-state spins are aligned with the direction perpendicular to the nanoribbon plane, pointing up or down depending on whether the edge-state momentum velocity is positive or negative. Only impurities with a magnetic axis not aligned with this spin direction can introduce spin-mixing terms [20; 21].
This scattering is consistent with the field theoretical description of the 1-dimensional edge states, which experience a perturbation in terms of the magnetic impurity moment modeled by \(H^{\prime}=J\vec{\sigma}\cdot\mathbf{S}\delta(x-x^{\prime})\)[22; 23; 24; 25], being \(J\) the strength of the interaction between the helical edge states and a magnetic moment with spin \(\mathbf{S}\). It is worth noticing that this perturbation acts only locally, so it's expected that this interaction is spatially limited to a region surrounding the magnetic adatom. Therefore one could expect that magnetic adatoms will have an impact whenever they participate on measurements of realistic samples.
In this paper, we study the electronic transport of topological insulators interacting with adatoms by means of transport simulations; in particular, our focus is on magnetic adatoms, which break time-reversal symmetry and thereby allow back-scattering between helical states at the same edge. We carry out density functional theory (DFT) calculations and perform transport calculations through non-equilibrium Green's functions (NEGFs), using the converged _ab initio_ Hamiltonians as inputs. A decimation approach is employed, which allows us to construct our scattering region from different small building blocks containing magnetic adatoms located randomly along the material.
## II Methods
We use a plane-wave basis set to obtain the optimized structures along with a localized basis for the electronic transport. In both cases we used the Perdew-Burke-Ernzerhof [26; 27] exchange-correlation functional. We performed the geometry optimizations with the plane-wave basis as implemented in the Vienna _Ab-initio_ Simulation Package (VASP) [28; 29]. For the simulations, we employ \(350\,\mathrm{eV}\) for the plane-wave expansion cutoff. The ionic potentials were described using the projector augmented-wave (PAW) method [30] with a force convergence criteria of \(0.005\,\mathrm{eV}\mathrm{/}\mathrm{\SIUnitSymbolAngstrom}\).
The transport calculations were performed using the Hamiltonian matrix obtained directly from SIESTA [31], using a single-zeta plus polarization basis. We used a real space mesh cutoff energy of \(350\,\mathrm{Ry}\), sampling the reciprocal space with \(10\)\(\vec{k}\)-points along the periodic direction of our nanoribbons. We added \(20\,\mathrm{\SIUnitSymbolAngstrom}\) of vacuum for both non-periodic directions and the nanoribbon edges were hydrogen-passivated. The SOC is introduced using the on-site approximation [32]. The Hamiltonian (\(\hat{H}\)) and
overlap (\(\hat{S}\)) matrices are obtained after performing a full self-consistent cycle.
The NEGFs electronic transport problem consists in solving the Hamiltonian below [33]
\[\hat{H}=\begin{pmatrix}\hat{H}_{L}&\hat{H}_{C}&0&\dots&0\\ \hat{H}_{C}^{\dagger}&\hat{H}_{i}&\hat{H}_{C}&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&\hat{H}_{C}^{\dagger}&\hat{H}_{j}&\hat{H}_{C}\\ 0&\dots&0&\hat{H}_{C}^{\dagger}&\hat{H}_{R}\end{pmatrix} \tag{1}\]
where \(\hat{H}_{L}\) and \(\hat{H}_{R}\) are the Hamiltonian matrices describing the electrodes while \(\hat{H}_{C}\) are the coupling between the leads and the central region, and \(\hat{H}_{i}\)'s are the matrices forming the total scattering region (SR). The SR Green's function [34; 35] is given by
\[G_{M}^{Ret}(E)=(\epsilon^{+}\hat{S}_{M}-\hat{H}_{M}-\Sigma_{L}^{Ret}(E)- \Sigma_{R}^{Ret}(E))^{-1}, \tag{2}\]
where \(\Sigma_{L,R}\) are the self-energies for left and right electrodes. In Fig. 1 is depicted a schematic representation of the transport setup.
Our main goal is the study of a sample containing a large number of adatoms. To achieve this, we must increase the system's size, making exact diagonalization of (1) computationally expensive. For this reason, the decimation technique is a valid approach that has been successfully applied previously [36; 37; 38; 39; 40]. In such a case, the scattering region is divided into building blocks connected via first-neighbor interactions. Each building block is computed through DFT, and the scattering-region Green's function obtained after the decimation takes into account the degrees of freedom of the whole system comprised of all the building blocks. The interaction with the leads is introduced via the self-energies, written as \(\Sigma^{Ret}(E)=(\epsilon^{+}\hat{S}-\hat{H})G^{0(Ret)}(E)(\epsilon^{+}\hat{S}-\hat{H})\), once the surface Green's functions (\(G^{0(Ret)}\)) are obtained [41; 42]. The transmission can be calculated for both spin-conserved and spin-flip parts,
\[T^{\sigma\sigma^{\prime}}=Tr[\Gamma_{L}^{\sigma\sigma^{\prime}}(G_{M}^{\sigma \sigma^{\prime}})^{\dagger}\Gamma_{R}^{\sigma\sigma^{\prime}}G_{M}^{\sigma \sigma^{\prime}}], \tag{3}\]
where the coupling matrices are \(\Gamma_{L/R}^{\sigma,\sigma^{\prime}}=i[\Sigma_{L/R}^{(Ret)\sigma,\sigma^{ \prime}}-\Sigma_{L/R}^{(Ret)\dagger\sigma,\sigma^{\prime}}]\),such that the total transmission \(T(E)\) is written like
\[T(E)=\sum_{\sigma,\sigma^{\prime}}T^{\sigma\sigma^{\prime}}, \tag{4}\]
from which we arrive to the conductance by [43; 44]
\[G=\frac{e^{2}}{h}T(E_{F}), \tag{5}\]
in units of \(G_{0}=e^{2}/h\).
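For readers unfamiliar with the workflow of Eqs. (2)-(5), the self-contained Python sketch below evaluates it in the simplest possible case: a single-orbital tight-binding chain coupled to two identical semi-infinite one-dimensional leads, for which the lead surface Green's function and self-energy have closed forms. It is only a toy illustration of the formalism, not the DFT-based implementation used in this work.

```
import numpy as np

def surface_green(E, eps0=0.0, t=1.0, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding
    chain, g = (w - sqrt(w^2 - 4t^2)) / (2t^2), with the decaying branch."""
    w = E + 1j * eta - eps0
    s = np.sqrt(w * w - 4.0 * t * t)
    g = (w - s) / (2.0 * t * t)
    if g.imag > 0:                       # enforce the retarded branch, Im(g) <= 0
        g = (w + s) / (2.0 * t * t)
    return g

def transmission(E, eps_device=0.0, t=1.0):
    """Landauer transmission of a single-site device between two identical
    1D leads: the scalar version of T = Gamma_L |G|^2 Gamma_R."""
    sigma = t ** 2 * surface_green(E, t=t)          # lead self-energy (L = R)
    gamma = 1j * (sigma - np.conj(sigma))           # Gamma = i(Sigma - Sigma^dagger)
    G = 1.0 / (E - eps_device - 2.0 * sigma)        # retarded device Green's function
    return float((gamma * gamma * G * np.conj(G)).real)

# Inside the lead band (|E| < 2t) a pristine chain transmits one channel, T = 1.
for E in [0.0, 1.0, 1.9, 2.5]:
    print(f"E = {E:4.1f}   T = {transmission(E):.3f}")
```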
In order to gain more information on the spin transport, we define the following polarization [45; 46]
\[P(E)=\frac{T^{\sigma,\sigma}-T^{\sigma,\sigma^{\prime}}}{T^{\sigma,\sigma}+T^ {\sigma,\sigma^{\prime}}}, \tag{6}\]
where we evaluate the normalized difference between the spin-conserved and the spin-flip transmission, such that the polarization takes values within the window \([-1,1]\): \(-1\) corresponds to maximum spin-flip transmission, whereas \(+1\) means fully spin-conserved transmission.
## III Results and Discussion
### Bismuthene and single adatom
In Fig. 2 we show the conductance obtained for three different adatoms on the ribbon edge. Only the Nickel adatom has no effect on the conductance within the bulk energy gap, given its non-magnetic ground state. This means that Nickel does not break time-reversal symmetry and the protection against backscattering is preserved. The magnetic Co and Fe adatoms present energy values with non-quantized electronic transport, with the larger drop of the conductance in the case of Iron, given its larger magnetic moment. The strength of the magnetic moment and the adsorption energy are presented in Table 1. The adsorption energies (\(E_{ads}\)) are defined as
\[E_{ads}=E_{ribbon+ad}-E_{ribbon}-E_{ad},\]
where \(E_{ribbon+ad}\) is the energy of the ribbon containing one adatom, \(E_{ribbon}\) is the pristine ribbon energy and \(E_{ad}\) is the adatom isolated energy. Such values are consistent
Figure 1: Schematic representation of the two-probe setup studied in this work. The electrodes are located at the left (\(L_{L}\)) and right (\(L_{R}\)) sides of the device. The scattering region (SR) with length \(L\), made out from building blocks (\(H_{i}\)) containing adatoms and coupled between each other (\(H_{C}\)), is sandwiched by the electrodes which effective interactions are represented by the self energies \(\Sigma_{L}\) and \(\Sigma_{R}\).
with previous works [47], and their negative \(E_{ads}\) indicates an exothermic process, making these adatoms possible contaminants in experimental systems. In Fig. 3 (a) and (b) we depict the bands for bismuthene ribbons containing Co and Fe adatoms, respectively. Here we can see (i) the presence of mid-gap states coming from the magnetic adatoms, (ii) an energy shift between left/right edge states given the adatom charge transfer, and (iii) a larger gap opening in the case of the Fe atom. The LDOS for the mid-gap states are shown in Fig. 3 panels (c)-(f), indicating their localized nature on the adatom.
From Fig. 3 we note how the edge-state dispersion relation changes after introducing the magnetic adatom. Such an effect can be explained by considering the following effective Hamiltonian
\[\hat{H}_{\mathrm{eff}}=\hbar v_{F}k\,\sigma_{z}\otimes\tau_{z}+(\vec{m}\cdot\vec{\sigma}\,\delta(x-x^{\prime}))\otimes\frac{(\tau_{z}+\tau_{0})}{2}+V\sigma_{0}\otimes\tau_{z}, \tag{7}\]
where \(\sigma\) and \(\tau\) are the Pauli matrices corresponding to the spin and edge spaces, \(v_{F}\) the Fermi velocity, \(k\) the momentum, \(\vec{m}\) the adatom magnetic moment, \(x^{\prime}\) the adatom position, and \(V\) the potential difference between left/right edge states. Such a model, in the case of a purely out-of-plane magnetization, leads to an energetic shift due to the \(m_{z}\) component of the magnetization, whereas a gap opening is the outcome of an in-plane magnetization component, proportional to either \(m_{x}\) or \(m_{y}\) depending on the momentum direction of the edge state.
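This shift-versus-gap behaviour can be checked directly by diagonalizing the block of Eq. (7) acting on the edge that hosts the impurity. The NumPy sketch below does so after replacing the localized \(\delta(x-x^{\prime})\) coupling by a uniform one, which is sufficient to expose the qualitative effect; all parameter values are arbitrary illustrative choices.

```
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def edge_block(k, m, V=0.0, hbar_vF=1.0):
    """2x2 block of Eq. (7) on the impurity-hosting edge (tau = +1), with the
    localized delta-term replaced by a uniform coupling m for illustration."""
    return hbar_vF * k * sz + m[0] * sx + m[1] * sy + m[2] * sz + V * np.eye(2)

def minimal_splitting(m, V=0.0):
    """Smallest splitting of the perturbed edge branch over a range of k."""
    ks = np.linspace(-1.0, 1.0, 4001)
    return min(np.ptp(np.linalg.eigvalsh(edge_block(k, m, V))) for k in ks)

# An out-of-plane moment only shifts the crossing point; the branch stays gapless.
print("m along z :", round(minimal_splitting((0.0, 0.0, 0.2)), 4))   # ~0
# An in-plane component opens a gap ~ 2|m_perp| in the edge spectrum.
print("m along x :", round(minimal_splitting((0.2, 0.0, 0.0)), 4))   # ~0.4
```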
In Fig. 4 we show the spin polarization for the pristine system and for edge-adsorbed adatoms. We can see that for the pristine system [Fig. 4(a)] spin-conserving transport is dominant (\(P(E)\sim 1\)) within the whole topological energy gap. The residual spin-flip appearing in \(P(E)\) is a signature of the edge-edge scattering, given the finite width of the nanoribbon (\(\sim 80\,\)Å). For the non-magnetic Ni adatom, similar behavior is observed [Fig. 4(b)], where the presence of the Ni impurity mediates a greater edge-edge interaction, leading to a small spin-flip process (however greater than in the pristine case). For the magnetic impurities Co and Fe [Fig. 4(c) and (d)] the scenario is drastically changed. Despite the charge transport being quantized within most of the topological energy gap [Fig. 2], significant spin-flip mechanisms are present. For instance, in the Co system the spin-flip and spin-conserving mechanisms present similar contributions, leading to \(P(E)\sim 0\). For the Fe adatom, in most of the energy range the spin-conserving mechanism dominates over spin-flip (\(P(E)\sim 0.5\)); however, for energies in resonance with the localized Fe impurity energy level [Fig. 3 (b)] the spin-flip transport is dominant.
### Localization effect
To gain a deeper understanding of the spin transport, we consider the localization effect on the topological edge states in the presence of Iron atoms. Here we consider a random distribution of Fe atoms close to one edge only; in this way the conductance of the adatom-free edge is fixed at 1, whereas backscattering events take place solely on the impurity-doped edge. Additionally, we have increased the scattering region keeping the same Fe linear concentration. In Fig. 5 we can see that this is indeed the case: (i) the conductance never decreases below one, since only one edge is interacting with the magnetic adatoms, and (ii) the conductance drop around 0.2 eV (the resonant Fe defect level, Fig. 2) becomes energetically spread with the increase of the scattering region. In particular, above 360 nm of scattering region the transport along the Fe-doped edge becomes completely suppressed.
This is similar to what is presented in Fig. 4 where only one of the polarization channels reaches negative values by virtue of TRS. One of our main results is related to the polarization as a function of the sample's length. Based on the same analysis performed in other two dimensional materials [45; 46; 48], we can extract the values for the localization lengths obtained as a function of the conductance for a certain energy with respect to the device length. The localization length was obtained according to the following equation [49]
\[\ln(G)=-\frac{L}{\xi}, \tag{8}\]
where \(L\) is the sample's length, \(G\) the conductance and \(\xi\) the localization length. We show in Table 2 the localization lengths obtained for three different energies within non-trivial topological gap. The localization length will quantify the penetration depth of the topological state within the scattering region. The behaviour of the conductance as a function of the scattering region length
\begin{table}
\begin{tabular}{c c} Energy label & \(\xi\) (nm) \\ \hline \(E_{1}\) & 192 \\ \(E_{3}\) & 22 \\ \(E_{4}\) & 53 \\ \end{tabular}
\end{table}
Table 2: Localization lengths (\(\xi\)) for different energies, following the linear relation between \(\ln(G)\) and the sample's length (L).
Figure 5: Conductance curves displaying the effect of having an increasing number of Fe adatoms.
Figure 6: \(\ln(G)\) vs the sample's length (L) for a nanoribbon containing a distribution of impurities along only one edge, for different energies chosen in the bulk gap. The inset shows the spin polarization (\(P(L)\)), whose drop for the three energy points can be related to the spin decoherence in our system.
\(L\) is depicted in Fig. 6. Here, we can see a long penetration depth (long localization length) for the energies non-resonant with the Fe impurity states. Particularly, we note that for the energy labeled as \(E_{4}\) near 0.2 eV, the conductance almost vanishes, leading to a more localized nature (\(\xi=53\,\)nm). Additionally, we see the inverse dependence of the polarization with the localization length [inset in Fig. 6]. That is, the spin transport is persistent in regions non-resonant with impurity states, even for the TRS breaking Fe adatoms.
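The extraction of \(\xi\) from Eq. (8) amounts to a linear fit of \(\ln G\) against the device length. The short Python sketch below shows this procedure on synthetic data generated only for illustration; the numbers are not the calculated conductances of this work.

```
import numpy as np

def localization_length(lengths_nm, conductances):
    """Fit ln(G) = -L/xi and return xi (same units as lengths_nm)."""
    slope, _ = np.polyfit(lengths_nm, np.log(conductances), 1)
    return -1.0 / slope

# Synthetic example: exponentially suppressed conductance with xi = 50 nm
# plus a little multiplicative noise, mimicking the trend of Fig. 6.
rng = np.random.default_rng(0)
L = np.array([60.0, 120.0, 180.0, 240.0, 300.0, 360.0])
G = np.exp(-L / 50.0) * rng.normal(1.0, 0.05, size=L.size)
print(f"fitted localization length: {localization_length(L, G):.1f} nm")
```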
### Bismuthene and multiple adatoms
In the previous section we explored the effect of different adatoms adsorbed only close to one of the nanoribbon edges. Here, we consider the effect of a distribution of adatoms over the whole ribbon, keeping the same linear density of adatoms. For Ni, given its preservation of TRS (non-magnetic), the backscattering protection is still preserved. This picture changes when Fe adatoms are attached to the nanoribbon, as we can see in Fig. 7. Besides the drop in the conductance for bulk states, we can also see how the breaking of TRS leads to a drop in the conductance within the topological energy gap. Additionally, we show the cumulative effect of increasing the length of the scattering region (keeping the same linear density of adatoms). That is, the larger the ribbon length, i.e. the number of Fe adatoms in the structure, the larger the number of possible backscattering events. It is worth mentioning that, since we are working in the diluted regime by considering just one impurity per block, most of the decrease in conductance appears to be confined to certain energy windows in which the impurity states are localized. We expect that, as the concentration increases, the drop in the conductance will become broader in energy. Despite the dilute regime, increasing the ribbon length makes the transport lose coherence [50; 51; 52].
## IV Conclusions
In summary, we have shown that adatoms in topological materials can retain, at some energy ranges, the scattering-forbidden character even for atoms breaking time-reversal symmetry. We characterize the resonance energy for the magnetic impurities Fe and Co close to the topological insulator edge. After increasing the sample's length, for impurities coupling only to one of the edges, we demonstrated that the edge states begin to localize as a result of the back-scattering, from which we obtained the localization length. This certainly puts some bounds on the dissipationless nature of the topological transport and may serve as a guiding rule for future measurements. Additionally, we demonstrate the effect of a random distribution of magnetic adatoms over the topological insulator ribbon, where close to the Fermi energy the transport keeps most of its character.
###### Acknowledgements.
This work is partially supported by the Coordination for the Improvement of Higher Education Personnel - Brazil (CAPES) - Finance Code 001, and Sao Paulo Research Foundation (FAPESP), grants no. 19/04527-0, 16/14011-2, 17/18139-6, and 17/02317-2. A. P. thanks to Prof. Alexandre R. Rocha for helpful discussions. The authors acknowledge the Brazilian National Scientific Computing Laboratory (LNCC), the Institute of Physics of the University of Sao Paulo (USP) and the Federal University of ABC (UFABC) for computational resources of the computers Santos Dumont, Josephson, and Titanio, respectively.
Figure 7: Conductance for Fe adatoms distributed along the edge of the nanoribbon (a), and (b) a zoom-in view around the Fermi level. Each color represents a different nanoribbon length, proportional to the number of Iron impurities. |
2309.16225 | Periodic homogenization for singular Lévy SDEs | We generalize the theory of periodic homogenization for multidimensional SDEs
with additive Brownian and stable L\'evy noise for $\alpha\in (1,2)$ to the
setting of singular periodic Besov drifts of regularity $\beta\in
((2-2\alpha)/3,0)$ beyond the Young regime. For the martingale solution from
Kremp, Perkowski '22 projected onto the torus, we prove existence and
uniqueness of an invariant probability measure with strictly positive Lebesgue
density exploiting the theory of paracontrolled distributions and a strict
maximum principle for the singular Fokker-Planck equation. Furthermore, we
prove a spectral gap on the semigroup of the diffusion and solve the Poisson
equation with singular right-hand side equal to the drift itself. In the CLT
scaling, we prove that the diffusion converges in law to a Brownian motion with
constant diffusion matrix. In the pure stable noise case, we rescale in the
scaling that the stable process respects and show convergence to the stable
process itself. We conclude on the periodic homogenization result for the
singular parabolic PDE. | Helena Kremp, Nicolas Perkowski | 2023-09-28T07:58:39Z | http://arxiv.org/abs/2309.16225v1 | # Periodic homogenization for singular Levy SDEs
###### Abstract
We generalize the theory of periodic homogenization for multidimensional SDEs with additive Brownian and stable Levy noise for \(\alpha\in(1,2)\) (cf. [1, 10]) to the setting of singular periodic Besov drifts \(F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}\) for \(\beta\in((2-2\alpha)/3,0)\) beyond the Young regime. For the martingale solution from [1] projected onto the torus, we prove existence and uniqueness of an invariant probability measure \(\pi\) with strictly positive Lebesgue density exploiting the theory of paracontrolled distributions and a strict maximum principle for the singular Fokker-Planck equation. Furthermore, we prove a spectral gap on the semigroup of the diffusion and solve the Poisson equation with singular right-hand side equal to the drift itself. In the CLT scaling, we prove that the diffusion converges in law to a Brownian motion with constant diffusion matrix. In the pure \(\alpha\)-stable noise case, we rescale in the scaling that the \(\alpha\)-stable process respects and show convergence to the stable process itself. We conclude on the periodic homogenization result for the parabolic PDE for the singular generator \(\mathfrak{L}^{\varepsilon}=-(-\Delta)^{\alpha/2}+\varepsilon^{1-\alpha}F(\varepsilon^{-1}\cdot)\cdot\nabla\) as \(\varepsilon\to 0\).
_Keywords: Periodic homogenization, singular diffusion, stable Levy noise, Poisson equation, singular Fokker-Planck equation, paracontrolled distributions MSC2020: 35A21, 35B27, 60H10, 60G51, 60L40, 60K37._
## 1 Introduction
Periodic homogenization describes the limit procedure from microscopic boundary-value problems posed on periodic structures to a macroscopic equation. Such periodic media are for example composite materials or polymer structures. The theory originated from engineering applications in material sciences in the 1970s, cf. [1] and the references therein. Mathematically, this leads to the study of the limit of periodic operators with rapidly oscillating coefficients. There exist analytic and probabilistic methods to determine the limit equation. We refer to the classical works [1, 2] for the background on homogenization theory. We employ a probabilistic method using the Feynman-Kac formula (cf. [3]). Via the Feynman-Kac formula, the periodic homogenization result for the Kolmogorov PDE with fluctuating and unbounded drift corresponds to a central limit theorem for the diffusion process.
In this work, we generalize the theory of periodic homogenization for SDEs with additive Brownian noise, respectively stable Levy noise, from [1], respectively [10], from the setting of regular coefficients to singular Besov drifts \(F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}\) for \(\beta\in((2-2\alpha)/3,0)\) on the \(d\)-dimensional torus \(\mathbb{T}^{d}\).
In [1, Section 3.4.2], the periodic drift coefficient is assumed to be \(C^{1}\) with Holder-continuous derivative and the periodic diffusion coefficient is assumed to be symmetric and uniformly elliptic, as well as \(C^{2}\) with Holder-continuous first derivative and bounded second
derivative. The assumption of uniform ellipticity can be relaxed to allow for some degeneracy, which was investigated in [10] using Malliavin calculus techniques.
In [11] the multiplicative symmetric \(\alpha\)-stable noise case for \(\alpha\in(1,2)\) is studied and the coefficients are assumed to be even more regular, namely \(C^{3}\). The regularity assumptions were relaxed in [1, 1], where the authors more generally consider the periodic homogenization for the generator of an \(\alpha\)-stable-like Feller process. In [1], using a Zvonkin transformation to remove the drift (cf. [13]), the authors can consider drifts that are bounded and \(\beta\)-Holder continuous for \(\beta\in(1-\alpha/2,1)\). They also consider a non-linear intensity function \(\sigma\) and therefore a multiplicative noise term of the form \(\sigma(X_{t},dL_{t}^{\alpha})\), see [1, Equation (2.1)] with an isotropic \(\alpha\)-stable process \(L^{\alpha}\), whereas in [11] the intensity function \(\sigma(x,y)\) is linear in \(y\).
In the recent article [11] the authors further generalize the assumption on the drift coefficient to bounded, measurable drifts and consider the solution of the martingale problem associated to the SDE. The operator they consider is a Levy-type operator that in particular includes all stable Levy noise generators, symmetric and non-symmetric. They prove the homogenization result with the corrector method, an analytical method in homogenization theory, and show that different limit phenomena occur in the cases \(\alpha\in(0,1)\), \(\alpha=1\), \(\alpha\in(1,2)\), \(\alpha=2\) and \(\alpha\in(2,\infty)\).
With analytical methods, the papers [1, 1, 10] deal with Levy-type operators with oscillating coefficients for \(\alpha\in(0,2)\), but without drift part.
In the mixed jump-diffusion case, [12] investigates the periodic homogenization for zero-drift diffusions with small jumps. The homogenized process in this case is also a Brownian motion.
We focus on the additive \(\alpha\)-stable symmetric noise case, where different limit behaviours occur for \(\alpha=2\) (the Brownian noise case) and \(\alpha\in(1,2)\). Our contribution is the generalization to distributional drifts, not only in the Young, but also in the rough regime.
For the homogenization result, we rely on Kipnis-Varadhan martingale methods (cf. [13] and [14]). Those methods require solving the Poisson equation for the generator of the diffusion (or, more generally, the resolvent equations under additional assumptions) and rewriting the additive functional in terms of that solution and Dynkin's martingale. Poisson equations for generators of diffusions with regular coefficients were studied in the classical article [12].
Following [14], we generalize those techniques to much less regular drift coefficient. In particular this includes bounded measurable drifts or distributional drifts in the Young regime, where classical PDE techniques apply. More interestingly, our theory applies in the setting of singular drifts such as a typical realization of the periodic spatial white noise, cf. Remark 7.6. In order to apply the SDE solution theory from [10], we restrict to additive noise.
To be more precise, we study the functional central limit theorem for the solution \(X\) of the martingale problem associated to the SDE
\[dX_{t}=F(X_{t})dt+dL_{t} \tag{1.1}\]
with \(F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}\) and a symmetric \(\alpha\)-stable process \(L\) for \(\alpha\in(1,2]\). The singular generator \(\mathfrak{L}\) of \(X\) is given by
\[\mathfrak{L}=-\mathscr{L}_{\nu}^{\alpha}+F\cdot\nabla.\]
The first step is to prove existence and uniqueness of an invariant probability measure \(\pi\) on \(\mathbb{T}^{d}\) for \(\mathfrak{L}\) with strictly positive Lebesgue density. We achieve this by solving the singular Fokker-Planck equation with singular initial condition \(\mu\in\mathscr{C}_{1}^{0}\),
\[(\partial_{t}-\mathfrak{L}^{*})\rho_{t}=0,\quad\rho_{0}=\mu,\]
with formal Lebesgue adjoint \(\mathfrak{L}^{*}\) of \(\mathfrak{L}\) and proving a strict maximum principle on compacts. Furthermore, we prove spectral gap estimates for the semigroup of the diffusion projected onto
the torus and solve the singular resolvent equation for \(\mathfrak{L}\). This enables, through a limiting argument in a Sobolev-type space \(\mathscr{H}^{1}(\pi)\) with respect to \(\pi\), to solve the Poisson equation (1.3) with singular right-hand side \(F-\langle F\rangle_{\pi}\). Here, we define \(\langle F\rangle_{\pi}=\int Fd\pi\) in a stable manner.
For the homogenization, we distinguish between the cases \(\alpha=2\) (Brownian noise case) and \(\alpha\in(1,2)\), as the scaling and the limit behaviour differs. In the standard Brownian noise case, we prove weak convergence
\[\left(\frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})\right)_{t\in[0,T]} \Rightarrow(\sqrt{D}B_{t})_{t\in[0,T]}, \tag{1.2}\]
where \(B\) is a standard Brownian motion and \(D\) is the constant diffusion matrix with entries
\[D(i,j):=\int_{\mathbb{T}^{d}}(e_{i}+\nabla\chi^{i}(x))(e_{j}+\nabla\chi^{j}(x ))^{T}\pi(dx),\]
for \(i,j=1,\ldots,d\) and \(e_{i}\) denoting the \(i\)-th euclidean unit vector. The limit is motivated by the result from [1, Section 3.4.2]. Furthermore, \(\chi\in(L^{2}(\pi))^{d}\) solves the Poisson equation with singular right-hand side \(F-\langle F\rangle_{\pi}\):
\[(-\mathfrak{L})\chi^{i}=F^{i}-\langle F^{i}\rangle_{\pi}, \tag{1.3}\]
for \(i=1,\ldots,d\). In the pure Levy noise case \(\alpha\in(1,2)\) we rescale in the \(\alpha\)-stable scaling \(n^{-1/\alpha}\) instead of \(n^{-1/2}\). In this scaling we show that the Dynkin martingale vanishes and thus we obtain weak convergence towards the stable process itself,
\[\left(\frac{1}{n^{1/\alpha}}(X_{nt}-nt\langle F\rangle_{\pi})\right)_{t\in[0,T ]}\Rightarrow(L_{t})_{t\in[0,T]}. \tag{1.4}\]
In particular, compared to the Brownian noise case, there is no diffusivity enhancement in the limit (analogously to the regular coefficient case, cf. [10]).
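As a purely illustrative sanity check of the scaling in (1.4) (and not part of any proof), the following Monte Carlo sketch simulates (1.1) by an Euler scheme with a smooth \(1\)-periodic drift standing in for the singular \(F\), uses the crude proxy \(X_{T}/T\) for \(\langle F\rangle_{\pi}\), and compares quantiles of the rescaled displacement with those of \(L_{1}\); the drift, step size and sample sizes are ad hoc choices.

```python
import numpy as np
from scipy.stats import levy_stable

# Illustrative Monte Carlo check of the alpha-stable rescaling (1.4).
rng = np.random.default_rng(0)
alpha, dt, n_steps, n_paths = 1.5, 1e-2, 2_000, 500
T = dt * n_steps
F = lambda x: 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # smooth periodic stand-in drift

X = np.zeros(n_paths)
for _ in range(n_steps):
    # increment of a symmetric alpha-stable process over dt (self-similarity)
    dL = dt ** (1.0 / alpha) * levy_stable.rvs(alpha, 0.0, size=n_paths, random_state=rng)
    X += F(X) * dt + dL

drift = X.mean() / T                               # crude proxy for <F>_pi
rescaled = (X - T * drift) / T ** (1.0 / alpha)    # should resemble L_1 for large T
reference = levy_stable.rvs(alpha, 0.0, size=n_paths, random_state=rng)
print(np.quantile(rescaled, [0.25, 0.75]), np.quantile(reference, [0.25, 0.75]))
```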
The paper is structured as follows. Preliminaries and the strategy to prove the central limit theorem are outlined in Section 2. In Section 3 we solve the singular Fokker-Planck equation with the paracontrolled approach. The singular resolvent equation for \(\mathfrak{L}\) is solved in Section 4. We show in Section 5 existence and uniqueness of the invariant measure \(\pi\). Section 5 furthermore yields a characterization of the domain of the generator \(\mathfrak{L}\) in \(L^{2}(\pi)\), cf. Theorem 5.7. In Section 6, we solve the Poisson equation with singular right-hand side \(F-\langle F\rangle_{\pi}\). Finally, we prove the CLT in Section 7 and relate it to the periodic homogenization result for the parabolic PDE with oscillating operator \(\mathfrak{L}^{\varepsilon}=-\mathscr{L}_{\nu}^{\alpha}+\varepsilon^{1-\alpha}F(\varepsilon^{-1}\cdot)\cdot\nabla\), cf. Corollary 7.3.
## 2 Preliminaries
This section gives an introduction to periodic Besov spaces and to Schauder and exponential Schauder estimates on such spaces. Furthermore, we introduce the projected solution \(X^{\mathbb{T}^{d}}\) of \(X\) onto the torus, its generator \(\mathfrak{L}\) and its semigroup. We define the space of enhanced distributions \(\mathscr{X}_{\infty}^{\beta,\gamma}\). The section finishes with a summary of our strategy for proving the convergence results (1.2) and (1.4).
Let \(\mathscr{S}(\mathbb{R}^{d})\) be the space of Schwartz functions and \(\mathscr{S}^{\prime}(\mathbb{R}^{d})\) the space of tempered distributions. A periodic (or 1-periodic) distribution \(u\) satisfies \(u(\varphi(\cdot+1))=u(\varphi)\) for all \(\varphi\in\mathscr{S}(\mathbb{R}^{d})\). Let \(\mathbb{T}^{d}=(\mathbb{R}/\mathbb{Z})^{d}\) denote the torus and \(\mathscr{S}(\mathbb{T}^{d})\) the space of Schwartz functions on the torus, i.e. smooth functions with the locally convex topology generated by the family of semi-norms \(\|f\|_{\gamma}=\sup_{x\in\mathbb{T}^{d}}\lvert D^{\gamma}f(x)\rvert\) for multi-indices \(\gamma\in\mathbb{N}^{d}\), and its topological dual \(\mathscr{S}^{\prime}(\mathbb{T}^{d})\). Let \((p_{j})_{j\geqslant-1}\) be a smooth dyadic partition of unity, i.e. a family of functions \(p_{j}\in C_{c}^{\infty}(\mathbb{R}^{d})\) for \(j\geqslant-1\), such that
1. \(p_{-1}\) and \(p_{0}\) are non-negative radial functions (they just depend on the absolute value of \(x\in\mathbb{R}^{d}\)), such that the support of \(p_{-1}\) is contained in a ball and the support of \(p_{0}\) is contained in an annulus;
2. \(p_{j}(x):=p_{0}(2^{-j}x)\), \(x\in\mathbb{R}^{d}\), \(j\geqslant 0\);
3. \(\sum_{j=-1}^{\infty}p_{j}(x)=1\) for every \(x\in\mathbb{R}^{d}\); and
4. \(\operatorname{supp}(p_{i})\cap\operatorname{supp}(p_{j})=\emptyset\) for all \(|i-j|>1\).
Then, we define the Besov space on the torus with regularity \(\theta\in\mathbb{R}\), integrability \(p\in[1,\infty]\) and summability \(q\in[1,\infty)\) as (cf. [13, Section 3.5])
\[B^{\theta}_{p,q}(\mathbb{T}^{d}):=\{u\in\mathscr{S}^{\prime}(\mathbb{T}^{d})\mid\|u\|_{B^{\theta}_{p,q}}:=\|(2^{j\theta}\|\Delta_{j}u\|_{L^{p}(\mathbb{T}^{d})})_{j\geqslant-1}\|_{\ell^{q}}<\infty\} \tag{2.1}\]
for the Littlewood-Paley blocks \(\Delta_{j}u=\mathscr{F}_{\mathbb{T}^{d}}^{-1}(p_{j}\mathscr{F}_{\mathbb{T}^{d}}u)\) with Fourier transform \(\mathscr{F}_{\mathbb{T}^{d}}f(k)=\hat{f}(k)=\int_{\mathbb{T}^{d}}f(x)e^{-2\pi ik\cdot x}dx\), \(k\in\mathbb{Z}^{d}\), with inverse Fourier transform \(\mathscr{F}_{\mathbb{T}^{d}}^{-1}f(x)=\sum_{k\in\mathbb{Z}^{d}}f(k)e^{2\pi ik\cdot x}\), and with the dyadic partition of unity \((p_{j})_{j\geqslant-1}\) from above (cf. also [13, Section 3.4.4]). We define the Fourier transform on \(\mathbb{R}^{d}\) as \(\mathscr{F}f(y)=\int_{\mathbb{R}^{d}}f(x)e^{-2\pi iy\cdot x}dx\), \(y\in\mathbb{R}^{d}\), with inverse \(\mathscr{F}^{-1}f(y)=\mathscr{F}f(-y)\), \(f\in\mathscr{S}(\mathbb{R}^{d})\). In the case \(q=\infty\), we rather work with the separable Besov space, and thus define
\[B^{\theta}_{p,\infty}=B^{\theta}_{p,\infty}(\mathbb{T}^{d}):=\{u\in\mathscr{S}^{\prime}(\mathbb{T}^{d})\mid\lim_{j\to\infty}2^{j\theta}\|\Delta_{j}u\|_{L^{p}}=0\},\qquad\|u\|_{B^{\theta}_{p,\infty}}:=\sup_{j\geqslant-1}2^{j\theta}\|\Delta_{j}u\|_{L^{p}}. \tag{2.2}\]
We introduce the notation \(\mathscr{C}^{\theta}_{p}(\mathbb{T}^{d}):=B^{\theta}_{p,\infty}(\mathbb{T}^{d})\) for \(p\in[1,\infty)\) and \(\mathscr{C}^{\theta}(\mathbb{T}^{d}):=B^{\theta}_{\infty,\infty}(\mathbb{T}^{d})\) and analogously for \(\mathbb{T}^{d}\) replaced by \(\mathbb{R}^{d}\). We simply write \(B^{\theta}_{p,q}\), respectively \(\mathscr{C}^{\theta}_{p}\), in the case the statement holds for any of the spaces, on the torus \(\mathbb{T}^{d}\) and on \(\mathbb{R}^{d}\). We recall from Bony's paraproduct theory (cf. [1, Section 2]) that in general for \(u\in\mathscr{C}^{\theta}\) and \(v\in\mathscr{C}^{\beta}\) with \(\theta,\beta\in\mathbb{R}\), the product \(uv:=u\prec v+u\succ v+u\odot v\) is well defined in \(\mathscr{C}^{\min(\theta,\beta,\theta+\beta)}\) if and only if \(\theta+\beta>0\). Denoting \(S_{i}u=\sum_{j=-1}^{i-1}\Delta_{j}u\), the paraproducts are defined as follows
\[u\prec v:=\sum_{i\geqslant-1}S_{i-1}u\Delta_{i}v,\quad u\succ v:=v\prec u,\quad u\odot v:=\sum_{|i-j|\leqslant 1}\Delta_{i}u\Delta_{j}v.\]
Here, we use the notation of [14, 15] for the paraproducts \(\prec\), \(\succ\) and the resonant product \(\odot\).
In estimates we often use the notation \(a\lesssim b\), which means that there exists a constant \(C>0\) such that \(a\leqslant Cb\). In the case that we want to stress the dependence of the constant \(C(d)\) on a parameter \(d\), we write \(a\lesssim_{d}b\).
Let \(C^{\infty}_{b}=C^{\infty}_{b}(\mathbb{R}^{d},\mathbb{R})\) denote the space of smooth, bounded functions with bounded partial derivatives.
The paraproducts satisfy the following estimates for \(p,p_{1},p_{2}\in[1,\infty]\) with \(\frac{1}{p}=\min(1,\frac{1}{p_{1}}+\frac{1}{p_{2}})\) and \(\theta,\beta\in\mathbb{R}\) (cf. [16, Theorem A.1] and [1, Theorem 2.82, Theorem 2.85])
\[\|u\odot v\|_{\mathscr{C}^{\theta+\beta}_{p}} \lesssim\|u\|_{\mathscr{C}^{\theta}_{p_{1}}}\|v\|_{\mathscr{C}^{\beta}_{p_{2}}}, \text{ if }\theta+\beta>0, \tag{2.3}\] \[\|u\prec v\|_{\mathscr{C}^{\beta}_{p}} \lesssim\|u\|_{L^{p_{1}}}\|v\|_{\mathscr{C}^{\beta}_{p_{2}}} \lesssim\|u\|_{\mathscr{C}^{\theta}_{p_{1}}}\|v\|_{\mathscr{C}^{\beta}_{p_{2}}}, \text{ if }\theta>0,\] \[\|u\prec v\|_{\mathscr{C}^{\theta+\beta}_{p}} \lesssim\|u\|_{\mathscr{C}^{\theta}_{p_{1}}}\|v\|_{\mathscr{C}^{\beta}_{p_{2}}}, \text{ if }\theta<0.\]
So if \(\theta+\beta>0\), we have \(\|uv\|_{\mathscr{C}^{\gamma}_{p}}\lesssim\|u\|_{\mathscr{C}^{\theta}_{p_{1}}}\|v \|_{\mathscr{C}^{\beta}_{p_{2}}}\) for \(\gamma:=\min(\theta,\beta,\theta+\beta)\).
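For readers who want to experiment with this decomposition, the following sketch checks the bookkeeping \(uv=u\prec v+u\succ v+u\odot v\) numerically on the one-dimensional torus. It uses sharp Fourier cut-offs in place of the smooth dyadic partition \((p_{j})\) and ad hoc test functions, so it only illustrates the grouping of frequency pairs, not the analytical estimates (2.3).

```python
import numpy as np

# Numerical check of Bony's decomposition with sharp (non-smooth) dyadic cut-offs.
N = 512
x = np.linspace(0.0, 1.0, N, endpoint=False)
k = np.abs(np.fft.fftfreq(N, d=1.0 / N))      # integer frequency magnitudes
J = int(np.log2(N // 2))                       # highest dyadic level needed

def block(u, j):
    """Sharp stand-in for Delta_j: |k| <= 1 for j = -1, 2^j < |k| <= 2^(j+1) for j >= 0."""
    mask = (k <= 1.0) if j == -1 else ((k > 2.0 ** j) & (k <= 2.0 ** (j + 1)))
    return np.real(np.fft.ifft(mask * np.fft.fft(u)))

u = np.sin(2 * np.pi * 3 * x) + 0.3 * np.cos(2 * np.pi * 17 * x)
v = np.cos(2 * np.pi * 5 * x) ** 2
bu = [block(u, j) for j in range(-1, J)]
bv = [block(v, j) for j in range(-1, J)]

lo_hi, hi_lo, reso = np.zeros(N), np.zeros(N), np.zeros(N)
for i in range(-1, J):                         # index of Delta_i v
    for j in range(-1, J):                     # index of Delta_j u
        term = bu[j + 1] * bv[i + 1]
        if j <= i - 2:                         # u low, v high: contributes to u < v
            lo_hi += term
        elif i <= j - 2:                       # u high, v low: contributes to u > v
            hi_lo += term
        else:                                  # |i - j| <= 1: resonant part
            reso += term
print(np.max(np.abs(lo_hi + hi_lo + reso - u * v)))   # up to rounding: decomposition is exact
```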
Next, we collect some facts about \(\alpha\)-stable Levy processes and their generators and semigroups. For \(\alpha\in(0,2]\), a symmetric \(\alpha\)-stable Levy process \(L\) is a Levy process that moreover satisfies the
scaling property \((L_{kt})_{t\geqslant 0}\stackrel{d}{=}k^{1/\alpha}(L_{t})_{t\geqslant 0}\) for any \(k>0\) and \(L\stackrel{d}{=}-L\), where \(\stackrel{d}{=}\) denotes equality in law. These properties determine the jump measure \(\mu\) of \(L\), see [10, Theorem 14.3]. That is, if \(\alpha\in(0,2)\), the Levy jump measure \(\mu\) of \(L\) is given by
\[\mu(A):=\mathds{E}\bigg{[}\sum_{0\leqslant t\leqslant 1}\mathbf{1}_{A}(\Delta L _{t})\bigg{]}=\int_{S}\int_{\mathbb{R}^{+}}\mathbf{1}_{A}(k\xi)\frac{1}{k^{1+ \alpha}}dk\tilde{\nu}(d\xi),\quad A\in\mathscr{B}(\mathbb{R}^{d}\setminus\{0 \}), \tag{2.4}\]
where \(\tilde{\nu}\) is a finite, symmetric, non-zero measure on the unit sphere \(S\subset\mathbb{R}^{d}\). Furthermore, we also define for \(A\in\mathscr{B}(\mathbb{R}^{d}\setminus\{0\})\) and \(t\geqslant 0\) the Poisson random measure
\[\pi(A\times[0,t])=\sum_{0\leqslant s\leqslant t}\mathbf{1}_{A}(\Delta L_{s}),\]
with intensity measure \(dt\mu(dy)\). Denote the compensated Poisson random measure of \(L\) by \(\hat{\pi}(dr,dy):=\pi(dr,dy)-dr\mu(dy)\). We refer to the book by Peszat and Zabczyk [13] for the integration theory against Poisson random measures. The generator \(A\) of \(L\) satisfies \(C_{b}^{\infty}(\mathbb{R}^{d})\subset\mathrm{dom}(A)\) and is given by
\[A\varphi(x)=\int_{\mathbb{R}^{d}}\bigl{(}\varphi(x+y)-\varphi(x)-\mathbf{1}_{ \{|y|\leqslant 1\}}(y)\nabla\varphi(x)\cdot y\bigr{)}\mu(dy)\qquad\text{for $ \varphi\in C_{b}^{\infty}(\mathbb{R}^{d})$}. \tag{2.5}\]
If \((P_{t})_{t\geqslant 0}\) denotes the semigroup of \(L\), the convergence \(t^{-1}(P_{t}f(x)-f(x))\to Af(x)\) is uniform in \(x\in\mathbb{R}^{d}\) (see [13, Theorem 5.4]).
The operator \(A\) also has a Fourier representation, which is defined as follows.
**Definition 2.1**.: _Let \(\alpha\in(0,2)\) and let \(\nu\) be a symmetric (i.e. \(\nu(A)=\nu(-A)\)), finite and non-zero measure on the unit sphere \(S\subset\mathbb{R}^{d}\). We define the operator \(\mathscr{L}_{\nu}^{\alpha}\) as_
\[\mathscr{L}_{\nu}^{\alpha}\mathscr{F}^{-1}\varphi=\mathscr{F}^{-1}(\psi_{\nu }^{\alpha}\varphi)\qquad\text{for $\varphi\in C_{b}^{\infty}$,} \tag{2.6}\]
_where \(\psi_{\nu}^{\alpha}(z):=\int_{S}\lvert\langle z,\xi\rangle|^{\alpha}\nu(d\xi).\) For \(\alpha=2\), we set \(\mathscr{L}_{\nu}^{\alpha}:=-\frac{1}{2}\Delta\)._
_On the torus and for \(\alpha\in(1,2)\), we define the fractional Laplacian as follows: for \(f\in C^{\infty}(\mathbb{T}^{d})\)_
\[\mathscr{L}_{\nu}^{\alpha}f=\mathscr{F}_{\mathbb{T}^{d}}^{-1}(\mathbb{Z}^{d} \ni k\mapsto\psi_{\nu}^{\alpha}(k)\hat{f}(k))\]
_and for \(\alpha=2\) analogously._
**Remark 2.2**.: _If we take \(\nu\) as a suitable multiple of the Lebesgue measure on the sphere, then \(\psi_{\nu}^{\alpha}(z)=|2\pi z|^{\alpha}\) and thus \(\mathscr{L}_{\nu}^{\alpha}\) is the fractional Laplace operator \((-\Delta)^{\alpha/2}\)._
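For experimentation with the Fourier definition (2.6) in this isotropic case, here is a minimal numerical sketch on the one-dimensional torus, applying \(\mathscr{L}_{\nu}^{\alpha}\) as the Fourier multiplier \(\psi(k)=|2\pi k|^{\alpha}\) of Remark 2.2; the grid size and test function are ad hoc choices.

```python
import numpy as np

def fractional_laplacian_torus(f, alpha):
    """Apply the isotropic operator with symbol |2 pi k|^alpha (cf. Remark 2.2) to samples
    of a smooth 1-periodic function f on a uniform grid of [0, 1)."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer Fourier modes
    multiplier = np.abs(2.0 * np.pi * k) ** alpha
    return np.real(np.fft.ifft(multiplier * np.fft.fft(f)))

# Sanity check: for f(x) = cos(2 pi x) the result is (2 pi)^alpha * cos(2 pi x).
x = np.linspace(0.0, 1.0, 256, endpoint=False)
alpha = 1.5
f = np.cos(2.0 * np.pi * x)
g = fractional_laplacian_torus(f, alpha)
assert np.allclose(g, (2.0 * np.pi) ** alpha * f)
```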
**Lemma 2.3**.: 1 _Let \(\alpha\in(0,2)\) and let again \(\nu\) be a symmetric, finite and non-zero measure on the unit sphere \(S\subset\mathbb{R}^{d}\). Then for \(\varphi\in C_{b}^{\infty}\) we have \(-\mathscr{L}_{\nu}^{\alpha}\varphi=A\varphi\), where \(A\) is the generator of the symmetric, \(\alpha\)-stable Levy process \(L\) with characteristic exponent \(\mathds{E}[\exp(2\pi i\langle z,L_{t}\rangle)]=\exp(-t\psi_{\nu}^{\alpha}(z))\). The process \(L\) has the jump measure \(\mu\) as defined in Equation (2.4), with \(\tilde{\nu}=C\nu\) for some constant \(C>0\)._
Footnote 1: [13, Lemma 2.3]
If \(\alpha=2\), then the generator of the symmetric, \(\alpha\)-stable process coincides with \(\sum_{i,j}C(i,j)\partial_{x_{i}}\partial_{x_{j}}\) for an explicit covariance matrix \(C\) (cf. [10, Theorem 14.2]), that is, the generator of \(\sqrt{2C}B\) for a standard Brownian motion \(B\). To ease notation, we consider here \(C=\frac{1}{2}\operatorname{Id}_{d\times d}\) and whenever we refer to the case \(\alpha=2\), we mean the standard Brownian motion noise case and we set \(\mathscr{L}_{\nu}^{\alpha}:=-\frac{1}{2}\Delta\).
**Assumption 2.4**.: _Throughout the work, we assume that the measure \(\nu\) from Definition 2.1 has \(d\)-dimensional support, in the sense that the linear span of its support is \(\mathbb{R}^{d}\). This means that the process \(L\) can reach every open set in \(\mathbb{R}^{d}\) with positive probability._
We call an \(\alpha\)-stable, symmetric Levy process that satisfies Assumption 2.4 non-degenerate.
In the following, we will not distinguish between \(F\in(\mathscr{C}^{\beta}(\mathbb{T}^{d}))^{d}\) and the periodic version on \(\mathbb{R}^{d}\), \(F^{\mathbb{R}^{d}}\in(\mathscr{C}^{\beta})^{d}\), whenever there is no danger of confusion. We understand (1.1) as a singular SDE with periodic coefficient \(F^{\mathbb{R}^{d}}\) and in particular existence of a solution to the martingale problem follows from [10]. For that, we need to assume that the drift \(F\) can be enhanced in the following sense. Let for \(\gamma\in(0,1)\),
\[\mathscr{M}^{\gamma}_{\infty,0}X=\{u:(0,\infty)\to X\mid\exists C>0,\forall t >0,\|u_{t}\|_{X}\leqslant C[t^{-\gamma}\lor 1]\}.\]
**Assumption 2.5**.: _For \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) we assume that \((F_{1}=F,F_{2})\in\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})\), that is \((F^{\mathbb{R}^{d}},F_{2}^{\mathbb{R}^{d}})\in\mathscr{X}^{\beta,\gamma}_{\infty}\), where_
\[\mathscr{X}^{\beta,\gamma}_{\infty}:=cl(\{\big{(}\eta,(P_{t}(\partial_{i}\eta^ {j})\odot\eta^{k})_{i,j,k\in\{1,\ldots,d\}}\big{)}\mid\eta\in C^{\infty}_{b}( \mathbb{R}^{d},\mathbb{R}^{d})\}) \tag{2.7}\]
_for \(\gamma\in[(2\beta+2\alpha-1)/\alpha,1)\) and for the closure in \(\mathscr{C}^{\beta+(1-\gamma)\alpha}\times\mathscr{M}^{\gamma}_{\infty,0}\mathscr{C}^{2\beta+\alpha-1}\). For \(\beta\in(\frac{1-\alpha}{2},0)\), we assume that \(F\in\mathscr{C}^{\beta+(1-\gamma)\alpha}\) for \(\gamma\in((\beta-1)/\alpha,0)\) and set \(\mathscr{X}^{\beta,\gamma}_{\infty}:=\mathscr{C}^{\beta+(1-\gamma)\alpha}\)._
**Remark 2.6**.: _The assumption on the enhanced distribution in (2.7) is stronger than the assumption in [10, Definition 4.2] in the sense that \(F\) is an enhanced distribution for any finite time horizon \(T>0\), instead of for a fixed time horizon. This assumption will be needed in Section 4 to solve the resolvent equation. Notice also that the blow-up \(\gamma\) occurs at the initial time \(t=0\) and not at a terminal time and that \(F\) does not depend on a time variable here. Furthermore, in the definition above we allow for three different indices \(i,j,k\) in (2.7). This assumption is due to the fact that we also solve the adjoint equation, i.e. the Fokker-Planck equation. For the Fokker-Planck equation, we will encounter the products \(P_{t}(\partial_{i}F^{i})\odot F^{j}\) for \(i,j=1,\ldots,d\), whereas for the Kolmogorov equation, we have \(P_{t}(\partial_{i}F^{j})\odot F^{i}\) for \(i,j\). To cover both products, we assume (2.7). The blow-up \(\gamma\) can be thought of as close to \(1\) and \(t\mapsto P_{t}(\partial_{i}F^{j})\odot F^{k}\in\mathscr{M}^{\gamma}_{\infty,0}\mathscr{C}^{2\beta+\alpha-1}\) in particular implies that for any \(T>0\), \(\int_{0}^{T}P_{t}(\partial_{i}F^{j})\odot F^{k}dt\in\mathscr{C}^{2\beta+\alpha-1}\)._
For completeness we state the definition of a solution to the singular martingale problem from [10, Definition 4.1], cf. also [10], and [10, Theorem 4.2] about the existence and uniqueness of martingale solutions.
**Definition 2.7** (Martingale problem).: _Let \(\alpha\in(1,2]\) and \(\beta\in(\frac{2-2\alpha}{3},0)\), and let \(T>0\) and \(F^{\mathbb{R}^{d}}\in\mathscr{X}^{\beta,\gamma}_{\infty}\). Then, we call a probability measure \(\mathds{P}\) on the Skorokhod space \((\Omega,\mathscr{F})\) a solution of the martingale problem for \((\mathscr{G}^{F},\delta_{x})\), if_
1. \(\mathds{P}(X_{0}\equiv x)=1\) _(i.e._ \(\mathds{P}^{X_{0}}=\delta_{x}\)_), and_
2. _for all_ \(f\in C_{T}\mathscr{C}^{\varepsilon}\) _with_ \(\varepsilon>2-\alpha\) _and for all_ \(u^{T}\in\mathscr{C}^{3}\)_, the process_ \(M=(M_{t})_{t\in[0,T]}\) _is a martingale under_ \(\mathds{P}\) _with respect to_ \((\mathscr{F}_{t})\)_, where_ \[M_{t}=u(t,X_{t})-u(0,x)-\int_{0}^{t}f(s,X_{s})ds\] (2.8) _and where_ \(u\) _is a mild solution of the Kolmogorov backward equation_ \(\mathscr{G}^{F}u=f\) _with terminal condition_ \(u(T,\cdot)=u^{T}\)_, where_ \(\mathscr{G}^{F}:=\partial_{t}-\mathscr{L}^{\alpha}_{\nu}+F\cdot\nabla\)_._
**Remark 2.8**.: _Although we consider a drift term \(F\) that does not depend on a time variable, we consider the parabolic Kolmogorov PDE in the definition above. Equivalently one could reformulate the martingale problem with the resolvent equation for the operator \(-\mathscr{L}_{\nu}^{\alpha}+F\cdot\nabla\) instead. We use the above definition to be able to apply the result from [13]._
**Theorem 2.9**.: _Let \(\alpha\in(1,2]\) and \(L\) be a symmetric, \(\alpha\)-stable Levy process, such that the measure \(\nu\) satisfies Assumption 2.4. Let \(T>0\) and \(\beta\in((2-2\alpha)/3,0)\) and let \(F^{\mathbb{R}^{d}}\in\mathscr{X}_{\infty}^{\beta,\gamma}\). Then for all \(x\in\mathbb{R}^{d}\), there exists a unique solution \(\mathbb{Q}\) on \((\Omega,\mathscr{F})\) of the martingale problem for \((\mathscr{G}^{F},\delta_{x})\). Under \(\mathbb{Q}\) the canonical process is a strong Markov process._
In the following, we will also consider the projected process \((X_{t}^{\mathbb{T}^{d}})=(\iota(X_{t}))\) for the canonical projection \(\iota:\mathbb{R}^{d}\to\mathbb{T}^{d}\), \(x\mapsto[x]=x\mod\mathbb{Z}^{d}\), and the martingale solution \(X\) from Theorem 2.9. We define the generator \(\mathfrak{L}\) of \(X^{\mathbb{T}^{d}}\) by
\[\mathfrak{L}f:=-\mathscr{L}_{\nu}^{\alpha}f+F\cdot\nabla f\]
acting on functions \(f:\mathbb{T}^{d}\to\mathbb{R}\).
This work moreover yields a characterization of the domain \(\mathrm{dom}(\mathfrak{L})\) of the generator \(\mathfrak{L}\), cf. Theorem 5.7. We denote its semigroup by \((T_{t}^{\mathbb{T}^{d}})_{t\geqslant 0}\) with \(T_{t}^{\mathbb{T}^{d}}f:=T_{t}f^{\mathbb{R}^{d}}\), \(f\in L^{\infty}(\mathbb{T}^{d})\), where \((T_{t})_{t\geqslant 0}\) denotes the semigroup of the Markov process \((X_{t})\) on \(\mathbb{R}^{d}\) with periodic drift \(F^{\mathbb{R}^{d}}\).
The semigroup \((P_{t}^{\mathbb{T}^{d}})\) of the generalized fractional Laplacian \((-\mathscr{L}_{\nu}^{\alpha})\) acting on functions on the torus is analogously defined as \(P_{t}^{\mathbb{T}^{d}}f:=P_{t}f^{\mathbb{R}^{d}}\), and the semigroup estimates for \((P_{t})\) imply the estimates for \((P_{t}^{\mathbb{T}^{d}})\) on the periodic Besov spaces \(\mathscr{C}^{\theta}(\mathbb{T}^{d})=\mathscr{C}^{\theta}_{\infty}(\mathbb{T}^{d})\) (due to \(u\in L^{\infty}(\mathbb{T}^{d})\) implying \(u^{\mathbb{R}^{d}}\in L^{\infty}(\mathbb{R}^{d})\) and vice versa). The following lemma states the semigroup estimates for \((P_{t}^{\mathbb{T}^{d}})\) on \(\mathscr{C}^{\theta}_{2}(\mathbb{T}^{d})\), which will be employed in the sequel. The proof can be found in Appendix A. Lemma 2.10 in particular proves the extension of \(\mathscr{L}_{\nu}^{\alpha}\) to Besov spaces \(\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})\).
**Lemma 2.10**.: _Let \(u\in\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})\) for \(\beta\in\mathbb{R}\). Then the following estimates hold true_
\[\|\mathscr{L}_{\nu}^{\alpha}u\|_{\mathscr{C}^{\beta-\alpha}_{2}(\mathbb{T}^{d })}\lesssim\|u\|_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}. \tag{2.9}\]
_Moreover, for any \(\theta\geqslant 0\) and \(\vartheta\in[0,\alpha]\),_
\[\|P_{t}u\|_{\mathscr{C}^{\beta+\theta}_{2}(\mathbb{T}^{d})}\lesssim(t^{-\theta/\alpha}\lor 1)\|u\|_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})},\quad\|(P_{t}-\mathrm{Id})u\|_{\mathscr{C}^{\beta-\vartheta}_{2}(\mathbb{T}^{d})}\lesssim t^{\vartheta/\alpha}\|u\|_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}. \tag{2.10}\]
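Heuristically, the blow-up \(t^{-\theta/\alpha}\) in the first estimate of (2.10) can be read off from the Fourier multiplier of \(P_{t}\): on the block \(\Delta_{j}u\), \(j\geqslant 0\), the semigroup acts like multiplication by a factor of size \(e^{-ct2^{j\alpha}}\) (using Assumption 2.4 to bound \(\psi_{\nu}^{\alpha}(k)\) from below by \(c|k|^{\alpha}\)), and, for \(\theta>0\), an elementary optimization gives
\[\sup_{j\geqslant 0}2^{j\theta}e^{-ct2^{j\alpha}}\leqslant\sup_{r>0}r^{\theta}e^{-ctr^{\alpha}}=\Big{(}\frac{\theta}{c\alpha e}\Big{)}^{\theta/\alpha}t^{-\theta/\alpha},\]
while the block \(\Delta_{-1}\) only contributes a constant, accounting for the \(\lor 1\); the complete proof is in Appendix A.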
For functions with vanishing zero-order Fourier mode, we can improve the Schauder estimates for large \(t>0\). This is established in the following lemma; the proof can be found in Appendix A.
**Lemma 2.11**.: _Let \((P_{t})\) be the \((-\mathscr{L}_{\nu}^{\alpha})\)-semigroup on the torus \(\mathbb{T}^{d}\) as defined above. Then for \(g\in\mathscr{C}^{\beta}_{2}\), \(\beta\in\mathbb{R}\), with \(\hat{g}(0)=\mathscr{F}_{\mathbb{T}^{d}}(g)(0)=0\), exponential Schauder estimates hold true. That is, for any \(\theta\geqslant 0\), there exists \(c>0\), such that_
\[\|P_{t}g\|_{\mathscr{C}^{\beta+\theta}_{2}(\mathbb{T}^{d})}\lesssim t^{-\theta/ \alpha}e^{-ct}\|g\|_{\mathscr{C}^{\beta}_{2}(\mathbb{T}^{d})}.\]
In the sequel, we will employ the following duality result for Besov spaces on the torus. For Besov spaces on \(\mathbb{R}^{d}\), the result is proven in [1, Proposition 2.76]. The same proof applies for Besov spaces on the torus (cf. also [14, Theorem in Section 3.5.6]).
**Lemma 2.12**.: _Let \(\theta\in\mathbb{R}\) and \(f,g\in C^{\infty}(\mathbb{T}^{d})\). Then we have the duality estimate:_
\[|\langle f,g\rangle|\lesssim\|f\|_{B^{\theta}_{2,2}(\mathbb{T}^{d})}\|g\|_{B^{- \theta}_{2,2}(\mathbb{T}^{d})}. \tag{2.11}\]
_In particular, the mapping \((f,g)\mapsto\langle f,g\rangle\) can be extended uniquely to \(f\in B^{\theta}_{2,2}(\mathbb{T}^{d})\), \(g\in B^{-\theta}_{2,2}(\mathbb{T}^{d})\)._
Let us define the periodic Bessel-potential space or fractional Sobolev space for \(s\in\mathbb{R}\),
\[H^{s}(\mathbb{T}^{d})=\bigg{\{}u\in\mathscr{S}^{\prime}(\mathbb{T}^{d})\biggm{|}\|u\|_{H^{s}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}(1+|k|^{2})^{s}|\hat{u}(k)|^{2}<\infty\bigg{\}},\]
and the homogeneous periodic Bessel-potential space
\[\dot{H}^{s}(\mathbb{T}^{d})=\bigg{\{}u\in\mathscr{S}^{\prime}(\mathbb{T}^{d})\biggm{|}\|u\|_{\dot{H}^{s}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}|k|^{2s}|\hat{u}(k)|^{2}<\infty\bigg{\}}.\]
Motivated by the corresponding characterization of periodic Besov spaces from [13, Section 3.5.4], we define the homogeneous Besov space on the torus for \(\theta\in(0,1)\) with notation \(\Delta_{h}u(x):=u(x+h)-u(x)\), \(h,x\in\mathbb{T}^{d}\) as follows:
\[\dot{B}^{\theta}_{2,2}(\mathbb{T}^{d}):=\bigg{\{}u\in L^{2}(\mathbb{T}^{d}) \biggm{|}\|u\|_{\dot{B}^{\theta}_{2,2}(\mathbb{T}^{d})}^{2}:=\int_{\mathbb{T} ^{d}}|h|^{-2\theta}\|\Delta_{h}u\|_{L^{2}(\mathbb{T}^{d})}^{2}\frac{dh}{|h|^{ d}}<\infty\bigg{\}}. \tag{2.12}\]
For \(\theta=1\), we set \(\dot{B}^{1}_{2,2}(\mathbb{T}^{d}):=\dot{H}^{1}(\mathbb{T}^{d})\). Using derivatives of \(u\), one can define homogeneous periodic Besov spaces in that way also for \(\theta\geqslant 1\) (cf. [13, Section 3.5.4]), but we will not need them below. We also refer to [13, (iv) of Theorem, Section 3.5.4] for an equivalent characterization of spaces \(B^{\theta}_{2,2}(\mathbb{T}^{d})\) for \(\theta\in(0,1)\) in terms of the differences \(\Delta_{h}u\).
### Strategy to prove the main result
To prove the CLT in Theorem 7.2, we distinguish between the cases \(\alpha=2\) and \(\alpha\in(1,2)\). In the following, we briefly summarize our strategy to prove the convergences (1.2) (Brownian case, cf. [1, Chapter 3, Section 4.2] in the case of \(C_{b}^{2}\)-drift) and (1.4) (pure Levy noise case, cf. [10] in the case of \(C_{b}^{3}\)-drift).
First we prove existence of a unique invariant probability measure \(\pi\) for \(X^{\mathbb{T}^{d}}\). To that aim, we solve in Section 3 the singular Fokker-Planck equation with the paracontrolled approach in \(\mathscr{C}_{1}^{\alpha+\beta-1}\), yielding a continuous (as \(\alpha+\beta-1>0\)) Lebesgue-density. Furthermore, we prove a strict maximum principle on compacts for the Fokker-Planck equation. In Section 5 an application of Doeblin's theorem then yields existence and uniqueness of the invariant ergodic probability measure \(\pi\) for \(\mathfrak{L}\) with a strictly positive Lebesgue density \(\rho_{\infty}\). Doeblin's theorem furthermore yields pointwise spectral gap estimates on the semigroup \((T_{t}^{\mathbb{T}^{d}})_{t\geqslant 0}\) associated to \(\mathfrak{L}\), i.e. the process \(X^{\mathbb{T}^{d}}\) is exponentially ergodic. We then extend those pointwise spectral gap estimates to \(L^{2}(\pi)\)-spectral gap estimates. This enables us to solve the Poisson equation in Corollary 5.9 for right-hand sides that are elements of \(L^{2}(\pi)\) and that have vanishing mean under \(\pi\). In particular, we can solve the Poisson equation with right-hand side \(F^{m}-\langle F^{m}\rangle_{\pi}\) for \(F^{m}\in C^{\infty}(\mathbb{T}^{d})\) for each fixed \(m\in\mathbb{N}\), where \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\), denoting the solution by \(\chi^{m}\). We then prove convergence of \((\chi^{m})_{m}\) in \(L^{2}(\pi)\) utilizing a Poincaré-type estimate for the operator \(\mathfrak{L}\) and combining with the theory from [11]. Via solving the resolvent equation \((\lambda-\mathfrak{L})g=G\) in Section 4 with the paracontrolled approach for right-hand sides \(G\in L^{2}(\pi)\) or \(G=F^{i}\), \(i=1,...,d\), we then obtain in Section 6 convergence of \((\chi^{m})_{m}\) in \((\mathscr{C}_{2}^{\alpha+\beta}(\mathbb{T}^{d}))^{d}\) to a limit \(\chi\) which indeed solves the Poisson equation \((-\mathfrak{L})\chi=F-\langle F\rangle_{\pi}\) with singular right-hand side \(F-\langle F\rangle_{\pi}\). Here the mean \(\langle F\rangle_{\pi}\) can be defined in a stable manner using the regularity, respectively the paracontrolled structure, of the density \(\rho_{\infty}\), cf. Lemma 5.10. Decomposing the drift in terms of the solution to the Poisson equation and Dynkin's martingale, we can finally prove the functional CLT in Section 7.
Via the Feynman-Kac formula, the CLT yields the periodic homogenization result of Corollary 7.3 for the solution to the associated Cauchy problem with operator \(\mathfrak{L}^{\varepsilon}\) as \(\varepsilon\to 0\), where formally \(\mathfrak{L}^{\varepsilon}f=-\mathscr{L}_{\nu}^{\alpha}f+\varepsilon^{1-\alpha}F(\varepsilon^{-1}\cdot)\cdot\nabla f\).
## 3 Singular Fokker-Planck equation and a strict maximum principle
This section features the results on the Fokker-Planck equation, Theorem 3.1 and Proposition 3.5, which will be of use in Section 5 below.
Let us define the blow-up spaces for \(\gamma\in(0,1)\),
\[\mathscr{M}_{T,0}^{\gamma}X:=\left\{u:(0,T]\to X\;\middle|\;\sup_{t\in(0,T]}t^{\gamma}\|u_{t}\|_{X}<\infty\right\}\]
and
\[C_{T,0}^{1,\gamma}X:=\left\{u:(0,T]\to X\;\middle|\;\sup_{0<s<t\leqslant T}\frac{s^{\gamma}\|u_{t}-u_{s}\|_{X}}{|t-s|}<\infty\right\}\]
with blow-up at \(t=0\).
The solution to the Fokker-Planck equation with initial condition equal to a Dirac measure will have a blow-up at time \(t=0\) due to the singularity of the initial condition. A direct computation shows that the Dirac measure in \(x\in\mathbb{R}^{d}\) satisfies \(\delta_{x}\in\mathscr{C}_{p}^{-d(1-\frac{1}{p})}\) for any \(p\in[1,\infty]\), in particular \(\delta_{x}\in\mathscr{C}_{1}^{0}\). Moreover, one can show that the map \(x\mapsto\delta_{x}\in\mathscr{C}_{1}^{-\varepsilon}\) is continuous for any \(\varepsilon>0\). The next theorem proves existence of a mild solution to the Fokker-Planck equation
\[(\partial_{t}-\mathfrak{L}^{*})\rho_{t}=0,\quad\rho_{0}=\mu,\]
with initial condition \(\mu\in\mathscr{C}_{1}^{-\varepsilon}\) for small \(\varepsilon>0\). Here, \(\mathfrak{L}^{*}\) denotes the formal Lebesgue-adjoint to \(\mathfrak{L}\),
\[\mathfrak{L}^{*}f:=-\mathscr{L}_{\nu}^{\alpha}f-\nabla\cdot(Ff)=-\mathscr{L}_ {\nu}^{\alpha}f-div(Ff).\]
The proof of Theorem 3.1 is similar to [13, Theorem 4.7].
**Theorem 3.1**.: _Let \(T>0\), \(\alpha\in(1,2]\) and \(p\in[1,\infty]\). Let either \(\beta\in(\frac{1-\alpha}{2},0)\) and \(F\in\mathscr{C}_{\mathbb{R}^{d}}^{\beta}\) or \(F\in\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}\) for \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\), \(\gamma^{\prime}\in(\frac{2\beta+2\alpha-1}{\alpha},1)\). Then, for any small enough \(\varepsilon>0\) and any initial condition \(\mu\in\mathscr{C}_{p}^{-\varepsilon}\), there exists a unique mild solution \(\rho\) to the Fokker-Planck equation in \(\mathscr{M}_{T,0}^{\gamma}\mathscr{C}_{p}^{\alpha+\beta-1}\cap C_{T}^{1- \gamma}\mathscr{C}_{p}^{\beta}\cap C_{T,0}^{1,\gamma}\mathscr{C}_{p}^{\beta}\) for \(\gamma\in(C(\varepsilon),1)\) (for some \(C(\varepsilon)\in(0,1)\)) in the Young regime and \(\gamma\in(\gamma^{\prime},\frac{\alpha\gamma^{\prime}}{2-\alpha-3\beta})\) in the rough regime, i.e._
\[\rho_{t}=P_{t}\mu+\int_{0}^{t}P_{t-s}(-\nabla\cdot(F\rho_{s}))ds, \tag{3.1}\]
_where \((P_{t})_{t\geqslant 0}\) denotes the \((-\mathscr{L}_{\nu}^{\alpha})\)-semigroup. In the rough case, the solution satisfies_
\[\rho_{t}=\rho_{t}^{\sharp}+\rho_{t}\prec I_{t}(-\nabla\cdot F) \tag{3.2}\]
_where \(\rho_{t}^{\sharp}\in\mathscr{M}_{T,0}^{\gamma}\mathscr{C}_{p}^{2(\alpha+ \beta)-2}\cap C_{T}^{1-\gamma}\mathscr{C}_{p}^{2\beta-2+\alpha}\cap C_{T,0}^ {1,\gamma}\mathscr{C}_{p}^{2\beta-2+\alpha}\) and \(I_{t}(v):=\int_{0}^{t}P_{t-s}v_{s}ds\)._
_Moreover, the solution depends continuously on the data \((F,\mu)\in\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}\times\mathscr{C}_{p}^ {-\varepsilon}\). Furthermore, for any fixed \(t>0\), the solution satisfies \((\rho_{t},\rho_{t}^{\sharp})\in\mathscr{C}^{\alpha+\beta-1}\times\mathscr{C}^ {2(\alpha+\beta)-2}\)._
_If \((F,\mu)\) are \(1\)-periodic distributions, then the solution \(\rho_{t}\) is \(1\)-periodic._
Proof.: We will prove that we can solve the Fokker-Planck equation for initial conditions \(\mu\in\mathscr{C}_{p}^{-\varepsilon}\) for \(\varepsilon=-((1-\tilde{\gamma})\alpha+\beta)\) for \(\tilde{\gamma}\in[\frac{\alpha+\beta}{\alpha},1)\) in the Young regime and for \(\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-1)\) for \(\tilde{\gamma}\in[\frac{2\beta+2\alpha-1}{\alpha}\lor 0,\gamma^{\prime}]\) in the rough regime. In the Young regime, we obtain a solution \(\rho\in\mathscr{M}_{T,0}^{\gamma}\mathscr{C}_{p}^{\alpha+\beta-1}\cap C_{T}^{1 -\gamma}\mathscr{C}_{p}^{\beta}\cap C_{T,0}^{\gamma,1}\mathscr{C}_{p}^{\beta}\) for \(\gamma=\tilde{\gamma}\) and the proof is analogous to [13, Theorem 4.1]. We thus only give the proof in the rough regime.
To that aim, let us define, analogously to the proof of [13, Theorem 4.7] and for \(\gamma\in(\gamma^{\prime},1)\) as there,
\[\mathscr{L}_{T,p}^{\gamma,\theta}:=\mathscr{M}_{T,0}^{\gamma} \mathscr{C}_{p}^{\theta}\cap C_{T}^{1-\gamma}\mathscr{C}_{p}^{\theta-\alpha} \cap C_{T,0}^{1,\gamma}\mathscr{C}_{p}^{\theta-\alpha}\]
and the paracontrolled solution space
\[\mathscr{D}_{T,p}^{\gamma}:=\{(u,u^{\prime})\in\mathscr{L}_{T,p}^{\gamma^{\prime},\alpha+\beta-1}\times(\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1})^{d}\mid u_{t}^{\sharp}=u_{t}-u_{t}^{\prime}\prec I_{t}(-\nabla\!\cdot\!F)\in\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}\}\]
for \(p\in[1,\infty]\), equipped with the norm
\[\|u-w\|_{\mathscr{D}_{T,p}^{\gamma}}:=\|u-w\|_{\mathscr{L}_{T,p}^{\gamma^{\prime},\alpha+\beta-1}}+\|u^{\prime}-w^{\prime}\|_{(\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1})^{d}}+\|u^{\sharp}-w^{\sharp}\|_{\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}},\]
which makes the space a Banach space.
For \(\mu\in\mathscr{C}_{p}^{-\varepsilon}\), \(\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)\), we first prove that we obtain a paracontrolled solution \(\rho\in\mathscr{D}_{T,p}^{\gamma}\). As the proof is similar to [13, Theorem 4.7], we only give the essential arguments of the proof. Notice that compared to [13, Theorem 4.7], here we consider the operator \(\mathfrak{L}^{\star}\) instead of \(\mathfrak{L}\) and initial conditions in \(\mathscr{C}_{p}^{-\varepsilon}\) for \(\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)\), hence \(\rho_{0}=\rho_{0}^{\sharp}\).
For \(\rho\in\mathscr{D}_{T,p}^{\gamma}\) the resonant product \(F\odot\rho=(F^{i}\odot\rho)_{i=1,\ldots,d}\) is well-defined and satisfies
\[F^{i}\odot\rho=F^{i}\odot\rho^{\sharp}+\rho^{\prime}\cdot(F^{i} \odot I_{t}(\nabla\cdot F))+C_{1}(\rho^{\prime},I_{t}(\nabla\cdot F),F^{i})\]
for the paraproduct commutator
\[C_{1}(f,g,h):=(f\prec g)\odot h-f\cdot(g\odot h).\]
Using the paraproduct estimates, we obtain Lipschitz dependence of the product on \((F,\rho)\in\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}\times\mathscr{D}_{T,p}^ {\gamma}\), that is,
\[\|F \odot\rho\|_{\mathscr{M}_{T}^{\gamma^{\prime}}\mathscr{C}_{p}^{ \alpha+2\beta-1}}\] \[\lesssim\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}}(1+\| F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}})\big{(}\|\rho\|_{\mathscr{M}_{T}^{ \gamma^{\prime}}\mathscr{C}_{p}^{\alpha+\beta-1}}+\|\rho^{\prime}\|_{( \mathscr{M}_{T}^{\gamma^{\prime}}\mathscr{C}_{p}^{\alpha+\beta-1-\delta})^{d} }+\|\rho^{\sharp}\|_{\mathscr{M}_{T}^{\gamma^{\prime}}\mathscr{C}_{p}^{2( \alpha+\beta)-2-\delta}}\big{)}\] \[\lesssim\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}}(1+\| F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}})\big{(}\|\rho\|_{\mathscr{M}_{T}^{ \gamma^{\prime}}\mathscr{C}_{p}^{\alpha+\beta-1}}+\|\rho^{\prime}\|_{( \mathscr{E}_{T,p}^{\gamma,\alpha+\beta-1})^{d}}+\|\rho^{\sharp}\|_{\mathscr{E }_{T,p}^{\gamma,2(\alpha+\beta)-2}}\big{)}\] \[\lesssim\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}}(1+\| F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}})\|\rho\|_{\mathscr{D}_{T,p}^{ \gamma}}\]
for \(\delta=\alpha-\alpha\frac{\gamma^{\prime}}{\gamma}\), using moreover the interpolation estimate from [13, Lemma 3.7, (3.13)].
The contraction map will be defined as
\[\mathscr{D}_{T,p}^{\gamma}\ni(\rho,\rho^{\prime})\mapsto(\phi( \rho),\rho)\in\mathscr{D}_{T,p}^{\gamma}\]
with
\[\phi(\rho)_{t}:=P_{t}\mu+I_{t}(-\nabla\cdot(F\rho)).\]
Here, \(\overline{T}\) will be chosen small enough, such that the above map becomes a contraction. Afterwards the solutions on the subintervals of length \(\overline{T}\) are patched together. Notice that the fixed point
satisfies \(\rho^{\prime}=\rho\).
As \(\varepsilon=-((2-\tilde{\gamma})\alpha+2\beta-2)\), we obtain by the semigroup estimates from [13, Lemma 2.5], that
\[\left\|P_{t}\mu\right\|_{\mathscr{C}_{p}^{2(\alpha+\beta)-2}}\lesssim t^{- \tilde{\gamma}}\|\mu\|_{\mathscr{C}_{p}^{-\varepsilon}}. \tag{3.3}\]
Utilizing the Schauder estimates [13, Corollary 3.2] (which apply by a time change also for blow-up spaces with blow-up at \(t=0\) instead of blow-ups at \(t=T\)) and the estimate for the resonant product yields
\[\left\|I(\nabla\cdot(F\rho))\right\|_{\mathscr{L}_{T,p}^{\gamma,\alpha+\beta-1}} \lesssim T^{\gamma-\gamma^{\prime}}\|\nabla\cdot(F\rho)\|_{\mathscr{M}_{T,0}^{\gamma^{\prime}}\mathscr{C}_{p}^{\beta-1}}\] \[\lesssim T^{\gamma-\gamma^{\prime}}\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}}(1+\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}})\|\rho\|_{\mathscr{D}_{T,p}^{\gamma}}.\]
Moreover, we have that for a solution \(\rho\),
\[\rho_{t}^{\sharp}=P_{t}\mu+C_{2}(\rho,-\nabla\cdot F)_{t}+I_{t}(-\nabla\cdot(\rho\odot F))+I_{t}(-\nabla\cdot(\rho\succ F))+I_{t}(-(\nabla\rho)\prec F)\]
for the semigroup commutator
\[C_{2}(u,v)=I(u\prec v)-u\prec I(v).\]
Using (3.3) and [13, Corollary 3.2], we obtain
\[\|\rho^{\sharp}\|_{\mathscr{L}_{T,p}^{\gamma,2(\alpha+\beta)-2}}\lesssim\|\mu\|_{\mathscr{C}_{p}^{-\varepsilon}}+T^{\gamma-\gamma^{\prime}}\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}}\|\rho\|_{\mathscr{L}_{T,p}^{\gamma^{\prime},\alpha+\beta-1}}.\]
Hence, as \(\gamma>\gamma^{\prime}\), replacing \(T\) by \(\overline{T}\leqslant T\) small enough, we obtain a paracontrolled solution in \(\mathscr{D}_{\overline{T},p}^{\gamma}\). Then, we paste the solutions on the subintervals together to obtain a solution on \([0,T]\), cf. the proof of [13, Theorem 4.7].
It remains to justify that the solution at fixed times \(t>0\) satisfies \((\rho_{t},\rho_{t}^{\sharp})\in\mathscr{C}^{\alpha+\beta-1}\times\mathscr{C}^{2(\alpha+\beta-1)}\), i.e. that we can increase the integrability from \(p\) to \(\infty\). From the above, we obtain \((\rho,\rho^{\sharp})\in C([t,T],\mathscr{C}_{p}^{\alpha+\beta-1})\times C([t,T],\mathscr{C}_{p}^{2(\alpha+\beta-1)})\). Then, we can apply the argument for increasing the integrability that was carried out at the end of the proof of [13, Proposition 2.4], to obtain that indeed \((\rho,\rho^{\sharp})\in C([t,T],\mathscr{C}^{\alpha+\beta-1})\times C([t,T],\mathscr{C}^{2(\alpha+\beta-1)})\) for any \(t\in(0,T)\).
The continuous dependence of the solution on the data \((F,\mu)\) follows analogously to [13, Theorem 4.12], with the above estimates and a Gronwall-type argument.
If \((F,\mu)\) are \(1\)-periodic distributions, then \(P_{t}\mu=p_{t}*\mu\) is \(1\)-periodic, as the convolution of the fractional heat kernel \(p_{t}\) with a periodic distribution yields a periodic function, and the fixed point argument can be carried out in the periodic solution space \(\mathscr{D}_{T,p}^{\gamma}(\mathbb{T}^{d})\).
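To make the mild formulation (3.1) concrete, the following minimal numerical sketch discretises it on the one-dimensional torus by an exponential-Euler scheme, with a smooth drift standing in for the singular \(F\) and a smooth bump instead of a Dirac initial condition; all parameters are ad hoc choices and the sketch is not part of the proof.

```python
import numpy as np

# Exponential-Euler discretisation of the mild Fokker-Planck formulation (3.1).
N, alpha, dt, n_steps = 256, 1.5, 1e-3, 2_000
x = np.linspace(0.0, 1.0, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)
psi = np.abs(2.0 * np.pi * k) ** alpha                 # symbol of L_nu^alpha (cf. Remark 2.2)
F = 0.5 * np.sin(2.0 * np.pi * x)                      # smooth stand-in for the singular drift

def heat(v, t):                                        # P_t v via the Fourier multiplier
    return np.real(np.fft.ifft(np.exp(-t * psi) * np.fft.fft(v)))

def ddx(v):                                            # spectral derivative d/dx
    return np.real(np.fft.ifft(2j * np.pi * k * np.fft.fft(v)))

rho = np.exp(-100.0 * (x - 0.5) ** 2)
rho /= rho.mean()                                      # normalise: total mass 1 on the torus
for _ in range(n_steps):
    rho = heat(rho, dt) + dt * heat(-ddx(F * rho), dt)

print(rho.min(), rho.mean())   # mass is conserved; for these parameters the density stays positive
```

Consistently with the strict positivity proven in Proposition 3.5 below, the computed density stays bounded away from zero while its total mass is conserved.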
**Corollary 3.2**.: _Let \(X\) be the unique martingale solution of the singular periodic SDE (1.1) for \(\mathfrak{L}\) (acting on functions \(f:\mathbb{R}^{d}\to\mathbb{R}\)), starting at \(x\in\mathbb{R}^{d}\). Let \((t,y)\mapsto\rho_{t}(x,y)\) be the mild solution of the Fokker-Planck equation with \(\rho_{0}=\delta_{x}\) from Theorem 3.1. Then for any \(t>0\), the map \((x,y)\mapsto\rho_{t}(x,y)\) is continuous. Furthermore, for any \(f\in L^{\infty}(\mathbb{R}^{d})\),_
\[\operatorname{E}_{X_{0}=x}[f(X_{t})]=\int_{\mathbb{R}^{d}}f(y)\rho_{t}(x,y)dy, \tag{3.4}\]
_that is, \(\rho_{t}(x,\cdot)\) is the density of \(Law(X_{t})\), if \(X_{0}=x\), with respect to the Lebesgue measure. In particular, for the projected solution \(X^{\mathbb{T}^{d}}\) with drift \(F\in\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}(\mathbb{T}^{d})\) and \(f\in L^{\infty}(\mathbb{T}^{d})\) and \(z\in\mathbb{T}^{d}\),_
\[\operatorname{E}_{X_{0}^{\mathbb{T}^{d}}=z}[f(X_{t}^{\mathbb{T}^{d}})]=\int_{ \mathbb{T}^{d}}f(w)\rho_{t}(z,w)dw, \tag{3.5}\]
_where, by abusing notation to not introduce a new symbol for the density on the torus, \(\rho_{t}(z,w):=\rho_{t}(x,y)\) for \((x,y)\in\mathbb{R}^{d}\) with \((\iota(x),\iota(y))=(z,w)\), \(\iota:\mathbb{R}^{d}\to\mathbb{T}^{d}\) denoting the canonical projection._
**Remark 3.3**.: _Let \(\rho(x,\cdot)\) be the solution of the Fokker-Planck equation started in \(\delta_{x}\) from Theorem 3.1 and let \(u^{y}\) solve the Kolmogorov backward equation with terminal condition \(u_{T}=\delta_{y}\), whose existence follows from [13, Theorem 4.7]. Then due to (3.4) and the Feynman-Kac formula (approximating \(F\) and utilizing the continuity of the solution maps) we see the equality \(\rho_{t}(x,y)=u_{T-t}^{y}(x)\)._
**Remark 3.4**.: _If \(F\in\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}(\mathbb{T}^{d})\), then by definition of \((P_{t}^{\mathbb{T}^{d}})\), \(\rho(z,\cdot)\) is the mild solution of the Fokker-Planck equation on the torus (that is, \((P_{t})\) replaced by \((P_{t}^{\mathbb{T}^{d}})\) in (3.1)) with \(\rho_{0}(z,\cdot)=\delta_{z}\)._
Proof.: Continuity in \(y\) follows from \(\rho_{t}(x,\cdot)\in\mathscr{C}^{\alpha+\beta-1}\) and \(\alpha+\beta-1>0\). Continuity in \(x\) follows from the continuous dependence of the solution on the initial condition \(\delta_{x}\) and continuity of the map \(x\mapsto\delta_{x}\in\mathscr{C}_{1}^{-\varepsilon}\) for \(\varepsilon>0\).
That \(\rho_{t}\) is the density of \(\operatorname{Law}(X_{t})\) follows by approximation of \(F\) by \(F^{m}\in C_{b}^{\infty}(\mathbb{R}^{d})\) with \(F^{m}\to F\) in \(\mathscr{C}_{\mathbb{R}^{d}}^{\beta}\), respectively in \(\mathscr{X}_{\infty}^{\beta,\gamma^{\prime}}\), using that \(\rho\) depends continuously on the data \((F,\mu)\) and that \(X^{m}\to X\) in distribution, where \(X^{m}\) is the strong solution to the SDE with drift term \(F^{m}\) (cf. the proof of Theorem 2.9) and the Feynman-Kac formula for classical SDEs. Indeed, for \(m\in\mathbb{N}\), we have that for \(f\in C_{b}^{2}\) (and thus for \(f\in L^{\infty}\) by approximation),
\[u_{T-t}^{m}(x)=\mathds{E}_{X_{0}^{m}=x}[f(X_{t}^{m})]=\int f(y)\rho_{t}^{m}(x,y)dy\]
with \((\partial_{t}+\mathfrak{L}^{m})u^{m}=\mathscr{G}^{F^{m}}u^{m}=0\), \(u_{T}^{m}=f\), and \((\partial_{t}-(\mathfrak{L}^{m})^{*})\rho=0\), \(\rho_{0}=\delta_{x}\). Now, we let \(m\to\infty\) to obtain (3.4). In particular, \(\rho_{t}\geqslant 0\) and \(\rho_{t}\in L^{1}(dx)\). That \(\rho_{t}\) is well-defined follows as \(\rho_{t}\) is periodic (due to the periodicity assumption on \(F\)). Equality (3.5) follows from (3.4) considering \(f\circ\iota\) instead of \(f\).
**Proposition 3.5**.: _Let \(\mu\in\mathscr{C}_{1}^{0}\) be a positive, nontrivial (\(\mu\neq 0\)) measure. Let \(\rho\) be the mild solution of the Fokker-Planck equation \((\partial_{t}-\mathfrak{L}^{*})\rho_{t}=0\) with \(\rho_{0}=\mu\). Then for any compact \(K\subset\mathbb{R}^{d}\) and any \(t>0\), there exists \(c>0\) such that_
\[\min_{x\in K}\rho_{t}(x)\geqslant c>0.\]
_Let \(\rho_{t}\) be as in Remark 3.4. Then, in particular, for any \(z\in\mathbb{T}^{d}\), \(t>0\), there exists \(c>0\) such that_
\[\min_{x\in\mathbb{T}^{d}}\rho_{t}(z,x)\geqslant c>0.\]
Proof.: In the Brownian case, \(\alpha=2\), this follows from the proof of [13, Theorem 5.1]. We give the adjusted argument for \(\alpha\in(1,2]\).
Let \(p_{t}\) be the \(\alpha\)-stable density of \(L_{t}\). Without loss of generality, we assume \(\mu=u\in C_{b}(\mathbb{R}^{d})\) with \(u\geqslant 0\) and with \(u\geqslant 1\) on a ball \(B(0,\kappa)\), \(\kappa>0\). Otherwise, we may consider \(\rho_{s}\) for \(s>0\) as an initial condition, for which we know that \(\rho_{s}\in\mathscr{C}^{\alpha+\beta-1}\subset C_{b}(\mathbb{R}^{d})\) and that \(\rho_{s}\geqslant 0\) by Corollary 3.2. Then by continuity there exists a ball \(B(x,\kappa)\) where \(\rho_{s}>0\). Dividing by the lower bound and shifting \(\rho_{s}\), we can assume that \(\rho_{s}>1\) on \(B(0,\kappa)\).
Let now \(\kappa>0\) and \(u\in C_{b}(\mathbb{R}^{d})\) with \(u\geqslant 0\) and with \(u\geqslant 1\) on the ball \(B(0,\kappa)\). Then by the scaling property, we have that
\[p_{t}*u(y)\geqslant\mathds{P}(|y+t^{1/\alpha}L_{1}|\leqslant\kappa)=\mathds{P}(L_{1}\in B(yt^{-1/\alpha},\kappa t^{-1/\alpha})),\]
using \(u\geqslant\mathbf{1}_{B(0,\kappa)}\), the self-similarity of \(L\) and the symmetry \(L\stackrel{d}{=}-L\).
Let \(y=(\kappa+t\rho)z\) for \(z\in B(0,1)\), \(\rho\geqslant 0\), so that \(y\in B(0,\kappa+t\rho)\). Then we obtain
\[\mathds{P}(L_{1} \in B(yt^{-1/\alpha},\kappa t^{-1/\alpha}))\] \[=\mathds{P}(L_{1}\in B(z(\kappa t^{-1/\alpha}+\rho t^{1-1/\alpha} ),\kappa t^{-1/\alpha}))\] \[\geqslant\mathds{P}(2z\cdot L_{1}\geqslant|L_{1}|^{2}(\kappa t^{- 1/\alpha}+\rho t^{1-1/\alpha})^{-1}+(|z|^{2}-1)[\kappa t^{-1/\alpha}+\rho t^{1- 1/\alpha}])\] \[\geqslant\inf_{|z|\leqslant 1}\mathds{P}(2z\cdot L_{1}\geqslant|L _{1}|^{2}(\kappa t^{-1/\alpha}+\rho t^{1-1/\alpha})^{-1}+(|z|^{2}-1)[\kappa t^{ -1/\alpha}+\rho t^{1-1/\alpha}])\] \[=\inf_{|z|=1}\mathds{P}(2z\cdot L_{1}\geqslant|L_{1}|^{2}(\kappa t ^{-1/\alpha}+\rho t^{1-1/\alpha})^{-1})\] \[\to\inf_{|z|=1}\mathds{P}(z\cdot L_{1}\geqslant 0)=\frac{1}{2}\]
for \(t\to 0\). Here we used that \(\alpha>1\) and that by symmetry of \(L\), for any \(z\in B(0,1)\) with \(|z|=1\), \(\mathds{P}(z\cdot L_{1}\geqslant 0)=\mathds{P}(z\cdot L_{1}\leqslant 0)=1- \mathds{P}(z\cdot L_{1}\geqslant 0)\), because \(\mathds{P}(z\cdot L_{1}=0)=\mathds{P}(L_{1}=0)=0\).
Thus, we conclude that there exists \(t_{\rho}>0\), such that for all \(t\in[0,t_{\rho}]\) and all \(y\in B(0,\kappa+t\rho)\), \(p_{t}*u(y)\geqslant\frac{1}{4}\).
Moreover, we have
\[\rho_{t}=P_{t}u+\int_{0}^{t}P_{t-s}(-\nabla\cdot(F\rho_{s}))ds\]
with \(P_{t}u=p_{t}*u\) and
\[\left\|\int_{0}^{t}P_{t-s}(-\nabla\cdot(F\rho_{s}))ds\right\|_{L^{\infty}} \leqslant Ct^{(\alpha+\beta-1-\varepsilon)/\alpha}\]
for \(\varepsilon\in(0,\alpha+\beta-1)\) by the semigroup estimates, [1, Lemma 2.5], with \(\alpha+\beta-1>0\). Hence, for small enough \(t\), we can achieve
\[\left\|\int_{0}^{t}P_{t-s}(-\nabla\cdot(F\rho_{s}))ds\right\|_{L^{\infty}}< \frac{1}{8}.\]
Together with the lower bound for \(p_{t}*u\), we obtain that there exists \(t_{\rho}>0\), such that for all \(t\in[0,t_{\rho}]\) and all \(y\in B(0,\kappa+t\rho)\), it holds that
\[\rho_{t}(y)\geqslant\frac{1}{8}.\]
Using linearity of the equation, we can repeat that argument on \([t_{\rho},2t_{\rho}]\) etc. Because \(K\) is compact, finitely many steps suffice (for large enough \(t\), the ball \(B(0,\kappa+t\rho)\) will cover \(K\)) to conclude that for all \(T>0\) there exists \(c>0\) such that for all \(y\in K\) and all \(t\in[0,T]\),
\[\rho_{t}(y)\geqslant c>0.\qed\]
## 4 Singular resolvent equation
In this and all subsequent sections of this paper, we write \((P_{t})\), respectively \((T_{t})\), for the semigroups acting on the periodic Besov spaces \(\mathscr{C}_{p}^{\theta}(\mathbb{T}^{d})\), \(p=2,\infty\), omitting the superscript \(\mathbb{T}^{d}\) that we introduced earlier.
We solve the resolvent equation in Theorem 4.2 for the singular operator \(\mathfrak{L}\) and for singular paracontrolled right-hand sides \(G=G^{\sharp}+G^{\prime}\prec F\), \(G^{\sharp}\in\mathscr{C}_{2}^{0}(\mathbb{T}^{d})\), \(G^{\prime}\in(\mathscr{C}_{2}^{\alpha+\beta-1}(\mathbb{T}^{d}))^{d}\), that is
\[(\lambda-\mathfrak{L})g=G,\]
obtaining a solution \(g\in\mathscr{C}_{2}^{\alpha+\beta}(\mathbb{T}^{d})\).
The next lemma proves semigroup and commutator estimates for the \(I_{\lambda}\)-operator.
**Lemma 4.1**.: _Let \(\lambda\geqslant 1\), \(\delta\in\mathbb{R}\) and \(v\in\mathscr{C}_{2}^{\delta}\). Let again \(I_{\lambda}(v):=\int_{0}^{\infty}e^{-\lambda t}P_{t}vdt\). Then, \(I_{\lambda}(v)\) is well-defined in \(\mathscr{C}_{2}^{\delta+\vartheta}(\mathbb{T}^{d})\) for \(\vartheta\in[0,\alpha]\) and the following estimate holds true_
\[\|I_{\lambda}(v)\|_{\mathscr{C}_{2}^{\delta+\vartheta}(\mathbb{T}^{d})} \lesssim\lambda^{-(1-\vartheta/\alpha)}\|v\|_{\mathscr{C}_{2}^{ \delta}(\mathbb{T}^{d})}. \tag{4.1}\]
_Furthermore, for \(v\in\mathscr{C}_{2}^{\sigma}(\mathbb{T}^{d})\), \(\sigma<1\), \(u\in\mathscr{C}^{\beta}(\mathbb{T}^{d})\), \(\beta\in\mathbb{R}\), and \(\vartheta\in[0,\alpha]\), the following commutator estimate holds true:_
\[\|C_{\lambda}(v,u)\|_{\mathscr{C}_{2}^{\sigma+\beta+\vartheta}(\mathbb{T}^{d})} :=\|I_{\lambda}(v\prec u)-v\prec I_{\lambda}(u)\|_{\mathscr{C}_{2}^{\sigma+\beta+\vartheta}(\mathbb{T}^{d})}\] \[\lesssim\lambda^{-(1-\vartheta/\alpha)}\|v\|_{\mathscr{C}_{2}^{\sigma}(\mathbb{T}^{d})}\|u\|_{\mathscr{C}^{\beta}(\mathbb{T}^{d})}. \tag{4.2}\]
Proof.: The proof of (4.1) follows from the semigroup estimates, Lemma 2.10. Indeed, we have
\[\|I_{\lambda}(v)\|_{\mathscr{C}_{2}^{\delta+\vartheta}(\mathbb{T}^{d})} \leqslant\int_{0}^{\infty}e^{-\lambda t}\|P_{t}v\|_{\mathscr{C}_{2}^{\delta+\vartheta}(\mathbb{T}^{d})}dt\] \[\lesssim\|v\|_{\mathscr{C}_{2}^{\delta}(\mathbb{T}^{d})}\int_{0}^{\infty}e^{-\lambda t}[t^{-\vartheta/\alpha}\lor 1]dt\] \[=\|v\|_{\mathscr{C}_{2}^{\delta}(\mathbb{T}^{d})}\bigg{(}\lambda^{-(1-\vartheta/\alpha)}\int_{0}^{1}e^{-t}t^{-\vartheta/\alpha}dt+\lambda^{-1}\int_{1}^{\infty}e^{-t}dt\bigg{)}\] \[\lesssim\lambda^{-(1-\vartheta/\alpha)}\|v\|_{\mathscr{C}_{2}^{\delta}(\mathbb{T}^{d})},\]
since \(\lambda\geqslant 1\) and where we use that \(\int_{0}^{1}e^{-t}t^{-\vartheta/\alpha}dt\leqslant\int_{0}^{1}t^{-\vartheta/ \alpha}dt<\infty\) if \(\vartheta\in[0,\alpha)\) and \(\int_{1}^{\infty}e^{-t}dt<\infty\). The bound in the case \(\vartheta=\alpha\) follows with
\[\|I_{\lambda}(v)\|_{\mathscr{C}_{2}^{\delta+\alpha}(\mathbb{T}^{d})} \leqslant\bigg{\|}\int_{0}^{1}e^{-\lambda t}P_{t}vdt\bigg{\|}_{\mathscr{C}_{2}^{\delta+\alpha}(\mathbb{T}^{d})}+\int_{1}^{\infty}e^{-\lambda t}\|P_{t}v\|_{\mathscr{C}_{2}^{\delta+\alpha}(\mathbb{T}^{d})}dt\] \[\lesssim\|v\|_{\mathscr{C}_{2}^{\delta}(\mathbb{T}^{d})},\]
using [12, Lemma 3.1] to estimate the integral over \([0,1]\) (with, in the notation of that lemma, \(T=1\), \(\gamma=0\), \(\sigma=\delta\), \(\varsigma=\alpha\), \(f_{0,t}=e^{-\lambda t}P_{t}v\)).
The commutator (4.2) is proven analogously using [12, Lemma 2.7].
**Theorem 4.2**.: _Let \(\alpha\in(1,2]\) and \(F\in\mathscr{C}^{\beta}(\mathbb{T}^{d})\) for \(\beta\in(\frac{1-\alpha}{2},0)\) or \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) for \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) and \(\gamma\in(\frac{2\beta+2\alpha-1}{\alpha},1)\). Then, for \(\lambda>0\) large enough, the resolvent equation_
\[R_{\lambda}g=(\lambda-\mathfrak{L})g=G \tag{4.3}\]
_with right-hand side \(G=G^{\sharp}+G^{\prime}\prec F\), \(G^{\sharp}\in\mathscr{C}_{2}^{0}(\mathbb{T}^{d})\), \(G^{\prime}\in(\mathscr{C}_{2}^{\alpha+\beta-1}(\mathbb{T}^{d}))^{d}\), possesses a unique solution \(g\in\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})\), \(\theta\in((2-\beta)/2,\beta+\alpha)\)._
_If \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\), the solution is paracontrolled, that is,_
\[g=g^{\sharp}+(G^{\prime}+\nabla g)\prec I_{\lambda}(F),\quad g^{\sharp}\in\mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d}). \tag{4.4}\]
Proof.: Consider the paracontrolled solution space
\[\mathscr{D}_{2}^{\theta}:=\{(g,g^{\prime})\in\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})\times(\mathscr{C}_{2}^{\theta-1}(\mathbb{T}^{d}))^{d}\mid g^{\sharp}:=g-g^{\prime}\prec I_{\lambda}(F)\in\mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d})\} \tag{4.5}\]
with norm \(\|g-h\|_{\mathscr{D}_{2}^{\theta}}:=\|g-h\|_{\mathscr{C}_{2}^{\theta}(\mathbb{ T}^{d})}+\|g^{\sharp}-h^{\sharp}\|_{\mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d})}+\|g^{ \prime}-h^{\prime}\|_{\mathscr{C}_{2}^{\theta-1}(\mathbb{T}^{d})}\), which makes it a Banach space.
The solution \(g\) satisfies
\[g=\int_{0}^{\infty}e^{-\lambda t}P_{t}(G+F\cdot\nabla g)dt,\]
i.e. it is the fixed point of the map \(\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})\ni g\mapsto\phi_{\lambda}(g):=\int_{0}^{ \infty}e^{-\lambda t}P_{t}(G+F\cdot\nabla g)dt\in\mathscr{C}_{2}^{\theta}( \mathbb{T}^{d})\), respectively, in the rough case \(\beta\in((2-2\alpha)/3,(1-\alpha)/2]\), of the map
\[\mathscr{D}_{2}^{\theta}\ni(g,g^{\prime})\mapsto(\phi_{\lambda}(g),G^{\prime} +\nabla g)=:\Phi_{\lambda}(g,g^{\prime})\in\mathscr{D}_{2}^{\theta}.\]
The product is defined as \(F\cdot\nabla g:=F\prec\nabla g+F\odot\nabla g+F\succ\nabla g\), where for \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) and \(g\in\mathscr{D}_{2}^{\theta}\),
\[F\odot\nabla g=\sum_{i=1}^{d}F^{i}\odot\partial_{i}g:=\sum_{i=1 }^{d}\Big{[}F^{i}\odot[\partial_{i}g^{\sharp}+\partial_{i}g^{\prime}\odot I_{ \lambda}(F)]+g(I_{\lambda}(\partial_{i}F)\odot F^{i})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+C_{1}(g,I_{ \lambda}(\partial_{i}F),F^{i})\Big{]},\]
with paraproduct commutator
\[C_{1}(g,f,h):=(g\prec f)\odot h-g(f\odot h) \tag{4.6}\]
from [1, Lemma 2.4]. As before, the product of \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) and \(g\in\mathscr{D}_{2}^{\theta}\) with \(\theta>(2-\beta)/2\) can thus be estimated by
\[\|F\cdot g\|_{\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})}\lesssim\|F\|_{ \mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}_{ \infty}^{\beta,\gamma}(\mathbb{T}^{d})})\|g\|_{\mathscr{D}_{2}^{\theta}}.\]
The unique fixed point is obtained by the Banach fixed point theorem, where in the Young case the map \(\phi_{\lambda}\), and in the rough case \(\Phi_{\lambda}^{2}=\Phi_{\lambda}\circ\Phi_{\lambda}\), is a contraction for large enough \(\lambda>0\). This can be seen by estimating
\[\|\phi_{\lambda}(g)-\phi_{\lambda}(h)\|_{\mathscr{C}_{2}^{\theta} (\mathbb{T}^{d})} \lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\|F\cdot\nabla(g-h) \|_{\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})}\] \[\lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\|F\|_{\mathscr{X} _{\infty}^{\beta,\gamma}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}_{\infty}^{ \beta,\gamma}(\mathbb{T}^{d})})\|g-h\|_{\mathscr{D}_{2}^{\theta}}\]
using (4.1) and the estimate for the product. Thus a contraction is obtained by choosing \(\lambda\) large enough, such that \(\lambda^{(\theta-\beta-\alpha)/\alpha}\|F\|_{\mathscr{X}_{\infty}^{\beta, \gamma}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{ T}^{d})})<1\), using \(\theta<\alpha+\beta\). To check that indeed \(\Phi_{\lambda}(g,g^{\prime})\in\mathscr{D}_{2}^{\theta}\), we note that
\[\Phi_{\lambda}(g,g^{\prime})^{\sharp} =\phi_{\lambda}(g)-[G^{\prime}e_{i}+\nabla g]\odot I_{\lambda}(F)\] \[=I_{\lambda}(G^{\sharp}+F\odot g+F\odot\nabla g)+C_{\lambda}(G^{ \prime}e_{i}+\nabla g,F)\]
for the commutator \(C_{\lambda}\) from (4.2). Notice that, if \(\beta<(1-\alpha)/2\), for \(G^{\sharp}\in\mathscr{C}_{2}^{0}(\mathbb{T}^{d})\), \(I_{\lambda}(G^{\sharp})\in\mathscr{C}_{2}^{\alpha}(\mathbb{T}^{d})\subset \mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d})\) as \(\theta>(1+\alpha)/2\). Hence, together with Lemma 4.1, it follows that \(\Phi_{\lambda}(g,g^{\prime})^{\sharp}\in\mathscr{C}^{2\theta-1}(\mathbb{T}^{d})\). Thereby we also get the small factor of \(\lambda^{(\theta-\alpha-\beta)/\alpha}\) in the estimate. To see that \(\Phi_{\lambda}^{2}=\Phi_{\lambda}\circ\Phi_{\lambda}\) is a contraction, we furthermore check
\[\|\Phi_{\lambda} (\Phi_{\lambda}(g,g^{\prime}))^{\prime}-\Phi_{\lambda}(\Phi_{ \lambda}(h,h^{\prime}))^{\prime}\|_{\mathscr{C}_{2}^{\theta-1}(\mathbb{T}^{d})}\] \[=\|\nabla\phi_{\lambda}(g)-\nabla\phi_{\lambda}(h)\|_{\mathscr{C} _{2}^{\theta-1}(\mathbb{T}^{d})}\] \[\lesssim\|\phi_{\lambda}(g)-\phi_{\lambda}(h)\|_{\mathscr{C}_{2}^ {\theta}(\mathbb{T}^{d})}\] \[\lesssim\lambda^{(\theta-\beta-\alpha)/\alpha}\|F\|_{\mathscr{X}_{ \infty}^{\beta,\gamma}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})})\|g-h\|_{\mathscr{D}_{2}^{\theta}},\]
by the above estimate.
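As a numerical companion to this fixed-point argument (again with a smooth drift in place of the singular one, on the one-dimensional torus, and with ad hoc parameters), one can iterate \(g\mapsto I_{\lambda}(G+F\cdot\nabla g)\) with \(I_{\lambda}=(\lambda+\mathscr{L}_{\nu}^{\alpha})^{-1}\) realised as a Fourier multiplier and check that the limit solves the resolvent equation (4.3); this is an illustration only, not the paracontrolled construction.

```python
import numpy as np

# Fixed-point iteration g -> I_lambda(G + F * g') behind Theorem 4.2 (smooth drift).
N, alpha, lam = 256, 1.5, 20.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)
psi = np.abs(2.0 * np.pi * k) ** alpha                 # symbol of L_nu^alpha
F = 0.5 * np.sin(2.0 * np.pi * x)                      # smooth stand-in drift
G = np.cos(2.0 * np.pi * x)                            # smooth right-hand side

def resolvent(v):                                      # I_lambda(v) = (lam + L_nu^alpha)^{-1} v
    return np.real(np.fft.ifft(np.fft.fft(v) / (lam + psi)))

def ddx(v):                                            # spectral derivative
    return np.real(np.fft.ifft(2j * np.pi * k * np.fft.fft(v)))

g = np.zeros(N)
for _ in range(200):
    g = resolvent(G + F * ddx(g))

# Residual of (lambda - L)g = G, i.e. lam*g + L_nu^alpha g - F*g' - G: ~ machine precision.
residual = lam * g + np.real(np.fft.ifft(psi * np.fft.fft(g))) - F * ddx(g) - G
print(np.max(np.abs(residual)))
```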
## 5 Existence of an invariant measure and spectral gap estimates
In this section, we prove with Theorem 5.2 existence and uniqueness of an invariant, ergodic probability measure for the process \(X^{\mathbb{T}^{d}}\) with state space \(\mathbb{T}^{d}\), in the following denoted by \(X\) for short. The theorem moreover shows that \(X\) is exponentially ergodic, in the sense that pointwise spectral gap estimates for its semigroup \((T_{t})\) hold. Furthermore, we characterize the domain of \(\mathfrak{L}\) in \(L^{2}(\pi)\) in Theorem 5.7 and define the mean of \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}\) with respect to the invariant measure \(\pi\) in Lemma 5.10.
Existence and uniqueness of the invariant measure together with the pointwise spectral gap estimates on the semigroup are obtained by an application of Doeblin's theorem (see e.g. [1, Theorem 3.1, Section 3, p. 365]), which we state here in the continuous time setting.
**Lemma 5.1** (Doeblin's theorem).: _Let \((X_{t})_{t\geqslant 0}\) be a time-homogeneous Markov process with state space \((S,\Sigma)\) for a compact metric space \(S\) and its Borel-sigma-field \(\Sigma\). Let \((T_{t})_{t\geqslant 0}\) be the associated semigroup, \(T_{t}f(x):=\mathds{E}[f(X_{t})\mid X_{0}=x]\) for \(x\in S\) and \(f:S\to\mathbb{R}\) bounded measurable. Assume further, that there exists a probability measure \(\mu\) on \((S,\Sigma)\) and, for any \(t>0\), a continuous function \(\rho_{t}:S\times S\to\mathbb{R}^{+}\), such that \(T_{t}\mathbf{1}_{E}(x)=\int_{E}\rho_{t}(x,y)\mu(dy)\), \(E\in\Sigma\). Assume moreover that, for any \(t>0\), there exists an open ball \(U_{0}\), such that \(\mu(U_{0})>0\) and \(\rho_{t}(x,y)>0\) for all \(x\in S\) and \(y\in U_{0}\)._
_Then, there exists a unique invariant probability measure \(\pi\) (i.e. \(\int_{S}T_{t}\mathbf{1}_{E}(x)\pi(dx)=\pi(E)\) for all \(E\in\Sigma\) and all \(t\geqslant 0\)) on \((S,\Sigma)\) with the property that there exist constants \(K,\nu>0\), such that for all \(t\geqslant 0\), \(x\in S\) and \(\phi:S\to\mathbb{R}\) bounded measurable,_
\[\left|T_{t}\phi(x)-\int_{S}\phi(y)\pi(dy)\right|\leqslant K|\phi|e^{-\nu t} \tag{5.1}\]
_where \(|\phi|:=\sup_{x\in S}|\phi(x)|\)._
Proof.: For discrete time Markov chains, the result follows immediately from [1, Theorem 3.1, p. 365]. For continuous time Markov processes, the proof is similar. Indeed, in the same manner one proves that if \(\pi\) is such that (5.1) holds, then \(\pi\) is unique and \(\pi\) is invariant for \((T_{t})\). Furthermore, using the assumptions on the density \(\rho\) and the same proof steps as in [1, Theorem 3.1, p. 365], one obtains existence of an invariant measure \(\pi\) with \(\pi(E)\) given as the limit of \((T_{n}\mathbf{1}_{E}(x))_{n}\) for any \(x\in S\) and with (5.1) for \(t\) replaced by \(n\in\mathbb{N}\). Then, using the semigroup property, we also obtain (5.1) for any \(t\geqslant 0\), with a possibly different constant \(K>0\). Indeed, let \(t>0\) and \(n=\lfloor t\rfloor\). Then for bounded measurable \(\phi\) with \(\int_{S}\phi d\pi=0\), we obtain
\[|T_{t}\phi(x)|=|T_{n}T_{t-n}\phi(x)|\leqslant K|T_{t-n}\phi|e^{-\nu n} \leqslant K|\phi|e^{-\nu n}=Ke^{\nu(t-n)}|\phi|e^{-\nu t}\leqslant Ke^{\nu}| \phi|e^{-\nu t}.\]
Now, by changing the constant \(K\), we obtain (5.1) for all \(t\geqslant 0\).
**Theorem 5.2**.: _Let \(X\) be the martingale solution to the singular periodic SDE (1.1) projected onto \(\mathbb{T}^{d}\) with contraction semigroup \((T_{t})_{t\geqslant 0}\) on bounded measurable functions \(f:\mathbb{T}^{d}\to\mathbb{R}\). Then there exists a unique invariant probability measure \(\pi\) for \((T_{t})\). In particular, \(\pi\) is ergodic for \(X\). Furthermore there exist constants \(K,\mu>0\) such that for all \(f\in L^{\infty}(\mathbb{T}^{d})\),_
\[\|T_{t}f-\langle f\rangle_{\pi}\|_{L^{\infty}}\leqslant K\|f\|_{L^{\infty}}e^{ -\mu t}. \tag{5.2}\]
_That is, \(L^{\infty}\)-spectral gap estimates for the associated Markov semigroup \((T_{t})\) hold true. In particular, \(\pi\) is absolutely continuous with respect to the Lebesgue measure on the torus, with density denoted by \(\rho_{\infty}\)._
Proof.: The proof is an application of Doeblin's theorem. We check that the assumptions of Lemma 5.1 are satisfied. To that aim, note that for the Fokker-Planck density \(\rho_{t}(x,\cdot)\) with \(\rho_{0}=\delta_{x}\), the map \((x,y)\mapsto\rho_{t}(x,y)\) is continuous by Theorem 3.1. It remains to show that there exist an open ball \(U_{0}\) and a constant \(c>0\), such that \(\rho_{t}\) is bounded from below by \(c\) on \(\mathbb{T}^{d}\times U_{0}\). We choose \(U_{0}=\mathbb{T}^{d}\) and obtain
\[\min_{x\in\mathbb{T}^{d},y\in U_{0}}\rho_{t}(x,y)=\rho_{t}(x^{*},y^{*})\geqslant c >0.\]
Indeed, this follows from the strict maximum principle for \(y\mapsto\rho_{t}(x^{*},y)\) by Proposition 3.5 with \(c=c(x^{*})>0\).
The spectral gap estimates also imply absolute continuity, as
\[\langle 1_{A}\rangle_{\pi}=\lim_{t\to\infty}\mathds{E}_{X_{0}=x}[1_{A}(X_{t})] =\lim_{t\to\infty}\int 1_{A}(y)\rho_{t}(x,y)dy\]
and thus any Lebesgue nullset \(A\) is also a \(\pi\)-nullset. The existence of the density thus follows by the Radon-Nikodym theorem.
**Corollary 5.3**.: _Let \(\rho_{\infty}\) be the Lebesgue density of the invariant measure \(\pi\). Then \(\rho_{\infty}\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})\) and it follows the paracontrolled structure_
\[\rho_{\infty}=\rho_{\infty}^{\sharp}+\rho_{\infty}\otimes I_{\infty}(\nabla \cdot F),\]
_where \(\rho_{\infty}^{\sharp}\in\mathscr{C}^{2(\alpha+\beta)-2}(\mathbb{T}^{d})\) and \(I_{\infty}(\nabla\cdot F):=\int_{0}^{\infty}P_{s}(\nabla\cdot F)ds\). Furthermore, the density is strictly positive,_
\[\min_{x\in\mathbb{T}^{d}}\rho_{\infty}(x)>0.\]
_In particular, \(\pi\) is equivalent to the Lebesgue measure._
Proof.: Let \(t>0\). By invariance of \(\pi\), i.e. \(\langle T_{t}f\rangle_{\pi}=\langle f\rangle_{\pi}\) for all \(f\in L^{\infty}(\mathbb{T}^{d})\), and \(d\pi=\rho_{\infty}dx\), we obtain that almost surely
\[\rho_{\infty}=T_{t}^{*}\rho_{\infty},\]
where \(T_{t}^{*}\) denotes the adjoint of \(T_{t}\) with respect to \(L^{2}(\lambda)\). Here \(\lambda\) denotes the Lebesgue measure and \(\langle f\rangle_{\pi}:=\int_{\mathbb{T}^{d}}f(x)\pi(dx)\).
Denote \(y_{t}(x):=T_{t}^{*}\rho_{\infty}(x)\). Then we show that \(y\) is a mild solution of the Fokker-Planck equation started in \(\rho_{\infty}\), that is
\[(\partial_{t}-\mathfrak{L}^{*})y=0,\quad y_{0}=\rho_{\infty}. \tag{5.3}\]
Here the density satisfies \(\rho_{\infty}\in L^{1}(\lambda)\), i.p. \(\rho_{\infty}\in\mathscr{C}_{1}^{0}(\mathbb{T}^{d})\). Indeed, that \(y_{t}=T_{t}^{*}\rho_{\infty}\) is a mild solution of the Fokker-Planck equation follows from approximation of \(F\) by \(F^{m}\in C^{\infty}(\mathbb{T}^{d})\) with \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) using that for \(m\in\mathbb{N}\), \(y^{m}=(T_{t}^{m})^{*}\rho_{\infty}\) solves \((\partial_{t}-(\mathfrak{L}^{m})^{*})y^{m}=0\), \(y^{m}=\rho_{\infty}\) by the classical Fokker-Planck theory, where \(T^{m}\) denotes the semigroup for the strong solution of the SDE with drift \(F^{m}\) and generator \(\mathfrak{L}^{m}:=-\mathscr{L}_{\nu}^{\alpha}+F^{m}\cdot\nabla\). By continuity of the Fokker-Planck solution map from Theorem 3.1 for converging data \((F^{m},\rho_{\infty})\to(F,\rho_{\infty})\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\times\mathscr{C}_{1}^{0}( \mathbb{T}^{d})\), we deduce \(y^{m}\to y\) in the paracontrolled solution space, where \(y\) is the mild solution of (5.3).
The lower bound away from zero then also follows from Theorem 3.1, as well as the paracontrolled structure
\[\rho_{\infty}=\rho_{\infty}^{\sharp}+\rho_{\infty}\otimes I_{t}(\nabla\cdot F),\]
where \(\rho_{\infty}^{\sharp}:=y_{t}^{\sharp}\in\mathscr{C}^{2(\alpha+\beta)-2}(\mathbb{T}^{d})\) and \(I_{t}(\nabla\cdot F):=\int_{0}^{t}P_{t-s}(\nabla\cdot F)ds\).
Due to \(\mathscr{F}_{\mathbb{T}^{d}}(\nabla\cdot F)(0)=0\), we have that, for any \(\theta\geqslant 0\), there exists \(c>0\), such that, uniformly in \(s>0\),
\[\|P_{s}(\nabla\cdot F)\|_{\mathscr{C}^{\beta-1+\theta}(\mathbb{T}^{d})} \lesssim s^{-\theta/\alpha}e^{-cs}\|\nabla\cdot F\|_{\mathscr{C}^{\beta-1}( \mathbb{T}^{d})}. \tag{5.4}\]
Indeed, this follows from Lemma 2.11. Thus we obtain, for \(t>0\) and any \(\theta\geqslant 0\), that
\[I_{t}(\nabla\cdot F)-I_{\infty}(\nabla\cdot F)=\int_{t}^{\infty}P_{s}(\nabla \cdot F)ds\in\mathscr{C}^{\theta}(\mathbb{T}^{d}).\]
That is, the remainder is smooth and thus can be absorbed into \(\rho_{\infty}^{\sharp}\). Notice that \(\int_{0}^{t}P_{s}(\nabla\cdot F)ds\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})\) by (5.4) and in particular that \(I_{\infty}(\nabla\cdot F)\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})\) is well-defined.
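To sketch the estimate behind this step (it is nothing more than (5.4), applied with \(\theta\) there replaced by \(\theta-\beta+1\geqslant 0\)): for fixed \(t>0\) and any \(\theta\geqslant 0\),

\[\left\|\int_{t}^{\infty}P_{s}(\nabla\cdot F)ds\right\|_{\mathscr{C}^{\theta}(\mathbb{T}^{d})}\lesssim\|\nabla\cdot F\|_{\mathscr{C}^{\beta-1}(\mathbb{T}^{d})}\int_{t}^{\infty}s^{-(\theta-\beta+1)/\alpha}e^{-cs}ds<\infty,\]

where the integral is finite for every \(t>0\) thanks to the exponential factor.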
**Corollary 5.4**.: _Let \(X\) and \((T_{t})\) be as before. Then, the semigroup \((T_{t})_{t\geqslant 0}\) can be uniquely extended to a strongly continuous contraction semigroup on \(L^{2}(\pi)\), i.e. \(T_{t+s}=T_{t}T_{s}\), \(T_{t}1=1\), \(T_{t}f\to f\) for \(t\downarrow 0\) and \(f\in L^{2}(\pi)\) and \(\|T_{t}f\|_{L^{2}(\pi)}\leqslant\|f\|_{L^{2}(\pi)}\), such that, for (possibly different) constants \(K,\mu>0\), the \(L^{2}(\pi)\)-spectral gap estimates hold true:_
\[\|T_{t}f-\langle f\rangle_{\pi}\|_{L^{2}(\pi)}\leqslant K\|f\|_{L^{2}(\pi)}e^ {-\mu t}\quad\text{ for all }f\in L^{2}(\pi).\]
Proof.: That the semigroup \((T_{t})_{t\geqslant 0}\) can be uniquely extended to a contraction semigroup on \(L^{2}(\pi)\) follows from Jensen's inequality,
\[\|T_{t}f\|_{L^{2}(\pi)}^{2}=\int|\mathds{E}_{X_{0}=x}[f(X_{t})]|^{2}\pi(dx) \leqslant\int\mathds{E}_{X_{0}=x}[|f(X_{t})|^{2}]\pi(dx)=\|f\|_{L^{2}(\pi)}^{2},\]
for \(f\in L^{\infty}\), using the invariance of \(\pi\) (by Theorem 5.2). By approximation, we then also obtain for the extension, that \(T_{t}f(x)=\mathds{E}_{X_{0}=x}[f(X_{t})]\) for \(f\in L^{2}(\pi)\).
We check strong continuity of the semigroup on \(L^{2}(\pi)\). Using the contraction property in \(L^{2}(\pi)\), we obtain
\[\|T_{t}f-f\|_{L^{2}(\pi)}^{2}=\|T_{t}f\|_{L^{2}(\pi)}^{2}+\|f\|_{L^{2}(\pi)}^{ 2}-2\langle T_{t}f,f\rangle_{\pi}\leqslant 2\|f\|_{L^{2}(\pi)}^{2}-2 \langle T_{t}f,f\rangle_{\pi}. \tag{5.5}\]
It is left to prove that the right-hand side vanishes as \(t\downarrow 0\). By Fatou's lemma and using that \(X\) is almost surely cadlag, we have that for \(x\in\mathbb{T}^{d}\) and \(f\in C(\mathbb{T}^{d},\mathbb{R})\),
\[\lim_{t\downarrow 0}|T_{t}f(x)-f(x)|\leqslant\mathds{E}_{X_{0}=x}[\lim_{t \downarrow 0}|f(X_{t})-f(X_{0})|]=0. \tag{5.6}\]
Furthermore, we can bound uniformly in \(x\in\mathbb{T}^{d}\) and \(t>0\),
\[|T_{t}f(x)|=\left|\int\rho_{t}(x,y)f(y)dy\right|\leqslant\frac{\sup_{t>0}\max_ {x,y\in\mathbb{T}^{d}}\rho_{t}(x,y)}{\min_{y\in\mathbb{T}^{d}}\rho_{\infty}(y)} \|f\|_{L^{1}(\pi)}\leqslant C\|f\|_{L^{2}(\pi)} \tag{5.7}\]
where \(C>0\) is a constant (not depending on \(t\), \(f\)) and \(\rho_{t}(x,y)\) denotes the Fokker-Planck density with \(\rho_{0}(x,y)=\delta_{x}\). Here, we have \(\min_{y\in\mathbb{T}^{d}}\rho_{\infty}(y)>0\) by Corollary 5.3. Furthermore we have
\[\sup_{t>0}\max_{x,y\in\mathbb{T}^{d}}\rho_{t}(x,y)<\infty. \tag{5.8}\]
Indeed, by the \(L^{\infty}\)-spectral gap estimates, it follows that
\[\sup_{t\geqslant 0}\|\rho_{t}*f\|_{L^{\infty}}\leqslant K\|f\|_{L^{\infty}}+| \langle f\rangle_{\pi}|,\]
with convolution \((\rho_{t}*f)(x):=\int_{\mathbb{T}^{d}}\rho_{t}(x,y)f(y)dy\). We can apply this bound for \(f^{\varepsilon,\widetilde{y}}(y):=\mathbf{1}_{|y-\widetilde{y}|<\varepsilon}\) for \(\widetilde{y}\in\mathbb{T}^{d}\) and \(\varepsilon>0\) and let \(\varepsilon\downarrow 0\). By continuity of \(y\mapsto\rho_{t}(x,y)\) and the dominated convergence theorem, \((\rho_{t}*f^{\varepsilon,\widetilde{y}})(x)\to\rho_{t}(x,\tilde{y})\lambda( \mathbb{T}^{d})\), which yields (5.8).
In particular, by (5.7), \(\sup_{t>0}\|T_{t}f\|_{L^{\infty}}\lesssim\|f\|_{L^{2}(\pi)}\) and an application of the dominated convergence theorem using (5.6), yields that for \(f\in C(\mathbb{T}^{d},\mathbb{R})\),
\[\lim_{t\downarrow 0}\langle T_{t}f,f\rangle_{\pi}=\|f\|_{L^{2}(\pi)}^{2}.\]
We conclude with (5.5), that for all \(f\in C(\mathbb{T}^{d},\mathbb{R})\), \(\|T_{t}f-f\|_{L^{2}(\pi)}\to 0\) as \(t\downarrow 0\).
As \((T_{t})\) is a contraction semigroup on \(L^{2}(\pi)\), the operator norm is trivially bounded, that is \(\sup_{t>0}\|T_{t}\|_{L(L^{2}(\pi))}\leqslant 1\). Above, we proved that \((T_{t})\) is strongly continuous on a dense subset of \(L^{2}(\pi)\). Thus, together with the boundedness of the operator norm, \((T_{t})\) is also strongly continuous on \(L^{2}(\pi)\) as a consequence of the Banach-Steinhaus theorem.
It remains to prove that the \(L^{2}(\pi)\)-spectral gap estimates follow from the \(L^{\infty}\)-spectral gap estimates and the bound (5.7). Indeed, we obtain for \(f\in L^{2}(\pi)\) with \(\langle f\rangle_{\pi}=0\) and all \(t>1\),
\[\|T_{t}f\|_{L^{2}(\pi)}=\|T_{t-1}T_{1}f\|_{L^{2}(\pi)}\leqslant Ke^{-\mu(t-1)} \|T_{1}f\|_{L^{\infty}}\leqslant e^{\mu}CKe^{-\mu t}\|f\|_{L^{2}(\pi)}.\]
For \(t\in[0,1]\), we trivially estimate, using the contraction property,
\[\|T_{t}f\|_{L^{2}(\pi)}\leqslant\|f\|_{L^{2}(\pi)}\leqslant e^{\mu}e^{-\mu t }\|f\|_{L^{2}(\pi)}.\]
**Remark 5.5**.: _The argument in the above proof of Corollary 5.4 (using the bound (5.8) and \(\rho_{\infty}>0\)) can be adapted to prove the stronger estimate (for constants \(K,\mu>0\))_
\[\|T_{t}f-\langle f\rangle_{\pi}\|_{L^{\infty}}\leqslant Ke^{-\mu t}\|f- \langle f\rangle_{\pi}\|_{L^{1}(\pi)},\]
_which in particular implies the \(L^{2}(\pi)\)-\(L^{2}(\pi)\)-bound from the corollary._
**Remark 5.6**.: _More generally, one can show the Feller property, that is \((T_{t})\) is strongly continuous on \(C(\mathbb{T}^{d})\). Using [10, Proposition III.2.4] and (5.6), it is left to show \(T_{t}f\in C(\mathbb{T}^{d})\) for \(f\in C(\mathbb{T}^{d})\subset\mathscr{C}^{0}(\mathbb{T}^{d})\). But this follows from [10, Theorem 4.7], since for \(R>t\), \(y_{t}=T_{R-t}f\) solves the backward Kolmogorov equation with periodic terminal condition \(y_{R}=f\in\mathscr{C}^{0}\) and \(y\in\mathscr{M}_{R}^{\infty}\mathscr{C}^{\alpha+\beta}\), such that in particular \(x\mapsto y_{t}(x)\) is continuous._
The next theorem relates the semigroup \((T_{t})_{t\geqslant 0}\) from above with the generator \(\mathfrak{L}\) and gives an explicit representation of its domain in terms of paracontrolled solutions of singular resolvent equations.
**Theorem 5.7**.: _Let \((T_{t})\) be the contraction semigroup on \(L^{2}(\pi)\) from Corollary 5.4 and denote its generator by \((A,dom(A))\) with \(A:dom(A)\subset L^{2}(\pi)\to L^{2}(\pi)\) and domain \(dom(A):=\{f\in L^{2}(\pi)\mid\lim_{t\to 0}(T_{t}f-f)/t=:Af\text{ exists in }L^{2}(\pi)\}\). Let \(\theta\in((1+\alpha)/2,\alpha+\beta)\) and_
\[D:=\{g\in\mathscr{D}_{2}^{\theta}\mid R_{\lambda}g=G\text{ for some }G\in L^{2}(\pi)\text{ and }\lambda>0\},\]
_where \(R_{\lambda}:=(\lambda-\mathfrak{L})\). Then it follows \(D=dom(A)\) and \((A,D)=(\mathfrak{L},D)\). In particular, \((\mathfrak{L},D)\) is the generator of the Markov process \(X\) with state space \(\mathbb{T}^{d}\) and transition semigroup \((T_{t})\)._
**Remark 5.8**.: _Since the drift \(F\) does not depend on a time variable, one could reformulate the martingale problem for \(X\) in terms of the elliptic generator \(\mathfrak{L}\) and the domain \(D\subset L^{2}(\pi)\)._
Proof.: We first show that \(D\subset dom(A)\). To this aim, note that for \(f\in D\), we obtain \(R_{\lambda}f=G\) for \(G\in L^{2}(\pi)\). For a mollification \((G^{n})\subset C^{\infty}(\mathbb{T}^{d})\) of \(G\) and \((f^{n})\subset C^{\infty}(\mathbb{T}^{d})\), such that \(R_{\lambda}f^{n}=G^{n}\), we obtain that in particular \(f^{n}\) is a mild solution of the Kolmogorov backward equation on the torus for \(\mathscr{G}=\partial_{t}+\mathfrak{L}\) with right-hand side \(\lambda f^{n}-G^{n}\in L^{\infty}\) and terminal condition \(f^{n}\in\mathscr{C}^{3}\). Equivalently, its periodic version is the periodic solution of the Kolmogorov backward equation on \(\mathbb{R}^{d}\). As \(X\) equals the projected solution of the \((\mathscr{G},x)\)-martingale problem onto the torus, we have, for \(n\in\mathbb{N}\) and \(x\in\mathbb{T}^{d}\), that
\[T_{t}f^{n}(x)-f^{n}(x) =\mathds{E}_{X_{0}=x}[f^{n}(X_{t})-f^{n}(X_{0})]\] \[=\mathds{E}_{X_{0}=x}\bigg{[}\int_{0}^{t}(\lambda f^{n}-G^{n})(X _{s})ds\bigg{]}=\int_{0}^{t}T_{s}(\lambda f^{n}-G^{n})(x)ds.\]
Using that \(f^{n}\to f\) in \(L^{2}(\pi)\) as \(G^{n}\to G\) by continuity of the resolvent solution map, we obtain that for \(f\in D\),
\[T_{t}f-f=\int_{0}^{t}T_{s}(\lambda f-G)ds.\]
By continuity of the map \(s\mapsto T_{s}(\lambda f-G)\in L^{2}(\pi)\), since \(T\) is strongly continuous on \(L^{2}(\pi)\), we obtain that for \(f\in D\), \(\lim_{t\to 0}(T_{t}f-f)/t\) exists in \(L^{2}(\pi)\) and
\[Af=\lambda f-G=\lambda f-R_{\lambda}f=\mathfrak{L}f.\]
To prove that also \(dom(A)\subset D\), we use that for \(\chi\in dom(A)\), there trivially exists \(f\in L^{2}(\pi)\) with \(A\chi=f\). Notice that by Theorem 4.2, we can solve the resolvent equation for \(\lambda>0\) large enough,
\[R_{\lambda}\tilde{\chi}=\lambda\chi-f,\]
with right-hand side \(\lambda\chi-f\in L^{2}(\pi)\subset\mathscr{C}_{2}^{0}\), obtaining a solution \(\tilde{\chi}\in D\). By the above, we have that \(A_{|D}=\mathfrak{L}_{|D}\), such that \(\mathfrak{L}\tilde{\chi}=A\tilde{\chi}\). Inserting this into the equation for \(\tilde{\chi}\) and using \(f=A\chi\) yields \(A(\tilde{\chi}-\chi)=\lambda(\tilde{\chi}-\chi)\). As \(\lambda>0\), by uniqueness of the solution of the resolvent equation for the generator \(A\), we obtain \(\tilde{\chi}=\chi\). Thus, with the equation for \(\tilde{\chi}\), this yields \(\chi\in D\) and \(\mathfrak{L}\chi=f\).
**Corollary 5.9**.: _Let \(f\in L^{2}(\pi)\) with \(\langle f\rangle_{\pi}=0\). Then there exists a unique solution \(\chi\in D\) of the Poisson equation \(\mathfrak{L}\chi=f\) such that \(\langle\chi\rangle_{\pi}=0\)._
Proof.: This follows from the \(L^{2}(\pi)\)-spectral gap estimates. We can solve the Poisson equation in \(L^{2}(\pi)\) for the given right-hand side \(f\in L^{2}(\pi)\) with \(\langle f\rangle_{\pi}=0\). The solution is explicitly given by \(\chi=\int_{0}^{\infty}T_{t}fdt\in L^{2}(\pi)\).
We check that \(\chi\) is indeed a solution. By [1, Proposition 1.1.5 part a)], we have that for \(f\in L^{2}(\pi)\), \(\int_{0}^{t}T_{s}fds\in dom(A)\) and
\[T_{t}f-f=A\int_{0}^{t}T_{s}fds,\]
where \((A,dom(A))\) denotes again the generator of \((T_{t})\) on \(L^{2}(\pi)\). By the \(L^{2}\)-spectral gap estimates and \(\langle f\rangle_{\pi}=0\), we obtain that \((\int_{0}^{t}T_{s}fds)_{t}\) converges in \(L^{2}(\pi)\) for \(t\to\infty\) to a limit \(\chi\), and that \((T_{t}f)_{t}\) converges to zero in \(L^{2}(\pi)\) for \(t\to\infty\). Hence, since \(A\) is a closed operator (cf. [1, Corollary 1.1.6]), we obtain in the limit \(t\to\infty\), that \(f=A\int_{0}^{\infty}T_{t}fdt=A\chi\) and \(\chi\in dom(A)\). Now, using \(dom(A)=D\) and \((A,D)=(\mathfrak{L},D)\) by Theorem 5.7, this yields \(\chi\in D\) and \(\mathfrak{L}\chi=f\).
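For completeness, the convergence of the integral used above can be quantified directly from the \(L^{2}(\pi)\)-spectral gap estimates: since \(\langle f\rangle_{\pi}=0\),

\[\left\|\int_{s}^{t}T_{r}fdr\right\|_{L^{2}(\pi)}\leqslant\int_{s}^{t}\|T_{r}f\|_{L^{2}(\pi)}dr\leqslant K\|f\|_{L^{2}(\pi)}\int_{s}^{t}e^{-\mu r}dr\to 0\quad\text{as }s,t\to\infty,\]

so that \((\int_{0}^{t}T_{s}fds)_{t}\) is indeed Cauchy in \(L^{2}(\pi)\) and, in particular, \(\|\chi\|_{L^{2}(\pi)}\leqslant K\mu^{-1}\|f\|_{L^{2}(\pi)}\).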
Thanks to the regularity of the density of the invariant measure \(\pi\), we can finally define the mean of the singular drift \(F\) under \(\pi\), \(\langle F\rangle_{\pi}=\langle F,\rho_{\infty}\rangle_{\lambda}\), respectively the product \(F\cdot\rho_{\infty}\).
**Lemma 5.10**.: _Let \(\rho_{\infty}\) be the density of \(\pi\). Let \(\langle F\rangle_{\pi}=(\langle F^{i}\rangle_{\pi})_{i=1,\ldots,d}\) for_
\[\langle F^{i}\rangle_{\pi} =(F^{i}\cdot\rho_{\infty})(\mathbf{1})\] \[:=[(F^{i}\cdot\rho_{\infty}^{\sharp})+(F^{i}\odot I_{\infty}( \nabla\cdot F))\cdot\rho_{\infty}+C_{1}(\rho_{\infty},I_{\infty}(\nabla\cdot F ),F^{i})](\mathbf{1}),\]
_where \(\mathbf{1}\in C^{\infty}(\mathbb{T}^{d})\) is the constant test function and \(C_{1}\) denotes the paraproduct commutator defined in (4.6). Then, \(\langle F^{i}\rangle_{\pi}\) is well-defined and continuous, that is, \(\langle F^{m}\rangle_{\pi}\to\langle F\rangle_{\pi}\) for \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\). Moreover, the following Lipschitz bound holds true_
\[\|F\cdot\rho_{\infty}\|_{\mathscr{X}^{\beta}(\mathbb{T}^{d})}\lesssim\|F\|_{ \mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}_{ \infty}^{\beta,\gamma}(\mathbb{T}^{d})})[\|\rho_{\infty}\|_{\mathscr{X}^{ \alpha+\beta-1}}+\|\rho_{\infty}^{\sharp}\|_{\mathscr{X}^{2(\alpha+\beta-1)}}].\]
Proof.: The proof follows directly from Theorem 3.1 and Corollary 5.3.
## 6 Solving the Poisson equation with singular right-hand side
To prove the central limit theorem for the solution of the martingale problem \(X\), we utilize the classical approach of decomposing the additive functional in terms of a martingale and a boundary term, using the solution of the Poisson equation for \(\mathfrak{L}\) with singular right-hand side \(F-\langle F\rangle_{\pi}\). For solving the Poisson equation in Theorem 6.4 below, Corollary 5.9 is not applicable, as \(F\) is a distribution and therefore not an element of \(L^{2}(\pi)\). Consider an approximation \((F^{m})\subset C^{\infty}(\mathbb{T}^{d})\) with \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\). Then, we can apply Corollary 5.9 for the right-hand sides \(F^{m}-\langle F^{m}\rangle_{\pi}\in L^{2}(\pi)\), \(m\in\mathbb{N}\). This way we obtain solutions \(\chi^{m}=(\chi^{m,i})_{i=1,\ldots,d}\in D^{d}\subset L^{2}(\pi)^{d}\) of the Poisson equations
\[(-\mathfrak{L})\chi^{m,i}=F^{m,i}-\langle F^{m,i}\rangle_{\pi} \tag{6.1}\]
for \(m\in\mathbb{N}\).
In this section, we show that the sequence \((\chi^{m})_{m}\) converges in a space of sufficient regularity to a limit \(\chi\) that indeed solves the Poisson equation
\[(-\mathfrak{L})\chi=F-\langle F\rangle_{\pi}. \tag{6.2}\]
Let us define the space \(\mathscr{H}^{1}(\pi)\) as in [12, Section 2.2],
\[\mathscr{H}^{1}(\pi):=\{f\in D\mid\|f\|_{\mathscr{H}^{1}(\pi)}^{2}:=\langle( -\mathfrak{L})f,f\rangle_{\pi}<\infty\}, \tag{6.3}\]
which is the Sobolev space for the operator \(\mathfrak{L}\) with respect to \(L^{2}(\pi)\). Its dual is defined by
\[\mathscr{H}^{-1}(\pi):=\{F:\mathscr{H}^{1}(\pi)\to\mathbb{R}\mid\text{$F$ linear with $\|F\|_{\mathscr{H}^{-1}(\pi)}:=\sup_{\|f\|_{\mathscr{H}^{1}(\pi)}=1}|F(f)|<\infty$}\}. \tag{6.4}\]
The space \(\mathscr{H}^{1}(\pi)\) is related to the quadratic variation of Dynkin's martingale, see [12, Section 2.4], which motivates the definition.
To prove convergence of \((\chi^{m})_{m}\) in \(L^{2}(\pi)^{d}\), we first establish in Corollary 6.3 convergence of \((\chi^{m})_{m}\) in the space \(\mathscr{H}^{1}(\pi)^{d}\) and utilize a Poincare-type bound on the operator \(\mathfrak{L}\). A standard argument as in [10, Property 2.4] shows that the \(L^{2}(\pi)\)-spectral gap estimates from Corollary 5.4 for the constant \(K=1\), imply the Poincare estimate for the operator \(\mathfrak{L}\):
\[\|f-\langle f\rangle_{\pi}\|_{L^{2}(\pi)}^{2}\leqslant\mu\langle(-\mathfrak{ L})f,f\rangle_{\pi}=\mu\|f\|_{\mathscr{H}^{1}(\pi)}^{2},\quad\text{ for all $f\in D$}.\]
In general, the constant \(K>0\) in the spectral gap estimates from Corollary 5.4 does not need to satisfy \(K=1\) and the above argument breaks down for \(K\neq 1\). Hence, we show below in (6.7) that \(\|f-\langle f\rangle_{\pi}\|_{L^{2}(\pi)}^{2}\leqslant C\|f\|_{\mathscr{H}^{1}( \pi)}^{2}\) holds true for some constant \(C>0\). That constant may differ from the constant \(\mu\) and may not be optimal, but the bound suffices for our purpose of concluding on \(L^{2}(\pi)^{d}\) convergence given \(\mathscr{H}^{1}(\pi)^{d}\) convergence of \((\chi^{m})_{m}\).
An optimal estimate, that however applies for a much more general situation of weak Poincare inequalities and slower than exponential convergences, can be found in [13, Theorem 2.3].
The \(\mathscr{H}^{1}(\pi)^{d}\) convergence of \((\chi^{m})_{m}\) follows from \(\mathscr{H}^{-1}(\pi)^{d}\)-convergence of \((F^{m})_{m}\) for the approximating sequence \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\). Convergence of \((F^{m})_{m}\) in \(\mathscr{H}^{-1}(\pi)^{d}\) is established in Theorem 6.2. The following lemma is an auxiliary result, which proves that the semi-norms in \(\mathscr{H}^{1}(\pi)\) and the homogeneous Besov space \(\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})\), cf. (2.12), are equivalent.
**Lemma 6.1**.: _Let \(\alpha\in(1,2]\). Define the carre-du-champ operator of the generalized fractional Laplacian as \(\Gamma^{\alpha}_{\nu}(f)=\Gamma^{\alpha}_{\nu}(f,f):=\frac{1}{2}((-\mathscr{L }^{\alpha}_{\nu})f^{2}-2f(-\mathscr{L}^{\alpha}_{\nu})f)\). Then, there exist constants \(c,C>0\), such that for all \(f\in\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})\),_
\[c\|f\|_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}^{2}\leqslant\langle\Gamma^ {\alpha}_{\nu}(f)\rangle_{\lambda}\leqslant C\|f\|_{\dot{B}^{\alpha/2}_{2,2}( \mathbb{T}^{d})}^{2}. \tag{6.5}\]
Proof.: By [16, part (v) of Theorem, Section 3.5.4] we obtain that the periodic Lizorkin space \(F^{s}_{2,2}(\mathbb{T}^{d})\) coincides with the periodic Bessel-potential space \(H^{s}(\mathbb{T}^{d})\). Furthermore \(F^{s}_{2,2}(\mathbb{T}^{d})\) coincides with \(B^{s}_{2,2}(\mathbb{T}^{d})\) (cf. [16, Section 3.5.1, Remark 4]). Thus, we obtain that in particular
\[\dot{B}^{s}_{2,2}(\mathbb{T}^{d})=\dot{H}^{s}(\mathbb{T}^{d}).\]
It remains to show (6.5) with \(\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})\) replaced by \(\dot{H}^{\alpha/2}(\mathbb{T}^{d})\). We prove the claim for \(\alpha\in(1,2)\); for \(\alpha=2\) the proof is similar. To that aim, we calculate, using the definition of \(\mathscr{L}^{\alpha}_{\nu}\) for a Schwartz function \(f\in\mathscr{S}(\mathbb{T}^{d})\) and \(\psi^{\alpha}_{\nu}(0)=0\),
\[\langle\Gamma^{\alpha}_{\nu}(f)\rangle_{\lambda}=\int_{\mathbb{T} ^{d}}\Gamma^{\alpha}_{\nu}(f)(x)dx =\mathscr{F}_{\mathbb{T}^{d}}(\Gamma^{\alpha}_{\nu}(f))(0)\] \[=\frac{1}{2}\mathscr{F}_{\mathbb{T}^{d}}((-\mathscr{L}^{\alpha}_{ \nu})f^{2})(0)-\mathscr{F}_{\mathbb{T}^{d}}(f(-\mathscr{L}^{\alpha}_{\nu})f)(0)\] \[=-\frac{1}{2}\psi^{\alpha}_{\nu}(0)(\hat{f}*\hat{f})(0)+(\hat{f}* \psi^{\alpha}_{\nu}\hat{f})(0)\] \[=\sum_{k\in\mathbb{Z}^{d}}\hat{f}(-k)\hat{f}(k)\psi^{\alpha}_{\nu }(k)\] \[=\sum_{k\in\mathbb{Z}^{d}}|\hat{f}(k)|^{2}\psi^{\alpha}_{\nu}(k).\]
By Assumption 2.4 on the spherical component of the jump measure \(\nu\), we obtain, that there exist constants \(c,C>0\) with
\[c|k|^{\alpha}\leqslant\psi^{\alpha}_{\nu}(k)=\int_{S}|\langle k,\xi\rangle|^{ \alpha}\nu(d\xi)\leqslant C|k|^{\alpha}.\]
Thus it follows that
\[c\|f\|_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}^{2}=c\sum_{k\in\mathbb{Z}^{d}}|k| ^{\alpha}|\hat{f}(k)|^{2}\leqslant\langle\Gamma^{\alpha}_{\nu}(f)\rangle_{ \lambda}\leqslant C\sum_{k\in\mathbb{Z}^{d}}|k|^{\alpha}|\hat{f}(k)|^{2}=C\|f \|_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}^{2}.\]
By a density argument, the claim follows for all \(f\in\dot{H}^{\alpha/2}(\mathbb{T}^{d})=\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})\).
**Theorem 6.2**.: _Let \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) for \(\beta\in(\frac{2-2\alpha}{3},0)\) and \(\alpha\in(1,2]\). Then, equivalence of the semi-norms \(\|\cdot\|_{\mathscr{H}^{1}(\pi)}\simeq\|\cdot\|_{\dot{B}^{\alpha/2}_{2,2}( \mathbb{T}^{d})}\) follows and \(\overline{F}:=F-\langle F\rangle_{\pi}\in\mathscr{H}^{-1}(\pi)^{d}\). In particular, \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) implies \(\overline{F}^{m}\to\overline{F}\) in \(\mathscr{H}^{-1}(\pi)^{d}\)._
Proof.: By invariance of \(\pi\) we obtain \(\langle\mathfrak{L}g\rangle_{\pi}=0\) for \(g\in D\), because for \(g\in D\), \((\frac{d}{dt}T_{t}g)_{|t=0}=\mathfrak{L}g\in L^{2}(\pi)\). We now apply this for \(g=f^{2}\), for which we need to check that if \(f\in D\), then \(\mathfrak{L}f^{2}\) is well-defined and \(\mathfrak{L}f^{2}\in L^{1}(\pi)\). This follows by calculating
\[f^{2}=(f^{\sharp}+\nabla f\ominus I_{\lambda}(F))^{2}=g^{\sharp}+g^{\prime} \ominus I_{\lambda}(F),\]
where
\[g^{\sharp}=(f^{\sharp})^{2}+2f^{\sharp} \odot(\nabla f\ominus I_{\lambda}(F))+2f^{\sharp}\odot(\nabla f \ominus I_{\lambda}(F))\] \[+(\nabla f\ominus I_{\lambda}(F))\odot(\nabla f\ominus I_{ \lambda}(F))\in\mathscr{C}_{1}^{2\theta-1}(\mathbb{T}^{d})\]
and
\[g^{\prime}=2f^{\sharp}\ominus\nabla f+\nabla f\ominus I_{\lambda}(F)\ominus \nabla f+I_{\lambda}(F)\ominus\nabla f\ominus I_{\lambda}(F)\in(\mathscr{C}_{ 1}^{\theta-1}(\mathbb{T}^{d}))^{d}.\]
Hence, we conclude that for \(f\in D\), \(f^{2}\) admits a paracontrolled structure with \(g^{\sharp}\in\mathscr{C}_{1}^{2\theta-1}(\mathbb{T}^{d})\) and \(g^{\prime}\in(\mathscr{C}_{1}^{\theta-1}(\mathbb{T}^{d}))^{d}\), such that \(\mathfrak{L}f^{2}\) is well-defined and
\[\mathfrak{L}f^{2}=2f\mathfrak{L}f+2\Gamma_{\nu}^{\alpha}(f)=2\lambda f^{2}-2 fR_{\lambda}f+2\Gamma_{\nu}^{\alpha}(f)\in L^{1}(\pi).\]
Herein we used that \(2\lambda f^{2}-2fR_{\lambda}f\in L^{1}(\pi)\) as \(f\in D\) and \(\Gamma_{\nu}^{\alpha}(f)=\Gamma_{\nu}^{\alpha}(f,f)=\frac{1}{2}(\mathscr{L}_{ \nu}^{\alpha}f^{2}-2f\mathscr{L}_{\nu}^{\alpha}f)\in L^{1}(\pi)\) for \(f\in\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d})\) by Lemma 6.1 as \(\theta\) can be chosen close to \(\alpha+\beta\), such that \(\theta>\alpha/2\).
Analogously, if we denote the domain of \(\mathfrak{L}\) with integrability \(p\) by \(D_{p}\), then for \(f,g\in D_{2}\) we conclude that \(f\cdot g\in D_{1}\), which in particular implies that the carre-du-champ operator
\[\Gamma^{\mathfrak{L}}(f,g)=\frac{1}{2}(\mathfrak{L}(fg)-f\mathfrak{L}g-g \mathfrak{L}f)\in L^{1}(\pi)\]
for \(f,g\in D\) is well-defined in \(L^{1}(\pi)\).
Applying invariance of \(\pi\) for \(g=f^{2}\), we can add \(\frac{1}{2}\langle\mathfrak{L}f^{2}\rangle_{\pi}=0\) yielding
\[\|f\|_{\mathscr{H}^{1}(\pi)}^{2}=\langle(-\mathfrak{L})f,f\rangle_{\pi}= \langle\Gamma^{\mathfrak{L}}(f)\rangle_{\pi}=\langle\Gamma_{\nu}^{\alpha}(f) \rangle_{\pi}.\]
where \(\Gamma^{\mathfrak{L}}(f)=\frac{1}{2}\mathfrak{L}f^{2}-f\mathfrak{L}f=\Gamma_ {\nu}^{\alpha}(f)\). Thus, we obtain
\[\|f\|_{\mathscr{H}^{1}(\pi)}^{2}=\langle\Gamma_{\nu}^{\alpha}(f)\rangle_{\pi} \simeq\langle\Gamma_{\nu}^{\alpha}(f)\rangle_{\lambda}\simeq\|f\|_{\dot{B}^{ \alpha/2}_{2,2}}^{2}, \tag{6.6}\]
where \(\simeq\) denotes that the norms are equivalent.
Here, we used the absolute continuity of \(\pi\) with respect to the Lebesgue measure, with a density \(\rho_{\infty}\) that is uniformly bounded from above and bounded from below away from zero by Corollary 5.3. Moreover, note that the carre-du-champ is non-negative, \(\Gamma_{\nu}^{\alpha}(f)\geqslant 0\). Furthermore, we utilized (6.5) from Lemma 6.1.
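The identity \(\Gamma^{\mathfrak{L}}=\Gamma_{\nu}^{\alpha}\) can also be read off directly from the formula for \(\mathfrak{L}f^{2}\) above, since the first-order drift part obeys the Leibniz rule (formally; the drift term is handled by the approximation \(F^{m}\to F\) as before):

\[\Gamma^{\mathfrak{L}}(f)=\tfrac{1}{2}\big(\mathfrak{L}f^{2}-2f\mathfrak{L}f\big)=\tfrac{1}{2}\big((-\mathscr{L}_{\nu}^{\alpha})f^{2}-2f(-\mathscr{L}_{\nu}^{\alpha})f\big)+\tfrac{1}{2}\big(F\cdot\nabla f^{2}-2fF\cdot\nabla f\big)=\Gamma_{\nu}^{\alpha}(f).\]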
Thus applying the duality estimate from Lemma 2.12 (for functions \(f-\langle f\rangle_{\lambda},g-\langle g\rangle_{\lambda}\) to obtain the result for the homogeneous Besov spaces), we get for \(\overline{F}:=F-\langle F\rangle_{\pi}\) with mean \(\langle F\rangle_{\pi}\) from Lemma 5.10,
\[|\langle\overline{F}^{i},g\rangle_{\pi}| =|\langle\overline{F}^{i}\rho_{\infty},g\rangle|\] \[\lesssim\|\overline{F}^{i}\rho_{\infty}\|_{\dot{B}^{-\alpha/2}_{2, 2}(\mathbb{T}^{d})}\|g\|_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}\] \[\lesssim\|\overline{F}^{i}\rho_{\infty}\|_{\dot{B}^{\beta}_{2,2}( \mathbb{T}^{d})}\|g\|_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}\] \[\lesssim\|\overline{F}^{i}\rho_{\infty}\|_{\dot{B}^{\beta}_{2,2}( \mathbb{T}^{d})}\|g\|_{\mathscr{H}^{1}(\pi)},\]
for \(i=1,...,d\), using \(\beta>-\alpha/2\) and (6.6). Hence, we find
\[\|\overline{F}^{i}\|_{\mathscr{H}^{-1}(\pi)} \lesssim\|\overline{F}^{i}\rho_{\infty}\|_{\dot{B}^{\beta}_{2,2}(\mathbb{T}^{d})}\] \[\lesssim\|\overline{F}^{i}\rho_{\infty}\|_{\mathscr{H}^{\beta+(1-\gamma)\alpha}(\mathbb{T}^{d})}\] \[\lesssim\|F\|_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})}(1+\|F\|_{\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})})[\|\rho_{\infty}\|_{\mathscr{C}^{\alpha+\beta-1}}+\|\rho_{\infty}^{\sharp}\|_{\mathscr{C}^{2(\alpha+\beta-1)}}],\]
where the estimate for the product of \(\overline{F}^{i}\) and \(\rho_{\infty}\) follows from Lemma 5.10.
This proves that \(\overline{F}\in\mathscr{H}^{-1}(\pi)^{d}\). Convergence follows by the same estimate.
**Corollary 6.3**.: _Let \(F\in\mathscr{X}^{\beta,\gamma}_{\infty}\) and \(F^{m}\in C^{\infty}(\mathbb{T}^{d})\) with \(F^{m}\to F\) in \(\mathscr{X}^{\beta,\gamma}_{\infty}(\mathbb{T}^{d})\). Let \(\chi^{m}=(\chi^{m,i})_{i=1,...,d}\in L^{2}(\pi)^{d}\) denote the unique solution of_
\[(-\mathfrak{L})\chi^{m,i}=F^{m,i}-\langle F^{m,i}\rangle_{\pi}=:\overline{F}^ {m,i}\]
_with \(\langle\chi^{m,i}\rangle_{\pi}=0\). Then \((\chi^{m})_{m}\) converges in \(\mathscr{H}^{1}(\pi)^{d}\cap L^{2}(\pi)^{d}\) to a limit \(\chi\)._
Proof.: Convergence in \(\mathscr{H}^{1}(\pi)\) follows from the estimate
\[\|\chi^{m,i}-\chi^{m^{\prime},i}\|^{2}_{\mathscr{H}^{1}(\pi)} =\langle(-\mathfrak{L})(\chi^{m,i}-\chi^{m^{\prime},i}),\chi^{m,i }-\chi^{m^{\prime},i}\rangle_{\pi}\] \[=\langle\overline{F}^{m,i}-\overline{F}^{m^{\prime},i},\chi^{m,i }-\chi^{m^{\prime},i}\rangle_{\pi}\] \[\leqslant\|\overline{F}^{m,i}-\overline{F}^{m^{\prime},i}\|_{ \mathscr{H}^{-1}(\pi)}\|\chi^{m,i}-\chi^{m^{\prime},i}\|_{\mathscr{H}^{1}( \pi)}.\]
Thus we obtain
\[\|\chi^{m,i}-\chi^{m^{\prime},i}\|_{\mathscr{H}^{1}(\pi)}\leqslant\| \overline{F}^{m,i}-\overline{F}^{m^{\prime},i}\|_{\mathscr{H}^{-1}(\pi)}.\]
Indeed, the \(\mathscr{H}^{-1}(\pi)\)-norm on the right-hand side becomes small for large \(m,m^{\prime}\) by Theorem 6.2, so that \((\chi^{m,i})_{m}\) is a Cauchy sequence in \(\mathscr{H}^{1}(\pi)\).
It remains to deduce \(L^{2}(\pi)\) convergence. By Theorem 6.2, we also obtain the seminorm equivalences \(\|\cdot\|_{\mathscr{H}^{1}(\pi)}\simeq\|\cdot\|_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})}\simeq\|\cdot\|_{\dot{B}^{\alpha/2}_{2,2}(\mathbb{T}^{d})}\). Combining this with the fractional Poincare inequality on the torus,
\[\|u-\langle u\rangle_{\lambda}\|^{2}_{L^{2}}=\sum_{k\in\mathbb{Z}^{d}\setminus \{0\}}|\hat{u}(k)|^{2}\leqslant\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}|k|^{ \alpha}|\hat{u}(k)|^{2}=\|u\|^{2}_{\dot{H}^{\alpha/2}(\mathbb{T}^{d})},\]
with Lebesgue measure \(\lambda\) on \(\mathbb{T}^{d}\), we can thus estimate
\[\|\chi-\langle\chi\rangle_{\lambda}\|_{L^{2}(\pi)}\lesssim\|\chi-\langle\chi \rangle_{\lambda}\|_{L^{2}(\lambda)}\leqslant\|\chi\|_{\dot{H}^{\alpha/2}( \mathbb{T}^{d})}\lesssim\|\chi\|_{\mathscr{H}^{1}(\pi)}. \tag{6.7}\]
Furthermore, as \(\langle\chi\rangle_{\pi}=0\), we obtain \(\|\chi-\langle\chi\rangle_{\lambda}\|^{2}_{L^{2}(\pi)}=\|\chi\|^{2}_{L^{2}(\pi )}+\langle\chi\rangle^{2}_{\lambda}\). Together, we thus find
\[\|\chi\|^{2}_{L^{2}(\pi)}\lesssim\|\chi\|^{2}_{L^{2}(\pi)}+\langle\chi\rangle^{2 }_{\lambda}\lesssim\|\chi\|^{2}_{\mathscr{H}^{1}(\pi)}. \tag{6.8}\]
In particular, we conclude that \(\mathscr{H}^{1}(\pi)\)-convergence implies \(L^{2}(\pi)\)-convergence of the sequence \((\chi^{m})\).
**Theorem 6.4**.: _Let \((F^{m})_{m}\), \((\chi^{m})_{m}\) and \(\chi\) be as in Corollary 6.3. Then, \((\chi^{m})_{m}\) converges to \(\chi\) in \((\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d}))^{d}\), \(\theta\in((1-\beta)/2,\alpha+\beta)\) and there exists \(\lambda>0\), such that_
\[\chi=\chi^{\sharp}+\nabla\chi\ominus I_{\lambda}(\overline{F}) \tag{6.9}\]
_for \(\chi^{\sharp}\in(\mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d}))^{d}\). Furthermore, the limit \(\chi\) solves the singular Poisson equation with singular right-hand side \(\overline{F}\),_
\[(-\mathfrak{L})\chi=\overline{F}. \tag{6.10}\]
Proof.: Trivially, for \(\lambda>0\), \(\chi^{m}\) solves the resolvent equation
\[R_{\lambda}\chi^{m}=(\lambda-\mathfrak{L})\chi^{m}=\lambda\chi^{m}+\overline {F}^{m}\]
with right-hand side \(G^{m}:=\lambda\chi^{m}+\overline{F}^{m}\). The right-hand sides \((G^{m})\) converge in \((\mathscr{C}_{2}^{\beta}(\mathbb{T}^{d}))^{d}\) to \(G=\lambda\chi+\overline{F}\), because \(\chi^{m}\to\chi\) in \(L^{2}(\pi)^{d}\) by Corollary 6.3 and, thanks to the equivalence of \(\pi\) and the Lebesgue measure \(\lambda_{\mathbb{T}^{d}}\), thus also in \(L^{2}(\lambda_{\mathbb{T}^{d}})^{d}\). Choosing \(\lambda>1\) big enough, by Theorem 4.2, we can solve the resolvent equation
\[R_{\lambda}g^{i}=G^{i}=G^{\sharp,i}+G^{\prime,i}\ominus F, \tag{6.11}\]
with \(G^{\sharp,i}:=\lambda\chi^{i}\in L^{2}(\lambda)\subset\mathscr{C}_{2}^{0}(\mathbb{T}^{d})\) and \(G^{\prime,i}:=(1-\langle F^{i}\rangle_{\pi})e_{i}\in\mathscr{C}^{\alpha+\beta-1}(\mathbb{T}^{d})\). Thereby we obtain a paracontrolled solution \(g^{i}\in\mathscr{D}_{2}^{\theta}\) for \(\theta<\alpha+\beta\), with \(g^{i}=g^{\sharp,i}+\nabla g^{i}\ominus I_{\lambda}(F)\), \(g^{\sharp,i}\in\mathscr{C}_{2}^{2\theta-1}(\mathbb{T}^{d})\) and \(I_{\lambda}(F):=\int_{0}^{\infty}e^{-\lambda t}P_{t}Fdt\in\mathscr{C}^{\alpha+\beta}(\mathbb{T}^{d})\). By continuity of the solution map for the resolvent equation, we obtain \(\chi^{m,i}\to g^{i}\) in \(\mathscr{D}_{2}^{\theta}\) as \(m\to\infty\). Convergence of \((\chi^{m})\) to \(g\) in \((\mathscr{D}_{2}^{\theta})^{d}\) in particular implies convergence in \(L^{2}(\lambda_{\mathbb{T}^{d}})^{d}\) and thus in \(L^{2}(\pi)^{d}\), which implies that almost surely \(g=\chi\) and hence, by (6.11), that \(\chi\in(\mathscr{D}_{2}^{\theta})^{d}\) solves \((-\mathfrak{L})\chi=\overline{F}\).
## 7 Fluctuations in the Brownian and pure Levy noise case
In this section, we prove the central limit Theorem 7.2 for the diffusion \(X\) with periodic coefficients. In the following, we again explicitly distinguish between \(X\) and the projected process \(X^{\mathbb{T}^{d}}\). Of course, the central limit theorem in particular implies that for \(t>0\), \(\frac{1}{n}X_{nt}\to t\langle F\rangle_{\pi}\) with convergence in probability for \(n\to\infty\), i.e. a weak law of large numbers. The central limit theorem then quantifies the fluctuations around the mean \(t\langle F\rangle_{\pi}\).
Due to ergodicity of \(\pi\), it follows by the von Neumann ergodic theorem that, if the projected process is started in \(X_{0}^{\mathbb{T}^{d}}\sim\pi\), \(\frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds\to t\langle b\rangle_{\pi}\) in \(L^{2}(\mathds{P}_{\pi})\) as \(n\to\infty\) for \(b\in L^{\infty}(\mathbb{T}^{d})\). As \(\mathds{P}_{\pi}=\int_{\mathbb{T}^{d}}\mathds{P}_{x}\pi(dx)\), this implies in particular the convergence (along a subsequence) in \(L^{2}(\mathds{P}_{x})\) for \(\pi\)-almost all \(x\).
The pointwise spectral gap estimates yield the following slightly stronger ergodic theorem for the process started in \(X_{0}^{\mathbb{T}^{d}}=x\) for any \(x\in\mathbb{T}^{d}\). In particular, in the periodic homogenization result for the PDE, Corollary 7.3 below, pointwise convergence (for every \(x\in\mathbb{T}^{d}\)) of the PDE solutions can be proven.
**Lemma 7.1**.: _Let \(b\in L^{\infty}(\mathbb{T}^{d})\) and \(x\in\mathbb{T}^{d}\). Let \(X^{\mathbb{T}^{d}}\) be the projected solution of the \(\mathscr{G}=\partial_{t}+\mathfrak{L}\)-martingale problem on the torus \(\mathbb{T}^{d}\) started in \(X_{0}^{\mathbb{T}^{d}}=x\in\mathbb{T}^{d}\). Then the following convergence holds in \(L^{2}(\mathds{P})\):_
\[\frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds\to t\langle b\rangle_{\pi}.\]
Proof.: Without loss of generality, we assume that \(\langle b\rangle_{\pi}=0\), otherwise we subtract the mean. With the Markov property we obtain
\[\left\|\frac{1}{n}\int_{0}^{nt}b(X_{s}^{\mathbb{T}^{d}})ds\right\| _{L^{2}(\mathds{P})}^{2} =\frac{1}{n^{2}}\int_{0}^{nt}\int_{0}^{nt}\mathds{E}[b(X_{s}^{ \mathbb{T}^{d}})b(X_{r}^{\mathbb{T}^{d}})]dsdr\] \[=\frac{2}{n^{2}}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r }\mathds{E}\Big{[}b(X_{s}^{\mathbb{T}^{d}})\mathds{E}_{s}[b(X_{r}^{\mathbb{T} ^{d}})]\Big{]}dsdr\]
Using the spectral gap estimate (5.2), we can estimate
\[\left|\frac{2}{n^{2}}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r}T_{s}(bT_{r-s}b)(x)dsdr\right| \leqslant\frac{2K^{2}\|b\|_{L^{\infty}}^{2}}{n^{2}}\int_{0}^{nt}\int_{0}^{nt}\mathbf{1}_{s\leqslant r}e^{-\mu s}e^{-\mu(r-s)}dsdr\] \[=\frac{2K^{2}\|b\|_{L^{\infty}}^{2}}{n^{2}}\int_{0}^{nt}re^{-\mu r}dr\leqslant\frac{2K^{2}\|b\|_{L^{\infty}}^{2}}{n^{2}\mu^{2}}\to 0,\]
for \(n\to\infty\).
**Theorem 7.2**.: _Let \(\alpha\in(1,2]\) and \(F\in\mathscr{C}^{\beta}(\mathbb{T}^{d})\) for \(\beta\in(\frac{1-\alpha}{2},0)\) or \(F\in\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) for \(\beta\in(\frac{2-2\alpha}{3},\frac{1-\alpha}{2}]\) and \(\gamma\in(\frac{2\beta+2\alpha-1}{\alpha},1)\). Let \(X\) be the solution of the \(\mathscr{G}=(\partial_{t}+\mathfrak{L})\)-martingale problem started in \(X_{0}=x\in\mathbb{R}^{d}\). In the case \(\alpha=2\) and \(L=B\) for a standard Brownian motion \(B\), the following functional central limit theorem holds:_
\[\left(\frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})\right)_{t\in[0,T]} \Rightarrow\sqrt{D}(W_{t})_{t\in[0,T]},\]
_with convergence in distribution in \(C([0,T],\mathbb{R}^{d})\), a \(d\)-dimensional standard Brownian motion \(W\) and constant diffusion matrix \(D\) given by_
\[D(i,j):=\int_{\mathbb{T}^{d}}(e_{i}+\nabla\chi^{i}(x))^{T}(e_{j}+\nabla\chi^{ j}(x))\pi(dx)\]
_for \(i,j=1,\ldots,d\) and the \(i\)-th euclidean unit vector \(e_{i}\). Here, \(\chi\) solves the singular Poisson equation \((-\mathfrak{L})\chi^{i}=F^{i}-\langle F^{i}\rangle_{\pi}\), \(i=1,...,d\), according to Theorem 6.4. In the case \(\alpha\in(1,2)\), the following non-Gaussian central limit theorem holds:_
\[\left(\frac{1}{n^{1/\alpha}}(X_{nt}-nt\langle F\rangle_{\pi})\right)_{t\in[0,T ]}\Rightarrow(\tilde{L}_{t})_{t\in[0,T]},\]
_with convergence in distribution in \(D([0,T],\mathbb{R}^{d})\), where \(\tilde{L}\) is a \(d\)-dimensional symmetric \(\alpha\)-stable nondegenerate Levy process (with generator \(-\mathscr{L}_{\nu}^{\alpha}\))._
Proof.: As a byproduct of [14, Theorem 5.10], we obtain that there exists a probability space \((\Omega,\mathscr{F},\mathds{P})\) with an \(\alpha\)-stable symmetric non-degenerate process \(L\), such that \(X=x+Z+L\), where \(Z\) is given by
\[Z_{t}=\lim_{m\to\infty}\int_{0}^{t}F^{m}(X_{s})ds \tag{7.1}\]
for a sequence \((F^{m})\) of smooth functions \(F^{m}\) with \(F^{m}\to F\) in \(\mathscr{X}_{\infty}^{\beta,\gamma}(\mathbb{T}^{d})\) and where the limit is taken in \(L^{2}(\mathds{P})\), uniformly in \(t\in[0,T]\).
We write the additive functional \(\int_{0}^{\cdot}(\overline{F^{m}})^{\mathbb{R}^{d}}(X_{s})ds=\int_{0}^{\cdot} \overline{F^{m}}(X_{s}^{\mathbb{T}^{d}})ds\) in terms of the periodic solution \(\chi^{m}\) of the Poisson equation (6.1) with right hand side \(F^{m}-\langle F^{m}\rangle_{\pi}=:\overline{F^{m}}\), such that
\[X_{t}-t\langle F\rangle_{\pi} =X_{0}+(Z_{t}-t\langle F\rangle_{\pi})+L_{t} \tag{7.2}\] \[=X_{0}+\lim_{m\to\infty}\int_{0}^{t}\overline{F^{m}}(X_{s}^{ \mathbb{T}^{d}})ds+L_{t}\] (7.3) \[=X_{0}+\lim_{m\to\infty}\bigl{(}[\chi^{m}(X_{0}^{\mathbb{T}^{d}}) -\chi^{m}(X_{t}^{\mathbb{T}^{d}})]+M_{t}^{m}\bigr{)}+L_{t}\] (7.4) \[=X_{0}+[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{t}^{\mathbb{T}^{d}}) ]+M_{t}+L_{t}. \tag{7.5}\]
Here, the limit is again taken in \(L^{2}(\mathbb{P})\) and \(\chi\) is the solution of the Poisson equation (6.2) with right-hand side \(\overline{F}\), which exists by Theorem 6.4.
To justify (7.3), we use the convergence from (7.1) and \(\langle F\rangle_{\pi}=\lim_{m\to\infty}\langle F^{m}\rangle_{\pi}\) by Lemma 5.10. In (7.4), we applied Ito's formula to \((\chi^{m})^{\mathbb{R}^{d}}(X_{t})\) for \(m\in\mathbb{N}\). For the equality (7.5), we utilized that \(\chi^{m}\to\chi\) in \(L^{\infty}(\mathbb{T}^{d})\) by Theorem 6.4 and that the sequence of martingales \((M^{m})\) converges in \(L^{2}(\mathbb{P})\) uniformly in time in \([0,T]\) to the martingale \(M\). Here, for \(\alpha\in(1,2)\), the martingales are given by (notation: \([y]:=y\mod\mathbb{Z}^{d}=\iota(y)\))
\[M_{t}^{m}=\int_{0}^{t}\int_{\mathbb{R}^{d}\setminus\{0\}}[\chi^{m}(X_{s-}^{\mathbb{T}^{d}}+[y])-\chi^{m}(X_{s-}^{\mathbb{T}^{d}})]\hat{\pi}(ds,dy),\]
where \(\hat{\pi}(ds,dy)=\pi(ds,dy)-ds\mu(dy)\) is the compensated Poisson random measure associated to \(L\). \(M\) is given by an analogous expression, where we replace \(\chi^{m}\) by \(\chi\).
In the Brownian noise case, \(\alpha=2\), we have that \(M_{t}^{m}=\int_{0}^{t}\nabla\chi^{m}(X_{s}^{\mathbb{T}^{d}})\cdot dB_{s}\) and \(M_{t}\) is defined analogously with \(\chi^{m}\) replaced by \(\chi\). Indeed, convergence of the martingales in \(L^{2}(\mathbb{P})\) follows from the convergence of \((\chi^{m})\) to \(\chi\) in \((\mathscr{C}_{2}^{\theta}(\mathbb{T}^{d}))^{d}\) with \(\theta\in(1,\alpha+\beta)\) by Theorem 6.4, which in particular implies uniform convergence of \((\chi^{m})\) and \((\nabla\chi^{m})\).
Let now first \(\alpha=2\) and \(L=B\) for a standard Brownian motion \(B\). Then we have by the above, almost surely,
\[\frac{1}{\sqrt{n}}(X_{nt}-nt\langle F\rangle_{\pi})=\frac{1}{\sqrt{n}}X_{0}+ \frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{d}})] +\frac{1}{\sqrt{n}}(M_{nt}+B_{nt})\]
with \(M_{t}=\int_{0}^{t}\nabla\chi(X_{s}^{\mathbb{T}^{d}})\cdot dB_{s}\).
To obtain the central limit theorem, we will apply the functional martingale central limit theorem, [1, Theorem 7.1.4], to
\[\left(\frac{1}{\sqrt{n}}(M_{nt}+B_{nt})\right)_{t\in[0,T]}.\]
To that aim, we check the convergence of the quadratic variation
\[\frac{1}{n}\langle M^{i}+B^{i},M^{j}+B^{j}\rangle_{nt}=\frac{1}{n}\int_{0}^{ nt}(\mathrm{Id}+\nabla\chi(X_{s}^{\mathbb{T}^{d}}))^{T}(\mathrm{Id}+\nabla \chi(X_{s}^{\mathbb{T}^{d}}))(i,j)ds\]
in probability to
\[t\int_{\mathbb{T}^{d}}(\mathrm{Id}+\nabla\chi(x))^{T}(\mathrm{Id}+\nabla\chi( x))(i,j)\pi(dx)=tD(i,j).\]
This is a consequence of Lemma 7.1.
The boundary term \(\frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{d}})]\) vanishes when \(n\to\infty\) as \(\chi\in L^{\infty}(\mathbb{T}^{d})\). Furthermore, as a process,
\[\left(\frac{1}{\sqrt{n}}[\chi(X_{0}^{\mathbb{T}^{d}})-\chi(X_{nt}^{\mathbb{T}^{ d}})]\right)_{t\in[0,T]}\]
converges to the constant zero process almost surely with respect to the uniform topology in \(C([0,T],\mathbb{R}^{d})\).
Using Slutsky's lemma and combining with the functional martingale central limit theorem above, we obtain weak convergence of \((n^{-1/2}(X_{nt}-nt\langle F\rangle_{\pi}))_{t\in[0,T]}\) to the Brownian motion \(\sqrt{D}W\) with the constant diffusion matrix \(D\) stated in the theorem.
Let now \(\alpha\in(1,2)\). We rescale by \(n^{-1/\alpha}\) and claim that the martingale \(n^{-1/\alpha}M_{nt}\) vanishes in \(L^{2}(\mathrm{P})\) for \(n\to\infty\). Indeed, in this case the martingale \(M\) is given by
\[M_{t}=\int_{0}^{t}\int_{\mathbb{R}^{d}\setminus\{0\}}[\chi(X_{s-}^{\mathbb{T} ^{d}}+[y])-\chi(X_{s-}^{\mathbb{T}^{d}})]\hat{\pi}(ds,dy).\]
Using the estimate from [10, Lemma 8.22] and the mean-value theorem, we obtain
\[\mathds{E}[\sup_{t\in[0,T]}|M_{nt}|^{2}] \lesssim\int_{0}^{nT}\int_{\mathbb{R}^{d}\setminus\{0\}} \mathds{E}[|\chi(X_{s-}^{\mathbb{T}^{d}}+[y])-\chi(X_{s-}^{\mathbb{T}^{d}})|^{ 2}]\mu(dy)ds\] \[=\int_{0}^{nT}\int_{\mathbb{R}^{d}\setminus\{0\}}\mathds{E}[| \chi(X_{s}^{\mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})|^{2}]\mu(dy)ds\] \[\leqslant\int_{0}^{nT}\int_{B(0,1)^{c}}\mathds{E}[|\chi(X_{s}^{ \mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})|^{2}]\mu(dy)ds\] \[\qquad+\int_{0}^{nT}\int_{B(0,1)\setminus\{0\}}\mathds{E}[|\chi (X_{s}^{\mathbb{T}^{d}}+[y])-\chi(X_{s}^{\mathbb{T}^{d}})|^{2}]\mu(dy)ds\] \[\leqslant 2nT\mu(B(0,1)^{c})\|\chi\|_{L^{\infty}(\mathbb{T}^{d})^{d} }^{2}+2nT\|\nabla\chi\|_{L^{\infty}(\mathbb{T}^{d})^{d\times d}}^{2}\int_{B(0, 1)\setminus\{0\}}|y|^{2}\mu(dy)\] \[\lesssim nT.\]
Hence, we conclude
\[\mathds{E}[\sup_{t\in[0,T]}|n^{-1/\alpha}M_{nt}|^{2}]\lesssim Tn^{1-2/\alpha} \tag{7.6}\]
and since \(\alpha<2\), we obtain the claimed convergence to zero.
As the \(J_{1}\)-metric (for the definition, see [11, Chapter VI, Equation 1.26]) can be bounded by the uniform norm, (7.6) implies in particular that the process \((n^{-1/\alpha}M_{nt})_{t\in[0,T]}\) converges to the constant zero process in probability with respect to the \(J_{1}\)-topology on the Skorokhod space \(D([0,T],\mathbb{R}^{d})\). Furthermore, \((n^{-1/\alpha}L_{nt})_{t\geqslant 0}\stackrel{d}{=}(L_{t})_{t\geqslant 0}\). Using [11, Chapter VI, Proposition 3.17] and that the constant process is continuous, we thus obtain that \((n^{-1/\alpha}(X_{nt}-nt\langle F\rangle_{\pi}))_{t\in[0,T]}\) converges in distribution in \(D([0,T],\mathbb{R}^{d})\) to the \(\alpha\)-stable process \((\tilde{L}_{t})_{t\in[0,T]}\), which has the same law as \((L_{t})_{t\in[0,T]}\).
Utilizing the correspondence of the solution of the SDE (i.e. the solution of the martingale problem) to the parabolic generator PDE via Feynman-Kac, we can now show the corresponding periodic homogenization result for the PDE as a corollary.
**Corollary 7.3**.: _Let \(F\) and \(F^{\mathbb{R}^{d}}\) be as in Theorem 7.2. Assume moreover that \(\langle F\rangle_{\pi}=0\) and let \(f\in C_{b}(\mathbb{R}^{d})\). Let \(T>0\) and let \(u\in D_{T}=\{u\in C_{T}\mathscr{C}^{\alpha+\beta}\cap C_{T}^{1}\mathscr{C}^{\beta}\mid u^{\sharp}:=u-\nabla u\vartriangleleft I(F)\in C_{T}\mathscr{C}^{2(\alpha+\beta)-1}\cap C_{T}^{1}\mathscr{C}^{\alpha+2\beta-1}\}\) with \(I_{t}(f):=\int_{0}^{t}P_{t-s}(f)ds\) be the mild solution of the singular parabolic PDE_
\[(\partial_{t}-\mathfrak{L})u=0,\quad u_{0}=f^{\varepsilon},\]
_where \(f^{\varepsilon}(x):=f(\varepsilon x)\). Let \(u^{\varepsilon}(t,x):=u(\varepsilon^{-\alpha}t,\varepsilon^{-1}x)\) with \(u^{\varepsilon}(0,\cdot)=f\). Let furthermore, for \(\alpha=2\) and \(-\mathscr{L}_{\nu}^{\alpha}=\frac{1}{2}\Delta\), \(\overline{u}\) be the solution of_
\[(\partial_{t}-D:\nabla\nabla)\overline{u}=0,\quad\overline{u}_{0}=f,\]
_with notation \(D:\nabla\nabla:=\sum_{i,j=1,\ldots,d}D(i,j)\partial_{x_{i}}\partial_{x_{j}}\), and for \(\alpha\in(1,2)\), let \(\overline{u}\) be the solution of_
\[(\partial_{t}+\mathscr{L}_{\nu}^{\alpha})\overline{u}=0,\quad\overline{u}_{0}=f.\]
_Then, for any \(t\in(0,T]\), \(x\in\mathbb{R}^{d}\), we have the convergence \(u_{t}^{\varepsilon}(x)\to\overline{u}_{t}(x)\) for \(\varepsilon\to 0\)._
**Remark 7.4**.: _Note that \(u^{\varepsilon}\) solves \((\partial_{t}-\mathfrak{L}^{\varepsilon})u^{\varepsilon}=0\), \(u_{0}^{\varepsilon}=f\) with operator \(\mathfrak{L}^{\varepsilon}g=-\mathscr{L}_{\nu}^{\alpha}g+\varepsilon^{1-\alpha }F(\varepsilon^{-1}\cdot)\nabla g\)._
**Remark 7.5**.: _If \(\alpha=2\) and \(F\) is of gradient-type, that is, \(F=\nabla f\) for \(f\in\mathscr{C}^{1+\beta}\) (\(f\) is a continuous function, as \(1+\beta>0\)), the invariant measure is explicitly given by \(d\pi=c^{-1}e^{-f(x)}dx\) with suitable normalizing constant \(c>0\), since the operator is of divergence form, \(\mathfrak{L}=e^{f}\nabla\cdot(e^{-f}\nabla\cdot)\). Then it follows that \(\langle F\rangle_{\pi}=\int_{\mathbb{T}^{d}}\nabla e^{-f(x)}dx=0\). Thus, \(F\) satisfies the assumptions of Corollary 7.3._
Proof of Corollary 7.3.: Notice that \((\tilde{u}_{s}:=u_{t-s})_{s\in[0,t]}\) solves the backward Kolmogorov equation \((\partial_{s}+\mathfrak{L})\tilde{u}=0,\tilde{u}(t,\cdot)=f^{\varepsilon}\). Approximating \(f\) by \(\mathscr{C}^{3}(\mathbb{R}^{d})\) functions and using that \(X\) solves the \((\partial_{t}+\mathfrak{L},x)\)-martingale problem, we obtain
\[u^{\varepsilon}(t,x)=\mathds{E}_{X_{0}=\varepsilon^{-1}x}[f(\varepsilon X_{ \varepsilon^{-\alpha}t})].\]
The stated convergence then follows from Theorem 7.2. Indeed, if \(X_{0}=\varepsilon^{-1}x\), then \(\varepsilon X_{\varepsilon^{-2}.}\to W^{x}\) in distribution, where \(W^{x}\) is the Brownian motion started in \(x\) with covariance \(D\), respectively \(\varepsilon X_{\varepsilon^{-\alpha}.}\to L^{x}\) if \(\alpha\in(1,2)\) for the \(\alpha\)-stable process \(L\) with generator \((-\mathscr{L}_{\nu}^{\alpha})\) and \(L_{0}=x\). The Feynman-Kac formula for the limit process then gives that the limit of \((u^{\varepsilon}(t,x))\) equals \(\overline{u}(t,x)=\mathds{E}[f(W^{x})]\) if \(\alpha=2\), respectively \(\overline{u}(t,x)=\mathds{E}[f(L^{x})]\) if \(\alpha\in(1,2)\).
**Remark 7.6** (Brox diffusion with Levy noise).: _We can apply our theory to obtain the long-time behaviour of the periodic Brox diffusion with Levy noise (see [10] for the construction). As \(\alpha\in(1,2]\), Theorem 7.2 yields that \(|X_{t}|\sim t^{1/\alpha}\) for \(t\to\infty\). In the non-periodic situation, the long-time behaviour of the Brox diffusion with Brownian noise is, however, very different. Brox [11] proved that the diffusion gets trapped in local minima of the white noise environment and is thus slowed down (that is, for almost all environments, \(|X_{t}|\sim\log(t)^{2}\) for \(t\to\infty\), cf. [11, Theorem 1.4]). In the non-periodic pure stable noise case, the long-time behaviour of the Brox diffusion is an open problem that we leave for future research._
## Appendix A Appendix
Proof of Lemma 2.10.: To show (2.9), we notice that, by the isometry between the spaces \(L^{2}(\mathbb{T}^{d})\) and \(l^{2}(\mathbb{Z}^{d})\) given by the Fourier transform,
\[\|\Delta_{j}\mathscr{L}_{\nu}^{\alpha}u\|_{L^{2}(\mathbb{T}^{d})}^{2}=\sum_{k \in\mathbb{Z}^{d}}|\rho_{j}(k)\psi_{\nu}^{\alpha}(k)\hat{u}(k)|^{2}.\]
Due to \(\rho_{j}(k)\neq 0\) only if \(|k|\sim 2^{j}\) and \(|\psi_{\nu}^{\alpha}(k)|\lesssim|k|^{\alpha}\), we obtain that
\[\|\Delta_{j}\mathscr{L}_{\nu}^{\alpha}u\|_{L^{2}(\mathbb{T}^{d})}^{2}\lesssim 2 ^{2j\alpha}\sum_{k\in\mathbb{Z}^{d}}|\rho_{j}(k)\hat{u}(k)|^{2}=2^{2j\alpha} \|\Delta_{j}u\|_{L^{2}(\mathbb{T}^{d})}^{2}\]
and thus
\[\|\mathscr{L}_{\nu}^{\alpha}u\|_{\mathscr{C}_{2}^{\beta-\alpha}(\mathbb{T}^{d})}=\sup_{j}2^{j(\beta-\alpha)}\|\Delta_{j}\mathscr{L}_{\nu}^{\alpha}u\|_{L^{2}(\mathbb{T}^{d})}\lesssim\sup_{j}2^{j\beta}\|\Delta_{j}u\|_{L^{2}(\mathbb{T}^{d})}=\|u\|_{\mathscr{C}_{2}^{\beta}(\mathbb{T}^{d})}.\]
To show (2.10), we again use the isometry, such that
\[\|\Delta_{j}P_{t}u\|_{L^{2}(\mathbb{T}^{d})}^{2}=\sum_{k\in\mathbb{Z}^{d}}| \rho_{j}(k)\exp(-t\psi_{\nu}^{\alpha}(k))\hat{u}(k)|^{2}.\]
For \(j=-1\), \(\rho_{j}\) is supported in a ball around zero and as \(|\exp(-t\psi_{\nu}^{\alpha}(k))|\leqslant 1\), the estimate \(\|\Delta_{j}P_{t}u\|_{L^{2}(\mathbb{T}^{d})}^{2}\lesssim(t^{-\theta/\alpha}\lor 1)2^{\theta}\sum_{k\in\mathbb{Z}^{d}}|\rho_{j}(k)\hat{u}(k)|^{2}\) holds trivially for \(\theta\geqslant 0\). For \(j>-1\), \(\rho_{j}\) is supported away from zero and we can use that \(\exp(-t\psi_{\nu}^{\alpha}(\cdot))\) is a Schwartz function away from \(0\) and thus, for \(|k|>0\), \(|\exp(-t\psi_{\nu}^{\alpha}(k))|\lesssim(t\psi_{\nu}^{\alpha}(k)+1)^{-\theta/\alpha}\lesssim t^{-\theta/\alpha}|k|^{-\theta}\), for any \(\theta\geqslant 0\). Thus, for \(j>-1\), we obtain
\[\|\Delta_{j}P_{t}u\|_{L^{2}(\mathbb{T}^{d})}^{2}\leqslant 2^{-2j\theta}t^{- \theta/\alpha}\sum_{k\in\mathbb{Z}^{d}}|\rho_{j}(k)\hat{u}(k)|^{2}=2^{-2j \theta}t^{-\theta/\alpha}\|\Delta_{j}u\|_{L^{2}(\mathbb{T}^{d})}^{2},\]
such that together (2.10) follows. To obtain the remaining estimate, we argue in a similar manner using that, due to Holder-continuity of the exponential function, for \(\theta/\alpha\in[0,1]\), \(|\exp(-t\psi_{\nu}^{\alpha}(k))-1|\leqslant|t\psi_{\nu}^{\alpha}(k)|^{\theta/ \alpha}\leqslant t^{\theta/\alpha}|k|^{\theta}\).
Proof of Lemma 2.11.: By the assumption of vanishing zero-order Fourier mode, we have
\[P_{t}g=\sum_{|k|\geqslant 1}\exp(-t\psi_{\nu}^{\alpha}(k))\hat{g}(k)e_{k}.\]
Thus, we obtain by \(\psi_{\nu}^{\alpha}(k)\geqslant c|k|^{\alpha}\) for some \(c>0\) (follows from Assumption 2.4) the trivial estimate
\[\|\Delta_{j}(P_{t}g)\|_{L^{2}(\mathbb{T}^{d})}^{2}=\sum_{|k|\geqslant 1}|\rho_{j}(k)\exp(-t\psi_{\nu}^{\alpha}(k))\hat{g}(k)|^{2}\leqslant\|g\|_{\mathscr{C}_{2}^{\beta}(\mathbb{T}^{d})}^{2}2^{-2j\beta}\exp(-2tc).\]
Together with Lemma 2.10, we then obtain for any \(\theta\geqslant 0\),
\[\|\Delta_{j}(P_{t}g)\|_{L^{2}(\mathbb{T}^{d})}\lesssim\|g\|_{\mathscr{C}_{2}^{\beta}(\mathbb{T}^{d})}\min\bigl(2^{-j\beta}\exp(-tc),\,2^{-j(\beta+\theta)}(t^{-\theta/\alpha}\lor 1)\bigr).\]
The claim thus follows by interpolation.
## Acknowledgements
H.K. is supported by the Austrian Science Fund (FWF) Stand-Alone programme P 34992. Part of the work was done when H.K. was employed at Freie Universitat Berlin and funded by the DFG under Germany's Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). N.P. gratefully acknowledges financial support by the DFG via Research Unit FOR2402 and through the grant CRC 1114 "Scaling Cascades in Complex Systems".
|
2302.14740 | Fusion of ML with numerical simulation for optimized propeller design | In computer-aided engineering design, the goal of a designer is to find an
optimal design on a given requirement using the numerical simulator in loop
with an optimization method. In this design optimization process, a good design
optimization process is one that can reduce the time from inception to design.
In this work, we take a class of design problem, that is computationally cheap
to evaluate but has high dimensional design space. In such cases, traditional
surrogate-based optimization does not offer any benefits. In this work, we
propose an alternative way to use ML model to surrogate the design process that
formulates the search problem as an inverse problem and can save time by
finding the optimal design or at least a good initial seed design for
optimization. By using this trained surrogate model with the traditional
optimization method, we can get the best of both worlds. We call this as
Surrogate Assisted Optimization (SAO)- a hybrid approach by mixing ML surrogate
with the traditional optimization method. Empirical evaluations of propeller
design problems show that a better efficient design can be found in fewer
evaluations using SAO. | Harsh Vardhan, Peter Volgyesi, Janos Sztipanovits | 2023-02-28T16:42:07Z | http://arxiv.org/abs/2302.14740v1 | # Fusion of ML with numerical simulation for optimized propeller design +
###### Abstract
_In computer-aided engineering design, the goal of a designer is to find an optimal design on a given requirement using the numerical simulator in loop with an optimization method. In this design optimization process, a good design optimization process is one that can reduce the time from inception to design. In this work, we take a class of design problem, that is computationally cheap to evaluate but has high dimensional design space. In such cases, traditional surrogate-based optimization does not offer any benefits. In this work, we propose an alternative way to use ML model to surrogate the design process that formulates the search problem as an inverse problem and can save time by finding the optimal design or at least a good initial seed design for optimization. By using this trained surrogate model with the traditional optimization method, we can get the best of both worlds. We call this as Surrogate Assisted Optimization (SAO)- a hybrid approach by mixing ML surrogate with the traditional optimization method. Empirical evaluations of propeller design problems show that a better efficient design can be found in fewer evaluations using SAO._
_Keywords:_ Random forest, Decision Tree, Lagrange multiplier, surrogate modeling, openprop, evolutionary algorithm, Inverse Modeling
## 1 Introduction
In the last decades, considerable effort has been made to rapidly optimize designs in different engineering problems [1][2][3]. The main bottleneck in rapid design optimization is either slow evaluation due to a complex numerical simulation process, a high-dimensional design space, or both. The high dimensionality of the design space can be due to the search ranges of the independent variables, a large number of such variables, or both. In the case of problems that involve complex numerical models and simulation processes, Surrogate-Based Optimization (SBO) [4] is the main approach, where a data-driven learning model is trained to replace the numerical simulation in the optimization loop [4][5]. The motivation for creating a surrogate is cheap approximate evaluation in comparison to direct numerical
simulation. There are other cases where, due to the availability of coarse approximate physics models, numerical simulations are cheap to evaluate and the only challenge arises from the high-dimensional design space. Traditional SBO does not offer much in these cases. In this work, we try to address this class of problems by exploring the possibility of using ML for them and its benefits during the design optimization process. For this purpose, we propose Surrogate-Assisted Optimization (SAO), where the surrogate is trained on previously collected labeled data from the design and requirement space. By capitalizing on the generalization capability of trained ML models, we want to speed up the design process for a range of requirements. In such cases, the trained surrogate acts as a memory of experience (similar to an expert human designer) and is used to find a good design directly or at least to provide a good seed design for further optimization. For this purpose, the surrogate uses both nonlinear interpolation and nonlinear mapping to provide a good baseline for further optimization. The challenge of creating a surrogate in this setting arises from the modeling expectation: the surrogate should produce a good design from the requirement directly. Due to the acausal relationship between the requirement on a design and the design parameters, this must be modeled as an inverse problem. By the causality principle, the forward problem in engineering systems has a unique solution. On the other hand, the inverse problem might have numerous solutions if various system parameters predict the same effect. Generally, the inverse modeling problem is formalized in a probabilistic framework, which is complex and not very accurate for high-dimensional input-output and design spaces. We instead approach this problem with geometric, data-summarizing algorithms that can model inverse problems and are useful in this setting. To differentiate this approach from surrogate-based optimization (SBO), we call it **Surrogate-Assisted Optimization (SAO)**. The main difference between SBO and SAO is that in SBO we use a surrogate inside the optimization loop, while in SAO the surrogate is external to the optimization loop and is only used to obtain a good initial baseline; further design optimization starts with this initial seed design provided by the surrogate. The other difference is that in SAO the surrogate attempts the inverse modeling problem instead of the forward modeling problem in SBO. In SAO, the role of the surrogate is to provide all possible good designs or seed designs. For surrogate modeling, our choice of models is the random forest and the decision tree. The random forest has empirically been shown to work best for inverse modeling problems [6]. We also chose to train one decision tree on the entire data set to create a memory map of the collected data. Empirically, we observed that adding one decision tree trained on the entire data set to a random forest of decision trees trained on various sub-samples of the dataset, and averaging their outputs, improves the predictive accuracy and controls over-fitting.
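To make the proposed workflow concrete, the following minimal sketch shows how a pre-trained inverse surrogate could seed a conventional optimizer. The names `inverse_surrogate`, `evaluate_design`, and `local_optimize` are illustrative placeholders for the trained model, the numerical simulator, and the traditional optimization method, respectively; they are not part of any specific library.

```python
import numpy as np

def surrogate_assisted_optimization(requirement, inverse_surrogate,
                                    evaluate_design, local_optimize,
                                    budget=100):
    """Sketch of SAO: the inverse surrogate proposes seed designs for a
    requirement, the best seed is then refined by a traditional optimizer."""
    # 1. Query the inverse surrogate: requirement -> candidate design(s).
    seed_designs = np.atleast_2d(inverse_surrogate(requirement))

    # 2. Evaluate each candidate with the (cheap) numerical simulator and
    #    keep the most efficient one as the starting point.
    efficiencies = [evaluate_design(d, requirement) for d in seed_designs]
    best_seed = seed_designs[int(np.argmax(efficiencies))]

    # 3. Spend the remaining evaluation budget refining the seed with a
    #    conventional optimizer (e.g. an evolutionary method).
    remaining_budget = budget - len(seed_designs)
    return local_optimize(best_seed, requirement, max_evals=remaining_budget)
```

Note that, in contrast to SBO, the surrogate above is queried only once, outside the optimization loop.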
For empirical evaluation, we take the use case of propeller design [2], where the combined design space after coarse discretization is of the order of \(10^{38}\). After collecting data and training the surrogate, we applied the SAO approach to multiple optimization problems sampled from the requirement space. In all cases, we found that SAO, by leveraging a good initial seed design from the surrogate, finds a better design on a given budget than the traditional method.
## 2 Background and Problem Formulation
### Background
**Propeller:** Propellers are mechanical devices that convert rotational energy to thrust by forcing the incoming fluid axially toward the outgoing direction. For a given operating condition, such as the advance ratio (\(J\)), the motor rpm, and the desired thrust, the performance of a propeller is characterized by its physical parameters, such as the number of blades (\(Z\)), the propeller diameter (\(D\)), the chord radial distribution (\(C/D\)), the pitch radial distribution (\(P/R\)), and the hub diameter (\(D_{hub}\)) [2, 7]. The goal of a propeller designer is to find the optimal geometric parameters that meet the thrust requirement with maximum power efficiency (\(\eta\)) (see Figure 1).
We use OpenProp [7] as our numerical simulation tool in this work. The output of the simulation indicates the quality of a design choice: a bad design choice may result in poor efficiency or an infeasible design. The biggest challenge in the design search process arises from the exponentially large space of geometric parameters.
**OpenProp:** OpenProp is a propeller design tool based on moderately loaded lifting line theory, with trailing vorticity aligned to the local flow velocity. Optimization in OpenProp involves solving for the Lagrange multiplier (\(\lambda_{1}\)) that yields the ideal circulation distribution along the blade span, given the inflow conditions and the blade \(2D\) section parameters. OpenProp applies Coney's formulation [8] to determine the produced torque \(Q\), thrust \(T\), and circulation distribution \(\Gamma\) for a given required thrust \(T_{s}\). For optimization purposes, an auxiliary function is defined as follows:
\[H=Q+\lambda_{1}(T-T_{s}) \tag{1}\]
Figure 1: Propeller design optimization process in OpenProp. Sample evaluation is done in the OpenProp simulator, and performance is measured by the efficiency of the propeller.
If \(T=T_{s}\), then a minimum of \(H\) coincides with a minimum of \(Q\). To find the minimum, the partial derivatives with respect to the unknowns are set to zero.
\[\frac{\partial H}{\partial\Gamma(i)}=0\;for\;i=1,2,...,M \tag{2}\]
\[\frac{\partial H}{\partial\lambda_{1}}=0 \tag{3}\]
By solving this system of \(M+1\) non-linear equations iteratively, i.e. by freezing the other variables and linearizing the equations in the unknowns \(\hat{\Gamma},\hat{\lambda}_{1}\), an optimal circulation distribution and a physically realistic design can be found. For more details on the numerical methods, refer to [7, 8].
#### 2.1.2 Random forest and Decision tree:
A random forest [9] is a non-parametric supervised machine learning method consisting of an ensemble of decision trees. Each decision tree is itself a machine-learning model that can be trained for regression or classification. The foundation of random forest learning is bagging [10, 11], in which the decision tree algorithm is applied multiple times to subsets of the data and the outputs are averaged. The goal is to train many uncorrelated trees by sub-sampling \(D\) data points with replacement from a data-set \(X\). This reduces over-fitting by averaging the predictions of models trained on different data sets sampled from the same distribution. A decision tree is built by recursive binary partitioning of the variable space until the partitioning is complete.
### Problem Formulation
Given a requirement imposed on a design in terms of operational and performance conditions, the goal of the designer is to find optimal geometric parameters of the propeller in minimum time. In OpenProp, the input design space can be split into two parts: (1) the **requirement space (\(\mathcal{R}\))**, which comprises the thrust, vehicle velocity, and rpm, and (2) the **geometric design space (\(\mathcal{G}\))**, which comprises the chord profile radial distribution (\(C/D\)), diameter (\(D\)), hub diameter (\(D_{hub}\)),
Figure 2: OpenProp Numerical Simulation
etc. The design space considered for this study is taken from [2]. Once samples are drawn from this space, the requirement and geometric design are fed to the iterative numerical simulation to find the efficiency (\(\eta\)) of the design. The goal of design optimization is formalized as:
\[\underset{g\in\mathcal{G}}{\operatorname{argmax}}\;\eta\;\;for\;a\;given\;r \sim\mathcal{R} \tag{4}\]
For a given requirement, this design optimization process involves sequentially selecting designs from the input geometric space (\(\mathcal{G}\)), evaluating them, and optimizing until the requirements are satisfied. An important additional objective is therefore to reduce the inception-to-design time (\(\mathcal{T}_{design}\)), i.e. the design optimization time. Collectively, this can be written as:
\[\underset{g\in\mathcal{G}}{\operatorname{argmax}}\;\eta\;\;for\;a\;given\;r \sim\mathcal{R} \tag{5}\]
\[min\;\mathcal{T}_{design} \tag{6}\]
## 3 Approach
### Formulating design search as an inverse problem
In forward modeling and prediction problems, we use a physical theory or simulation model to predict the outcome (\(\eta\)) of a parameter (\(g\)) defining the design behavior. Optimization in the forward problem involves sampling from the parameter space (\(\mathcal{G}\)) and searching for the best parameter (\(g^{*}\)) that meets the requirement on the performance metric (\(\eta\)). In the reciprocal situation, the inverse modeling and prediction problem, the values of the parameters representing a system are inferred from the desired output: the goal is to find the parameter values (\(g^{*}\)) that produce the output (\(\eta\)) directly.
In the propeller design use case, the objective of a designer is to determine the best geometric characteristics of the propeller in minimum time, based on a particular demand imposed on the design in terms of operational and performance conditions. The inverse setting in this case has some unique features:
1. One part of the input variables is known, i.e. the requirement; the other part (the geometry) is unknown.
2. The effect or desired output is not fixed: the goal is the maximum possible efficiency, which depends on the requirement (for example, it is not possible to produce a given thrust with a low-rpm motor at some specific speed).
To address these situations we formulate our inverse modeling problem as selecting and training a prediction model that can map a given requirement to the geometry and efficiency.
\[\mathcal{IM}:\mathcal{R}\mapsto\{\mathcal{G},\eta\}\]
Since it is not possible to find the maximum efficiency a priori, we filter out all low-efficiency data points (treating these as infeasible designs) and keep only the higher-efficiency designs. To model this inverse problem, we rely on geometric data summarizing techniques that learn the mapping between the input and output spaces as sketches and can regress between them. A sketch is a compressed mapping of a data set onto a data structure.
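To make this formulation concrete, the following sketch shows one way to assemble the inverse-modeling dataset from the collected design corpus. It is a minimal sketch only: the column names, the efficiency cutoff, and the use of pandas are illustrative assumptions, not details from the paper.

```python
import pandas as pd

# Hypothetical column layout of the collected design corpus.
REQ_COLS = ["thrust", "vel_ship", "rpm"]                      # requirement space R
GEO_COLS = ["diameter", "hub_diameter", "chord_profile_id"]   # geometric design space G
ETA_COL = "eta"                                               # efficiency

def build_inverse_dataset(corpus: pd.DataFrame, eta_min: float = 0.5):
    """Drop low-efficiency (infeasible) designs and orient the data as an
    inverse problem: inputs are requirements, targets are (geometry, efficiency)."""
    feasible = corpus[corpus[ETA_COL] >= eta_min]
    X = feasible[REQ_COLS].to_numpy()                 # model input: requirement r
    Y = feasible[GEO_COLS + [ETA_COL]].to_numpy()     # model output: (g, eta)
    return X, Y
```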
### Why are random forest and decision tree our choice for modeling this inverse problem?
In the geometric data summarizing technique, the aim is to abstract data from the metric space to a compressed representation in a data structure that is quick to update with new information and supports queries. Let \(D=\{d_{1},d_{2},...,d_{n}\}\) be a set of datapoints such that \(d_{i}\in R^{m}\). For representing data in sketches (\(S\)), the main requirement is that the relationship (\(\psi\)) between the data points in the metric space must be preserved in the data structure, i.e. \(\psi\{T(d_{k},d_{l})\}\approx\psi\{S(d_{k},d_{l})\}\).
One selected relationship (\(\psi\)) between datapoints in the metric space is the \(L_{p}\) distance. In this case, a distance-preserving embedding of this relationship is equivalent to the tree distance between two data points \(d_{k}\) and \(d_{l}\) in the data structure (\(S\)). The tree distance is defined as the weight of the least common ancestor of \(d_{k}\) and \(d_{l}\) [12]; according to the Johnson-Lindenstrauss lemma [13], the tree distance can be bounded from below by \(L_{1}(d_{k},d_{l})\) and from above by \(O(d*log|k|/L_{1}(d_{k},d_{l}))\). Accordingly, a point that is far from other points in the metric space remains at least as far in a randomized decision tree.
\[L_{1}(d_{k},d_{l})\leq tree\ distance\leq O(d*log|k|/L_{1}(d_{k},d_{l}))\]
A **random forest** is a collection of decision trees in which each tree depends on the values of a random vector, sampled independently and with the same distribution for all trees in the forest. As the number of trees increases, the generalization error converges to a limit. The strength of each individual tree and the correlation between trees determine the accuracy of the forest; when each node is split using a random selection of features, the error rates compare favorably to AdaBoost [9]. To create a tree \(h(x,\theta_{k})\) in the forest, \(\theta_{k}\) is an independent, identically distributed random vector, independent of the past random vectors \(\theta_{1},...,\theta_{k-1}\) but drawn from the same distribution. Due to ensembling and the randomness in the forest generation process, the variance of the \(tree\ distance\) also shrinks toward \(L_{1}(d_{k},d_{l})\). Accordingly, geometric summarization of data from the metric space into a random forest maintains the \(L_{1}\) norm between data points in expectation. The decision tree trained on the entire data set over-fits and is not suitable for generalization, but because of its space-partitioning nature it can map each observed requirement to multiple geometric designs and their efficiencies. By using both trained models in parallel, we can capitalize on the nonlinear mapping ability of the decision tree as well as the nonlinear regression/interpolation ability of the random forest.
### A hybrid optimization approach: Surrogate Assisted Optimization (SAO)
Figure 3 shows our approach to the propeller design optimization problem. It is a hybrid approach in which an ML model is fused with a traditional algorithm that keeps the numerical physics in the optimization loop.
During training time, we train the random forest and the decision tree. For both models, we use the requirement data (\(r\sim R\)) as input and, as output, the tuple formed by the corresponding geometric design values (\(g\sim G\)) and the resulting efficiency (\(\eta\)). The random forest is trained to learn the inverse regression and predict the design geometry along with the efficiency for a given requirement. The decision tree, on the other hand, performs an inverse mapping from the requirement space to the design geometry and efficiency observed during data generation. The goal of the random forest is to learn a continuous function \(f:\mathcal{R}\mapsto\mathcal{G},\eta\) so that we can regress for in-between points, whereas the decision tree is a memory map that only partitions the space of seen data. Since we do not know the efficiency achievable for a given requirement, we take all predictions and sort them by efficiency to get the best design found so far. The direct prediction of the random forest is an average over the geometric designs and efficiencies corresponding to the given requirement, which may or may not be a very good initial design; its role is generalization and regression on unseen data. The role of the decision tree is the nonlinear one-to-many inverse mapping: we select all the designs stored at the matching leaf of the decision tree and include them in our baseline designs; this is called the baseline prediction. Using both models, we obtain good-quality initial seed designs. In the next stage, we take these baseline designs as the initial population and start the genetic algorithm search for the final optimized design (refer to Fig. 3).
In a GA, candidate solutions (chromosomes) are represented by arrays of bits or character strings that encode the variables of the optimization function. These strings are then processed
Figure 3: Surrogate Assisted Design optimization for propeller design
by genetic operators, and the fittest candidates are selected. We run the GA in a loop with the OpenProp numerical simulator until the evaluation budget is exhausted.
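As an illustration of this two-stage procedure, the sketch below gathers seed designs from the two fitted surrogates and hands them to a GA as its initial population. The surrogate objects follow the scikit-learn interface; the `run_ga` driver and `openprop_eval` function are hypothetical stand-ins for the components described above.

```python
import numpy as np

def baseline_designs(req, forest, tree, X_all, Y_all):
    """Collect candidate seed designs for one requirement vector `req`.
    `forest` and `tree` are fitted regressors mapping R -> (G, eta);
    `X_all`, `Y_all` are the data the decision tree was fitted on."""
    seeds = [forest.predict(req.reshape(1, -1))[0]]   # averaged random-forest prediction
    leaf_id = tree.apply(req.reshape(1, -1))[0]       # leaf reached by this requirement
    in_leaf = tree.apply(X_all) == leaf_id            # every stored design in that leaf
    seeds.extend(Y_all[in_leaf])                      # one-to-many memory-map designs
    seeds.sort(key=lambda d: d[-1], reverse=True)     # rank by predicted or recorded efficiency
    return np.array(seeds)

# seeds = baseline_designs(req, forest, tree, X_all, Y_all)
# best = run_ga(initial_population=seeds[:, :-1],     # hypothetical GA driver
#               evaluate=openprop_eval, budget=200)   # OpenProp in the loop until the budget is spent
```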
### Data generation & Training
For data generation, we took the design space used by [2]. The geometric design space is of the order of \(10^{27}\) (diameter x nine alternative chord radial profiles), whereas the requirement space after coarse discretization is of the order of \(10^{11}\) (thrust x ship velocity x RPM), giving a combined search space of order \(10^{38}\). We take a single sample point from the physical design space and the requirement space and input it into the OpenProp optimizer. OpenProp internally optimizes this design using iterative numerical methods and computes the performance metric (\(\eta\)). We used the resulting 0.205 million valid design data points for training and testing. Using this design corpus, we trained both the random forest regression model [9] and the decision tree. The random forest is an ensemble of 100 decision trees with mean squared error as the node splitting criterion. For the decision tree model, we chose squared error as the node splitting criterion, and nodes are expanded until all leaves are pure. Other hyperparameters are kept at the default settings of scikit-learn [14].
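A minimal training sketch with the hyperparameters stated above (100 trees, squared-error splitting, otherwise scikit-learn defaults) could look as follows; `X` and `Y` are assumed to hold the requirement inputs and (geometry, efficiency) targets assembled earlier, and the 5% test split matches the evaluation below.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.05, random_state=0)

# Random forest: ensemble of 100 trees, mean-squared-error split criterion.
forest = RandomForestRegressor(n_estimators=100, criterion="squared_error")
forest.fit(X_train, Y_train)

# Single decision tree fitted on the full corpus as a memory map of the collected data;
# by default, nodes are expanded until all leaves are pure.
tree = DecisionTreeRegressor(criterion="squared_error")
tree.fit(X, Y)
```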
## 4 Experiment and Results
We report two sets of results:
1. the prediction accuracy of the random forest on held-out test data;
2. an empirical evaluation of SAO on example design optimization problems and its comparison with the baseline (a genetic algorithm).
For testing the prediction accuracy of our trained model, we randomly selected 5% of the data from the dataset. To assess the quality of prediction, we used the following common statistics as evaluation metrics:
1. the average **residual**, \(\Delta Z=(\eta_{truth}-\eta_{predicted})/\eta_{truth}\), per sample
2. the **accuracy**, the percentage of samples whose residual is within an acceptable error of 5%, i.e. \(|\Delta Z|<0.05\).
The accuracy measures the percentage of test data on which the predicted efficiency is within 5% error (efficiency being the primary target of the prediction). We found the prediction accuracy of the random forest on test data to be around 90%. The decision tree was fitted to the entire data set, since we only want a space partitioning of the collected data.
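Both metrics reduce to a few lines of code; a sketch, assuming `eta_true` and `eta_pred` are arrays of true and predicted efficiencies on the test set:

```python
import numpy as np

def residual_and_accuracy(eta_true, eta_pred, tol=0.05):
    """Per-sample relative residual and the percentage of samples within `tol` (5%) error."""
    residual = (eta_true - eta_pred) / eta_true
    accuracy = 100.0 * np.mean(np.abs(residual) < tol)
    return residual, accuracy

# residual, accuracy = residual_and_accuracy(Y_test[:, -1], forest.predict(X_test)[:, -1])
```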
For the empirical evaluation of SAO, we chose a genetic algorithm, frequently deployed in such situations, as our baseline optimization algorithm. Figure 4 shows the evaluation traces of the optimization process. It can be observed that, thanks to the trained surrogate, we obtain a better initial seed design, and the subsequent GA optimization yields better designs on the given budget than a GA started from a random seed design.
## 5 Related Works
ML can learn from raw data, and its wide application in design and operation has been shown in various works [15, 16, 17, 18, 19]. Design optimization is also shifting from traditional model-based optimization [20] toward cheap ML-based surrogates that can replace first-principles physics models [21] or solve inverse problems directly [6, 22, 23]. Lee et al. [24, 25] used a genetic algorithm for optimizing propeller designs. However, the adoption of AI and ML in real-world system design remains relatively slow; [2, 26] are among the few known works that apply AI/ML concepts to propeller design.
## 6 Conclusion and Future Work
We showed that, even in high-dimensional design optimization problems, SAO can speed up the design optimization process. Adding more data should allow further improvement. Future work in this direction will add more data to the ML models and determine the maximum performance that can be achieved. Based on our intuition, we expect that it is possible to find an optimal design in \(O(1)\) time if a sufficient amount of data is collected and the models are trained on it.
|
2309.10867 | Dynamical Tests of a Deep-Learning Weather Prediction Model | Global deep-learning weather prediction models have recently been shown to
produce forecasts that rival those from physics-based models run at operational
centers. It is unclear whether these models have encoded atmospheric dynamics,
or simply pattern matching that produces the smallest forecast error. Answering
this question is crucial to establishing the utility of these models as tools
for basic science. Here we subject one such model, Pangu-weather, to a set of
four classical dynamical experiments that do not resemble the model training
data. Localized perturbations to the model output and the initial conditions
are added to steady time-averaged conditions, to assess the propagation speed
and structural evolution of signals away from the local source. Perturbing the
model physics by adding a steady tropical heat source results in a classical
Matsuno--Gill response near the heating, and planetary waves that radiate into
the extratropics. A localized disturbance on the winter-averaged North Pacific
jet stream produces realistic extratropical cyclones and fronts, including the
spontaneous emergence of polar lows. Perturbing the 500hPa height field alone
yields adjustment from a state of rest to one of wind--pressure balance over ~6
hours. Localized subtropical low pressure systems produce Atlantic hurricanes,
provided the initial amplitude exceeds about 5 hPa, and setting the initial
humidity to zero eliminates hurricane development. We conclude that the model
encodes realistic physics in all experiments, and suggest it can be used as a
tool for rapidly testing ideas before using expensive physics-based models. | Gregory J. Hakim, Sanjit Masanam | 2023-09-19T18:26:41Z | http://arxiv.org/abs/2309.10867v1 | # Dynamical Tests of a Deep-Learning Weather Prediction Model
###### Abstract
The Pangu-weather deep-learning weather prediction model exhibits physically realistic dynamical behavior
Steady tropical heating produces a Matsuno-Gill response in the tropics, and planetary waves that radiate into the extratropics
Localized initial conditions produce realistic hurricanes, extratropical cyclones, and adjustment to geostrophic balance
###### Abstract
Global deep-learning weather prediction models have recently been shown to produce forecasts that rival those from physics-based models run at operational centers. It is unclear whether these models have encoded atmospheric dynamics, or simply pattern matching that produces the smallest forecast error. Answering this question is crucial to establishing the utility of these models as tools for basic science. Here we subject one such model, Pangu-weather, to a set of four classical dynamical experiments that do not resemble the model training data. Localized perturbations to the model output and the initial conditions are added to steady time-averaged conditions, to assess the propagation speed and structural evolution of signals away from the local source. Perturbing the model physics by adding a steady tropical heat source results in a classical Matsuno-Gill response near the heating, and planetary waves that radiate into the extratropics. A localized disturbance on the winter-averaged North Pacific jet stream produces realistic extratropical cyclones and fronts, including the spontaneous emergence of polar lows. Perturbing the 500hPa height field alone yields adjustment from a state of rest to one of wind-pressure balance over \(\sim\)6 hours. Localized subtropical low pressure systems produce Atlantic hurricanes, provided the initial amplitude exceeds about 5 hPa, and setting the initial humidity to zero eliminates hurricane development. We conclude that the model encodes realistic physics in all experiments, and suggest it can be used as a tool for rapidly testing ideas before using expensive physics-based models.
## Plain Language Summary
Deep-learning weather forecast models have recently been shown to be as skillful as the best physics-based models, but it is unclear if they have encoded the laws of physics or simply an effective pattern-matching algorithm. Here we test one such model, Pangu-weather, with four idealized experiments aimed at assessing the physical realism of the model solution: steady tropical heating, extratropical cyclone development, geostrophic adjustment, and hurricane development. A common aspect of these experiments is perturbations that are local in space, since signal propagation from a local source is limited by physics. The outcomes of these four experiments are well-known from observations and theory, so they provide a useful basis for such an evaluation. In all cases, Pangu-weather produces physically realistic solutions that are qualitatively, if not quantitatively, consistent with the known outcomes for these experiments. This suggests that the model has encoded physical constraints that are applicable outside the realm of the data the model was trained on. We suggest an exciting new approach to science, where many hypotheses are rapidly tested using a deep-learning model, and a smaller set of promising results tested using physics-based models.
## 1 Introduction
In the past few years, deep-learning (DL) weather prediction models have demonstrated forecast skill comparable to that of models from government operational centers (Weyn et al., 2019; Bi et al., 2023; Kurth et al., 2023; Lam et al., 2022). These models are trained on ERA5 analyses and have forecast skill on initial conditions independent of their training data. In contrast to DL approaches that explicitly enforce physical constraints (e.g., Beucler et al., 2021), it is unclear whether these models have encoded atmospheric physics, such as the dynamics of air motion and propagation of disturbances, or simply patterns that minimize the squared error of the next pattern in a sequence. Physical tests that examine the evolution of spatially localized disturbances are particularly effective in analyzing model physics, since the propagation of signals away from these disturbances is constrained by dynamics. For example, in the small-amplitude limit, the group velocity in linear wave theory sets the speed of energy dispersion away from a local disturbance.
Here we apply the localized-disturbance approach to the Pangu-weather model of Bi et al. (2023) using four canonical experiments, one involving perturbations to the model output, and the other three to perturbed initial conditions. The perturbations are applied to climatological time-mean steady states, which are smoother than any individual state that the model was trained on. These experiments are subjectively chosen, and solutions are not compared directly to identical experiments in a physics-based model, but provide an important plausibility study to motivate such additional experiments. Our hypothesis at the start of this research was that localized features will immediately produce a global response, because no constraint was imposed to prevent this during model training.
In addition to running orders of magnitude faster than physics-based models, these experiments with the Pangu-weather model are comparatively easy to configure. Performing any one of the experiments described here with a modern physics-based weather model is a significant undertaking, primarily due to complexities associated with model initialization. Consequently, if these models can be shown to produce physically realistic solutions, they offer an enormous opportunity for hypothesis testing much faster than is currently possible.
We proceed in section (2) with a description of the experiments and the data used to conduct them. Results for the four experiments described above are presented in section (3). Conclusions are drawn in section (4).
## 2 Method and experiment design
The Pangu-weather model uses a vision-transformer architecture trained on ERA5 reanalysis data from 1979-2017 (Bi et al., 2023); the trained model weights are publicly available. Model variables consist of global gridded fields of geopotential height, specific humidity of water vapor, temperature, and vector wind components on 13 isobaric levels (1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, and 50hPa), and surface fields (mean-sea-level pressure, 2m air temperature, and 10m vector wind components). Data reside on the native \(0.25^{\circ}\) degree latitude-longitude grid of ERA5. There are four models, which are trained separately: 1h, 3h, 6h, and 24h. Bi et al. (2023) indicate that solutions are most accurate when using the sequence of models with the largest possible time steps to reach a desired lead time (e.g., a 32 hour forecast uses the 24h model, followed by the 6h model and then two steps of the 1h model).
Our experiments involve adding perturbations to a steady climatological-mean atmosphere. We perform the simulations by solving
\[\mathbf{x}(t+1)=\mathbf{N}(\mathbf{x}(t))-\mathbf{d}\overline{\mathbf{x}}+ \mathbf{f}. \tag{1}\]
Here, \(\mathbf{x}\) represents the model state vector, \(\mathbf{N}\) the Pangu-weather model, and \(t\) time indexed according to the version of the model (i.e., \(t\)+1 means a one-day forecast when using the 24-hour version of the model, and a 3-hour forecast for the 3-hour version). \(\mathbf{f}\) is a modification to the model output, taken here to be zero for all experiments except steady tropical heating, when it is fixed at a specified value. \(\mathbf{d}\overline{\mathbf{x}}\) represents the one-step solution of the model that renders the climatological mean atmosphere steady state:
\[\mathbf{d}\overline{\mathbf{x}}=\mathbf{N}(\overline{\mathbf{x}})-\overline{ \mathbf{x}}. \tag{2}\]
We may then take \(\overline{\mathbf{x}}\) independent of time. The full state vector, which we send to the Pangu-weather model, is defined by \(\mathbf{x}=\overline{\mathbf{x}}+\mathbf{x}^{\prime}\) where \(\mathbf{x}^{\prime}\) are anomalies from the climatological mean state. For \(\mathbf{x}^{\prime}=\mathbf{f}=0\), (1) with (2) gives \(\overline{\mathbf{x}}(t+1)=\mathbf{N}(\overline{\mathbf{x}})-\mathbf{N}( \overline{\mathbf{x}})+\overline{\mathbf{x}}=\overline{\mathbf{x}}\); i.e., \(\overline{\mathbf{x}}\) is time independent.
Taking a leading-order Taylor approximation to \({\bf N}({\bf x}(t))\), and using (2) in (1), gives a conceptual model for the perturbations,
\[{\bf x}(t+1)^{\prime}\approx{\bf N}^{\prime}({\bf x}^{\prime}(t))+{\bf f}, \tag{3}\]
where \({\bf N}^{\prime}\) is the gradient of \({\bf N}\) with respect to \({\bf x}\), evaluated at \(\overline{\bf x}\). We emphasize that we actually solve (1) and compute \({\bf x}^{\prime}={\bf x}-\overline{\bf x}\) from the solution for all \(t\); in other words, the solutions are fully nonlinear.
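The time stepping in Eqs. (1)-(2) amounts to a short loop around the trained model. The sketch below is a minimal illustration only: `pangu_step` stands in for one call to a fixed-lead-time version of Pangu-weather, and the state is treated as a plain array, both of which are assumptions for clarity.

```python
import numpy as np

def run_perturbation_experiment(pangu_step, x_bar, x0_prime, f=None, n_steps=20):
    """Integrate x(t+1) = N(x(t)) - dx_bar + f, where dx_bar is chosen so that
    the climatological mean state x_bar is an exact steady state of the model."""
    dx_bar = pangu_step(x_bar) - x_bar              # Eq. (2): one-step drift of the mean state
    if f is None:
        f = np.zeros_like(x_bar)
    x = x_bar + x0_prime                            # full state = mean + initial perturbation
    anomalies = []
    for _ in range(n_steps):
        x = pangu_step(x) - dx_bar + f              # Eq. (1)
        anomalies.append(x - x_bar)                 # diagnosed anomaly x' at each step
    return anomalies
```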
Since we are interested in spatially localized perturbations, we define \({\bf f}\) and the initial perturbations \({\bf x}^{\prime}(t=0)\) using a function that decays to zero from a local maximum at a specified distance. For this purpose we use the function defined by Gaspari and Cohn (1999, hereafter, GC) and define the distance at which the disturbance reaches zero by \(L\). Specifics of the four experiments follow.
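Since all four experiments rely on this localization, a sketch of the GC weight is given here. It implements the standard fifth-order piecewise-rational correlation function of Gaspari and Cohn (1999), under the assumption that the zero-crossing distance \(L\) corresponds to twice the usual GC half-width \(c\).

```python
import numpy as np

def gaspari_cohn(dist, L):
    """Gaspari-Cohn (1999) compactly supported correlation function.
    Weight is 1 at dist = 0 and decays smoothly to 0 at dist = L (half-width c = L/2)."""
    c = L / 2.0
    x = np.abs(np.asarray(dist, dtype=float)) / c
    w = np.zeros_like(x)
    inner = x <= 1.0
    outer = (x > 1.0) & (x < 2.0)
    xi, xo = x[inner], x[outer]
    w[inner] = -0.25 * xi**5 + 0.5 * xi**4 + 0.625 * xi**3 - (5.0 / 3.0) * xi**2 + 1.0
    w[outer] = (xo**5 / 12.0 - 0.5 * xo**4 + 0.625 * xo**3
                + (5.0 / 3.0) * xo**2 - 5.0 * xo + 4.0 - 2.0 / (3.0 * xo))
    return w
```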
For the steady heating experiment, we set \({\bf f}\) to be a constant vector with zeros everywhere except for the temperature field within a horizontal region at all levels between 1000hPa and 200 hPa, where it is set to \(0.1\)K(day)\({}^{-1}\). The region is defined in longitude by the GC function with \(L=10,000\)km and centered at \(120^{\circ}\)E, and in latitude, \(\phi\), by \(cos(6\phi)\) within \(15^{\circ}\) of the Equator. The initial condition is given by \({\bf\overline{x}}\), which is set to the December-January-February (DJF) ERA5 time average.
For the extratropical cyclone experiment, we define the anomaly field \({\bf x}^{\prime}(t\,=\,0)\) by regressing all fields in the state vector against a standardized time series of DJF 500hPa geopotential height at the point \(40^{\circ}\)N, \(150^{\circ}\)E. The regressed field is then multiplied by the GC function with \(L=2000\)km to insure a spatially localized disturbance, and added to the DJF time-mean field. We use the same perturbation initial condition for the geostrophic adjustment experiment, except we set to zero all variables at all levels except the 500hPa geopotential height.
For the hurricane experiments we take the same approach as for the extratropical cyclone experiment, except in this case we use the July-August-September (JAS) mean state. The disturbance is defined by regressing all fields in the state vector against a standardized time series of JAS mean-sea-level pressure at the point \(15^{\circ}\)N, \(40^{\circ}\)W. The regressed field is then multiplied by the GC function with \(L=1000\)km and added to the JAS time-mean field. For the results in this case we perform simulations by scaling the perturbation field by a multiplicative constant to vary the strength of the initial low-pressure system.
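For reference, the regression step used to construct these initial perturbations can be written compactly. The sketch below assumes the reanalysis anomalies are available as an array with time as the leading axis and that the GC weights broadcast over the remaining axes; variable names are illustrative.

```python
import numpy as np

def regression_perturbation(anomalies, index_series, gc_weights, scale=1.0):
    """Regress each grid point of `anomalies` (time, ...) onto a standardized index
    time series, then localize the pattern with Gaspari-Cohn weights.
    Assumes the anomalies already have (near-)zero time mean."""
    z = (index_series - index_series.mean()) / index_series.std()
    slope = np.tensordot(z, anomalies, axes=(0, 0)) / len(z)   # regression coefficient per grid point
    return scale * gc_weights * slope                          # localized perturbation x'(t=0)
```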
## 3 Results
### Steady tropical heating
The Pangu-weather response to weak DJF tropical heating (0.1 K/day), shows a small 500 hPa height increase over the heating region after 5 days, with a negative anomaly to the north (Fig. 1A). The extratropical wave train extends downstream and increases in amplitude during days 5-20, with maximum anomalies over 100m at day 20 (Fig. 1B,C). A wave-train appears in both hemispheres, with larger amplitude in the northern (wintertime) hemisphere, which has the stronger waveguide for stationary waves. This response is qualitatively similar to classical results (e.g., Hoskins & Karoly, 1981; Sardeshmukh & Hoskins, 1988), with differences in details dependent on the location, shape, and temporal structure of the heating, seasonality, and other factors.
A closer examination of the response in the lower troposphere near the heating reveals a pattern similar to the classical Matsuno-Gill (Matsuno, 1966; Gill, 1980) response to steady tropical heating (Fig. 2). Along the equator, wind anomalies are convergent toward the western end of the heating region. This signature is associated with a Kelvin-wave response to the heating. Off the equator, the western end of the heating is flanked
by cyclonic gyres in both hemispheres, which are associated with a mixed Rossby-gravity-wave response. Unlike idealized experiments, typically using the shallow-water equations, these solutions are influenced by surface boundary conditions, so that there are flow distortions over the Maritime Continent in particular, and myriad multiscale moist processes involving clouds and convection.
This experiment suggests that the Pangu-weather model responds in a manner qualitatively, if not quantitatively, consistent with idealized experiments for tropical heating. Anomalies emerge smoothly and locally from the heat source, and increase in amplitude with time as a nearly stationary wave response. Idealizing the problem further, to the zonal-mean DJF basic state, produces a similar response, with a wave train extending across the North Pacific to North America (Fig. S1), but with differences in phase and amplitude related to the basic state on which the waves propagate. The Southern Hemisphere response is also notably weaker for the zonal-mean state.
### Extratropical cyclone development
The next experiment considers the time evolution of a localized 500hPa trough at the western end of the North Pacific storm track (Fig. 3A), which is the canonical initial condition preceding surface cyclogenesis (e.g., Gyakum & Danielson, 2000; Hakim, 2003; Yoshida & Asuma, 2004). After two days, the trough has progressed to the central Pacific, and begun to disperse, with the appearance of anticyclonic circulations both upstream and downstream (Fig. 3B). A surface cyclone develops to the east of the upper trough, with a smaller-scale secondary cyclone appearing upstream (Fig. 4B). By day 4, the upper trough has amplified and spread into a wave packet, with the leading edge along western North America (Fig. 3D), and a surface cyclone nearly coincident with the upper trough (Fig. 4D). Vertical alignment of extratropical cyclones is the hallmark of a developing cyclone that has reached the occluded phase of the life cycle. In contrast, the upstream surface cyclone remains downstream of the 500 hPa trough, and continues to deepen past day 4. A second upstream cyclone appears at day 4 west of the Date-line. These cyclones are accompanied by temperature anomalies having the largest horizontal gradients near the surface cold front (Fig. S2).
All aspects of this idealized baroclinic development are consistent with observations and modeling (e.g., Jablonowski & Williamson, 2006) of the North Pacific storm track. In particular, disturbances at the upstream end of the storm track produce a baroclinic wave packet (Simmons & Hoskins, 1979), which disperses and moves downstream at the group velocity (faster than the phase of individual troughs). As we find here, these solutions also show both upstream surface development and downstream upper-level development (Simmons & Hoskins, 1979; Chang, 1993; Hakim, 2003). Moreover, the upstream surface development we observe here has relatively smaller spatial scale, resembling a "polar low," which is frequently observed in winter over the North Pacific (e.g., Mullen, 1983; Rasmussen, 2003). Curiously, these polar lows appear first at the surface, and have a warm core, suggestive of the importance of surface fluxes due to cold air moving over relatively warmer water (Emanuel & Rotunno, 1989).
Idealizing the problem further, to the zonal-mean DJF atmosphere produces a similar response, with a wave packet that spreads downstream toward Europe by day 10 (Fig. S3). Furthermore, repeating the experiment, but for summer conditions (JAS time mean) shows much weaker cyclone development, and an absence of polar lows (not shown). We conclude that Pangu-weather appears to have implicitly encoded the seasonally varying physical processes of oceanic extratropical cyclone development in the neural-network weights that govern the dynamical evolution of its prognostic variables.
### Geostrophic adjustment
Here we test an initial perturbation similar to the extratropical cyclone case, except that it is localized completely to the 500hPa field; it does not extend in the vertical, and every other field has zero anomaly. This type of initial condition is unbalanced since there are no wind or temperature anomalies, whereas outside the deep tropics one commonly finds the wind blowing along the height contours (as evident in Fig. 3A). This is a particularly hard test, and one that likely cannot be performed without additional modification using a physics-based model, since unbalanced initial conditions produce rapid oscillations that are difficult to resolve. Here we use the 1h, 3h, and 6h versions of the Pangu-weather models.
At 1h, the wind accelerates from rest in the initial conditions to about 5 ms\({}^{-1}\), and is convergent on the area of low geopotential height (Fig. 5A). The center of convergence is to the west of the lowest height, which increases to \(-\)89m from \(-\)100m in the initial condition. At 3h, the wind accelerates to a maximum of about 10 ms\({}^{-1}\), and remains convergent on the area of low height, for which the minimum has increased to \(-\)74m (Fig. 5B). The wind direction has turned clockwise at all locations compared to the 1h solution, as one expects from Coriolis turning of the accelerating wind in the direction of the pressure gradient force. At 6h, the wind direction has continued to rotate clockwise such that it is nearly parallel to the geopotential height contours everywhere, reflecting a closer balance between the wind and geopotential height fields (Fig. 5C). The height minimum has increased to \(-\)58m, reflecting a conversion of available potential energy to kinetic energy.
A quarter turn of a Foucault pendulum at 40\({}^{\circ}\) N takes \(\sim\)9 hours, so the adjustment in the wind field indicated by the Pangu-weather solution is consistent with physical expectations. Once again, we conclude that the solution for this idealized initial-value problem is qualitatively, if not quantitatively, consistent with the expected dynamics.
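For reference, the quoted timescale follows directly from the rotation rate of the pendulum's plane of oscillation, \(\Omega\sin\phi\), using a sidereal day of 23.93 h:

\[T_{1/4}=\frac{1}{4}\,\frac{2\pi}{\Omega\sin\phi}=\frac{23.93\ \mathrm{h}}{4\sin 40^{\circ}}\approx 9.3\ \mathrm{h}.\]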
Repeating the experiment, except for an initial disturbance on the equator, produces a notably different response (Fig. S4). The velocity field is again convergent on the area of low geopotential height, except in this case convergence is directed on the center of the low. The difference may be due to the basic-state jet stream in the previous case, with fast westerly winds and a strong meridional potential vorticity gradient that promotes westward Rossby-wave propagation. Another notable aspect of the equatorial case is slower Coriolis turning of the wind, and the fact that the model has learned about the asymmetry in this turning about the equator. An analysis of the time difference of the anomalous zonal wind on the equator reveals signals that propagate in both directions at around 20ms\({}^{-1}\) (Fig. S5), typical of tropical gravity waves. A weaker signal is also evident at the speed of sound (dashed black lines). Finally, we note that this figure shows the incompatibility between the different versions of the Pangu-weather model, with abrupt differences in the time tendency at intervals of 3, 6, and 24 hours. These single-step "shocks" do not appear to adversely affect the solution at subsequent times, but will affect temporal diagnostic calculations that span several time steps of the model.
### Atlantic hurricane development
The last example concerns the evolution of a localized disturbance in the subtropics for the July-September (JAS) averaged conditions. Seeds of Atlantic hurricanes take the form of weak low pressure systems, which may develop into mature storms given the right environmental conditions. Finite-amplitude disturbances are thought to be needed to reduce the time to development while the storm is in a favorable environment (e.g., McBride & Zehr, 1981; Nolan et al., 2007). Here we perform experiments for a localized area of low pressure at a reference location (15\({}^{\circ}\)N, 40\({}^{\circ}\)W), and vary the initial amplitude. The three-dimensional perturbation is constructed similarly to the initial condition for
the extratropical cyclone case, by regressing all variables and locations onto the mean-sea-level pressure at the reference location.
Results show that the low pressure systems take a familiar track toward the northwest around the climatological subtropical area of high pressure (Fig. 6). Stronger initial conditions take a progressively northward track, which is consistent with the known physical basis due to increasing amplitude of azimuthal wavenumber-one asymmetries ("\(\beta\) gyres"). Although Pangu-weather may at best poorly resolve these features, the weights in the neural network have identified this physical relationship between the strength of tropical cyclones and a northward track.
For initial disturbances with anomalous mean-sea-level pressure weaker than about 5 hPa, the storms do not intensify, whereas initial disturbances stronger than this rapidly intensify (Fig. 7). An additional experiment for the 10x disturbance was performed by setting the water vapor specific humidity to zero; unlike the original case, which rapidly develops, the dry system rapidly decays. Pangu-weather does not explicitly model condensational heating, but the model weights have captured the conditional association between water vapor content and the development of tropical cyclones.
## 4 Conclusions
We have tested the Pangu-weather deep-learning weather prediction model on a set of four canonical experiments aimed at probing its dynamical response to local perturbations. These perturbations are helpful for determining whether disturbances evolve and propagate in a physically meaningful way. Our hypothesis at the outset of this work was that these localized features would immediately produce a global response because there is no constraint to prevent this during model training. The fact that every experiment produces signal propagation and structural evolution qualitatively in accord with previous research suggests that the model has encoded realistic physics. While we do not make a direct comparison to solutions from a physics-based model, the results here provide proof-of-concept motivating such experiments. We note that, due to differences in numerics and parameterizations for unresolved scales and processes, solutions from physics-based models for these experiments will differ in details, and it would be interesting to see if the Pangu-weather solutions fall within the uncertainty of the physics-based models.
Results from the canonical experiments show qualitative, if not quantitative, agreement with studies of similar phenomena in the literature. This agreement ranges from hourly timescales for the geostrophic adjustment process to approximately steady features beyond 10 days associated with stationary tropical heating. Highlights from these experiments include: a Matsuno-Gill response and extratropical planetary wave response to steady tropical heating; baroclinic wave-packet emergence and polar low development in the cold-air mass associated with a North Pacific extratropical cyclone; divergent flow yielding to rotational flow for an unbalanced initial condition; and the importance of initial-vortex amplitude and water vapor in the development and track of Atlantic hurricanes.
We conclude that the Pangu-weather model encodes realistic physics for the experiments considered here, motivating future basic research using this tool. Several attributes make this model particularly powerful for atmospheric dynamics and scientific hypothesis testing. First, the simulations are computationally inexpensive compared to traditional global weather models. This enables large ensembles, including iterations over varying parameters, initial conditions, and perturbations to model output. Second, experiments are extremely easy to configure, and the model is very forgiving in aspects that physics models are not. For example, initial imbalances in physics-based models can produce spurious oscillations at the model time step that are difficult to remove or filter without affecting the resolved scales of interest. Therefore, we speculate that models like Pangu
weather might be particularly useful for rapid evaluation of hypotheses, allowing tests over a wide range of ideas to quickly narrow the scope of investigation for experiments using expensive physics-based models. Among many possibilities, one particularly interesting path of research employs deep-learning models to examine multiscale phenomena involving convective clouds, such as the Madden-Julian Oscillation, where physics-based models and theory have not yet approximated the essential physical processes.
## Open Research Section
All code and data access will be released in an open Github repository upon acceptance of the paper for publication.
We thank Steve Penny for conversations related to deep-learning models in the geosciences, and Mike Pritchard for comments on an earlier draft of the manuscript.
|
2306.00229 | Minotaur: A SIMD-Oriented Synthesizing Superoptimizer | A superoptimizing compiler--one that performs a meaningful search of the
program space as part of the optimization process--can find optimization
opportunities that are missed by even the best existing optimizing compilers.
We created Minotaur: a superoptimizer for LLVM that uses program synthesis to
improve its code generation, focusing on integer and floating-point SIMD code.
On an Intel Cascade Lake processor, Minotaur achieves an average speedup of
7.3\% on the GNU Multiple Precision library (GMP)'s benchmark suite, with a
maximum speedup of 13\%. On SPEC CPU 2017, our superoptimizer produces an
average speedup of 1.5\%, with a maximum speedup of 4.5\% for 638.imagick.
Every optimization produced by Minotaur has been formally verified, and several
optimizations that it has discovered have been implemented in LLVM as a result
of our work. | Zhengyang Liu, Stefan Mada, John Regehr | 2023-05-31T22:57:37Z | http://arxiv.org/abs/2306.00229v3 | # Minotaur: A SIMD-Oriented Synthesizing Superoptimizer
###### Abstract.
Minotaur is a superoptimizer for LLVM's intermediate representation that focuses on integer SIMD instructions, both portable and specific to x86-64. We created it to attack problems in finding missing people optimizations for SIMD instructions--this is challenging because there are many such instructions and they can be semantically complex. Minotaur runs a hybrid synthesis algorithm where instructions are enumerated concretely, but literal constants are generated by the solver. We use Alive2 as a verification engine; to do this we modified it to support synthesis and also to support a large subset of Intel's vector instruction sets (SSE, AVX, AVX2, and AVX-512). Minotaur finds many profitable optimizations that are missing from LLVM. It achieves limited speedups on the integer parts of SPEC CPU2017, around 1.3%, and it speeds up the test suite for the libVUV library by 2.2%, on average, and by 1.64x maximum, when targeting an Intel Cascade Lake processor.
## 1. Introduction
Generating high-quality code for vector instruction set extensions remains challenging. For programs in high-level languages, it can be difficult to extract the necessary parallelism and to map source-level constructs onto semantically rich SIMD instructions. On the other hand, writing vector code in assembly is slow and expensive, and makes it difficult to support a wide variety of platforms. A popular middle ground--mostly writing high-level code, but employing SIMD intrinsic functions in hot loops--is workable but has its own issues such as an impedance mismatch where mid-level compiler optimizations lack semantics for intrinsics and cannot optimize around them effectively.
This paper presents Minotaur, a synthesis-based superoptimizer for the LLVM intermediate representation (Mindal, 2017), that focuses on supporting LLVM's portable vector operations as well as Intel-specific intrinsics. Our goal is to automatically discover useful optimizations that LLVM cannot currently perform. Since Minotaur's search-based approach requires significant compilation time, its primary intended audience is compiler developers, who can then implement the missing transformations. Even so, we have implemented a cache that allows synthesis results to persist across compilations; with a warm cache, the compile-time overhead when building SPEC CPU 2017 is 26%.1 Our current work is limited to integer operations; it is not technically difficult to support floating point operations, but the current state of the art in SMT solving is simply too slow to make this practical.
Footnote 1: Minotaur’s effect on compile time in the case of a cold cache is somewhat arbitrary, since the aggressiveness of its synthesis algorithm is highly tunable.
Minotaur works on code fragments that do not span multiple loop iterations; it is based on the assumption that existing compiler optimization passes such as loop unrolling, software pipelining, and automatic vectorization will create the necessary opportunities for its optimizations to work effectively. For example, consider this loop, in C, from the compression/decompression utility gzip, where name is the base address of a string and p is a pointer into the string:
do { if (*--p == '.') *p = '_'; } while (p != name);
When this loop is compiled by LLVM 15 for a target supporting AVX2 vector extensions, this code is found inside the loop:
X1 = shufflevector X0, <31, 30, 29,..., 0>
X2 = icmp eq X1, <46, 46, 46,..., 46>
X3 = shufflevector X2, <31, 30, 29,..., 0>
The first shufflevector reverses a 32-byte chunk of the string, the icmp instruction checks which elements of the chunk are equal to 46 (ASCII for the period character), and then the second shufflevector reverses the vector containing the results of the computation. This code cannot be optimized further by LLVM; when it is lowered to object code and executed on an Intel Cascade Lake processor, it requires 13 uOps, or "micro-operations," processor-internal RISC-like instructions that modern x86 implementations actually execute. This is according to LLVM-MCA, LLVM's machine code analyzer, which estimates execution costs using a microarchitectural model.
Minotaur, on the other hand, observes that the vector reversals are unnecessary, and that this code fragment performs the same job as the original instruction sequence, but more cheaply (3 uOps):
X1 = icmp eq X0, <46, 46, 46,..., 46>
Our work builds on the Alive2 translation validation tool for LLVM IR (Krishnan et al., 2017), to which we added support for 165 vector instructions, and LLVM-MCA (Krishnan et al., 2017), which we use to build a cost model. On top of these foundations, we implemented a synthesis engine that searches for improved code sequences, a cache that stores previously derived optimizations, and an LLVM plugin that runs as a middle-end optimization pass. We evaluate Minotaur on a variety of benchmarks, showing that it can achieve speedups of up to 2.3x.
## 2. Background
### Vectors in LLVM
LLVM uses a typed, SSA-based intermediate representation (IR). It supports a derived _vector type_; for example, a vector with eight lanes, where each element is a 64-bit integer, would have type <8 x 164>. Many LLVM instructions, such as arithmetic operations, logical operations, and pointer arithmetic, can operate on vectors as well as scalars. IR-level vectors are target-independent; backends attempt to lower vector operations to native SIMD instructions, if available.
Beyond the vertical ALU instructions that are element-wise vector versions of scalar instructions, LLVM supports a variety of horizontal vector reduction intrinsics and an assortment of memory intrinsics such as vector load and store, strided load and store, and scatter/gather. Additionally, there are three vector-specific data movement instructions: _extractelement_ retrieves the element at a specified index from a vector, _insertelement_ non-destructively creates a new vector where one element of an old vector has been replaced with a specified value, and _shufflevector_ returns a new vector that is a permutation of two input vectors using elements whose indices are specified by a constant mask vector. Finally, to provide direct access to platform-specific vector instructions, LLVM provides numerous intrinsic functions such as llvm.x86.avx512.mask.cvttps2dq.512, aka "convert with truncation packed single-precision floating-point values to packed signed doubleword integer values."
### Alive2
Alive2 (Alive2, 2017) is an open-source, solver-based tool that takes a pair of functions in LLVM IR and attempts to either prove that the second one refines the first, or else provides a counterexample showing that a refinement relation does not hold. When refinement holds in both directions, two functions are equivalent. However, an equivalence checker is unsuitable for translation validation of LLVM optimizations: non-trivial refinements--transformations that are legal in one direction, but that cannot be soundly reversed--are very common.
### LLVM-MCA
Predicting throughput of code running on modern microprocessors is not straightforward. To help developers improve performance-critical code, the LLVM Machine Code Analyzer (LLVM-MCA) (Levy, 2017) was created. It is an interactive tool that emits a graphical depiction of pipeline behavior, but its functionality can also be accessed programmatically, and this is what Minotaur does. A problem with LLVM-MCA (and all similar tools that we are aware of) is that it is imperfect: in some cases it either over- or under-estimates the cost of certain code sequences (Levy, 2017). This is a limitation that we simply live with.
## 3. Synthesizing Optimizations using Minotaur
Minotaur is invoked by loading it into LLVM's optimization pipeline as a plugin. For every integer typed, or vector-of-integer typed, SSA value in the code being compiled, Minotaur performs a backwards slice following dataflow edges, control flow edges, and memory dependence edges to extract a loop-free program fragment. This fragment is used as the specification for a synthesis problem, where the objective is to synthesize a new program fragment that refines the old one and is cheaper, using LLVM-MCA as a cost model. When a cheaper fragment is found, Minotaur rewrites the code just like a non-superoptimizing compiler pass would, and it also caches the rewrite to avoid repeated synthesis calls. Figure 1 provides a high-level view of how Minotaur works; the rest of this section explains these steps in detail.
### Representing and Caching Rewrites
Minotaur stores each potential optimization as a tuple: \((F,V,R)\) where \(F\) is a function in the LLVM intermediate representation (IR), \(V\) is an SSA value from that function, and \(R\) is a rewrite--an expression in Minotaur's own intermediate representation that describes a different way to compute \(V\) in \(F\). Rewrites are directed acyclic graphs containing nodes that represent operations, and edges representing data flow. Although the elements found in Minotaur IR are similar to those found in LLVM IR, we could not reuse LLVM IR to represent rewrites since LLVM IR does not support incomplete code fragments, and also rewrites must contain enough information to support connecting the new code in the rewrite to code in the unoptimized function.
To support caching, rewrites must be serializable. The \(F\) and \(V\) elements of rewrite tuples can be serialized using existing LLVM functionality, and we created a simple S-expression syntax for serializing the \(R\) part. Rewrites are cached in a Redis instance--this implementation choice allows the cache to be persistent across multiple Minotaur runs and also makes the cache network-accessible.
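As a rough illustration of this design (not Minotaur's actual implementation, which runs inside an LLVM pass), a network-accessible rewrite cache keyed on the serialized function and target value, and storing the S-expression rewrite, could look like this; all names are hypothetical.

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379)

def _key(function_ir: str, value_name: str) -> str:
    digest = hashlib.sha256(function_ir.encode()).hexdigest()
    return f"minotaur:{digest}:{value_name}"

def cache_rewrite(function_ir: str, value_name: str, rewrite_sexpr: str) -> None:
    # Store the serialized rewrite so later compilations can skip synthesis.
    r.set(_key(function_ir, value_name), rewrite_sexpr)

def lookup_rewrite(function_ir: str, value_name: str):
    cached = r.get(_key(function_ir, value_name))
    return cached.decode() if cached is not None else None
```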
Figure 2 shows the syntax of the IR. For example, if a 32-bit value is replaced by left shift by one bit position, the textual format for the expression is (shl (val i32 X0), (const i32 1), i32).
### Correctness for a Peephole Slicer
Peephole optimizers work because an optimization that is correct for a fragment of a program is also correct in the context of the entire program. Let us look at this in a bit more detail. The top-level correctness criterion for an optimizer is that the optimized code must refine the unoptimized code. The Alive2 paper (Alive2, 2017) discusses refinement for LLVM functions in detail. A peephole optimizer works by finding a program fragment that can be rewritten. For example, consider a peephole that rewrites \(y=x\times 8\) as \(y=x\ll 3\), where \(x\) and \(y\) are bitvectors. This rewrite is a refinement: \(x\times 8\Rightarrow x\ll 3\). The value of \(y\) is going to be consumed by subsequent instructions, let us call the function computed by those instructions \(f\). Since refinement is compositional, \(f(x\times 8)\Rightarrow f(x\ll 3)\). Using this kind of argument, we can establish whole-function (and whole-program) refinement. To be correct, Minotaur's slice extraction algorithm needs to faithfully extract an overapproximation of how a given SSA value is computed. As long as it does this, the compositionality of refinement guarantees that its rewrites will be correct.
### Extracting Slices
The state of the art in program synthesis is, at present, not even close to being able to synthesize, from scratch, an optimized version of an arbitrary LLVM function found in the wild. Instead, Minotaur uses a divide-and-conquer approach. However, it is fundamentally more aggressive than Bansal and Aiken's approach (Bansal and Aiken, 2017), which extracts a small window of sequential instructions, and it is also more aggressive than Souper (Bansal and Aiken, 2017), which refused to consider memory operations and took a very limited view of control flow.
Algorithm 1 shows Minotaur's value extraction algorithm; it takes an SSA value \(V\) in a source function and produces a well-formed, loop-free LLVM function that returns an overapproximation of the specified SSA value. The value extraction algorithm can be split into two stages. In the first stage, Minotaur extracts the SSA values involved in the computation, and their uses. These values are extracted with a depth-first search; during the search, two sets, _Harvest_ and _Unknown_, are propagated; they are used in the second stage to construct the slice. To limit the size of the fragment, we impose a user-defined depth limit on the search. Minotaur uses LLVM's LoopInfo pass (Levy, 2017) to identify loops in the source function. If value \(V\) is in a loop, Minotaur will only extract values that are defined inside the loop. All unsupported operations, operations that are beyond the depth limit, and operations that are outside the loop are discarded and replaced with free inputs.
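The first stage can be pictured as the following depth-limited backwards walk over LLVM IR. This is a simplified sketch (the real extraction also follows control-flow and memory edges, as described next), and the set names only mirror the Harvest/Unknown sets mentioned above.

```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/IR/Instruction.h"
#include <utility>

using namespace llvm;

// Simplified sketch of the first extraction stage: a depth-limited backwards
// walk over operands that stays inside Root's loop. Operands that are cut off
// end up in Unknown and later become free inputs of the extracted function.
static void collectSlice(Instruction *Root, LoopInfo &LI, unsigned DepthLimit,
                         SmallPtrSetImpl<Instruction *> &Harvested,
                         SmallPtrSetImpl<Value *> &Unknown) {
  Loop *L = LI.getLoopFor(Root->getParent());
  SmallVector<std::pair<Instruction *, unsigned>, 16> Worklist;
  Worklist.push_back({Root, 0});
  while (!Worklist.empty()) {
    auto [I, Depth] = Worklist.pop_back_val();
    bool TooDeep = Depth >= DepthLimit;
    bool OutsideLoop = L && !L->contains(I->getParent());
    if (TooDeep || OutsideLoop) {
      Unknown.insert(I);               // will be replaced by a free input
      continue;
    }
    if (!Harvested.insert(I).second)
      continue;                        // already visited
    for (Value *Op : I->operands()) {
      if (auto *OpI = dyn_cast<Instruction>(Op))
        Worklist.push_back({OpI, Depth + 1});
      else
        Unknown.insert(Op);            // arguments, globals, and constants
    }
  }
}
```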
Minotaur extracts the condition of conditional branch instructions, since conditions carry control flow information that is useful during synthesis. Similarly, when it extracts a load from memory,
Minotaur consults LLVM's MemorySSA pass [17] to get a list of stores that potentially influence the loaded value. MemorySSA marks memory operations with one of the three memory access tags: MemoryDef, MemoryUse, and MemoryPhi. Each memory operation is associated with a version of memory state. A MemoryDef can be a store, a memory fence, or any operation that creates a new version of the memory state. A MemoryPhi combines multiple MemoryDefs when control flow edges merge. A MemoryUse is a memory instruction that does not modify memory; it only reads the memory state created by a MemoryDef or MemoryPhi, and a load instruction is always a MemoryUse. Because it must overapproximate, Minotaur is conservative when finding the load-affecting stores: it starts from the memory version of the load's MemoryUse and walks along MemorySSA's def-use chain, and when the associated memory operation is a MemoryDef, it checks if the operation is a store and pushes the stored value into the worklist. Minotaur gives up when the associated memory version is tagged with MemoryPhi, or when the version is tagged with MemoryDef but the operation is not a store instruction.
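A sketch of that conservative walk, written against LLVM's MemorySSA interface, is shown below; it is simplified (for instance, the real pass also feeds each stored value back into its slicing worklist) and should be read as an approximation of the behavior just described.

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/MemorySSA.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Simplified sketch of the conservative walk: collect stores that may feed a
// load by following MemorySSA's def-use chain, and give up on anything else.
static void collectAffectingStores(LoadInst *Load, MemorySSA &MSSA,
                                   SmallVectorImpl<StoreInst *> &Stores) {
  MemoryUseOrDef *Use = MSSA.getMemoryAccess(Load);
  if (!Use)
    return;
  MemoryAccess *MA = Use->getDefiningAccess();
  while (MA && !MSSA.isLiveOnEntryDef(MA)) {
    if (isa<MemoryPhi>(MA))
      return;                                  // merge of memory states: give up
    auto *Def = cast<MemoryDef>(MA);
    auto *SI = dyn_cast<StoreInst>(Def->getMemoryInst());
    if (!SI)
      return;                                  // a fence, call, etc.: give up
    Stores.push_back(SI);
    MA = Def->getDefiningAccess();             // walk to the older memory version
  }
}
```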
In the second stage, Minotaur builds the extracted function: it first clones the original function and then deletes all instructions that are not in the extracted fragment. Minotaur then deletes all the loop backedges so that the extracted function is loop-free. Finally, a return instruction is added to return the SSA value that we are interested in.
The size of the extracted slice is important, and is controlled by the slicer's depth parameter. If too few instructions are sliced, the extracted code will be a broad overapproximation, making it difficult to find profitable optimizations. On the other hand, if too many instructions are sliced, the extracted code may overwhelm the solver, causing it to time out during the synthesis phase. For all
Figure 1: Overview of how Minotaur works, and how it fits into the LLVM optimization pipeline
Figure 2: Syntax for Minotaur rewrites
experiments reported in this paper, we have used five as the slicing depth.
### Supporting Vector Intrinsics in Alive2
The version of Alive2 that we started with supports most of the core LLVM intermediate representation, including its target-independent vector operations. However, Alive2 did not have a semantics for any of the numerous LLVM-level intrinsic functions that provide predictable, low-level access to target-specific vector instructions.
We added semantics for 165 x86-64 vector intrinsics to Alive2; these come from the SSE, AVX, AVX2, and AVX512 ISA extensions. The resulting version of Alive2 supports all x86 vector intrinsics except those that use floating point, those that are specifically intended to support cryptographic applications, and some memory operations that we did not see appearing in programs in practice. There is significant overlapping functionality between vector instructions; for example, there are eight different variants of the pavg instruction that compute a vertical (element-wise) average of two input vectors. To exploit this overlap, our implementation is parameterized by vector width, vector element size, and by the presence of a masking feature that, when present, uses a bitvector to suppress the output of vector results in some lanes. Algorithms 2 and 3 show, for example, our implementation of the pavg (packed average) and pmadd.wd (packed multiply and add) family of instructions. This parameterized implementation enabled a high level of code reuse, and our implementation of these semantics is only 660 lines of C++. Note in particular that the semantics here differ from the semantics of the corresponding processor instruction because at the LLVM level, we must account for poison values--a form of deferred undefined behavior. Our strategy for dealing with poison follows the one used by existing LLVM vector instructions: poison propagates lane-wise, but does not contaminate non-dependent vector elements. LLVM has a second kind of deferred undefined behavior: the _undef_ value, which we have not supported in our instruction semantics. We believe this is the correct decision since undef is in the process of being deprecated by the LLVM community.
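As a concrete illustration of the width/lane parameterization, the following is a reference model of the pavg family for concrete inputs only; it captures the rounding-up average, but none of the poison or masking machinery that the symbolic Alive2 encoding handles.

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Reference model of the pavg family for concrete inputs only: rounding-up
// averages, parameterized by lane type; poison and masking are not modeled.
template <typename Lane, typename Wide>
std::vector<Lane> packedAverage(const std::vector<Lane> &A,
                                const std::vector<Lane> &B) {
  std::vector<Lane> Out(A.size());
  for (std::size_t I = 0; I < A.size(); ++I)
    Out[I] = static_cast<Lane>(
        (static_cast<Wide>(A[I]) + static_cast<Wide>(B[I]) + 1) >> 1);
  return Out;
}

// packedAverage<uint8_t, uint16_t> models the byte-lane variants and
// packedAverage<uint16_t, uint32_t> the word-lane ones; widening before the
// add is what avoids wraparound when the sum overflows the lane type.
```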
_Validating the vector instruction semantics._ We found ourselves having difficulty gaining confidence that our semantics for vector intrinsics were correct, so we performed some randomized differential testing of them. Each iteration of our tester creates constant inputs to a single vector intrinsic and then:
1. Creates a small LLVM function passing the chosen inputs to the intrinsic.
2. Evaluates the effect of the function using LLVM's JIT compilation infrastructure (Levy and Kwiep, 2017). The effect is always to produce a concrete value, since the inputs are concrete.
3. Converts the LLVM function into Alive2 IR and then asks Alive2 whether this is refined by the output of the JITted code.
Any failure of refinement indicates a bug. We choose input values both systematically (for example, values close to a power of two) and also randomly, hopefully catching most edge-case errors.
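The following self-contained miniature shows the shape of one tester iteration; instead of JIT-compiling LLVM IR and querying Alive2, it differentially tests a rounding-correct model of a byte-wise average against a deliberately narrow-sum model, so the overflow edge case surfaces immediately.

```
#include <cstdint>
#include <cstdio>
#include <random>

// Miniature of the tester loop: compare a trusted model of a packed average
// against a faulty narrow-sum model on systematic edge cases and random bytes.
static uint8_t wideAvg(uint8_t A, uint8_t B) {
  return static_cast<uint8_t>((static_cast<uint16_t>(A) + B + 1) >> 1);
}
static uint8_t narrowAvg(uint8_t A, uint8_t B) {
  return static_cast<uint8_t>((static_cast<uint8_t>(A + B) + 1) >> 1); // sum overflows
}

int main() {
  auto Check = [](int A, int B) {
    uint8_t X = static_cast<uint8_t>(A), Y = static_cast<uint8_t>(B);
    if (wideAvg(X, Y) != narrowAvg(X, Y))
      std::printf("mismatch on pavg(%d, %d)\n", A, B);
  };
  for (int A : {0, 1, 127, 128, 255})          // values near powers of two
    for (int B : {0, 1, 127, 128, 255})
      Check(A, B);
  std::mt19937 Rng(42);                        // random probing
  std::uniform_int_distribution<int> Byte(0, 255);
  for (int I = 0; I < 1000; ++I)
    Check(Byte(Rng), Byte(Rng));
  return 0;
}
```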
When we fielded this tester, it rapidly found 11 cases where our semantics produced an incorrect result. For example, the semantics for pavg were incorrect when the sum overflowed. It also found three cases where Minotaur generated SMT queries that failed to typecheck. For example, we set the wrong lane size when parameterizing the semantics for psra.w and psra.d, causing the solver to reject our malformed queries. After we fixed these 14 bugs, extensive testing failed to find additional defects.
### Augmenting Alive2 with Synthesis
To synthesize an optimization, Minotaur enumerates _partially symbolic_ candidates: instructions are always represented concretely, but literal constants are symbolic and are synthesized during the refinement check. The overall synthesis problem is bounded by a limit on the number of new instructions that are being synthesized. Minotaur generates all rewrites that fit into this limit. Next, the rewrites are sorted using the cost function from the TargetTransformInfo pass (Levy and Kwiep, 2017), which provides an approximate cost model that captures the details of the target. Then, it uses Alive2 to filter out every rewrite that does not refine the specification. When a candidate rewrite contains at least one symbolic constant, Minotaur issues an exists-forall query instead of the simpler query that Alive2 would otherwise have used, effectively asking the question: "Does there exist at least one set of literal constants that makes this rewrite work for all values of the inputs?" If this query succeeds, the resulting model produced by the SMT solver contains the literal values for the symbolic constants, giving a complete, sound optimization.
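A toy version of this loop is sketched below: the specification is x * 8 over 8-bit values, the candidate shape is x << C with a symbolic constant C, and an exhaustive check over all inputs stands in for the exists-forall SMT query.

```
#include <cstdint>
#include <cstdio>
#include <optional>

// Toy version of constant synthesis: brute force over all 256 inputs plays
// the role of the exists-forall solver query.
static std::optional<unsigned> synthesizeShiftAmount() {
  for (unsigned C = 0; C < 8; ++C) {            // candidate literal constants
    bool WorksForAllInputs = true;
    for (unsigned X = 0; X < 256 && WorksForAllInputs; ++X)
      WorksForAllInputs =
          static_cast<uint8_t>(X * 8) == static_cast<uint8_t>(X << C);
    if (WorksForAllInputs)
      return C;                                 // a model for the constant
  }
  return std::nullopt;                          // no candidate matches the spec
}

int main() {
  if (auto C = synthesizeShiftAmount())
    std::printf("x * 8 can be rewritten as x << %u\n", *C);
  return 0;
}
```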
Unlike LLVM, Minotaur takes a low-level, untyped view of vector values. For example, it internally treats a 16-way vector of 8-bit values the same as an 8-way vector of 16-bit values: both of these are simply 128-bit quantities. At the LLVM level, these two types are not interchangeable: a "bitcast" instruction is required to convert between them. Bitcasts have no runtime representation on x86 processors and Minotaur does not expend synthesis power to create them: it inserts bitcasts as needed when converting its own IR into LLVM IR. A consequence of Minotaur's untyped view of vectors is that, during synthesis, it does not have a fixed idea of how to interpret a value. So, for example, when synthesizing addition for 256-bit vector values, it will try all of <32 x i8>, <16 x i16>, <8 x i32>, and <4 x i64> additions.
### Identifying Profitable Rewrites
Traditional compiler optimizations are sometimes based on an implicit cost model. For example, eliminating a redundant store to memory is something that a compiler can always perform, because it is (presumably) always profitable. Other traditional optimizations have an explicit cost model. For example, we might decide to perform inline substitution of every function body that contains seven or fewer instructions. This kind of cost model is used not because it is particularly accurate, but because it is cheap to evaluate at compile time--the execution time of a compiler is an ever-present concern in compiler developers' minds--and it usually works well enough.
Since a superoptimizer like Minotaur is inherently expensive, and is not expected to be run by software developers during their routine builds, our cost model does not need to be particularly fast. Moreover, in our experience, simple and obvious cost models (such as assigning a fixed cost to each instruction in LLVM IR) are difficult to make work well in the context of vector computations. For one thing, LLVM backends are highly sophisticated and often turn one IR instruction into multiple machine instructions, or many IR instructions into a single machine instruction. For another, estimating the execution cost of machine instructions on modern cores is not straightforward. Thus, Minotaur's main cost model is based on compiling the LLVM IR to object code and then analyzing its execution on a specific target machine using LLVM-MCA: its machine code analyzer component. In other words, Minotaur accepts a rewrite when it not only refines the specification, but also when it has a lower predicted execution cost than the specification and all of the other candidate rewrites.
Although LLVM-MCA can estimate the cycle cost of code fragments, we instead use the number of uOps ("micro-operations," a modern x86 processor's internal instruction set) as the estimated cost. We do this because we tried using both cycles and uOps, and uOps work better, perhaps because they represent the total pipeline resources used by the computation.
### Integration with LLVM
Minotaur is loaded into LLVM as a shared library where it runs as an optimization pass. We arranged for it to run at the end of LLVM's optimization pipeline. We run the InstCombine and Dead Code Elimination passes after Minotaur to clean up the code.
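For illustration, a stripped-down pass plugin that hooks the end of the new pass manager's pipeline might look as follows; the pass itself is a placeholder stub and does not stand in for Minotaur's actual entry point.

```
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Passes/PassPlugin.h"

using namespace llvm;

namespace {
// Placeholder pass: slicing, synthesis, and rewriting would happen in run().
struct MinotaurStubPass : PassInfoMixin<MinotaurStubPass> {
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &) {
    return PreservedAnalyses::all(); // the stub changes nothing
  }
};
} // namespace

// Register the pass at the end of the optimization pipeline when the shared
// library is loaded as a pass plugin.
extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo llvmGetPassPluginInfo() {
  return {LLVM_PLUGIN_API_VERSION, "minotaur-stub", "0.1",
          [](PassBuilder &PB) {
            PB.registerOptimizerLastEPCallback(
                [](ModulePassManager &MPM, OptimizationLevel) {
                  MPM.addPass(MinotaurStubPass());
                });
          }};
}
```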
## 4. Evaluation
This section evaluates Minotaur. We begin by showing some optimizations that it has found, and then we examine its efficacy in making code faster.
### Optimizations Discovered by Minotaur
The purpose of this section is to examine Minotaur's strengths by presenting some optimizations that it found while compiling benchmark programs. None of these optimizations can be performed by the version of LLVM that Minotaur is based on,2 at its -O3 optimization level. We present optimizations in an SSA format that is close to LLVM IR, but we have edited it slightly for compactness and legibility.
Footnote 2: The version of Minotaur used for all results in this paper is based on an LLVM snapshot from March 15, 2023. Thus, our version of LLVM is slightly newer than LLVM 16, which was tagged on March 12.
One might be inclined to ask, while reading this section, "Why is LLVM incapable of performing this transformation?" Alas, there is no single answer. In some cases, performing the transformation would require the optimizer to have a semantic model of a processor-specific intrinsic function, but mostly these models do not exist. In other cases, such as Example 5 below, generic reasoning about the code would be very difficult, and a specific pattern matcher might not be robust enough to be worth implementing. Finally, our observation is that vector support in LLVM is somewhat newer and less mature than support for other IR features, and the optimizers have simply not had enough time to accumulate the requisite optimizations.
**Example 1**.: This code is from perlbench in SPEC CPU 2017:
```
%0 = zext <16 x i8> %x to <16 x i16>
%1 = zext <16 x i8> %y to <16 x i16>
%2 = call @avx2.pavg.w(%0, %1)
%3 = trunc <16 x i16> %2 to <16 x i8>
ret <16 x i8> %3
  =>
%0 = call @sse2.pavg.b(%x, %y)
ret <16 x i8> %0
```
The unoptimized code zero-extends each 8-bit element of the two input vectors to 16 bits, calls the AVX2 variant of pavg to perform element-wise averaging of the extended vectors, and then truncates elements of the resulting vector back to eight bits. The optimized code simply calls an SSE2 version of the pavg instruction that operates on 8-bit elements, reducing the uOp cost of the operation from four to one.
**Example 2**.: This code is from libYUV, "... an open source project that includes YUV scaling and conversion functionality";3
```
%0 = call @avx2.pmadd.wd(%x, <0, 1, 0, 1, ...>)
%1 = call @avx2.pmadd.wd(%x, <1, 0, 1, 0, ...>)
%2 = sub nsw <8 x i32> %1, %0
ret <8 x i32> %2
  =>
%0 = call @avx2.pmadd.wd(%x, <1, -1, 1, -1, ...>)
ret <8 x i32> %0
```
The pmadd.wd (multiply and add packed integers) instruction multiplies signed 16-bit integers element-wise from two input vectors, and then computes its output by adding adjacent pairs of elements from the resulting vector. Thus, the input to this instruction is two 16-way vectors containing 16-bit elements, and its output is a single 8-way vector of 32-bit elements.
In this example, the second argument to each pmadd.wd instruction in the unoptimized code is a vector of alternating zeroes and ones, which has the effect of selecting odd-indexed elements into %0 and even-indexed elements into %1. Then, after the sub instruction, which simply performs element-wise subtraction of %0 and %1, the overall effect of this code is to compute the difference between adjacent pairs of elements of %x. Minotaur is able to perform this same computation using a single pmadd.wd instruction which negates odd-numbered elements of %x before performing the addition. The optimized code requires 5 uOps to execute whereas the original code requires 8.
**Example 3**.: This code is from libYUV:
```
%0 = shufflevector <32 x i8> %x, poison, <3, 7, 11, 15, 19, 23, 27, 31>
%1 = lshr <8 x i8> %0, <6, 6, 6, 6, 6, 6, 6, 6>
%2 = zext <8 x i8> %1 to <8 x i32>
ret <8 x i32> %2
  =>
%0 = bitcast <32 x i8> %x to <8 x i32>
%1 = call @avx2.psrli.d(<8 x i32> %0, i32 30)
ret <8 x i32> %1
```
The shufflevector instruction in the unoptimized code selects every fourth byte-sized element from the input %x. The resulting 8-way vector is right-shifted element-wise by six bit positions, and that result is zero-extended to an 8-way vector of 32-bit elements. Minotaur's optimized version (which executes in 4 uOps instead of 11) first reinterprets the input vector's data as 32-bit elements; this bitcast is relevant to LLVM's type system, but it is a no-op at the CPU level. Then, the psrli instruction shifts each 32-bit element to the right by 30 bit positions. This right-shift-by-30 achieves the same effect as the unoptimized code, where the shufflevector can be seen as a right-shift-by-24, followed by an explicit right-shift-by-6.
**Example 4**.: This code, from compiling perlbench from SPEC CPU 2017, illustrates Minotaur's ability to reason about control flow:
```
entry:
  br i1 %c, label %body, label %if.end
body:
  br label %if.end
if.end:
  %p1 = phi [ %a, %body ], [ %b, %entry ]
  %p2 = phi [ %b, %body ], [ %a, %entry ]
  %r = call @avx2.pavg.b(%p1, %p2)
  ret <32 x i8> %r
  =>
%r = call @avx2.pavg.b(%a, %b)
ret <32 x i8> %r
```
The intent of the code is to compute the element-wise average of input vectors %a and %b, with a Boolean value %c determining the order in which the input vectors are presented to the pavg instruction. However, the order of arguments to this instruction does not matter, and Minotaur's version executes in 4 uOps while the original code requires 10. Note that Minotaur was not explicitly taught that pavg is commutative; the necessary information was inferred naturally from the formal specification.
**Example 5**.: This is an optimization discovered by Minotaur when it was used to compile GMP, the GNU Multiple Precision Arithmetic Library, a widely-used library for arbitrary precision integer computation:4
Footnote 4: [https://gmplib.org/](https://gmplib.org/)
```
%0 = lshr i64 %x, 1
%1 = and i64 %0, 0x5555555555555555
...
```
and were idle except for a single core running our benchmarks. To reduce the performance variation caused by frequency scaling, we disabled turbo boost on the Intel machine and the core performance boost on the AMD machine. We invoked LLVM with the -march=native compilation flag to ask it to take maximum advantage of processor features; we left other compilation flags unchanged, except where noted. All benchmarks are compiled at the -O3 optimization level. We set the timeout for Z3 [9] queries to 20 seconds. Finally, for each SSA value that it tries to optimize, Minotaur gives up if no solution is found within 20 minutes.
_Benchmark selection._ We evaluate on SPEC CPU 2017 because it is a widely accepted standard benchmark. We only evaluate on the integer subset of the SPEC suite, and we omit 648.exchange as it is a Fortran benchmark. We additionally use GMP, the GNU Multiple Precision Library, and libYUV, which is used by Google Chrome/Chromium for manipulating images in the YUV format. We chose these libraries because they have been heavily tuned for performance, they rely on loops containing integer operations, and they come with performance benchmark suites that we could simply reuse.
Footnote 7: [https://www.spec.org/cpu2017/](https://www.spec.org/cpu2017/)
_Compile times._ Table 1 shows how long it takes Minotaur to process our benchmarks, along with the number of potentially optimizable values and the number of optimizations found. In most cases, Minotaur found more optimizations when targeting the AMD processor. We believe this is because LLVM is more mature targeting AVX2 than AVX512. As a result, Minotaur extracts more slices. Solving queries with 256-bit vectors is also less likely to cause Z3 to time out than solving queries with 512-bit vectors. Minotaur is quite slow when it runs with a cold cache because it performs a large number of solver queries.
_Optimizing GMP with Minotaur._ GMP provides a portable C-language implementation and then, for several platforms, a faster assembly language implementation. For this evaluation, we selected the C implementation, because Minotaur works on LLVM IR and cannot process assembly code at all. The benchmark suite that we used is GMPBench.8 Figure 4 summarizes the results. When Minotaur targets the Intel Cascade Lake processor, and when the resulting executables are run on that same microarchitecture, all the benchmarks sped up; across all of the benchmarks, the mean speedup was 7.3%. The analogous experiment using the AMD Zen 3 microarchitecture resulted in one benchmark slowing down, and the rest of the benchmarks speeding up, for an overall mean speedup of 6.5%.
Footnote 8: [https://gmplib.org/gmbench](https://gmplib.org/gmbench)
_Optimizing libYUV with Minotaur._ This library has an extensive test suite, part of which is explicitly intended for performance testing; we used this part as a benchmark. Each of them scales, rotates, or converts a 1280 by 728 pixel image 1,000 times. Figure 5 shows the results of this experiment. When Minotaur targets an Intel processor, 148 programs slowed down, 72 did not change performance, and \(2,312\) sped up, for an overall speedup of 2.2%. Targeting an AMD processor, 188 programs slowed down, 85 did not change performance, and \(2,259\) sped up, for an overall speedup
Figure 3. Speedups—estimated by LLVM-MCA—due to running Minotaur on a loop micro-benchmark suite
of 2.9%. Minotaur can make code slower because it looks at optimizations in isolation; it does not attempt to model interactions between optimizations.
libYUV is portable code, but it has already been heavily tuned for performance; most commits to its repository over the last several years have been performance-related. Our hypothesis is that this manual tuning has already eaten up most of the performance gains that we would have hoped to gain from Minotaur. For some time now, Google's released versions of Chrome have been compiled using LLVM; the Chrome engineers have had ample time to ensure that this compiler achieves decent code generation for performance-critical libraries.
_Optimizing SPEC CPU2017 with Minotaur._ Figure 6 shows the effect of optimizing the integer-heavy benchmarks from SPEC CPU2017 using Minotaur. When optimizing for, and running on, the Intel processor, we observed a mean speedup of 1.3%. When optimizing for, and running on, the AMD processor, we observed a mean speedup of 1.2%. It is notoriously difficult to speed up the SPEC CPU benchmarks because compiler engineers have already put considerable effort into achieving good code generation for them.
## 5. Related Work
A _superoptimizer_ is a program optimizer that meaningfully relies on search to generate better code, in contrast with traditional compilers that attempt a fixed (but perhaps very large) sequence of transformations. The eponymous superoptimizer (Krishnan et al., 2017) exhaustively generated machine instruction sequences, using various strategies to prune the search space, and using testing to weed out infeasible candidates. Also predating modern solver-based methods, Davidson and Fraser (Davidson and Fraser, 2017) constructed peephole optimizations from machine description files. In contrast, modern superoptimizers rely on solvers to perform automated reasoning about program semantics.
Souper (Souper, 2017) is a synthesizing superoptimizer that works on LLVM IR; it is the most directly connected previous work to Minotaur. Souper's slicing strategy is similar to Minotaur's in that it
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{5}{c|}{Intel Cascade Lake} & \multicolumn{5}{c|}{AMD Zen 3} \\ \cline{2-11} & \multicolumn{3}{c|}{Compilation Time (min)} & \multicolumn{2}{c|}{Optimizations Found} & \multicolumn{3}{c|}{Compilation Time (min)} & \multicolumn{2}{c|}{Optimizations Found} \\ \hline Benchmarks & cold cache & warm cache & clang & \# values & \# optimized & cold cache & warm cache & clang & \# values & \# optimized \\ \hline \hline SPEC CPU 2017 & 1,645 & 3 & 2 & 93,251 & 2,401 & 1,731 & 4 & 3 & 9,158 & 2,537 \\ \hline gmp-6.2.1 & 440 & less than 1 & less than 1 & 9,170 & 336 & 445 & less than 1 & less than 1 & 9,265 & 387 \\ \hline libYUV & 2.19\% & less than 1 & less than 1 & 6,840 & 334 & 2,193 & less than 1 & less than 1 & 6,809 & 357 \\ \hline \end{tabular}
\end{table}
Table 1. Compile-time statistics
Figure 4. GNU Multiple Precision Library (GMP) speedups
extracts a DAG of LLVM instructions that overapproximates how a given SSA value is computed. However, unlike Souper, Minotaur extracts memory operations and multiple basic blocks, so it is capable of (we believe) strictly more transformations than Souper is able to perform. Additionally, Souper's undefined behavior model does not capture all of the subtleties of undefined behavior in LLVM, whereas we reuse Alive2's model, which is probably the most widely used formalization of these semantics, and is generally recognized as being correct. Finally, Minotaur focuses on vector-related transformations, whereas Souper supports neither LLVM's portable vector instruction set nor its platform-specific intrinsics.
Minotaur is also strongly inspired by Bansal and Aiken's work [4]; their superoptimizer operated on x86 assembly code and was able to make interesting use of vector instructions. Starting from unoptimized assembly produced by GCC, it was able to produce code competitive with higher optimization levels. The overall structure of this superoptimizer, where program fragments are extracted, canonicalized, checked against a cache, and then optimized in the case of a cache miss, is very similar to Minotaur, but there are many differences in the details, particularly in Minotaur's slice extractor which allows its synthesis specification to approximate the original code's effect much more closely. Another assembly superoptimizer, STOKE [26, 27, 28], is not as closely related; it is based on randomly perturbing assembly-language functions. STOKE can potentially perform transformations that Minotaur cannot, but we believe that its results are more difficult to translate into standard peephole optimizations than are Minotaur's.
Several recent projects have focused not on optimizing individual programs but rather on generating program rewrite rules. OptGen [5] finds scalar peephole optimizations that meet a specified syntactic form. Even at small rewrite sizes, it was able to find numerous optimizations that were missing from the 2015 versions of GCC and LLVM. VeGen [6] generates SLP vectorization rules--an SLP vectorizer [14] merges a set of scalar operations into vector instructions. VeGen parses the Intel Intrinsics Guide [13] and uses this to build pattern matchers for x86 vector instructions. VeGen applies the pattern matchers to an input scalar program, and replaces scalar expressions with vector instructions when it finds a profitable match. VeGen uses syntactic pattern matching rather than solver-based equivalence/refinement checking. Diospyros [33] is another vector rewrite rule generator, it takes an equality saturation [30] approach and uses a translation validator to reject unsuitable candidates. As an equality saturation-based tool, Diospyros builds its search space with existing rewrite rules.
Program synthesis--generating implementations that conform to a given specification--is intimately related to superoptimization. Rake [1] performs instruction selection for vectorized Halide [24] expressions using a two stage synthesis algorithm. First, Rake synthesizes a data-movement-free sketch [29], and then in the second
Figure 5: LibYUV Library speedups
stage it concretizes data movement for the sketch via another synthesis query. Rake targets Hexagon DSP processors (Reed et al., 2017) which share some functionally similar SIMD instructions with x86. Cowan et al. (Cowan et al., 2017) synthesized quantized machine learning kernels. Their work introduces two sketches: a compute sketch, which computes a matrix multiplication, and a reduction sketch that collects the computation result to the correct registers. It relies on Rosette (Rossette, 2017) to generate an efficient NEON (Rossette, 2017) implementation that satisfies the specifications for those two sketches. Swizzle Inventor (Rossette, 2017) is another tool built on Rosette; it synthesizes data movement instructions for a GPU compute kernel, and it requires user-defined sketches describing the non-swizzle part of the program. MACVETH (Kumar et al., 2017) generates high-performance vector packings of regular strided-access loops, by searching for a SIMD expression that is equivalent to a gather specification. All of these works show good performance results, but they focus on relatively narrow tasks, whereas Minotaur attempts to improve SIMD programs in general.
Most previous superoptimizers and program synthesizers use simple cost models. For example, Souper (Souper, 2018) assigns each kind of instruction a weight and uses the weighted sum as the cost of a rewrite. This kind of cost model is not a very good predictor of performance on a modern out-of-order processor. Minotaur and MACVETH (Kumar et al., 2017) use the LLVM-MCA (Kumar et al., 2017) microarchitectural performance analyzer, which can still lead to mispredictions, but it is generally more accurate than simple approaches are.
## 6. Conclusion
We created Minotaur because we noticed that LLVM appeared to be missing relatively obvious optimizations in code containing both its portable vector instructions and also its platform-specific intrinsic functions that provide direct access to hardware-level primitives. Minotaur slices loop-free DAGs of instructions--including branches and memory operations--out of LLVM functions and then attempts to synthesize better implementations for them. When improved code is found, the optimization is performed and also the synthesis result is cached. On the libYUV test suite, Minotaur gives speedups up to 1.64x, with an average speedup of 2.2%. We expect to see impact not by convincing application developers to use Minotaur, but rather by convincing compiler developers to implement useful optimizations that we can discover.
Figure 6. SPEC CPU2017 integer benchmark performance and compilation time |
2306.17755 | An Improved Deterministic Algorithm for the Online Min-Sum Set Cover
Problem | We study the online variant of the Min-Sum Set Cover (MSSC) problem, a
generalization of the well-known list update problem. In the MSSC problem, an
algorithm has to maintain the time-varying permutation of the list of $n$
elements, and serve a sequence of requests $R_1, R_2, \dots, R_t, \dots$. Each
$R_t$ is a subset of elements of cardinality at most $r$. For a requested set
$R_t$, an online algorithm has to pay the cost equal to the position of the
first element from $R_t$ on its list. Then, it may arbitrarily permute its
list, paying the number of swapped adjacent element pairs.
We present the first constructive deterministic algorithm for this problem,
whose competitive ratio does not depend on $n$. Our algorithm is
$O(r^2)$-competitive, which beats both the existential upper bound of $O(r^4)$
by Bienkowski and Mucha [AAAI '23] and the previous constructive bound of
$O(r^{3/2} \cdot \sqrt{n})$ by Fotakis et al. [ICALP '20]. Furthermore, we show
that our algorithm attains an asymptotically optimal competitive ratio of
$O(r)$ when compared to the best fixed permutation of elements. | Mateusz Basiak, Marcin Bienkowski, Agnieszka Tatarczuk | 2023-06-30T16:07:12Z | http://arxiv.org/abs/2306.17755v1 | # An Improved Deterministic Algorithm for the Online Min-Sum Set Cover Problem
###### Abstract
We study the online variant of the Min-Sum Set Cover (Mssc) problem, a generalization of the well-known list update problem. In the Mssc problem, an algorithm has to maintain the time-varying permutation of the list of \(n\) elements, and serve a sequence of requests \(R_{1},R_{2},\dots,R_{t},\dots\). Each \(R_{t}\) is a subset of elements of cardinality at most \(r\). For a requested set \(R_{t}\), an online algorithm has to pay the cost equal to the position of the first element from \(R_{t}\) on its list. Then, it may arbitrarily permute its list, paying the number of swapped adjacent element pairs.
We present the first _constructive_ deterministic algorithm for this problem, whose competitive ratio does not depend on \(n\). Our algorithm is \(O(r^{2})\)-competitive, which beats both the _existential_ upper bound of \(O(r^{4})\) by Bienkowski and Mucha [AAAI '23] and the previous constructive bound of \(O(r^{3/2}\cdot\sqrt{n})\) by Fotakis et al. [ICALP '20]. Furthermore, we show that our algorithm attains an asymptotically optimal competitive ratio of \(O(r)\) when compared to the best fixed permutation of elements.
min-sum set cover, list update, derandomization, online algorithms, competitive analysis
### Model and notation
For any integer \(\ell\), let \([\ell]=\{1,\dots,\ell\}\). We use \(\mathcal{U}\) to denote the universe of \(n\) elements. By permutation \(\pi\) of \(\mathcal{U}\), we understand a mapping \(\mathcal{U}\to[n]\) (from items to their list positions). Thus, for an element \(z\in\mathcal{U}\), \(\pi(z)\) is its position on the list.
An input \(\mathcal{I}\) to the online Mssc problem consists of an initial permutation \(\pi_{0}\) of \(\mathcal{U}\) and a sequence of \(m\) sets \(R_{1},R_{2},\dots,R_{m}\). In step \(t\), an online algorithm Alg is presented a request \(R_{t}\) and is charged the _access cost_\(\min_{z\in R_{t}}\pi_{t-1}(z)\). Then, Alg chooses a new permutation \(\pi_{t}\) (possibly \(\pi_{t}=\pi_{t-1}\)) paying _reordering cost_\(d(\pi_{t-1},\pi_{t})\), equal to the minimum number of swaps of adjacent elements necessary to change permutation \(\pi_{t-1}\) into \(\pi_{t}\).1
Footnote 1: The value \(d(\pi_{t-1},\pi_{t})\) is also equal to the number of inversions between \(\pi_{t-1}\) and \(\pi_{t}\), i.e., number of unordered pairs \((x,y)\) such that \(\pi_{t-1}(x)<\pi_{t-1}(y)\) and \(\pi_{t}(x)>\pi_{t}(y)\).
We emphasize that the choice of \(\pi_{t}\) made by Alg has to be performed without the knowledge of future sets \(R_{t+1},R_{t+2},\dots\) and also without the knowledge of the sequence length \(m\). We use \(r\) to denote the maximum cardinality of requested sets \(R_{t}\).
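For concreteness, the reordering cost \(d(\pi_{t-1},\pi_{t})\) can be computed by counting inversions, as in the following small C++ sketch; it is written purely to illustrate the definition and makes no claims about efficiency.

```
#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative helper: d(pi, pi') is the number of unordered item pairs that
// the two permutations order differently; Pi[z] is the position of item z.
static long reorderingCost(const std::vector<int> &Pi,
                           const std::vector<int> &PiNew) {
  long Inversions = 0;
  for (std::size_t X = 0; X < Pi.size(); ++X)
    for (std::size_t Y = X + 1; Y < Pi.size(); ++Y)
      if ((Pi[X] < Pi[Y]) != (PiNew[X] < PiNew[Y]))
        ++Inversions;
  return Inversions;
}

int main() {
  // Items a, b, c, d start at positions 1..4; moving c to the front swaps it
  // past two predecessors, so the reordering cost is 2.
  std::vector<int> Pi = {1, 2, 3, 4};
  std::vector<int> PiNew = {2, 3, 1, 4};
  std::printf("d = %ld\n", reorderingCost(Pi, PiNew)); // prints d = 2
  return 0;
}
```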
### Benchmarks
In the following, for an input \(\mathcal{I}\) and an algorithm \(A\), we use \(A(\mathcal{I})\) to denote the total cost of \(A\) on \(\mathcal{I}\). To measure the effectiveness of online algorithms, we use the standard notion of competitive ratio, but we generalize it slightly, for more streamlined definitions of particular scenarios.
We say that an online algorithm Alg is \(c\)-competitive _against class \(\mathcal{C}\) of offline algorithms_ if there exists a constant \(\xi\), such that for any input \(\mathcal{I}\) and any offline algorithm \(\textsc{Off}\in\mathcal{C}\), it holds that \(\textsc{Alg}(\mathcal{I})\leq c\cdot\textsc{Off}(\mathcal{I})+\xi\). If \(\xi=0\), then Alg is called _strictly_ competitive. The competitive ratio of Alg against class \(\mathcal{C}\) is the infimum of values of \(c\), for which Alg is \(c\)-competitive against this class. For randomized algorithms, we replace the cost \(\textsc{Alg}(\mathcal{I})\) with its expected value \(\mathbf{E}[\textsc{Alg}(\mathcal{I})]\). We consider three scenarios.
Dynamic scenario.In the dynamic scenario, the considered class \(\mathcal{C}\) contains all possible offline algorithms, in particular those that adapt their permutation dynamically during runtime. This setting is equivalent to the traditional competitive ratio [8], where an online algorithm is compared to the optimal offline solution Opt. This scenario is the main focus of this paper.
Static scenario.Previous papers focused also on a simpler _static scenario_, where the considered class of algorithms Fixed contains all possible \(n!\) fixed strategies: an algorithm from class Fixed starts with its list ordered according to a fixed permutation and never changes it [12]. (In this scenario, the starting permutation of an online algorithm and an offline solution are different.) Note that such an offline algorithm incurs no reordering cost, and pays access costs only. It is worth mentioning that there exist inputs \(\mathcal{I}\), for which \(\min_{A\in\textsc{Fixed}}A(\mathcal{I})=\Omega(n)\cdot\textsc{Opt}(\mathcal{I})\)[12].
Learning scenario.The static scenario can be simplified further, by assuming that reordering incurs no cost on Alg. We call such a setting _learning scenario_. Clearly, the competitive ratios achievable in the learning scenario are not larger than those for the static scenario, which are in turn not larger than those in the dynamic scenario.
### Previous results
Below, we discuss known results for the Mssc problem in the three scenarios described above (dynamic, static, and learning). Furthermore, we make a distinction between ratios achievable for polynomial-time algorithms and algorithms whose runtime per request is not restricted. The lower and upper bounds described below are also summarized in Table 1.
Lower bounds.Feige et al. [11] studied the _offline_ variant of Mssc (where all \(R_{t}\)'s are given upfront and an algorithm has to compute a fixed permutation minimizing the access cost). They show that unless \(\mathsf{P}=\mathsf{NP}\), no offline algorithm can achieve an approximation ratio better than \(4\). This result implies a lower bound of \(4\) on the competitive ratio of any polynomial-time online algorithm (assuming \(\mathsf{P}\neq\mathsf{NP}\)) as such a solution can be used to solve the offline variant as well. We note that \(4\)-approximation algorithms for the offline variant are known as well [11, 4].
The online version of Mssc was first studied by Fotakis et al. [12]. They show that no deterministic algorithm can achieve a ratio better than \(\Omega(r)\) even in the learning scenario. This yields the same lower bound for the remaining scenarios as well.
Asymptotically tight upper bounds.For the static scenario, the randomized \((1+\varepsilon)\)-competitive solution (for any \(\varepsilon>0\)) follows by combining multiplicative weight updates [15, 2] with the techniques of Blum and Burch [7] designed for the metrical task systems. This approach has been successfully derandomized by Fotakis et al. [12], who gave a deterministic solution with an asymptotically optimal ratio of \(O(r)\). These algorithms clearly also work in the learning scenario. However, in both scenarios, they require exponential time as they keep track of access costs for all possible \(n!\) permutations.
Fotakis et al. [13] showed that in the learning scenario, one can maintain a sparse representation of all permutations and achieve asymptotically optimal results that work in polynomial time: a randomized \(O(1)\)-competitive algorithm and a deterministic \(O(r)\)-competitive one.
Non-tight upper bounds.Much of the effort in the previous papers was devoted to creating algorithms for the dynamic scenario with low competitive ratios. For \(r=1\), a simple Move-To-Front policy that moves the requested element to the first position is \(O(1)\)-competitive
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{2}{c|}{randomized} & \multicolumn{2}{c|}{deterministic} \\ \cline{3-6} \multicolumn{1}{c}{} & & LB & UB & LB & UB \\ \hline \multirow{2}{*}{learning} & unrestr. & 1 & \(1+\varepsilon\) & \(\Omega(r)\) [12] & \(O(r)\) \\ \cline{2-6} & poly-time & 4 [11] & 11.713 [13] & \(\Omega(r)\) & \(O(r)\) [13] \\ \hline \hline \multirow{2}{*}{static} & unrestr. & 1 & \(1+\varepsilon\) [7] & \(\Omega(r)\) & \(O(r)\) [12] \\ \cline{2-6} & poly-time & 4 & \begin{tabular}{c} \(O(r^{2})\) \\ **\(O(r)\) (Theorem 10)** \\ \end{tabular} & \(\Omega(r)\) & \begin{tabular}{c} \(\exp(O(\sqrt{\log n\cdot\log r}))\) [12] \\ **\(O(r)\) (Theorem 10)** \\ \end{tabular} \\ \hline \hline \multirow{2}{*}{dynamic} & unrestr. & 1 & \(O(r^{2})\) & \(\Omega(r)\) & \begin{tabular}{c} \(O(r^{4})\) [6] \\ **\(O(r^{2})\) (Theorem 13)** \\ \end{tabular} \\ \cline{2-6} & poly-time & 4 & \(O(r^{2})\) [6] & \(\Omega(r)\) &
\begin{tabular}{c} \(O(r^{3/2}\cdot\sqrt{n})\) [12] \\ **\(O(r^{2})\) (Theorem 13)** \\ \end{tabular} \\ \hline \end{tabular}
\end{table}
Table 1: Known lower and upper bounds for the online Mssc problem for three scenarios (dynamic, static and learning), for polynomial-time and computationally unrestricted algorithms. Unreferenced results are trivial consequences of other results. The ratios proved in this paper are in bold.
Perhaps surprisingly, however, the competitive ratios of many of its natural generalizations were shown to be not better than \(\Omega(n)\)[12].
Fotakis et al. [12] gave an online deterministic \(O(r^{3/2}\cdot\sqrt{n})\)-competitive algorithm Move-All-Equally (Mae) and showed that its analysis is almost tight. They provided a better bound for the performance of Mae in the static scenario: in such a setting its competitive ratio is \(\exp(O(\sqrt{\log n\cdot\log r}))\)[12].
For randomized solutions, this result was improved by Bienkowski and Mucha [6], who gave an \(O(r^{2})\)-competitive _randomized_ algorithm Lma (for the dynamic scenario). Their analysis holds also against so-called adaptive-online adversaries, and therefore, by the reduction of [5], it implies the _existence_ of a deterministic \(O(r^{4})\)-competitive algorithm. While using the techniques of Ben-David et al. [5], the construction of such an algorithm is possible, it was not done explicitly, and furthermore, a straightforward application of these techniques would lead to a huge running time.
### Our contribution
In Section 2, we present the first constructive deterministic algorithm whose competitive ratio in the dynamic scenario is a function of \(r\) only (and does not depend on \(n\)). Our algorithm, dubbed Deterministic-And-Lazy-Move-To-Front (Dlm), runs in polynomial time, and we analyze its performance both in static and dynamic scenarios.
In the static scenario, studied in Section 4, Dlm attains the optimal competitive ratio of \(O(r)\), improving over the \(\exp(O(\sqrt{\log n\cdot\log r}))\)-competitive solution by Fotakis et al. [12] and matching \(O(r)\) bound achieved by the exponential-time algorithm of [12].
In the dynamic scenario, studied in Section 5, we show that Dlm is \(O(r^{2})\)-competitive. As \(r\leq n\), this bound is always better than the existing non-constructive upper bound of \(O(r^{4})\)[6] and the polynomial-time upper bound of \(O(r^{3/2}\cdot\sqrt{n})\)[12]. Our analysis is asymptotically tight: in Appendix A, we show that the ratio of \(O(r^{2})\) is best possible for the whole class of approaches that includes Dlm.
Finally, as the learning scenario is not harder than the static one, Dlm is \(O(r)\)-competitive also there. While an upper bound of \(O(r)\) was already known for the learning scenario [13], our algorithm uses a vastly different, combinatorial approach and is also much faster.
Our deterministic solution is inspired by the ideas for the randomized algorithm Lma of [6] and can be seen as its derandomization, albeit with several crucial differences.
* We simplify their approach as we solve the Mssc problem directly, while they introduced an intermediate exponential caching problem.
* We update item budgets differently, which allows us to obtain an optimal ratio for the static scenario (Lma is not better in the static scenario than in the dynamic one).
* Most importantly, [6] uses randomization to argue that Lma makes bad choices with a small probability. In this context, bad choices mean moving elements that Opt has near the list front towards the list tail in the solution of Lma. In the deterministic approach, we obviously cannot prove the same claim, but we show that it holds on the average. Combining this claim with the amortized analysis, by "encoding" it in the additional potential function \(\Psi\), is the main technical contribution of our paper.
## 2 Our algorithm Dlm
Dlm maintains a budget \(b(z)\) for any element \(z\in\mathcal{U}\). At the beginning of an input sequence, all budgets are set to zero.
In the algorithm description, we skip step-related subscripts when it does not lead to ambiguity, and we simply use \(\pi(z)\) to denote the _current_ position of element \(z\) in the permutation of Dlm.
At certain times, Dlm moves an element \(z\) to the list front. It does so using a straightforward procedure fetch(\(z\)) (cf. Routine 1). It uses \(\pi(z)-1\) swaps that move \(z\) to the first position, and increment the positions of all elements that preceded \(z\). Next, it resets the budget of \(z\) to zero.
Assume now that Dlm needs to serve a request \(R=\{x,y_{1},y_{2},\ldots,y_{s-1}\}\) (where \(s\leq r\) and \(\pi(x)<\pi(y_{i})\) for all \(y_{i}\)). Let \(\ell=\pi(x)\). Dlm first executes routine fetch(\(x\)). Afterward, it performs a lazy counterpart of moving elements \(y_{i}\) towards the front: it increases their budgets by \(\ell/s\). Once a budget of any element reaches or exceeds its current position, Dlm fetches it to the list front. The pseudocode of Dlm on request \(R\) is given in Algorithm 2.
**Routine 1** fetch(\(z\))
```
1: for \(i=\pi(z),\ldots,3,2\) do
2:     swap elements on positions \(i\) and \(i-1\)
3: \(b(z)\gets 0\)
```
**Algorithm 2** A single step of Deterministic-Lazy-Move-All-To-Front (Dlm)
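In code, a single step of Dlm can be sketched as follows; this is an illustrative C++ rendering of the description above, where the linked-list representation and the linear position scans are chosen for clarity rather than efficiency.

```
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative sketch of one Dlm step; data structures are not part of the
// algorithm's specification.
struct DlmList {
  std::list<std::string> Order;                    // front of the list = position 1
  std::unordered_map<std::string, double> Budget;  // b(z), initially 0

  std::size_t position(const std::string &Z) const {  // pi(Z), 1-based
    std::size_t P = 1;
    for (const auto &W : Order) {
      if (W == Z) return P;
      ++P;
    }
    return 0;
  }

  void fetch(const std::string &Z) {               // Routine 1: move Z to the front
    std::string Item = Z;                          // copy first: Z may alias a node
    Order.remove(Item);
    Order.push_front(Item);
    Budget[Item] = 0;
  }

  std::size_t serve(const std::vector<std::string> &R) {  // returns the access cost
    std::string X = R.front();
    for (const auto &Z : R)                        // requested element closest to the front
      if (position(Z) < position(X)) X = Z;
    std::size_t L = position(X);                   // access cost paid in this step
    fetch(X);
    for (const auto &Z : R)                        // lazy moves: only budgets grow
      if (Z != X) Budget[Z] += static_cast<double>(L) / R.size();
    for (bool Moved = true; Moved;) {              // budget-triggered fetches
      Moved = false;
      for (const auto &Z : Order)
        if (Budget[Z] >= position(Z)) {
          fetch(Z);
          Moved = true;
          break;
        }
    }
    return L;
  }
};
```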
## 3 Basic properties and analysis framework
We start with some observations about elements' budgets; in particular, we show that Dlm is well defined, i.e., it terminates.
**Lemma 1**.: _Dlm terminates after every request._
Proof.: Let \(C=\{z\in\mathcal{U}\ \mid\ b(z)\geq\pi(z)\}\). It suffices to show that the cardinality of \(C\) decreases at each iteration of the while loop in Line 5 of Algorithm 2. To this end, observe that in each iteration, we execute operation fetch(\(z\)) for some \(z\in C\). In effect, the budget of \(z\) is set to \(0\), and thus \(z\) is removed from \(C\). The positions of elements that preceded \(z\) are incremented without changing their budget: they may only be removed from \(C\) but not added to it.
**Observation 2**.: _Once Dlm finishes list reordering in a given step, \(b(z)<\pi(z)\) for any element \(z\in\mathcal{U}\). Moreover, \(b(z)<(3/2)\cdot\pi(z)\) also during list reordering._
Proof.: Once the list reordering terminates, by Lemma 1 and the while loop in Lines 5-6 of Algorithm 2, \(b(z)<\pi(z)\) for any element \(z\).
Within a step, the budgets are increased only for elements \(y_{i}\in R\), i.e., only when \(s\geq 2\). The budget of such an element \(y_{i}\) is increased from at most \(\pi(y_{i})\) by \(\pi(x)/s\leq\pi(x)/2<\pi(y_{i})/2\), i.e., its resulting budget is smaller than \((3/2)\cdot\pi(y_{i})\).
### Amortized analysis
In our analysis, we compare the cost in a single step of Dlm to the corresponding cost of an offline solution Off. For a more streamlined analysis that will yield the result both for the static and dynamic scenarios, we split each step into two stages. In the first stage, both Dlm and Off pay their access costs, and then Dlm reorders its list according to its definition. In the second stage, Off reorders its list. Note that the second stage exists only in the dynamic scenario.
We use \(\pi\) and \(\pi^{*}\) to denote the current permutation of Dlm and Off, respectively. We introduce two potential functions \(\Phi\) and \(\Psi\), whose values depend only on \(\pi\) and \(\pi^{*}\).
In Section 4, we show that in the first stage of any step, it holds that
\[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq O(r)\cdot\Delta\textsc{Off}. \tag{1}\]
where \(\Delta\textsc{Dlm}\), \(\Delta\textsc{Off}\), \(\Delta\Phi\), and \(\Delta\Psi\) denote increases of the costs of Dlm and Off and the increases of values of \(\Phi\) and \(\Psi\), respectively. Relation (1) summed over all \(m\) steps of the input sequence yields the competitive ratio of \(O(r)\) of Dlm in the static scenario (where only the first stage is present).
In Section 5, we analyze the performance of Dlm in the dynamic scenario. We say that an offline algorithm Off is _MTF-based_ if, for any request, it moves one of the requested elements to the first position of the list and does not touch the remaining elements. We define a class Mtfb of all MTF-based offline algorithms. We show that in the second stage of any step, it holds that
\[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq O(r^{2})\cdot\Delta\textsc{Off}. \tag{2}\]
for any Off\(\in\textsc{Mtfb}\). Now, summing relations (1) and (2) over all steps in the input yields that Dlm is \(O(r^{2})\)-competitive against the class Mtfb. We conclude by arguing that there exists an MTF-based algorithm \(\textsc{Off}^{*}\) which is a 4-approximation of the optimal solution Opt.
### Potential function
To define potential functions, we first split \(\pi(z)\) into two summands, \(\pi(z)=2^{p(z)}+q(z)\), such that \(p(z)\) is a non-negative integer, and \(q(z)\in\{0,\ldots,2^{p(z)}-1\}\). We split \(\pi^{*}(z)\) analogously as \(\pi^{*}(z)=2^{p^{*}(z)}+q^{*}(z)\).
We use the following parameters: \(\alpha=2\), \(\gamma=5r\), \(\beta=7.5r+5\), and \(\kappa=\lceil\log(6\beta)\rceil\). Our analysis does not depend on the specific values of these parameters, but we require that they satisfy the following relations.
**Fact 3**.: _Parameters \(\alpha\), \(\beta\) and \(\gamma\) satisfy the following relations: \(\alpha\geq 2\), \(\gamma\geq(3+\alpha)\cdot r\), \(\beta\geq 3+\alpha+(3/2)\cdot\gamma\). Furthermore, \(\kappa\) is an integer satisfying \(2^{\kappa}\geq 6\beta\)._
For any element \(z\), we define its potentials
\[\Phi_{z}=\begin{cases}\alpha\cdot b(z)&\text{if }p(z)\leq p^{*}(z)+\kappa,\\ \beta\cdot\pi(z)-\gamma\cdot b(z)&\text{if }p(z)\geq p^{*}(z)+\kappa+1.\end{cases} \tag{3}\]
\[\Psi_{z}=\begin{cases}0&\text{if }p(z)\leq p^{*}(z)+\kappa-1,\\ 2\beta\cdot q(z)&\text{if }p(z)\geq p^{*}(z)+\kappa.\end{cases} \tag{4}\]
We define the total potentials as \(\Phi=\sum_{z\in\mathcal{U}}\Phi_{z}\) and \(\Psi=\sum_{z\in\mathcal{U}}\Psi_{z}\).
**Lemma 4**.: _At any time and for any element \(z\), \(\Phi_{z}\geq 0\) and \(\Psi_{z}\geq 0\)._
Proof.: The relation \(\Psi_{z}\geq 0\) follows trivially from (4). By Fact 3, \(\beta\geq(3/2)\cdot\gamma\). This, together with Observation 2, implies that \(\Phi_{z}\geq 0\).
### Incrementing elements' positions
We first argue that increments of elements' positions induce small changes in their potentials. Such increments occur for instance when Dlm fetches an element \(z\) to the list front: all elements that preceded \(z\) are shifted by one position towards the list tail. We show this property for the elements on the list of Dlm first and then for the list of Off.
We say that an element \(w\) is _safe_ if \(p(w)\leq p^{*}(w)+\kappa-1\) and _unsafe_ otherwise. Note that for a safe element \(w\), it holds that \(\pi(w)\leq 2^{p(w)+1}\leq 2^{\kappa}\cdot 2^{p^{*}(w)}\leq 2^{\kappa}\cdot\pi^{*}(w)=O(r)\cdot\pi^{*}(w)\), i.e., its position on the list of Dlm is at most \(O(r)\) times greater than on the list of Off.
**Lemma 5**.: _Assume that the position of an element \(w\) on the list of Dlm increases by \(1\). Then, \(\Delta\Phi_{w}+\Delta\Psi_{w}\leq 0\) if \(w\) was safe before the movement and \(\Delta\Phi_{w}+\Delta\Psi_{w}\leq 3\beta\) otherwise._
Proof.: By \(\pi(w)=2^{p(w)}+q(w)\) and \(\pi^{\prime}(w)=\pi(w)+1=2^{p^{\prime}(w)}+q^{\prime}(w)\) we denote the positions of \(w\) before and after the movement, respectively.
Assume first that \(w\) was safe before the movement. As \(p^{\prime}(w)\leq p(w)+1\leq p^{*}(w)+\kappa\), \(\Delta\Phi_{w}=\alpha\cdot b(w)-\alpha\cdot b(w)=0\). Furthermore, either \(p^{\prime}(w)=p(w)\), and then \(\Delta\Psi_{w}=0\) trivially, or \(p^{\prime}(w)=p(w)+1\), and then \(q^{\prime}(w)=0\). In the latter case \(\Delta\Psi_{w}=2\beta\cdot q^{\prime}(w)-0=0\) as well. This shows the first part of the lemma.
Assume now that \(w\) was unsafe (\(p(w)\geq p^{*}(w)+\kappa\)) before the movement. We consider two cases.
* \(p(w)=p^{*}(w)+\kappa\) and \(p^{\prime}(w)=p(w)+1\). It means that \(q(w)=2^{p(w)}-1\) and \(q^{\prime}(w)=0\). Then, \[\Delta\Phi_{w}=\beta\cdot\pi^{\prime}(w)-\gamma\cdot b(w)-\alpha\cdot b(w)\leq\beta\cdot\pi^{\prime}(w)=\beta\cdot 2^{p^{\prime}(w)}=2\beta\cdot 2^{p(w)}\qquad\text{and}\] \[\Delta\Psi_{w}=2\beta\cdot q^{\prime}(w)-2\beta\cdot q(w)=-2\beta\cdot(2^{p(w)}-1)=-2\beta\cdot 2^{p(w)}+2\beta.\] That is, the large growth of \(\Phi_{w}\) is compensated by the drop of \(\Psi_{w}\), i.e., \(\Delta\Phi_{w}+\Delta\Psi_{w}\leq 2\beta\).
* \(p(w)>p^{*}(w)+\kappa\) or \(p^{\prime}(w)=p(w)\). In such case, there is no case change in the definition of \(\Phi_{w}\), i.e., \[\Delta\Phi_{w}=\begin{cases}\alpha\cdot b(w)-\alpha\cdot b(w)=0&\text{if }p(w)\leq p^{*}(w)+\kappa,\\ (\beta\cdot\pi^{\prime}(w)-\gamma\cdot b(w))-(\beta\cdot\pi(w)-\gamma\cdot b(w))=\beta&\text{otherwise.}\end{cases}\] Furthermore, as \(q^{\prime}(w)\leq q(w)+1\), \(\Delta\Psi_{w}=2\beta\cdot q^{\prime}(w)-2\beta\cdot q(w)\leq 2\beta\). Together, \(\Delta\Phi_{w}+\Delta\Psi_{w}\leq\beta+2\beta=3\beta\).
**Lemma 6**.: _Assume that the position of an element \(w\) on the list of Off increases by \(1\). Then, \(\Delta\Phi_{w}\leq 0\) and \(\Delta\Psi_{w}\leq 0\)._
Proof.: Note that \(p^{*}(w)\) may be either unchanged (in which case the values of \(\Phi_{w}\) and \(\Psi_{w}\) remain intact) or it may be incremented. We analyze the latter case.
By (3), the definition of \(\Phi_{w}\), the value of \(\Phi_{w}\) may change only if \(p^{*}(w)\) is incremented from \(p(w)-\kappa-1\) to \(p(w)-\kappa\). In such case,
\[\Delta\Phi_{w}=\alpha\cdot b(w)-\beta\cdot\pi(w)+\gamma\cdot b(w)\leq(\alpha+\gamma-\beta)\cdot\pi(w)\leq 0,\] where the first inequality follows by Observation 2 and the second by Fact 3.
By (4), the definition of \(\Psi_{w}\), the value of \(\Psi_{w}\) may change only if \(p^{*}(w)\) is incremented from \(p(w)-\kappa\) to \(p(w)-\kappa+1\). In such case, \(\Delta\Psi_{w}=-2\beta\cdot q(w)\leq 0\).
## 4 Analysis in the static scenario
As described in Subsection 3.1, in this part, we focus on the amortized cost of Dlm in the first stage of a step, i.e., where Dlm and Off both pay their access costs and then Dlm reorders its list.
**Lemma 7**.: _Whenever Dlm executes operation \(\textsc{fetch}(z)\), it holds that \(\Delta\textsc{Dlm}+\Delta\Psi+\sum_{w\neq z}\Delta\Phi_{w}\leq 2\cdot\pi(z)\)._
Proof.: As defined in Routine 1, the cost of operation \(\textsc{fetch}(z)\) is \(\Delta\textsc{Dlm}=\pi(z)-1<\pi(z)\). We first analyze the potential changes of elements from set \(K\) of \(\pi(z)-1\) elements that originally preceded \(z\).
Let \(K^{\prime}=\{w\in K\ \mid\ \pi^{*}(w)\leq 2^{p(z)-\kappa+1}\}\). Observe that any \(w\in K\setminus K^{\prime}\) satisfies \(\pi^{*}(w)>2^{p(z)-\kappa+1}\), which implies \(p^{*}(w)\geq p(z)-\kappa+1\geq p(w)-\kappa+1\), and thus \(w\) is safe. Thus, among elements of \(K\), only elements from \(K^{\prime}\) can be unsafe. By Lemma 5,
\[\sum_{w\in K}(\Delta\Phi_{w}+\Delta\Psi_{w}) \leq\sum_{w\in K^{\prime}}(\Delta\Phi_{w}+\Delta\Psi_{w})\leq 3 \beta\cdot|K^{\prime}|=3\beta\cdot 2^{p(z)-\kappa+1}\] \[\leq 2^{p(z)}\leq\pi(z). \text{(by Fact 3)}\]
As the only elements that may change their budgets are \(z\) and elements from \(K\), we have \(\Delta\textsc{Dlm}+\Delta\Psi+\sum_{w\neq z}\Delta\Phi_{w}=\Delta\textsc{Dlm} +\sum_{w\in K}\Delta\Psi_{w}+\Delta\Psi_{z}+\sum_{w\in K}\Delta\Phi_{w}\leq 2 \cdot\pi(z)+\Delta\Psi_{z}\leq 2\cdot\pi(z)\). The last inequality follows as \(\Psi_{z}\) drops to 0 when \(z\) is moved to the list front.
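The fetch operation used above has a simple operational reading: the fetched element jumps to the list front at cost \(\pi(z)-1\), and every element that preceded it is shifted one position towards the tail. A minimal sketch (ours, for illustration only):

```python
def fetch(lst: list, z) -> int:
    """Move z to the front of lst and return the movement cost pi(z) - 1."""
    pos = lst.index(z) + 1           # 1-based position pi(z) before the move
    lst.insert(0, lst.pop(pos - 1))  # elements that preceded z shift one step back
    return pos - 1
```

For example, `fetch(['a', 'b', 'c'], 'c')` returns `2` and leaves the list as `['c', 'a', 'b']`.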
Now we may split the cost of Dlm in a single step into parts incurred by Lines 1-4 and Lines 5-6, and bound them separately.
**Lemma 8**.: _Whenever Dlm executes Lines 5-6 of Algorithm 2, \(\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq 0\)._
Proof.: Let \(z\) be the element moved in Line 6. Line 5 guarantees that \(b(z)\geq\pi(z)\) and Observation 2 implies \(b(z)\leq(3/2)\cdot\pi(z)\). The value of \(\Phi_{z}\) before the movement is then
\[\Phi_{z} \geq\min\{\alpha\cdot b(z),\,\beta\cdot\pi(z)-\gamma\cdot b(z)\}\] \[\geq\min\{\alpha,\beta-(3/2)\cdot\gamma\}\cdot\pi(z)\] \[\geq 2\cdot\pi(z). \text{(by Fact 3)}\]
When \(z\) is moved to the list front, potential \(\Phi_{z}\) drops to 0, and thus \(\Delta\Phi_{z}\leq-2\cdot\pi(z)\). Hence, using Lemma 7, \(\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq 2\cdot\pi(z)+\Delta\Phi_{z}\leq 0\).
**Lemma 9**.: _Fix any step and consider its first part, where Dlm pays for its access and movement costs, whereas Off pays for its access cost. Then, \(\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq(3+\alpha)\cdot 2^{\kappa+1}\cdot\Delta\textsc{Off}=O(r)\cdot\Delta\textsc{Off}\)._
Proof.: Let \(R=\{x,y_{1},\ldots,y_{s-1}\}\) be the requested set, where \(s\leq r\) and \(\pi(x)<\pi(y_{i})\) for any \(i\in[s-1]\). Let \(\Phi_{x}\) denote the value of the potential just before the request. It suffices to analyze the amortized cost of Dlm in Lines 1-4 as the cost in the subsequent lines is at most \(0\) by Lemma 8. In these lines:
* Dlm pays \(\pi(x)\) for the access.
* Dlm performs the operation fetch\((x)\), whose amortized cost is, by Lemma 7, at most \(2\cdot\pi(x)-\Phi_{x}\).
* The budget of \(y_{i}\) grows by \(\Delta b(y_{i})=\pi(x)/s\) for each \(i\in[s-1]\). As these elements do not move (within Lines 1-4), \(\Delta\Psi_{y_{i}}=0\).
Thus, we obtain
\[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq 3\cdot\pi(x)-\Phi_{x}+\sum_{i \in[s-1]}\Delta\Phi_{y_{i}}. \tag{5}\]
As elements \(y_{i}\) do not move (within Lines 1-4), the change in \(\Phi_{y_{i}}\) can be induced only by the change in the budget of \(y_{i}\). Let \(u\in R\) be the element with the smallest position on the list of Off, i.e., \(\Delta\textsc{Off}=\pi^{*}(u)\). We consider three cases.
* \(p(x)\leq p^{*}(u)+\kappa\). Then \(\pi(x)\leq 2^{p(x)+1}\leq 2^{\kappa+1}\cdot 2^{p^{*}(u)}\leq 2^{\kappa+1} \cdot\pi^{*}(u)=2^{\kappa+1}\cdot\Delta\textsc{Off}\). Note that \(\sum_{i\in[s-1]}\Delta\Phi_{y_{i}}\leq\sum_{i\in[s-1]}\alpha\cdot\Delta b(y_{i })=(s-1)\cdot\alpha\cdot\pi(x)/s<\alpha\cdot\pi(x)\). By Lemma 4, \(\Phi_{x}\geq 0\), and thus using (5), \[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi<3\cdot\pi(x)+\alpha\cdot\pi(x)\leq(3+ \alpha)\cdot 2^{\kappa+1}\cdot\Delta\textsc{Off}.\]
* \(p(x)\geq p^{*}(u)+\kappa+1\) and \(u=x\). In this case, \(\Phi_{x}\geq\beta\cdot\pi(x)-\gamma\cdot b(x)\geq(\beta-(3/2)\cdot\gamma) \cdot\pi(x)\) (cf. Observation 2). By plugging this bound to (5), we obtain \[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq 3\cdot\pi(x)+(\beta-(3/2)\cdot \gamma)\cdot\pi(x)+\alpha\cdot\pi(x)\leq 0,\] where the last inequality follows as \(\beta\geq 3+(3/2)\cdot\gamma+\alpha\) by Fact 3.
* \(p(x)\geq p^{*}(u)+\kappa+1\) and \(u=y_{j}\) for some \(j\in[s-1]\). Recall that \(\pi(x)<\pi(y_{i})\), and thus \(p(y_{j})\geq p(x)\). Hence, \(p(y_{j})\geq p^{*}(y_{j})+\kappa+1\). In such a case, \[\sum_{i\in[s-1]}\Delta\Phi_{y_{i}} =\Delta\Phi_{y_{j}}+\sum_{i\in[s-1]\setminus\{j\}}\Delta\Phi_{y_{i}}\] \[\leq-\gamma\cdot\Delta b(y_{j})+\sum_{i\in[s-1]\setminus\{j\}} \alpha\cdot\Delta b(y_{i})\] \[=-\gamma\cdot\pi(x)/s+(s-2)\cdot\alpha\cdot\pi(x)/s\] \[<(\alpha-\gamma/r)\cdot\pi(x). \text{(as $s\leq r$)}\] Plugging the bound above and \(\Phi_{x}\geq 0\) to (5) yields \[\Delta\textsc{Dlm}+\Delta\Phi+\Delta\Psi\leq(3+\alpha-\gamma/r)\cdot\pi(x)\leq 0,\] where the last inequality again follows by Fact 3.
**Theorem 10**.: Dlm _is \(O(r)\)-competitive in the static scenario._
Proof.: Fix any input \(\mathcal{I}\) and any offline solution Off that maintains a fixed permutation. For any step \(t\), let \(\Phi^{t}\) and \(\Psi^{t}\) denote the total potentials right after step \(t\), and let \(\Phi^{0}\) and \(\Psi^{0}\) denote the initial potentials. By Lemma 9,
\[\textsc{Dlm}_{t}(\mathcal{I})+\Phi^{t}+\Psi^{t}-\Phi^{t-1}-\Psi^{t-1}\leq O(r)\cdot\textsc{Off}_{t}(\mathcal{I}), \tag{6}\]
where \(\textsc{Dlm}_{t}(\mathcal{I})\) and \(\textsc{Off}_{t}(\mathcal{I})\) denote the costs of Dlm and Off in step \(t\), respectively. By summing (6) over all \(m\) steps of the input, we obtain \(\textsc{Dlm}(\mathcal{I})+\Phi^{m}+\Psi^{m}-\Phi^{0}-\Psi^{0}\leq O(r)\cdot \textsc{Off}(\mathcal{I})\). As \(\Phi^{m}+\Psi^{m}\geq 0\),
\[\textsc{Dlm}(\mathcal{I})\leq O(r)\cdot\textsc{Off}(\mathcal{I})+\Phi^{0}+ \Psi^{0}.\]
Note that the initial potentials might be non-zero, since in the static scenario Off starts in its own permutation, which might differ from \(\pi_{0}\). That said, both initial potentials can be universally upper-bounded by an amount independent of \(\mathcal{I}\), and thus Dlm is \(O(r)\)-competitive.
## 5 Analysis in the dynamic scenario
To analyze Dlm in the dynamic scenario, we first establish an offline approximation of Opt that could be handled using our potential functions.
We say that an algorithm is _move-to-front based (Mtf-based)_ if, in response to request \(R\), it chooses exactly one of the elements from \(R\), brings it to the list front, and does not perform any further actions. We denote the class of all such (offline) algorithms by Mtfb. The proof of the following lemma can be found in the appendix.
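Operationally, a member of Mtfb is determined by a single rule that picks which requested element to bring forward. The sketch below is our own illustration of one step of such an algorithm; the access cost charged here is the smallest position of a requested element, as used for Off in the analysis above (cf. the proof of Lemma 9), and the `choose` rule is left abstract, so this is not the specific \(\textsc{Off}^{*}\) constructed in Lemma 11.

```python
def mtfb_step(lst: list, request, choose) -> int:
    """One step of a move-to-front-based algorithm: pay the access cost (smallest
    position of a requested element), then bring exactly one chosen requested
    element to the list front, paying its position minus one."""
    positions = {w: lst.index(w) + 1 for w in request}
    cost = min(positions.values())            # access cost of the request
    z = choose(request)                       # the single element to be moved
    cost += positions[z] - 1                  # movement cost
    lst.insert(0, lst.pop(positions[z] - 1))
    return cost
```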
**Lemma 11**.: _For any input \(\mathcal{I}\), there exists an (offline) algorithm \(\textsc{Off}^{*}\in\textsc{Mtfb}\), such that \(\textsc{Off}^{*}(\mathcal{I})\leq 4\cdot\textsc{Opt}(\mathcal{I})\)._
We now analyze the second stage of a step, where an offline algorithm Off from the class Mtfb reorders its list.
**Lemma 12**.: _Assume Off\(\in\textsc{Mtfb}\). Fix any step and consider its second stage, where Off moves some element \(z\) to the list front. Then, \(\Delta\Phi+\Delta\Psi=O(r^{2})\cdot\Delta\textsc{Off}\)._
Proof.: We may assume that initially \(\pi^{*}(z)\geq 2\), as otherwise there is no change in the list of Off and the lemma follows trivially.
Apart from element \(z\), the only elements that change their positions are elements that originally preceded \(z\): their positions are incremented. By Lemma 6, the potential change associated with these elements is non-positive.
Thus, \(\Delta\Phi+\Delta\Psi\leq\Delta\Phi_{z}+\Delta\Psi_{z}\). Element \(z\) is transported by Off from position \(\pi^{*}(z)\) to position \(1\), i.e., \(\Delta\textsc{Off}=\pi^{*}(z)-1\geq\pi^{*}(z)/2\) as we assumed \(\pi^{*}(z)\geq 2\). Thus, to show the lemma it suffices to show that \(\Delta\Phi_{z}=O(r^{2})\cdot\pi^{*}(z)\) and \(\Delta\Psi_{z}=O(r^{2})\cdot\pi^{*}(z)\). We bound them separately.
* Note that \(p^{*}(z)\) may only decrease. If initially \(p^{*}(z)\leq p(z)-\kappa-1\), then \(\Phi_{z}=\beta\cdot\pi(z)-\gamma\cdot b(z)\) before and after the movement of \(z\), and thus \(\Delta\Phi_{z}=0\). Otherwise, \(p^{*}(z)\geq p(z)-\kappa\), which implies \(\pi(z)<2^{p(z)+1}\leq 2^{\kappa+1}\cdot 2^{p^{*}(z)}\leq 2^{\kappa+1}\cdot\pi^{*}(z)\). In such a case, \[\Delta\Phi_{z}\leq\beta\cdot\pi(z)-\gamma\cdot b(z)-\alpha\cdot b(z)\leq\beta \cdot\pi(z)\leq\beta\cdot 2^{\kappa+1}\cdot\pi^{*}(z)=O(r^{2})\cdot\pi^{*}(z).\]
* Similarly, if initially \(p^{*}(z)\leq p(z)-\kappa\), then \(\Psi_{z}=2\beta\cdot q(z)\) before and after the movement of \(z\), and thus \(\Delta\Psi_{z}=0\). Otherwise, \(p^{*}(z)\geq p(z)-\kappa+1\), which implies \(\pi(z)<2^{p(z)+1}\leq 2^{\kappa}\cdot 2^{p^{*}(z)}\leq 2^{\kappa}\cdot\pi^{*}(z)\). In such a case \[\Delta\Psi_{z}\leq 2\beta\cdot q(z)-0\leq 2\beta\cdot\pi(z)\leq 2\beta\cdot 2^{\kappa}\cdot\pi^{*}(z)=O(r^{2})\cdot\pi^{*}(z).\qed\]
**Theorem 13**.: Dlm _is strictly \(O(r^{2})\)-competitive in the dynamic scenario._
Proof.: The argument here is the same as for Theorem 10, but this time we sum the guarantees provided for the first stage of a step (Lemma 9) and for the second stage of a step (Lemma 12). This shows that for any offline algorithm \(\textsc{Off}\in\textsc{Mtfb}\) and any input \(\mathcal{I}\) it holds that
\[\textsc{Dlm}(\mathcal{I})\leq O(r^{2})\cdot\textsc{Off}(\mathcal{I})+\Phi^{0 }+\Psi^{0}. \tag{7}\]
For the dynamic scenario, the initial permutations of Dlm and Off are equal, and hence the initial potential \(\Phi^{0}+\Psi^{0}\) is zero. As (7) holds against arbitrary \(\textsc{Off}\in\textsc{Mtfb}\), it holds also against \(\textsc{Off}^{*}\) which is the 4-approximation of Opt (cf. Lemma 11). This implies that
\[\textsc{Dlm}(\mathcal{I})\leq O(r^{2})\cdot\textsc{Off}^{*}(\mathcal{I})\leq O (4\cdot r^{2})\cdot\textsc{Opt}(\mathcal{I}),\]
which concludes the proof.
## 6 Final remarks
In this paper, we studied achievable competitive ratios for the online Msc problem. We closed the gaps for the deterministic polynomial-time static scenario and tightened the gaps for the deterministic dynamic scenario. Still, some intriguing open questions remain, e.g., the best randomized algorithm for the dynamic scenario has a competitive ratio of \(O(r^{2})\), while the lower bound is merely a constant.
Another open question concerns a generalization of the MSSC problem where each set \(R_{t}\) comes with a covering requirement \(k_{t}\) and an algorithm is charged for the positions of the first \(k_{t}\) elements from \(R_{t}\) on the list (see, e.g., [3]). The only online results so far are achieved in the easiest, learning scenario [13].
|
2309.04812 | Is there charged dark matter bound to ordinary matter? Can it produce
observable quantum effects? | Levitated nano-spheres of silica, optically trapped in a Fabry-Perot cavity
with a single trapping field and the electrostatic field of a charged ring
electrode, are used to infer the potential existence of dark matter particles
with infinitesimal charge. These particles are presumed to exist in bulk matter
as relics of the primordial Universe. In the absence of infinitesimally charged
particles within the chosen nano-sphere, the output light in this setup should
be thermal. However, if these particles do exist, the cavity's output light is
expected to be squeezed even at room temperature, and one could observe
entanglement between light and the nano-sphere's center of mass. | Muhammad Asjad, Paolo Tombesi | 2023-09-09T14:40:06Z | http://arxiv.org/abs/2309.04812v2 | # Is there charged dark matter bound to ordinary matter? Can it produce observable quantum effects?
###### Abstract
Levitated nano-spheres of silica, optically trapped in a Fabry-Perot cavity with a single trapping field and the electrostatic field of a charged ring electrode, are used to infer the potential existence of dark matter particles with infinitesimal charge. These particles are presumed to exist in bulk matter as relics of the primordial Universe. In the absence of infinitesimally charged particles within the chosen nano-sphere, the output light in this setup should be thermal. However, if these particles do exist, the cavity's output light is expected to be squeezed even at room temperature, and one could observe entanglement between light and the nano-sphere's center of mass.
Dark matter, whose nature remains unclear, is inferred from astrophysical and cosmological observations through its gravitational effects [1; 2; 3; 4]. Unlike visible matter, dark matter does not interact through any known force except gravity. However, a photon could be directly emitted by dark matter particles if some of them possess an extremely weak electric charge, difficult to observe but distinguishable through very sensitive measurements [5].
The concept of mini-charged or milli-charged particles (mCPs) allows for fractional or infinitesimal charges beyond traditional quantization [5; 6; 7; 8; 9; 10; 11]. These particles carry charges represented as \(q=\pm\epsilon|e_{0}|\), where \(\epsilon\) is estimated to range from \(10^{-16}\) to \(10^{-2}\), and \(e_{0}\) is the charge on an electron [12]. mCPs might also be bound to ordinary matter nuclei, leading to milli-charged atoms [6; 13; 14]. Although their interaction with electromagnetic fields is extremely weak, it cannot be excluded. Numerous experiments have sought these elusive particles using colliders and levitated particles in bulk matter or as isolated particles [15; 16; 17; 13; 18], placing limits on their abundance for certain values of \(\epsilon\)[19]. Recently, a direct search using the PandaX-4T xenon-based detector has provided a charge limit \(<2.6\times 10^{-11}e_{0}\) for an estimated dark matter mass of \(20-40\)\(\mathrm{GeV/c^{2}}\) by investigating effective electromagnetic interactions between dark matter particles and xenon nuclei [20].
Trapping and levitation of microscopic dielectric spheres have been studied since the 1970s [21; 22]. Stable trapped droplets have been maintained at pressures as low as \(10^{-6}\) Torr [23]. Levitation of dielectric particles in optical cavities is now a field of interest in quantum optomechanics, enabling the creation of an oscillating massive system nearly decoupled from the environment. This opens up possibilities for cooling mesoscopic objects to their quantum ground state and developing highly sensitive measurement devices.
This letter discusses the search for mini-charged particles (mCPs) in bulk matter using optically trapped fused silica (SiO\({}_{2}\)) nano-spheres (NS). The levitated NS here considered has a radius \(r=50\) nm, significantly smaller than the trapping/driving laser's optical wavelength (\(\lambda=1064\) nm) in a single mode ideal one-sided Fabry-Perot cavity. The cavity operates at a frequency of \(\omega_{c}/2\pi\), enclosed in a controlled-temperature vacuum chamber with air pressure around \(10^{-10}\) Torr. It is assumed that the mCPs' mass falls within a range compatible with previous results [24], possibly around or greater than one \(\mathrm{MeV/c^{2}}\).
Ensuring no induced electron charge on the NS surface during its generation is crucial [17]. However, it is plausible that the silica droplet may not contain any mCPs, given the unknown abundance of these charged dark matter relics in terrestrial materials [19]. Therefore, several experiments with different silica droplets to obtain a mass of at least one mg would be necessary before drawing some conclusions about mCPs' potential existence [25]. With the experimental set-up considered here, if mCPs do exist in the silica droplet, the output light deviates from pure thermal noise, revealing squeezing in the measured symmetric spectrum of light's quadrature fluctuations. Another significant quantum effect that could be observed is the entanglement between light and the center of mass (CoM) of the NS.
The assumption is that the NS is trapped at one antinode of the steady field within a one-sided ideal Fabry-Perot cavity, positioned at the centre, far from the cavity mirrors, to eliminate any contamination from the Casimir force effect [26]. Previous studies have demonstrated that the oscillatory motions of the NS along the three axes are mostly independent of each other [27]. Therefore, in this analysis, it will be considered only the motion along the cavity axis, assuming that the motion along the other two directions is confined to a region much smaller than the waist of the trapping field. This condition can be achieved, for instance, by implementing feedback cooling methods to confine the transverse motions [28]. In recent years, several authors have studied such a system (e.g., Refs. [29; 30; 31; 32; 33] ). However, in this discussion, a simplified scheme will be followed, focusing solely on a single trapping mode without considering the cooling mode.
Assuming that there is an mCP trapped within the perfectly transparent silica nano-sphere, which could be bound to an atom of silicon or oxygen, several approaches have been considered to enhance its interaction with our world. These approaches include: (i) positioning the cavity axis between two parallel electrodes connected to highly different voltage sources, as described in Ref. [19]; (ii) using a strongly charged metallic needle inserted between the cavity mirrors to generate a strong Coulomb interaction with the mCP, as proposed in Ref. [34]; and (iii) inserting a uniformly charged metallic ring with a linear charge density \(l\) and a radius \(R>>r\) much larger than the radius of the nano-sphere, with the plane of the ring orthogonal to the cavity axis and centered on it, as outlined in Ref. [35]. Although there is also a paper [36] that considers a
"classically charged" microsphere levitated in a hybrid optical-Paul trap, here the proposal of Ref. [35] will be followed, which could be considered analogous to the one-dimensional static case discussed in Ref.[36]. Then, the Hamiltonian for the levitated dielectric droplet, considered as point-like [29; 33], in a frame rotating at the angular frequency \(\omega_{\rm L}\) of the driving field is[37]:
\[\hat{H}=-\hbar\Delta_{0}\hat{a}^{\dagger}\hat{a}-\hbar g\cos^{2}(kx)\,\hat{a}^{\dagger}\hat{a}+\hbar E(\hat{a}+\hat{a}^{\dagger})+\frac{p^{2}}{2m}+q\phi(x). \tag{1}\]
The detuning of the cavity field with respect to the driving field is denoted as \(\Delta_{0}=\omega_{\rm L}-\omega_{\rm c}\). The second term in Eq. (1) represents the effect of the dielectric droplet, which slightly modifies the frequency of the cavity mode due to the interaction between the point-like dielectric particle and the cavity field [38], giving rise to the ponderomotive coupling constant \(g=3V_{\rm s}(\varepsilon-1)(\varepsilon+2)^{-1}\omega_{\rm c}/2V_{\rm c}\)[29], where \(\varepsilon=2.3\) represents the electric permittivity of SiO\({}_{2}\), assumed to be real. \(V_{\rm s}\) denotes the droplet volume and \(V_{\rm c}=\pi w^{2}\mathcal{L}/4\) represents the volume of the cavity field, where \(w=\sqrt{\lambda\mathcal{L}/2\pi}\) is the internal field waist and \(\mathcal{L}\) is the cavity length. Here, \(\hat{a}(\hat{a}^{\dagger})\) represents the boson annihilation (creation) operator of the cavity mode with the commutation relation \([\hat{a},\hat{a}^{\dagger}]=1\). The driving field amplitude is denoted as \(E=\sqrt{\frac{\kappa\mathcal{P}}{\hbar\omega_{\rm L}}}\), where \(\kappa=\frac{c\pi}{2\mathcal{L}\mathcal{F}}\) represents the cavity linewidth, \(c\) is the speed of light in vacuum, \(\mathcal{F}\) is the cavity finesse, and \(\mathcal{P}\) is the input power of the trapping laser with frequency \(\omega_{\rm L}/2\pi\). The mass of the silica droplet is denoted as \(m=\rho_{0}V_{\rm s}\), where \(\rho_{0}=2650\,{\rm Kg\,m^{-3}}\) is the mass density. The center of mass position of the NS along the cavity axis is denoted as \(x\), and \(p\) represents its conjugate momentum with the commutation relation \([x,p]=i\hbar\). Furthermore, \(k=\omega_{\rm c}/c\) is \(2\pi\) times the inverse wavelength of the trapping field. A polarized Gaussian TEM00 wave along the unit vector \(\mathbf{y}\) inside the cavity, of the form \(\mathbf{E}(x)=E\mathbf{y}\cos(kx)\,e^{-(y^{2}+z^{2})/w^{2}}\), has been considered [29]. The last term in Eq. (1) represents the electrostatic contribution due to the charged ring, denoted as \(q\phi(x)\) where the associated scalar potential \(\phi(x)\) is given by
\[\phi(x)=\frac{Q}{4\pi\epsilon_{0}R}\left[1+\left(\frac{C_{0}+x}{R}\right)^{2} \right]^{-\frac{1}{2}}, \tag{2}\]
with \(Q=2\pi Rl\).
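Equation (2) is the familiar on-axis potential of a uniformly charged ring evaluated at an axial distance \(C_{0}+x\) from its plane. For reference, a small numerical helper (Python/NumPy; the function names are ours) that also returns the corresponding axial field \(\mathcal{E}_{\rm x}=-\partial\phi/\partial x\), the only non-vanishing field component on the axis:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def ring_potential(x, Q, R, C0):
    """On-axis potential of a uniformly charged ring, Eq. (2):
    phi(x) = Q / (4*pi*eps0*R) * [1 + ((C0 + x)/R)**2]**(-1/2)."""
    u = (C0 + x) / R
    return Q / (4.0 * np.pi * EPS0 * R) / np.sqrt(1.0 + u**2)

def ring_field(x, Q, R, C0):
    """Axial field E_x = -d(phi)/dx along the cavity axis."""
    z = C0 + x
    return Q * z / (4.0 * np.pi * EPS0 * (R**2 + z**2) ** 1.5)
```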
There are two ways to introduce this charged ring: either by positioning its center at the antinode where the NS's center of mass (CoM) sits in equilibrium, as in Ref. [35], or by placing it at a distance \(C_{0}\ll\mathcal{L}\) from the trapping antinode on the cavity axis, as considered in this work. Due to its symmetry, the charged ring generates an electrostatic field only along the cavity axis [37], and its effect is incorporated through the introduction of the scalar potential \(\phi(x)\). The droplet will be optically trapped at an antinode of the standing wave inside the cavity, and it is convenient to choose this antinode as the origin of the frame, such that \(x=0\) corresponds to the position of its CoM at equilibrium. When \(C_{0}=0\), the electrostatic field does not shift the equilibrium position of the NS's CoM; instead, it only modifies its oscillation frequency [35]. However, for \(C_{0}\neq 0\), the equilibrium position \(x_{\rm s}\) will be shifted away from the anti-node at \(x=0\).
In the presence of strong driving, the Hamiltonian described in Eq. (1) can be linearized by considering the mean steady state values \(x_{\rm s}(p_{s}),a_{\rm s}\) for the CoM position (momentum) of the NS and the annihilation operator of the cavity mode. This linearization can be expressed as \(\mathcal{O}\rightarrow\mathcal{O}_{s}+\delta\mathcal{O}\), where \(\mathcal{O}\) represents the operators (\(x\), \(p\), \(\hat{a}\)). Furthermore, in order to ensure that all fluctuation operators are dimensionless a canonical transformation is introduced for the position and momentum fluctuations of the NS's center-of-mass. This transformation is given by \(\delta x=\sqrt{\hbar/(m\omega_{\rm m})}\delta\hat{x}\) and \(\delta p=\sqrt{\hbar m\omega_{\rm m}}\delta\hat{p}\), where the commutation relation is defined as \([\delta\hat{x},\delta\hat{p}]=i\). Then, the resulting linearized Heisenberg-Langevin equations of motion, by considering damping and noises, can be obtained [37],
\[\hat{\mathbf{u}}(t)=\mathbf{A}\hat{\mathbf{u}}(t)+\hat{\mathbf{n}}(t), \tag{3}\]
where \(\hat{\mathbf{u}}(t)=(\delta\hat{x},\delta\hat{p},\delta\hat{X},\delta\hat{Y})^ {T}\) represents the vector of quadrature operators, and \(\hat{\mathbf{n}}(t)=(0,\hat{\eta}(t),\sqrt{\kappa}\hat{X}_{\rm in},\sqrt{ \kappa}\hat{Y}_{\rm in})^{T}\) corresponds to the noise vector (where the exponent \(T\) denotes transposition). In this context, \(\delta\hat{X}=(\delta\hat{a}+\delta\hat{a}^{\dagger})/\sqrt{2}\) and \(\delta\hat{Y}=(\delta\hat{a}-\delta\hat{a}^{\dagger})/(i\sqrt{2})\) represent the amplitude and phase quadrature fluctuations of the optical field, respectively, while \(\hat{X}_{\rm in}=(\hat{a}_{\rm in}+\hat{a}_{\rm in}^{\dagger})/\sqrt{2}\) and \(\hat{Y}_{\rm in}=(\hat{a}_{\rm in}-\hat{a}_{\rm in}^{\dagger})/(i\sqrt{2})\) represent the amplitude and phase quadratures of the input noise operators, respectively. Here, \(\mathbf{A}\) corresponds to the drift matrix:
\[\mathbf{A}=\begin{pmatrix}0&\omega_{\rm m}&0&0\\ -\Omega_{\rm m}&-\frac{\gamma}{2}&-G&0\\ 0&0&-\frac{\kappa}{2}&\Delta(x_{\rm s})\\ -G&0&-\Delta(x_{\rm s})&-\frac{\kappa}{2}\end{pmatrix}, \tag{4}\]
where \(\Delta(x_{\rm s})=\Delta_{0}+g\cos^{2}(kx_{\rm s})\) is the effective detuning, \(G=\sqrt{2\hbar/(m\omega_{\rm m})}kga_{\rm s}\sin(2kx_{\rm s})\) represents the effective coupling and \(\Omega_{\rm m}=\omega_{\rm m}+A_{\rm q}/(m\omega_{\rm m})\) denotes the effective mechanical frequency with \(\omega_{\rm m}^{2}=2\hbar gk^{2}a_{\rm s}^{2}/m\cos(2kx_{\rm s})\) and \(A_{\rm q}=\frac{qQ}{4\pi\epsilon_{0}R^{3}}\). The decay rate for the cavity mode is denoted by \(\kappa\), and by the fluctuation-dissipation theorem [39] the damping constant of the mechanical mode, \(\gamma\), is given by [37]:
\[\gamma = \frac{4\pi^{2}}{5}\frac{\varepsilon-1}{\varepsilon+2}\left(\frac{V_{ \rm s}}{\lambda^{3}}\right)\omega_{\rm m}\frac{\hbar\omega_{\rm m}}{k_{\rm B}T}+ \frac{4\pi r^{2}P_{\rm gas}}{mv},\]
where \(P_{\rm gas}\) denotes the gas pressure, \(v=\sqrt{3k_{B}T/m_{\rm a}}\) represents the mean velocity of gas molecules at temperature \(T\), \(k_{\rm B}\) is the Boltzmann constant, and \(m_{\rm a}=28.97u\) represents the mass of an air molecule (with \(u\) being the atomic mass unit). The stochastic force \(\hat{\eta}(t)\) acting on the mechanical mode is characterized by an expectation value of \(\langle\hat{\eta}(t)\rangle=0\) and correlations given by \(\frac{1}{2}[\langle\hat{\eta}(t)\hat{\eta}(t^{\prime})\rangle+\langle\hat{\eta}(t^{\prime})\hat{\eta}(t)\rangle]=\Gamma\delta(t-t^{\prime})\), where \(\Gamma=\gamma k_{\rm B}T/(\hbar\omega_{\rm m})\) represents the diffusion constant. It is assumed that the stochastic force is Markovian [37]. The optical noise has an expectation value of \(\langle\hat{J}_{\rm in}(t)\rangle=0\) and correlations given by \(\frac{1}{2}[\langle\hat{J}_{\rm in}(t)\hat{J}_{\rm in}(t^{\prime})\rangle+\langle\hat{J}_{\rm in}(t^{\prime})\hat{J}_{\rm in}(t)\rangle]=\frac{1}{2}\delta(t-t^{\prime})\) (for \(\hat{J}_{\rm in}=\hat{X}_{\rm in}(t)\) or \(\hat{Y}_{\rm in}(t)\)), with the average number of photons at optical frequency being negligible. The steady-state
value for the optical mode is given by \(a_{\rm s}=E/(i\Delta(x_{\rm s})+\kappa/2)\). The phase of the input driving laser is chosen such that \(a_{\rm s}\) is _real_; the corresponding steady-state value \(x_{\rm s}\) is obtained by solving the following transcendental equation [37]
\[A_{\rm q}(C_{0}+x_{\rm s})+\frac{\hbar gkE^{2}\sin(2kx_{\rm s})}{\frac{\kappa^{ 2}}{4}+[\Delta_{0}+g\cos^{2}(kx_{\rm s})]^{2}}=0 \tag{5}\]
with the constraints \(C_{0}\neq 0\) and \(-\frac{\pi}{4k}<x_{\rm s}<\frac{\pi}{4k}\). However, unlike typical opto-mechanical systems, the angular frequency \(\omega_{\rm m}(\Omega_{\rm m})\) of the levitated particle, as well as the values of \(a_{\rm s}\) and \(x_{\rm s}\), are strongly influenced by the detuning \(\Delta_{0}\). Additionally, the value of \(\omega_{\rm m}\) is also dependent on \(\cos(2kx_{\rm s})\). Therefore, when choosing \(x_{\rm s}\), it is imperative to uphold the positivity of the _cosine_ term in order to preserve the stability of the system [37; 40].
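Numerically, Eq. (5) can be solved with a standard bracketing root finder restricted to the admissible interval \((-\pi/4k,\pi/4k)\). The sketch below is ours; all parameter values are placeholders to be supplied, and it simply reports failure if no sign change is found for the chosen parameters.

```python
import numpy as np
from scipy.optimize import brentq

def steady_position(A_q, C0, hbar, g, k, E, kappa, Delta0):
    """Solve Eq. (5) for x_s on (-pi/(4k), pi/(4k)) by bracketing a sign change."""
    def f(x):
        det = Delta0 + g * np.cos(k * x) ** 2
        return A_q * (C0 + x) + hbar * g * k * E**2 * np.sin(2 * k * x) / (kappa**2 / 4 + det**2)

    xs = np.linspace(-0.999 * np.pi / (4 * k), 0.999 * np.pi / (4 * k), 2001)
    vals = f(xs)
    idx = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    if len(idx) == 0:
        raise ValueError("no root of Eq. (5) in the admissible interval for these parameters")
    return brentq(f, xs[idx[0]], xs[idx[0] + 1])
```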
It becomes apparent that the silica droplet and the internal optical field are not independent of each other; their relationship is governed by the _non-zero_ coupling term \(G\), which arises from the milli-charge \(q\) and primarily from the shift \(C_{0}\) of the charged ring plane. However, this coupling term is case specific, as it disappears when \(C_{0}=0\). Indeed, in this case a different approach must be considered [37] because, with the current setting, the output light would consist of thermal noise alone. This finding already provides an important insight that partially addresses the question posed in the title. With the chosen setup, if a milli-charged particle (mCP) exists within this nano-sphere, the output light should exhibit a spectrum distinct from thermal white noise, confirming the presence of the mCP. However, to provide a comprehensive answer to the question, measurements on the output light need to be performed.
The fluctuation spectrum of the light emitted from the cavity can be determined by using the two-frequency auto-correlation function \(\langle\delta\hat{J}^{\rm out}(\omega)\delta\hat{J}^{\rm out}(\omega^{ \prime})\rangle=S^{\rm out}_{\rm JJ}(\omega,\omega^{\prime})\delta(\omega+ \omega^{\prime})\) for \(J=X,Y\). Then, using the input-output relation \(\delta\hat{J}^{\rm out}=\sqrt{\kappa}\delta\hat{J}-\hat{J}_{\rm in}\)[41] and Eq.(3), the symmetric spectral function of the correlations of the output amplitude (phase) quadrature fluctuations \(S^{\rm out}_{\rm JJ}(\omega)=(1/2\pi)\int d\omega^{\prime}e^{-i(\omega-\omega ^{\prime})}\langle\delta\hat{J}^{\rm out}(\omega)\hat{J}^{\rm out}(\omega^{ \prime})\rangle\) of the laser field in Fourier space is given by
\[S^{\rm out}_{\rm JJ}(\omega) = \frac{1}{2}+\kappa\left\{\Gamma|a_{\rm J}(\omega)|^{2}-\Re[b_{ \rm J}(\omega)d(-\omega)]\right\}/|d(\omega)|^{2} \tag{6}\] \[+ \frac{\kappa^{2}}{2}\left\{|b_{\rm J}(\omega)|^{2}+|c_{\rm J}( \omega)|^{2}\right\}/|d(\omega)|^{2},\]
where the exact expressions of \(d(\omega)\), \(a_{\rm J}(\omega)\), \(b_{\rm J}(\omega)\), and \(c_{\rm J}(\omega)\) for \(J=X,Y\) are given in [37]. The effect of the charged ring is to modify the oscillation frequency \(\omega_{\rm m}\), which gives rise to the effective \(\Omega_{\rm m}\). Furthermore, due to the shifted position of the charged ring plane (\(C_{0}\neq 0\)), when observing the output light, it no longer shows the thermal noise. An example of such a spectral function at temperature \(T=300\) K and \(P_{\rm gas}=10^{-10}\) Torr for \(q=10^{-5}e_{0}\), \(R=5.0\) mm, and a strong electrostatic field on the ring \(\mathcal{E}_{\rm x}=7.24986\times 10^{10}\) V/m is reported in Fig. 1 for \(\Delta_{0}=0.8\kappa\) and \(C_{0}=\lambda\) (other examples are in [37]). Hence, the measurement of the output light's spectrum [37], different from that of the thermal noise as shown in Fig. 1, will be the check for the existence of mCP in the NS considered.
_What is now important is to provide a complete answer to the question posed in the title._
It is already evident from the figures that something non-classical is occurring. In fact, the amplitude and phase quadrature fluctuations of the output light exhibit squeezing at specific values of the output frequency. This finding could potentially signify the presence of a more significant quantum effect: the entanglement between the Gaussian states of light and the center of mass (CoM) position. To quantify the entanglement, the logarithmic negativity [42; 43; 44] is used here:
\[E_{n}=\text{max}\{0,-\log(2\eta_{-})\}, \tag{7}\]
where \(\eta_{-}\) is the lowest symplectic eigenvalue of the partially transposed correlation matrix [45]. Indeed, the steady state of the bipartite quantum system formed by the mechanical mode of interest and the cavity mode can be fully characterized by their correlation matrix. In fact, the quantum noises \(\hat{\eta}(t)\) and \(\hat{a}_{\rm in}(t)\) are quantum Gaussian noises with zero mean and the dynamics is linearized; consequently, the stationary state of the system is a zero-mean bipartite Gaussian state, completely characterized by its \(4\times 4\) correlation matrix \(\mathbf{V}\) with elements \(V_{\rm ij}=[\langle u_{\rm i}(\infty)u_{\rm j}(\infty)+u_{\rm j}(\infty)u_{\rm i}(\infty)\rangle]/2\), which, when the stability conditions for Eqs. (3) are satisfied [37; 40], can be obtained [37] by solving the Lyapunov equation [46; 47; 48]
\[\mathbf{AV}+\mathbf{VA}^{T}=-\mathbf{D}, \tag{8}\]
where \(\mathbf{A}\) is the drift matrix given in Eq.(4), and \(D_{\rm ij}=\frac{1}{2}[\langle n_{\rm i}(\infty)n_{\rm j}(\infty)\rangle+ \langle n_{\rm j}(\infty)n_{\rm i}(\infty)\rangle]\delta_{\rm ij}\) are the elements of the diagonal matrix \(\mathbf{D}\) representing the stationary noises' correlation functions.
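In practice, Eq. (8) is a standard continuous Lyapunov equation, and \(E_{n}\) follows from the usual closed-form expression for the smallest symplectic eigenvalue of the partially transposed covariance matrix of a two-mode Gaussian state. A compact numerical sketch (ours, assuming the ordering \((\delta\hat{x},\delta\hat{p},\delta\hat{X},\delta\hat{Y})\) of Eq. (3) and the natural logarithm in Eq. (7)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def logarithmic_negativity(A, D):
    """Solve A V + V A^T = -D for the steady-state covariance V, then return
    E_n = max(0, -ln(2*eta_minus)) using the standard two-mode Gaussian formula."""
    V = solve_continuous_lyapunov(A, -D)
    Vm, Vc, Vmc = V[:2, :2], V[2:, 2:], V[:2, 2:]      # mechanical, cavity, cross blocks
    sigma = np.linalg.det(Vm) + np.linalg.det(Vc) - 2 * np.linalg.det(Vmc)
    disc = max(sigma**2 - 4 * np.linalg.det(V), 0.0)    # guard against round-off
    eta_minus = np.sqrt((sigma - np.sqrt(disc)) / 2.0)
    return max(0.0, -np.log(2.0 * eta_minus))
```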
Figure 1: (color online) The symmetric spectral function of the output field amplitude quadrature \(S^{\rm out}_{\rm XX}(\omega)\) (blue curve) and phase quadrature \(S^{\rm out}_{\rm YY}(\omega)\) (dashed red curve) fluctuations normalized to the spectrum of thermal noise \(S^{0}_{0}(\omega)\) (black line) are plotted as a function of \(\omega/\kappa\) for a mCP charge of \(q=10^{-5}e_{0}\), a static ring field of \(\mathcal{E}_{\rm x}\simeq 7.25\times 10^{10}\) V/m, a cavity field detuning of \(\Delta_{0}=0.8\kappa\), a cavity length of \(\mathcal{L}=1\) cm, an input power of the trapping laser at \(\mathcal{P}=1.0\) mW, and a cavity finesse of \(\mathcal{F}=50000\).
In clamped cantilevers and similar cases, where the mechanical oscillation frequency is independent of the detuning, entanglement is usually observed by fixing the effective detuning
at the Stokes sideband. In the scenario here considered, for any value of the mCP charge \(q\), \(\omega_{\rm m}\) depends on the detuning \(\Delta_{0}\) and on \(x_{\rm s}\). Moreover, the position of the droplet's CoM \(x_{\rm s}\) depends on the detuning, the ring charge \(Q\), and the shift constant \(C_{0}\). Hence, one has to choose the best strategy to achieve better results for the entanglement. Considering all of that one obtains the best choice for the mCP charge \(q=10^{-5}e_{0}\) (\(Q\simeq\,3.25\) C ) and an electrostatic field \(\mathcal{E}_{\rm x}=2.5\times 10^{11}\) V/m gives a maximum \(E_{n}\simeq 0.2\) at \(\Delta_{0}\simeq 0.3\kappa\) as shown in Fig. 2[37]. The results obtained here are for the entanglement inside the cavity. However, as it was already shown [47; 49], by appropriately filtering the output light, the stationary entanglement between the internal mechanical mode and the output optical mode is higher at a selected output frequency. Recently, this "long-standing prediction" was considered in Ref. [50], where it is shown that in an appropriately pulsed regime, it is experimentally obtainable beyond the resolved sideband regime.
The strategy chosen is important in obtaining the entanglement, but one is limited to consider \(\epsilon>10^{-5}-10^{-6}\) because of the need for a very strong electrostatic field even at these values of the NS charge. Nevertheless, one could choose, for example, \(\epsilon=10^{-8}-10^{-11}\) and \(\Delta(x_{\rm s})\neq\omega_{\rm m}\). In this case, one obtains \(x_{\rm s}\) from the transcendental Eq. (5), and there is no more entanglement. However, one can still obtain an extremely feeble squeezing for the same value of the ring charge \(Q\) as in the case of \(\epsilon=10^{-5}\), as shown in [37].
To obtain these results, assuming that every other possible noise or loss can be perfectly controlled and eliminated, the electrostatic field \(\mathcal{E}_{\rm x}(x_{\rm s})\) should be as high as possible, but field emission from a rounded tip usually occurs at about \(10^{10}\)V/m in electron microscopy [51]. Therefore, the smoothness of the metal ring must be considered, avoiding protruding parts and sharp points. It is also necessary to work at very low pressure on the order of \(10^{-10}-10^{-8}\) Torr while keeping the system stable, and this may be difficult to achieve at room temperature as considered here. However, this could be achieved at liquid Helium temperature where the Markov condition still holds, and similar results are obtained. Therefore, an electrostatic field value of this magnitude, or perhaps one or two orders of magnitude greater, could be adequately taken.
Despite all the possible difficulties, what has been shown is that, for values of the mCP not smaller than \(q=10^{-6}e_{0}\) the complete answer to the question posed in the title is _Yes_, if bound to matter mCPs exist, one can obtain quantum effects, i.e. squeezing on the output light and feeble entanglement between the internal light and the mechanical CoM mode, which are only due to the existence of the charged dark matter particle. For smaller mCP charge \(q=\epsilon e_{0}\), at least down to \(\epsilon\simeq 10^{-11}\) for \(\Delta(x_{\rm s})\neq\omega_{\rm m}\) there is no more entanglement and the squeezing is vanishingly small, but the output light does not have a spectrum of the thermal noise. There will be a very narrow frequency peak [37]. Whence one might expect to obtain a similar effect considering other possible very weak interactions of dark matter with our measurable world.
Acknowledgements. P. Tombesi. would like to thank his wife Rita for her patience during Covid19 lockdown, when this work started, and Peter Zoller for his much appreciated clarifications. M. Asjad has been supported by the Khalifa University of Science and Technology under Award No. FSU-2023-014.
|
2309.12916 | Meso-scale size effects of material heterogeneities on crack propagation
in brittle solids: Perspectives from phase-field simulations | Brittle solids are often toughened by adding a second-phase material. This
practice often results in composites with material heterogeneities on the meso
scale: large compared to the scale of the process zone but small compared to
that of the application. The specific configuration (both geometrical and
mechanical) of this mesoscale heterogeneity is generally recognized as
important in determining crack propagation and, subsequently, the (effective)
toughness of the composite. Here, we systematically investigate how dynamic
crack propagation is affected by mesoscale heterogeneities taking the form of
an array of inclusions. Using a variational phase-field approach, we compute
the apparent crack speed and fracture energy dissipation rate to compare crack
propagation under Mode-I loading across different configurations of these
inclusions. If fixing the volume fraction of inclusions, matching the inclusion
size to the K-dominance zone size gives rise to the best toughening outcome.
Conversely, if varying the volume fraction of inclusions, a lower volume
fraction configuration can lead to a better toughening outcome if and only if
the inclusion size approaches from above the size of the K-dominance zone.
Since the size of the K-dominance zone can be estimated \textit{a priori} given
an understanding of the application scenario and material availability, we can,
in principle, exploit this estimation to design a material's mesoscale
heterogeneity that optimally balances the tradeoff between strength and
toughness. This paves the way for realizing functional (meta-)materials against
crack propagation in extreme environments. | Liuchi Li, Jack Rao, Todd Hufnagel, KT Ramesh | 2023-09-22T15:07:19Z | http://arxiv.org/abs/2309.12916v3 | # Meso-scale size effects of material heterogeneities on crack propagation in brittle solids
###### Abstract
Brittle solids are often toughened by adding a second-phase material. This practice often results in composites with material heterogeneities on the meso scale: large compared to the scale of the process zone but small compared to that of the application. The specific configuration (both geometrical and mechanical) of this mesoscale heterogeneity is generally recognized as important in determining crack propagation and, subsequently, the (effective) toughness of the composite. Here, we systematically investigate how dynamic crack propagation is affected by mesoscale heterogeneities taking the form of an array of inclusions. Using a variational phase-field approach, we compute the apparent crack speed and fracture energy dissipation rate to compare crack propagation under Mode-I loading across different configurations of these inclusions. If fixing the volume fraction of inclusions, matching the inclusion size to the K-dominance zone size gives rise to the best toughening outcome. Conversely, if varying the volume fraction of inclusions, a lower volume fraction configuration can lead to a better toughening outcome if and only if the inclusion size approaches from above the size of the K-dominance zone. Since the size of the K-dominance zone can be estimated _a priori_ given an understanding of the application scenario and material availability, we can, in principle, exploit this estimation to design a material's mesoscale heterogeneity that optimally balances the tradeoff between strength and toughness. This paves the way for realizing functional (meta-)materials against crack propagation in extreme environments.
keywords: Dynamic fracture, Crack speed, Fracture energy, K-dominance zone, Inclusion configuration +
Footnote †: journal: Journal of the Mechanics and Physics of Solids
## 1 Introduction
Structural materials such as glass and ceramics find applications across multiple industrial sectors, from aerospace to defense, where extreme environments (such as high-speed impact) are often encountered. However, these strong and lightweight materials are often brittle, and this creates significant concerns regarding product reliability in safety-critical applications. Thus, it has been a common practice to toughen a brittle material by adding a second material phase to
alter the fracture behavior. Such approaches often result in mesoscale material heterogeneities: the heterogeneity length scale is large compared to that of the process zone, but small compared to that of the application (which gives rise to "extrinsic toughening mechanisms" [1, 2]). As such, this practice enjoys a very high-dimensional design space in terms of not only what material to choose as the second phase but also how to configure this second phase spatially with respect to the base material phase, given the significant separation of length scales.
The biggest challenge, therefore, lies in how we effectively explore this design space (in terms of choosing materials and their geometrical configurations) to minimize the well-known tradeoff between strength and toughness [1, 3]. For instance, adding a more compliant or tougher phase can help arrest a propagating crack, thereby toughening the material. However, such inclusions can also lead to a decrease in the overall stiffness and strength. Historically, this design space has been explored by focusing on the choice of the second phase material. Examples include crystallized ceramic inclusions in an LS\({}_{2}\) glass matrix [4] and TiC particles in a SiC ceramic matrix [5]. The issue is that, partly due to synthesis and processing limitations, the resulting geometrical configuration of the material heterogeneity is usually stochastic and not well-controlled, leaving the geometrical aspect (a considerable portion of this design space) largely unexplored.
Recently, advances in manufacturing techniques allow the precise control of a material's structure across scales, making it possible to systematically explore the geometric aspect of this design space. There are particularly promising opportunities within the context of mechanical "meta-materials"1 which have already demonstrated novel mechanical properties (such as a high stiffness-to-density ratio [6, 7], with chiral character [8], and being (re-)programmable [9, 10, 11]) that are not found in conventionally manufactured materials. [12, 13] provide comprehensive introductions to this topic. An emergent research area is that of studying the fracture resistance of these meta-materials [14, 15, 16, 17, 18], particularly the class of meta-materials containing arrays of compliant inclusions (which often take the form of voids [19, 20, 21]). These studies recognized that the geometrical configuration of these inclusions (mainly size and spacing) could alter crack propagation behaviors [2, 19]. It has been argued that the size of the K-dominance zone plays an important role in this regard [2]. A properly designed configuration can thus lead to attaining high toughness [21, 22], and even directional asymmetry toughness when designing inclusion shape comes into play as well [23].
Footnote 1: The terms “metamaterials” and “architected materials” are often used interchangeably in the literature.
However, a systematic and quantitative understanding is still lacking with regard to the mesoscale size effects of material heterogeneities on dynamic fracture. Many important questions remain unanswered, especially from a physics-guided design perspective. For instance, why should we choose a specific inclusion size over another? Does changing the inclusion size necessitate a change of inclusion spacing to maintain a "sweet spot" design? Lastly, how important are the material properties of the inclusions? These questions are also relevant for minimizing the undesired effects of these inclusions (for instance, those in the form of voids) on the overall material stiffness and strength, which are especially important for applications in extreme environments.
Here, we systematically quantify the mesoscale size effects of material heterogeneities on fracture propagation in a brittle composite (or meta-) material. Recognizing that fracture
propagation in a medium with mesoscale heterogeneities can give rise to considerable inertia effects [24; 25; 26; 27], we use a dynamic phase-field approach [28] for numerical simulation. This approach enables us to calculate fracture energy dissipation rate and crack speed at every instant. Similar to previous studies [2; 29], we consider (tougher) inclusions as mesoscale material heterogeneities, and we model them and the base medium as separate continuum solids that homogenize out any possible micro-scale (and below) material heterogeneities. Beyond what has been done in prior studies [21; 23], we consider the configuration of these inclusions under both varying and fixed volume fraction settings and different constituent materials. By doing so, we can cover a larger fraction of the design space. We are particularly interested in the possibility of a low-volume fraction design outperforming a high-volume fraction design in toughening a composite material (i.e., lower crack speed and higher fracture energy dissipation rate). The underlying physical interpretations can have practical implications in designing functional materials that optimally balance the tradeoff between strength and toughness [1] (and potentially simultaneously realize low mass densities [14]).
The rest of this paper is organized as follows. In section. 2, we briefly introduce the variational phase-field approach for dynamic fracture simulation; In section. 3, we discuss our numerical model and analysis procedure. In section. 4, we discuss simulation results that demonstrate the relative size interplay between the inclusion and the K-dominance zone. In section. 5, we present a summary and discuss future directions.
## 2 Variational phase-field approach to fracture
Phase-field modeling provides a mathematical framework that is widely used to describe physical systems, especially those with evolving interfaces far from equilibrium (fracture propagation in solids is a typical example). Since its first introduction in the context of solidification and phase transition [30], it has been adapted to modeling many other phenomena such as multiphase flow [31; 32], collection cell dynamics [33; 34], and material failures [35; 36; 37], to name a few. At the core of phase-field modeling is a mathematical description of a physical system's energy (density) \(\gamma_{\ell}\), a quantity associated with the particular physical field of modeling interest. This is done using a scalar field \(\phi\in[0,1]\) that varies smoothly in space over a length scale parameter \(\ell\). In the particular case of fracture, we can view \(\gamma_{\ell}\) as a (regularized) fracture energy density over the entire simulation domain, with the length scale \(\ell\) being used to effectively smear out the crack, so that \(\phi=0\) typically indicates intact material and \(\phi=1\) indicates completely damaged material. It has been shown that \(\ell\) determines the threshold for crack nucleation [38], and from a physics standpoint, it can be viewed as a representation of active mechanisms in the process zone [39]. When modeling dynamic fracture problems, we seek to minimize an incremental Lagrange energy functional \(\mathbf{I}_{\ell}\) using the principle of least action (see [40; 28] for a comprehensive discussion relevant to the topic). The functional form is:
\[\mathbf{I}_{\ell}(u,\dot{u},\phi)=\int_{t_{1}}^{t_{2}}\left\{\int_{\Omega} \Big{[}\frac{\rho}{2}|\dot{u}|^{2}-\mathcal{W}^{e}(u,\phi)-G_{\mathrm{C}} \gamma_{\ell}(\phi,\nabla\phi)+\rho b\cdot u\Big{]}\,\mathrm{d}\Omega+\int_{ \partial\Omega}\mathbf{t}\cdot u\mathrm{dS}\right\}\mathrm{dt}, \tag{2.1}\]
under the constraint \(\dot{\phi}>0\), to account for the irreversibility of the fracture process. In Eqn. 2.1, \(u\) is the displacement field, with \(\dot{u}=\frac{\partial u}{\partial t}\) the velocity field, \(\rho\) the material density, \(\phi\) the phase
field parameter indicating the degree of material damage, \(\mathcal{W}^{\rm e}\) the elastic strain energy density, \(G_{\rm C}\) the critical energy release rate (or fracture toughness) [41], \(\gamma_{\ell}\) the (regularized) fracture energy density, \(b\) the gravitational constant, and \(\mathbf{t}\) the surface traction. The form (or degree of complexity) of \(\mathcal{W}^{\rm e}\), \(\Gamma_{\ell}\), and \(G_{\rm C}\) may be problem-specific (e.g., \(G_{\rm C}\) may be anisotropic [42] and \(\gamma_{\ell}\) may be dependent on higher-order terms of \(\phi\)[43] ) and even Eqn. 2.1 can be modified to model ductile instead of brittle fracture [44; 45; 46]. In this work, we restrict ourselves to materials that are isotropic and linear elastic, with rate-independent fracture toughness. We can describe a material by three parameters: Young's modulus \(E\), Poisson's ratio \(\nu\), and fracture toughness \(G_{\rm C}\). Interfacial effects [47] can be included by incorporating a cohesive zone model [48], but we neglect them for the purposes of this work, leaving that to a subsequent effort. We adopt the following form of \(\gamma_{\ell}\), commonly used for modeling brittle fracture [40]:
\[\gamma_{\ell}=\frac{1}{4c_{w}\ell}\left(w(\phi)+\ell^{2}|\nabla\phi|^{2} \right),\text{with}\,c_{w}=\frac{1}{2}\text{ which implies }w(\phi)=\phi^{2}. \tag{2.2}\]
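As a point evaluation, Eq. (2.2) with \(w(\phi)=\phi^{2}\) and \(c_{w}=1/2\) reduces to \(\gamma_{\ell}=\left(\phi^{2}+\ell^{2}|\nabla\phi|^{2}\right)/(2\ell)\); a small numerical helper (ours, purely illustrative, meant to be evaluated at nodal or quadrature points) reads:

```python
import numpy as np

def crack_surface_density(phi, grad_phi, ell, c_w=0.5):
    """Regularized fracture energy density of Eq. (2.2) with w(phi) = phi**2."""
    return (phi**2 + ell**2 * np.sum(np.asarray(grad_phi)**2, axis=-1)) / (4.0 * c_w * ell)
```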
We implement this method using FEM in 2D based on our previous work on modeling multi-body contact mechanics problems [49]. The source code is publicly available at [https://github.com/liuchili/Variational-phase-field-method-for-dynamic-fracture-problem.git](https://github.com/liuchili/Variational-phase-field-method-for-dynamic-fracture-problem.git).
Our implementation finds the stationary solution to Eqn. 2.1 in parallel based on the alternating minimization scheme, utilizing an in-house conjugate gradient solver that runs with OpenMP and MPI. In particular, we use the average acceleration scheme to solve for \(u\). At each time step, we iterate back and forth between \(u\) and \(\phi\) until convergence is achieved. We verified our implementation using the classical Kalthoff-Winkler experiment [50] (see Appendix for details).
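Schematically, the time stepping described above can be summarized as the following staggered loop. This is a hedged sketch in Python, not the released C++ implementation; `solve_momentum` (the implicit average-acceleration Newmark update of \(u\) at fixed \(\phi\)) and `solve_phase` (the minimization over \(\phi\) at fixed \(u\)) stand for the two FEM sub-solves and must be supplied by the user.

```python
import numpy as np

def staggered_step(u, v, a, phi, dt, solve_momentum, solve_phase, tol=1e-6, max_iter=50):
    """One time step of the alternating-minimization (staggered) scheme:
    alternate the displacement and phase-field sub-problems until phi stops
    changing, enforcing irreversibility with respect to phi at the start of the step."""
    phi_start = phi.copy()
    for _ in range(max_iter):
        u_new, v_new, a_new = solve_momentum(u, v, a, phi, dt)
        phi_new = np.maximum(solve_phase(u_new, phi_start), phi_start)  # irreversibility
        if np.linalg.norm(phi_new - phi) < tol * max(np.linalg.norm(phi_new), 1.0):
            return u_new, v_new, a_new, phi_new
        u, v, a, phi = u_new, v_new, a_new, phi_new
    return u, v, a, phi
```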
## 3 Modeling fracture propagation in brittle solids with mesoscale heterogeneities
We use the variational phase-field method discussed above to simulate dynamic crack propagation under Mode-I crack opening mode in the plane-stress condition, using a single-notched three-point bending configuration (with span \(L\), height \(H\), and notch length \(a\)) that is subjected to a constant indentation velocity \(v_{\rm load}\) as shown in Fig. 1(a). We consider a simple case where we represent mesoscale material heterogeneities as square-shaped inclusions. We arrange these inclusions (with a uniform size \(d\)) as a single array (with a uniform spacing \(h\)) along a line aligned with the expected crack propagation path, starting from a distance of \(D_{\rm in}\) from the notch tip (a "buffer zone") and extending to a length of \(L_{\rm in}\), as shown in Fig. 1(b). Next, we introduce a design parameter, the number of inclusions, \(N\), to determine \(h\) and \(d\), considering both varying and fixed volume fraction scenarios. We begin with the scenario of varying volume fraction \(f_{1}=\frac{Nd^{2}}{WL_{\rm in}}\) where we pick \(W=10L_{\rm in}\) (see Fig. 1(b)), which is large enough such that (1) the crack can only interact with a single array of inclusions, and (2) the stress fields developed from any two neighboring inclusion arrays (with a separation distance of \(W=10L_{\rm in}\)) do not interact2. Because of this, we only consider a single array of inclusions right above the notch to reduce simulation expenses. Plugging in \(W=10L_{\rm in}\), we have \(f_{1}=\frac{Nd^{2}}{10L_{\rm in}^{2}}\). We further enforce the following two conditions:
Footnote 2: Note that this can be a conservative estimation.
\[\frac{Nd}{L_{\rm in}}=c\ {\rm and}\ Nh=L_{\rm in},\ {\rm with}\ c=\frac{d}{h}\in[0,1] \ {\rm being\ a\ user\-defined\ constant}. \tag{3.1}\]
For a given value for \(c\) (we fix \(c=0.2\) throughout this work), varying \(N\) provides different geometrical configurations represented by different \((d,h)\) combinations. Keeping \(c\) unchanged allows us to isolate the effect of inclusion size from spacing for different configurations. This design strategy enables varying inclusion size and volume fraction at the same time, with the volume fraction \(f_{1}=\frac{c^{2}}{10N}\), so \(f_{1}\propto N^{-1}\) shown as the blue curve in Fig. 1(c). We also have \(d\propto N^{-1}\) following this condition. Fig. 1(d) shows three different combinations of \((d,h)\) using this design strategy with \(N=3\), \(N=5\), and \(N=15\), respectively. Next, we consider the scenario of fixed volume fraction (denoted as \(f_{2}\)), which we choose to be equal to \(f_{1}\) under the configuration shown with \(N=5\), so \(f_{2}=\frac{c^{2}}{50}\) represented as the green curve in Fig. 1(c). This allows us to again find different \((h,d)\) combinations by varying \(N\). A simple calculation gives the following relation between \((d,h)\) and \(N\) for \(f_{2}=\frac{c^{2}}{50}\):
\[d=\frac{cL_{\rm in}}{\sqrt{5N}},\ {\rm and}\ h=\frac{L_{\rm in}}{\sqrt{5N}}. \tag{3.2}\]
We omit examples for this scenario, but all resulting configurations follow the spacing pattern shown in Fig. 1(b). Lastly, in this work, we choose \(L=32a,H=8a,L_{\rm in}=5a\), and \(D_{\rm in}=a\), where \(a\) is the initial crack length (notch length).
In addition to the geometrical configurations, we also vary the mechanical configurations by selecting different values for \(\alpha\) (representing the elastic contrast) and \(\beta\) (representing the toughness contrast):
\[\alpha=E_{\rm in}/E_{0},\ {\rm and}\ \beta=G_{\rm C,in}/G_{0}, \tag{3.3}\]
where \(E_{\rm in}\) (\(G_{\rm C,\ in}\)) is Young's modulus (toughness) of the inclusion material, with \(E_{0}\) (\(G_{\rm C,0}\)) being that of the base material. In this work, we choose inclusions made from tougher materials with \(\beta\in[1.2,2.4]\), and consider a range of stiffness contrast \(\alpha\in[0.2,1.2]\). We pick glass (a typical brittle material) to be our base material, with \(E_{0}\) and \(G_{\rm C,0}\) values reported in [4], which gives a length scale \(\ell\simeq 0.02a\)[51].
For simplicity, we consider dynamic fracture under a quasi-static loading condition in this work. Note that a quasi-static loading condition means wave propagation associated with the external load happens sufficiently fast; However, this does not imply the material response is quasi-static in the presence of fracture3. Brittle fracture can be fast and can cause considerable material inertia effects [52]. As such, we use a dynamic phase-field formulation to explicitly account for crack speed and any inertia effects associated with a fast-propagating crack tip. To achieve a quasi-static loading condition while maintaining computational feasibility, we use a loading velocity \(v_{\rm load}\ \sim 5\times 10^{-5}v_{R}\) for all simulations where \(v_{R}\) is the Rayleigh wave speed
of the glass. (It is generally recommended \(v_{\rm load}<1\%v_{R}\)[53] to ensure a quasi-static loading condition.) We run all simulations for the same amount of time \(t\) over irregular linear triangular elements that are locally refined within the crack propagation region. Elements within this region all have size \(\delta\simeq\ell/3\), which is small enough to resolve crack evolution with sufficient accuracy [54]. We note that \(\ell\) imposes a length scale on material heterogeneity, below which a crack tip always senses a homogeneous field [2]. As such, for all configurations, we make sure the inclusion size \(d\) is strictly larger than \(\ell\). We also note that this finite-element discretization results in a numerical fracture toughness \(G_{\rm C}^{\rm num}\) whose value is different than the theoretical one \(G_{\rm C}\)[36]:
\[G_{\rm C}^{\rm num}=G_{\rm C}(1+\frac{\delta}{c_{w}\ell}), \tag{3.4}\]
where \(c_{w}\) is the normalization constant presented in Eqn. 2.2. As such, we use \(G_{\rm C}^{\rm num}\) when applying fracture mechanics theories to analyze simulation results. On average, every simulation takes about 8 hours to finish on the Rockfish high-performance computing facility at Hopkins,
Figure 1: (a) The setup of our numerical model: a single-notched three-point bending beam subjected to a constant indentation speed \(v_{\rm load}\). The beam has a span \(L\), a height \(H\), and a notch with length \(a\). (b) The geometrical configuration of a single array of mono-sized (\(d\)) and equally-spaced (\(h\)) inclusions embedded along a line with length \(L_{\rm in}\) which starts at a distance of \(D_{\rm in}\) (as a buffer zone) from the notch tip. The width \(W=10L_{\rm in}\) considered for computing the volume fraction is not drawn to scale. In this particular image, we have the inclusion number \(N=3\). Varying this value will lead to a different geometrical configuration. (c) Variation of the volume fraction of inclusions as a function of \(N\). We consider two scenarios: a varying volume fraction scenario where \(f_{1}\propto N^{-1}\) (blue curve) and a fixed volume fraction scenario where \(f_{2}=f_{1}\) (green curve) using \(f_{1}\) determined from \(N=5\). (d) Visualization of the geometrical configuration of inclusions under the varying volume fraction scenario with \(N=3\), \(N=5\), and \(N=12\).
using 16 MPI tasks with six threads per task. We confirm that for all configurations considered in this work, we observe negligible differences in the crack initiation time and propagation within the buffer zone (whose length is \(D_{\text{in}}\)). This observation makes it straightforward to ensure that our subsequent analysis isolates the effect of material heterogeneity on dynamic crack propagation.
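As a small illustration, the mesh-adjusted toughness of Eq. (3.4) introduced above can be evaluated as follows (a minimal sketch; the default \(c_{w}=1/2\) corresponds to the AT2 model used later in this work).

```python
def numerical_toughness(G_c, delta, ell, c_w=0.5):
    """Eq. (3.4): fracture toughness effectively seen by the discretized
    phase-field model, given the local element size delta and the
    regularization length ell. c_w = 1/2 corresponds to the AT2 model."""
    return G_c * (1.0 + delta / (c_w * ell))
```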
As a representative result, Fig. 2(a) shows the evolution of normalized crack length (\(l/a\)) as a function of normalized indentation displacement (\(v_{\text{load}}t/a\)), where we calculate \(l\) by tracking the crack tip spatiotemporally based on the phase field \(\phi\) (locating the tip of an iso-curve with a threshold value \(\phi=0.85\), see Appendix B for details). Two configurations are included in this figure: a homogeneous one with no inclusions (yellow curve) and a heterogeneous one with five inclusions (green curve, material properties are \(\alpha=0.4,\beta=2.4\)). The crack growth begins
Figure 2: (a) Evolution of normalized crack length \(l/a\) as a function of normalized indentation displacement \(v_{\text{load}}t/a\) for a homogeneous case with no inclusion (yellow curve) and a heterogeneous case with five inclusions (blue curve). The inclusion material has an elastic contrast \(\alpha=0.4\) and a toughness contrast \(\beta=2.4\). The two alternating shaded areas indicate two different kinds of crack tip locations for the heterogeneous case: the areas colored in dark grey indicate that the crack tip is outside an inclusion, and the light grey areas that it is inside one. The (almost) constant crack length within each light grey area indicates the crack is arrested inside the inclusions. (b) A plot similar to (a) shows the variation of normalized instantaneous crack tip speed \(V/v_{R}\), where \(v_{R}\) is the Rayleigh wave speed of the base material. (c) A visualization of the final crack trajectory based on \(\phi\) showing the interaction between the crack and the inclusions for the heterogeneous case. The white cross indicates the identified crack tip, and the white dashed rectangle indicates the region within which \([\langle V\rangle_{l}]_{t}\) and \([\langle G\rangle_{l}]_{t}\) are calculated. (d) A similar visualization to (c) shows the result from the homogeneous case.
simultaneously in both cases, and the evolution of crack length remains identical until leaving the buffer zone. This observation is corroborated by the corresponding (normalized) instantaneous crack tip speed (\(V/v_{R}\) with \(V\) being calculated using \(l\) and \(t\)), as shown in Fig. 2(b). After a crack leaves the buffer zone, it enters the region where inclusions may reside. The shaded areas in Figs. 2(a) and 2(b) with two alternating colors indicate two different regions for crack propagation in the heterogeneous configuration: light grey indicates the region where the crack propagates inside inclusions while dark grey indicates otherwise. We observe that the crack gets (almost) arrested inside inclusions: \(l\) is (almost) constant, and \(V\) is (almost) zero. Figs. 2(c) and 2(d) visualize the final crack patterns for these two configurations, where the white cross marker in each image indicates the identified crack tip location. This observation of the crack propagating through inclusions with fluctuating speed is consistent with experimental studies [25]. We emphasize that the specific crack propagation pattern is size-dependent and material-dependent. In other words, by changing the choice of inclusion size and material, a crack may not propagate through inclusions.
Our main interest is thus to quantify and compare different crack propagation patterns resulting from different inclusion sizes and material choices. We do so using two temporally averaged variables: the apparent crack speed \([\langle V\rangle_{l}]_{t}\) and the apparent fracture energy dissipation rate \([\langle G\rangle_{l}]_{t}\). Here, \(\langle\cdot\rangle_{l}\) denotes the average of a quantity at a given time instant over the crack length \(l\), and \([\cdot]_{t}\) denotes the arithmetic average of a quantity over every time instant \(t\). We do not calculate \(V\) and \(G\) until a crack leaves the buffer zone. As an example, the two dashed black rectangles shown in Figs. 2(c) and 2(d) indicate the regions within which \(V\) and \(G\) are calculated.
Our calculation procedure is as follows. First, at every instant \(t_{i}\), we can calculate the crack length \(l_{i}\) (by tracking the crack tip location), and we can also calculate the fracture energy \(\Gamma^{i}_{\ell}=\int_{\Omega}\gamma^{i}_{\ell}\,\mathrm{d}\Omega\), where \(\gamma^{i}_{\ell}\) is the (regularized) fracture energy density (as a function of \(\phi\)) at \(t_{i}\). This allows us to compute \(\langle V\rangle_{l_{i}}=l_{i}/t_{i}\) and \(\langle G\rangle_{l_{i}}=\Gamma^{i}_{\ell}/l_{i}\). Above, \(i\in[1,N_{t}]\) with \(N_{t}\) being the number of time instants sampled from each simulation. Then, we perform an arithmetic average over \(t_{i}\) to get \([\langle V\rangle_{l}]_{t}=\frac{1}{N_{t}}\sum_{i=1}^{N_{t}}\langle V\rangle_{l_{i}}\) and \([\langle G\rangle_{l}]_{t}=\frac{1}{N_{t}}\sum_{i=1}^{N_{t}}\langle G\rangle_{l_{i}}\). Lastly, for simplicity, we use the following two non-dimensionalized parameters for comparing \([\langle V\rangle_{l}]_{t}\) and \([\langle G\rangle_{l}]_{t}\) across different configurations:
\[\tilde{V}=\frac{[\langle V\rangle_{l}]_{t}}{[\langle V_{0}\rangle_{l}]_{t}}, \text{ and }\tilde{G}=\frac{[\langle G\rangle_{l}]_{t}}{[\langle G_{0}\rangle_{l}]_{t}}, \tag{3.5}\]
where \(V_{0}\) and \(G_{0}\) follow the same definition of \(V\) and \(G\), but they are computed from the homogeneous configuration. Self-evidently, a smaller value of \(\tilde{V}\) and a larger value of \(\tilde{G}\) indicate a better toughening outcome than that from the homogeneous configuration: lower crack speed and higher fracture energy dissipation rate. We report and discuss our results in the following sections.
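For clarity, the averaging defined above can be written as the short sketch below; the arrays are assumed to contain only the instants sampled after the crack has left the buffer zone (the dashed measurement windows in Figs. 2(c) and 2(d)).

```python
import numpy as np

def apparent_quantities(t, l, Gamma):
    """[<V>_l]_t and [<G>_l]_t from sampled times t_i, crack lengths l_i,
    and regularized fracture energies Gamma_i (1D arrays of length N_t)."""
    V = l / t          # <V>_{l_i} = l_i / t_i
    G = Gamma / l      # <G>_{l_i} = Gamma_i / l_i
    return V.mean(), G.mean()

def toughening_metrics(t, l, Gamma, t0, l0, Gamma0):
    """Non-dimensional V~ and G~ of Eq. (3.5); the arrays with subscript 0
    come from the homogeneous reference configuration."""
    V_bar, G_bar = apparent_quantities(t, l, Gamma)
    V0_bar, G0_bar = apparent_quantities(t0, l0, Gamma0)
    return V_bar / V0_bar, G_bar / G0_bar
```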
## 4 Results and discussion
### Identical volume fraction with fixed inclusion material
We discuss results obtained from configurations with identical volume fraction \(f_{2}=f_{1}|_{N=5}\). The resulting inclusion size ranges from \(\sim 0.06a\) to \(\sim 0.26a\). We fix the choice of inclusion
material to be \(\alpha=0.4,\beta=2.4\), i.e., more compliant and tougher inclusions. Fig. 3(a) shows the variation of \(\tilde{V}\) (data in blue) and \(\tilde{G}\) (data in red) as a function of \(d\), where we normalize \(d\) using \(d_{\text{max}}\) (obtained from the configuration with \(N=3\)). Qualitatively, varying \(d\) monotonically leads to non-monotonic variations of \(\tilde{V}\) and \(\tilde{G}\). When \(d\) is on the larger side (larger than \(\sim 0.4d_{\text{max}}\), residing within the green region), decreasing the inclusion size \(d\) leads to better toughening outcomes: \(\tilde{V}\) (slowly) decreases and \(\tilde{G}\) increases. However, the trend is reversed as \(d\) keeps decreasing to be smaller (smaller than \(\sim 0.4d_{\text{max}}\), residing within the orange region). Consequently, a transition point arises corresponding to the best toughening outcome. Since the exact transition location is unclear, we overlap the green and orange regions qualitatively, as shown in Fig. 3(a). We can also observe a similar kind of transition in terms of crack trajectory. Fig. 3(b) shows the final crack trajectory for five different configurations highlighted in Fig. 3(a). When \(d\) is on the larger side (configurations one and two residing in the green region), the crack always goes through inclusions. However, the crack starts deflecting away as \(d\) keeps decreasing to be smaller (configurations three to five transitioning into the orange region). A crack deflecting away means fewer interactions with (tougher and more compliant) inclusions, which may be understood as the cause for increasing \(\tilde{V}\) and decreasing \(\tilde{G}\) in the orange region.
So, from Figs. 3(a) and 3(b), the size of inclusion \(d\) has an effect on crack propagation, and it governs the transition of the crack-inclusion interaction as well as that of the variation of \(\tilde{V}\) and \(\tilde{G}\). We argue that this kind of size effect can be quantitatively explained by an interplay between the size of inclusion and that of the K-dominance zone (denoted as \(D_{K}\)) associated with the inclusion material. The foundation of our argument is the fact that at least for brittle materials that satisfy the _small-scale yielding_ condition [39], a crack evolves by sensing and exploring the stress field inside the K-dominance zone [55; 23], an annulus that sits in between the much-smaller-sized process zone (where inelastic processes prevail) and the boundary condition-dependent zone (where boundary effects prevail), as shown in Fig. 4(a). More specifically, although where a crack decides to evolve depends on the _local_ location
Figure 3: Scenarios with identical volume fractions. (a) Variation of \(\tilde{V}\) (blue square) and \(\tilde{G}\) (red square) as a function of normalized inclusion size \(d/d_{\text{max}}\), where \(d_{\text{max}}\) corresponds to \(d\) determined from \(N=3\). The two data points encompassed by the black dashed rectangle are from the configuration with \(f_{1}|_{N=5}\). Five configurations are highlighted. (b) Visualizations of the final crack trajectory for the five configurations highlighted in (a). Again, the white cross in each image indicates the identified crack tip.
(on the scale of the process zone) at which the stress satisfies \(\sigma=\sigma_{\rm Y}\), precisely where this _local_ location lies depends _non-locally_ on a surrounding area (i.e., on the scale of the K-dominance zone where \(\sigma\propto K_{\rm IC}/\sqrt{r}\)[55]). Above, \(\sigma_{\rm Y}\) is the material's yield strength, and \(\sigma\propto K_{\rm IC}/\sqrt{r}\) is the leading-order term of the solution from Linear Elastic Fracture Mechanics (LEFM) theory. In this expression, \(K_{\rm IC}\) is the critical Mode-I stress intensity factor (a material property given by \(K_{\rm IC}=\sqrt{G_{\rm C}E}\) in the plane-stress condition), and \(r\) is the distance to the crack tip.
Consequently, when a crack faces an inclusion and needs to decide how to proceed, the size of that inclusion with respect to that of the K-dominance zone becomes relevant. If the inclusion is large enough to encapsulate the K-dominance zone (such that the location of \(\sigma_{\rm Y}\) can be determined using solely the inclusion material \(K_{\rm IC,\ in}=\sqrt{\alpha\beta}K_{\rm IC,0}\)), the crack tip senses a homogeneous material field (corresponding to the inclusion material). When the inclusion size is not large enough to encapsulate the K-dominance zone (but still large enough to qualify as mesoscopic), the crack tip senses a heterogeneous material field: part of this field consists of the inclusion and part of this field consists of the base medium. This leads to a stress modulation within the K-dominance zone, which may change the location where \(\sigma=\sigma_{\rm Y}\), and may subsequently cause crack deflection depending on the particular choice of \(\alpha\) and \(\beta\). So in principle, if we have a sense of the size of the K-dominance zone associated with the inclusion material, we should be able to identify more quantitatively the transition location observed qualitatively from Fig. 3(a).
Estimating the size of the K-dominance zone is a complex and challenging process. From a theoretical point of view, the size of the K-dominance zone depends on both the length scale of the crack and that of the application (e.g., size and geometry). Ideally, for a crack propagating steadily in a large enough homogeneous medium, the size of the K-dominance zone will not change; however, this is no longer true when a crack evolves to be close enough to the application boundary. Further, non-uniform crack tip motion (which commonly occurs when crossing heterogeneity boundaries) can play a role. It is well understood that it alters the quasi-static solution by scaling through the instantaneous crack tip speed [24]. Here, we use some simplifications to estimate \(D_{K}\) as a first-order approximation. First, we assume negligible changes in the K-dominance zone size for our configurations. Accordingly, we estimate \(D_{K}\) when a crack initiates from the notch tip. Second, we neglect the crack speed effect on the K-dominance solution, meaning the quasi-static formulation is adopted. Finally, following this quasi-static approximation, we normalize the inclusion material property to be \(E_{\rm in}=E_{0},G_{\rm C,\ in}=\alpha\beta G_{\rm C,0}\) to estimate the size of the K-dominance zone. We will also use this normalization to study other inclusion materials with different \(\alpha\) but identical \(\alpha\beta\) (i.e., identical \(K_{\rm IC,\ in}\)). We discuss relevant results in Section 4.3. Fig. 4(b) shows the spatial distribution of \((\sigma_{xx}+\sigma_{yy})/E_{0}\) at the moment of crack initiation, under the same loading condition for a homogeneous configuration with \(E=E_{0},G_{\rm C}=0.96G_{\rm C,0}\) (since \(0.4\times 2.4=0.96\)). The black cross in Fig. 4(b) represents the crack tip identified based on the value of the phase field \(\phi\). We extract stress along the crack propagation direction and denote the distance to the crack tip as \(r\).
To estimate \(D_{K}\), we overlay the extracted stress distribution with that calculated from the quasi-static near-tip field solution \(\sigma_{xx}+\sigma_{yy}=2K_{\rm IC}/\sqrt{2\pi r}+\text{H.O.T.}\), in a way similar to [56]. We emphasize that this estimation should be done in a way that makes sense in the context of (phase-field) finite-element simulations. Consequently, this will give a \(D_{K}\) value that differs from the theoretical one, as shown qualitatively in Fig. 4(c) and in comparison with Fig. 4(a). There are two reasons behind this difference. First, finite-element discretization introduces an approximation that becomes especially pronounced near the crack tip, i.e., how well a simulation resolves the elasticity depends on the fineness of the mesh near the crack tip. This can lead to a larger deviation from the theoretical prediction of elasticity as we move closer to the crack tip. Second, phase-field regularization introduces an additional approximation to the (theoretically) sharp crack tip. This regularization will lead to a stress drop, as shown in Fig. 4(c) as we approach the numerically-identified crack tip (i.e., \(\phi\to 1\)). Such a stress drop may be interpreted to account for mechanisms active inside the process zone, which is missing from the elastic model considered in our simulations. From a different perspective, as \(\ell\to 0\), the material yield strength goes to infinity, consistent with LEFM theory and \(\Gamma\)-convergence arguments [51]. The pattern of stress drop depends on the specific model used to express the fracture energy density term shown in Eqn. 2.2. For instance, we use the so-called AT2
Figure 4: (a) A schematic showing the variation of stress \(\sigma\) as a function of the distance \(r\) to the crack tip, based on either the LEFM theory (blue curve) or the real response of a brittle material (red curve). Three regions can be identified. Very close to the crack tip is the process zone (colored in yellow), where inelastic material responses prevail, and LEFM theory breaks down. Ahead of the process zone is an annulus called the K-dominance zone, where LEFM theory holds. At the junction of these two zones, the material achieves its yield strength \(\sigma_{Y}\). After the K-dominance zone, boundary effects prevail, and LEFM theory breaks down again. (b) A visualization showing the spatial distribution of \((\sigma_{xx}+\sigma_{yy})/E_{0}\) at the moment of crack initiation for a homogeneous material with \(E=E_{0}\) and \(G_{\rm C}=0.96G_{\rm C0}\). Using the black cross, we indicate the crack tip location and denote the vertical distance away from the crack tip as \(r\). (c) Variation of \((\sigma_{xx}+\sigma_{yy})/E_{0}\) as a function of the normalized distance \(r/a\) extracted from (b). The red squares are from a phase-field simulation, and the blue curve corresponds to the prediction obtained from LEFM theory. The “effective” K-dominance zone has a size \(D_{K}\), which extends from where the stress peaks to where the stress deviates from the LEFM prediction. The PF-regularized zone is where the stress deviates from the LEFM prediction as approaching the crack tip due to a regularization imposed via \(\ell\) (and finite element discretization). The theoretical process zone as well as the K-dominance zone, both discussed in (a), are also shown.
model (\(c_{w}=1/2\)) [51] in this work, which is known to behave differently than the AT1 model (\(c_{w}=2/3\)) [57] (see [58] for a detailed discussion). Taking both into account, we consider an "effective" K-dominance zone which starts from where the stress peaks and ends at the location where the stress deviates from LEFM prediction, as shown in Fig. 4(c). Lastly, to calculate the LEFM prediction, we use the numerical stress intensity factor \(K_{\text{IC}}^{\text{num}}\) based on \(G_{\text{C}}^{\text{num}}\) shown in Eqn. 3.4.
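A possible implementation of this graphical procedure is sketched below. The relative-deviation tolerance used to mark where the profile leaves the LEFM-dominated range is an assumption of the sketch; the paper identifies that point by inspection of Fig. 4(c).

```python
import numpy as np

def estimate_DK(r, sigma_sum, K_ic_num, tol=0.1):
    """Estimate the 'effective' K-dominance zone size from a stress profile
    (sigma_xx + sigma_yy) sampled at distances r ahead of the crack tip.
    The zone starts at the stress peak (end of the PF-regularized zone) and
    ends where the profile deviates from the LEFM prediction
    2 K_IC / sqrt(2 pi r) by more than `tol` (relative)."""
    order = np.argsort(r)
    r, s = np.asarray(r)[order], np.asarray(sigma_sum)[order]
    lefm = 2.0 * K_ic_num / np.sqrt(2.0 * np.pi * r)
    i_start = int(np.argmax(s))                 # stress peak
    rel_dev = np.abs(s - lefm) / lefm
    beyond = np.nonzero((np.arange(r.size) > i_start) & (rel_dev > tol))[0]
    i_end = int(beyond[0]) if beyond.size else r.size - 1
    return r[i_end] - r[i_start]
```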
Fig. 5 shows the result where we replace \(d/d_{\text{max}}\) with \(d/D_{K}\). The transition can now be more quantitatively identified as \(d=D_{K}\): the green region contains configurations satisfying \(d>D_{K}\), and the orange region contains those satisfying \(d<D_{K}\). \(d=D_{K}\) also serves as a good location for the transition into crack deflection, i.e., when \(d<D_{K}\). Although the inclusion size in configuration three is larger than \(D_{K}\), the crack also shows a small degree of deflection, especially when compared to configurations one and two. This can be due to a combination of uncertainties associated with estimating \(D_{K}\) from the \((\sigma_{xx}+\sigma_{yy})/E_{0}-r/a\) plot and inaccuracies associated with estimating \(D_{K}\) assuming a quasi-static fracture condition; the latter may not be very accurate when a crack crosses material heterogeneities due to nonnegligible crack speed effects [24, 25]. Nevertheless, our estimation of \(D_{K}\) is a good first-order approximation. We point out that \(d/D_{K}\) ranges from as small as \(\sim 0.4\) to as large as \(\sim 3\). This range sits between the two extremes: zero and infinity. Suppose \(d\) is much smaller than \(D_{K}\) (approaching zero). In that case, we shall consider these inclusions as sub-meso scale heterogeneities whose contributions can be treated in a homogenized manner. As such, the crack tip effectively senses a homogeneous material field. If \(d\) is much larger than \(D_{K}\) (approaching infinity), we can consider these inclusions as layered heterogeneities that a crack can sense. However, there will not be crack deflections as the K-dominance zone is always encompassed by \(d\) (assuming negligible interfacial effects). Therefore, the mesoscale effects of inclusions only become pronounced when the inclusion size becomes comparable to that of the K-dominance zone. We emphasize again that the reported size effect emerges by assuming a strong connection (e.g., negligible interfacial effects) between the base medium and the inclusion. However, depending on the specific choice of materials, a (weak) interface can arise, leading to a crack deflecting along interfaces instead of into inclusions [47] regardless of the inclusion size. This kind of crack deflection can be more pronounced for interfaces with complex geometries [59], especially in a three-dimensional setting [60]. Incorporating interfacial effects is an exciting research direction we plan to investigate in the near future. Lastly, we note that this size interplay between inclusion and the K-dominance zone is reminiscent of what occurs at a smaller length scale [61]: the material heterogeneity becomes comparable to the size of the process zone, subsequently leading to a significant perturbation of the rupture dynamics.
We conclude by pointing out that the size of the K-dominance zone can thus be viewed as the minimum size for an inclusion to interact effectively with a crack. As a result, if given the same volume fraction of second-phase material, it is more efficient, in terms of toughening a material, to increase the number of inclusions that are available to interact effectively (so long as \(d>D_{K}\)) with a crack. Conversely, if given a varying volume fraction of second-phase material, is it possible to exploit the size interplay between \(d\) and \(D_{K}\) to achieve a better toughening outcome using a lower volume fraction configuration? From a design perspective, such a possibility can be helpful to achieve an optimal tradeoff between strength and toughness, especially for scenarios with limited choices of materials. We explore this possibility in the next section.
### Applying to varying volume fraction with fixed inclusion material
We generate configurations with varying volume fraction \(f_{1}\propto N^{-1}\), with the resulting inclusion size \(d\) ranging from \(\sim 0.06a\) to \(\sim 0.3a\). Compared to configurations studied in the previous section, configurations here will have a smaller inclusion size \(d\) given the same \(N\) satisfying \(N>5\). Conversely, a similar-valued \(d\) corresponds to a configuration with fewer inclusions. We keep the inclusion material unchanged, so \(\alpha=0.4,\beta=2.4\).
Figure 5: A similar plot to Fig. 3(a) but showing the variation of \(\tilde{V}\) and \(\tilde{G}\) as a function of normalized inclusion size \(d/D_{K}\), with \(D_{K}\) identified from Fig. 4(c). It can be observed that \(D_{K}\) quantitatively identifies the transition from the green region (\(d>D_{K}\)) to the orange region (\(d<D_{K}\)).
Fig. 6(a) shows the variation of \(\tilde{V}\) (blue square) and of \(\tilde{G}\) (red square) as a function of \(d/D_{K}\), with \(D_{K}\) taken directly from the previous section. Qualitatively, we can make two observations from this figure. First, a lower volume fraction of inclusions can indeed lead to a better toughening outcome, namely, lower (apparent) crack speed (\(\tilde{V}\)) and higher (apparent) fracture energy dissipation rate (\(\tilde{G}\)). Second, such a configuration occurs only when \(d>D_{K}\); as \(d\) transitions into \(d<D_{K}\), a lower volume fraction configuration always leads to a less effective toughening outcome (e.g., \(\tilde{G}\) becoming monotonically decreasing). As an example, configurations one and two have \(d_{1}>d_{2}\) but give \(\tilde{V}_{1}>\tilde{V}_{2}\) and \(\tilde{G}_{1}<\tilde{G}_{2}\); such a relation is nowhere to be found in the orange region where \(d<D_{K}\). Here, \(d=D_{K}\) does not lead to the best toughening outcome. For these configurations, decreasing inclusion size \(d\) simultaneously reduces the volume fraction (or the number of inclusions). As such, compared to configurations from the previous section, fewer inclusions are there to interact with a crack as we approach \(d=D_{K}\) from above. Fig. 6(b) shows the final crack trajectory for six configurations highlighted in Fig. 6(a). We can draw a similar observation by comparing to Fig. 3(b): when \(d>D_{K}\) (configurations one, two, and three), the crack goes through inclusions; when \(d<D_{K}\), the crack starts deflecting away. For configuration three, the crack begins to show a small degree of deflection even though the corresponding \(d>D_{K}\). Again, this may be explained by approximations and uncertainties associated with estimating \(D_{K}\). To conclude, configured appropriately, a lower volume fraction of inclusions can lead to a better toughening outcome if and only if the inclusion size approaches from above the size of the K-dominance zone. Results shown here also substantiate our argument that the size of the K-dominance zone can be viewed as the minimal size for an inclusion to interact effectively with a crack.
### Extending to varying inclusion material with varying volume fraction
In this section, we extend our study to consider the effects of different inclusion materials (or different mechanical configurations of inclusions). We continue with the varying volume fraction setting, i.e., \(f_{1}\propto N^{-1}\). We consider three different kinds of materials using three different values for \(\alpha\beta\): 0.45, 0.96, and 1.92, which lead to three different critical stress intensity factor values: \(K_{\text{IC, in}}=\sqrt{0.45}K_{\text{IC0}}\), \(K_{\text{IC, in}}=\sqrt{0.96}K_{\text{IC0}}\), and \(K_{\text{IC, in}}=\sqrt{1.92}K_{\text{IC0}}\). Within each \(K_{\text{IC, in}}\) value,
Figure 6: Scenarios with varying volume fractions. (a) Variation of \(\tilde{V}\) (blue square) and \(\tilde{G}\) (red square) as a function of \(d/D_{K}\) for configurations with different volume fractions \(f_{1}\propto N^{-1}\). (b) Visualizations of the final crack trajectory from the six configurations highlighted in (a) whose inclusion size and volume fraction increase from the left to the right.
we consider three different combinations of \(\alpha\) and \(\beta\) while keeping \(\alpha\beta\) unchanged. For \(\alpha\beta=0.45\) we consider \(\alpha=0.2,\beta=2.25\), \(\alpha=0.3,\beta=1.5\), and \(\alpha=0.36,\beta=1.25\). For \(\alpha\beta=0.96\) we consider \(\alpha=0.4,\beta=2.4\) (already studied), \(\alpha=0.6,\beta=1.6\), and \(\alpha=0.8,\beta=1.2\). For \(\alpha\beta=1.92\) we consider \(\alpha=0.8,\beta=2.4\), \(\alpha=1.0,\beta=1.92\), and \(\alpha=1.2,\beta=1.6\). Finally, we estimate two more \(D_{K}\) values in the same way described in the previous section using \(E=E_{0},G_{\rm C}=0.45G_{\rm C0}\) and \(E=E_{0},G_{\rm C}=1.92G_{\rm C0}\), respectively.
Fig. 7 shows the simulation results for different combinations of \(\alpha\) and \(\beta\), where the first row
Figure 8: (a) Visualizations of the final crack trajectory for four different configurations highlighted in Fig. 7(b). (b) Similar visualizations to (a) but for configurations highlighted in Fig. 7(d). (c) Similar visualizations to (a) but for configurations highlighted in Fig. 7(f).
Figure 7: Variation of \(\tilde{V}\) (first row) and \(\tilde{G}\) (second row) as a function of \(d/D_{K}\) for different inclusion materials under the varying volume fraction setting \(f_{1}\propto N^{-1}\). Each column corresponds to a different value of \(\alpha\beta\): (a) and (b) in the first column correspond to \(\alpha\beta=0.45\), (c) and (d) in the second column correspond to \(\alpha\beta=0.96\), and (e) and (f) in the third column correspond to \(\alpha\beta=1.92\). Each symbol (triangle, circle, and square) within each column of plots corresponds to a specific choice of \(\alpha\) and \(\beta\), whose values are shown within the plots in the first row.
corresponds to the variation of \(\tilde{V}\) as a function of \(d/D_{K}\) and the second row corresponds to that of \(\tilde{G}\). Two conclusions can be drawn from this figure. First, changing the inclusion material will change \(D_{K}\) (thereby moving where \(d/D_{K}\) is) but will not remove the size interplay between the inclusion and the K-dominance zone. Indeed, in the green region where \(d>D_{K}\), we can always find a configuration with a lower inclusion volume fraction that gives a better toughening outcome. On the contrary, in the orange region where \(d<D_{K}\), a lower volume fraction always leads to a less effective toughening outcome. Second, the amount of toughening, in terms of the values of \(\tilde{V}\) and \(\tilde{G}\), depends on the choice of \(\alpha\) and \(\beta\). In general, a more compliant (smaller \(\alpha\)) and tougher (larger \(\beta\)) inclusion material will lead to a better toughening outcome, and this is well-documented [47]. We also note that the size interplay between \(d\) and \(D_{K}\) does not necessarily change the crack trajectory, as we have observed. Whether there is a change in the crack trajectory, and if so, the amount of change, also depends on the specific choice of inclusion material (values of \(\alpha\) and \(\beta\)). Fig. 8 illustrates this point by showing the crack trajectories for three inclusion materials. Each subfigure corresponds to one inclusion material and contains the four configurations highlighted in the corresponding panel of Fig. 7 (Figs. 7(b), 7(d), and 7(f), respectively). In Fig. 8(a) where we use \(\alpha=0.3,\beta=1.5\), there is no crack deflection as we cross the transition point \(d/D_{K}=1\) from configuration two to configuration three: the crack always goes through inclusions. In Fig. 8(b), where we change to \(\alpha=0.6,\beta=1.6\), there is a noticeable amount of crack deflection as we cross \(d/D_{K}=1\). Lastly, in Fig. 8(c), where we change again to \(\alpha=1.0,\beta=1.92\), we observe a more considerable amount of crack deflection as we cross \(d/D_{K}=1\).
## 5 Summary
Using a variational phase-field approach, we investigate mesoscale size effects of material heterogeneities on dynamic crack propagation in brittle solids under a quasi-static loading condition. We consider a simple case using a single array of square inclusions to represent mesoscale heterogeneities. We study how altering their geometrical (inclusion size and spacing) and mechanical (inclusion material) configurations can lead to different crack propagation patterns under a Mode-I loading condition. We summarize our main findings below:
* In the context of fracture, there is an interplay between the size of the inclusion and that of the K-dominance zone. Fixing the volume fraction of inclusions and matching the inclusion size \(d\) with the size of the K-dominance zone (\(D_{K}\)) leads to the best toughening outcome. Conversely, varying the volume fraction of inclusions, a lower volume fraction configuration can lead to a better toughening outcome if and only if those inclusions are configured appropriately with their size approaching from above the size of the K-dominance zone.
* The size of the K-dominance zone can therefore be viewed as the minimum size for an inclusion to interact effectively with a crack. Therefore, to toughen a piece of brittle solid, it is more efficient to increase the number of inclusions available to interact effectively with a crack (i.e., inclusion size being no smaller than that of the K-dominance zone).
* Changing the inclusion material can change the size of the K-dominance zone, the amount of toughening, and the crack trajectory (e.g., the amount of crack deflection). However, it does not remove the size interplay between the inclusion and the K-dominance zone.
Our work reveals the connection between the toughening effectiveness and the mesoscale size interplay between \(d\) and \(D_{K}\), thereby opening a venue for the rational design of functional (meta-)materials that optimally balance the tradeoff between strength, stiffness, and toughness.
Looking ahead, we discuss several exciting research directions. The first one is extending to study the scenario where a crack can interact with multiple arrays of inclusions. In particular, when neighboring inclusions are close enough to induce stress modulations within the K-dominance zone, the crack trajectory may change and lead to a different toughening outcome. Experiments from [21] demonstrated that the toughening outcome under a single void array is quantitatively different from that under multiple void arrays and that void-void interactions play an important role in this regard. It will be interesting to check how translatable their observation is to our work, where inclusions are not voids but are made of different materials.
The second one is developing a more systematic and efficient approach to estimate \(D_{K}\): Given an estimation of the application's and (mesoscale) defects' geometry and size, how do we quickly give a reliable estimation on the range of \(D_{K}\) considering crack evolution? Further, how significant is the effect of rapid crack motion on \(D_{K}\)? Previous work has shown that increasing crack speed decreases the process zone size [61], which also implies a possible change to \(D_{K}\). Does such a change of \(D_{K}\) require modifying the optimal design strategy obtained from a quasi-static estimation? In our work, the crack speed is not significant due to a quasi-static loading condition (partly due to our choices of inclusion materials). Therefore, estimating \(D_{K}\) using a quasi-static approximation seems accurate enough from a design perspective. However, the crack speed can fluctuate significantly for dynamic loading conditions and even approach the Rayleigh wave speed when interacting with heterogeneities [26]. In these scenarios, a quasi-static approximation may not hold, and a dynamic formula considering the effect of instantaneous crack tip speed is needed [24; 62]. Moreover, early experiments have suggested the possible lack of the K-dominance zone due to the highly transient nature of the crack tip motion [63; 64] under dynamic loadings, whose implication toward our finding merits further investigation. Studying the variation of \(D_{K}\) as a function of different loading rates (as well as crack speed) can therefore be helpful for applications in extreme conditions such as high-speed impact.
The third one is studying the potential implication of our findings to inclusions taking the form of voids. In these cases, we may consider voids as spacings and the materials in between as inclusions. Using the same void size, a recent study [23] showed that a smaller volume fraction of voids (i.e., a more significant volume fraction of inclusions) leads to a better toughening outcome, which is qualitatively consistent with our findings. It will be, however, interesting to study more systematically and quantitatively the size interplay between void, spacing, and the K-dominance zone associated with the base material.
The last one is considering interfacial effects [47], which are not included in our current numerical model but can become significant depending on the choice of material and inclusion size and shape [59], especially in a three-dimensional setting [60].
## 6 CRediT Authorship Contribution Statement
**L Li**. Conceptualization, Methodology, Software, Formal Analysis, Investigation, Writing - Original Draft, Visualization. **J Rao**. Methodology - Crack tip tracking algorithm, Writing - Appendix B. **TC Hufnagel**. Resources, Supervision, Writing - Review & Editing,
Funding acquisition, Project Administration. **KT Ramesh**. Conceptualization, Resources, Supervision, Writing - Review & Editing, Funding Acquisition, Project Administration.
## 7 Acknowledgements
The authors gratefully acknowledge the financial support provided by the Corning Research and Development Corporation, and for stimulating discussions on glass ceramics with Dr. Jason Harris, Dr. Charlene Smith, and Dr. Xinyi Xu from Corning.
## Appendix A Implementation and verification of our phase-field simulator
Following [40] for modeling fracture in brittle solids, we decompose the elastic strain energy shown in Eqn. 2.1 into a tensile part ("+") and a compressive part ("\(-\)"), with the phase-field acting only on the former:
\[\mathcal{W}^{e}(\epsilon_{ij},\phi)=[(1-k)(1-\phi)^{2}+k]\mathcal{W}^{e,+}(\epsilon_{ij})+\mathcal{W}^{e,-}(\epsilon_{ij}), \tag{A.1}\]
where \(\epsilon_{ij}=(u_{i,j}+u_{j,i})/2\) is the infinitesimal strain tensor, and \(k\) is a user-defined small constant used for numerical convenience, preventing \(\mathcal{W}^{e,+}\) from vanishing as \(\phi\to 1\). To compute \(\mathcal{W}^{e,+}\) and \(\mathcal{W}^{e,-}\), we first calculate the tensile part \(\epsilon^{+}\) and the compressive part \(\epsilon^{-}\) of \(\epsilon\) using spectral decomposition [40]:
\[\epsilon_{ij}=\epsilon_{ij}^{+}+\epsilon_{ij}^{-}, \tag{A.2}\] \[\text{with}\quad\epsilon_{ij}^{+}=\sum_{d=1}^{2}\langle\epsilon^{d}\rangle_{+}n_{i}^{d}n_{j}^{d}, \tag{A.3}\] \[\text{and}\quad\epsilon_{ij}^{-}=\sum_{d=1}^{2}\langle\epsilon^{d}\rangle_{-}n_{i}^{d}n_{j}^{d}, \tag{A.4}\]
where \(\epsilon^{d}\) is the \(d\)-th eigenvalue of \(\epsilon\), \(n^{d}\) is the corresponding eigenvector, \(\langle x\rangle_{+}\) stands for \((x+|x|)/2\), and \(\langle x\rangle_{-}\) stands for \((x-|x|)/2\) with \(|x|\) being the absolute value of \(x\). We can then express \(\mathcal{W}^{e,+}(\epsilon_{ij})\) and \(\mathcal{W}^{e,-}(\epsilon_{ij})\) as the following:
\[\mathcal{W}^{e,+}(\epsilon_{ij})=\frac{1}{2}\lambda\langle\epsilon_{kk}\rangle_{+}^{2}+\mu\epsilon_{kj}^{+}\epsilon_{jl}^{+}\delta_{kl}, \tag{A.5}\] \[\mathcal{W}^{e,-}(\epsilon_{ij})=\frac{1}{2}\lambda\langle\epsilon_{kk}\rangle_{-}^{2}+\mu\epsilon_{kj}^{-}\epsilon_{jl}^{-}\delta_{kl}, \tag{A.6}\]
where \(\lambda\) and \(\mu\) are the Lamé constants that can be determined from the Young's modulus \(E\) and the Poisson's ratio \(\nu\). Applying the principle of least action to Eqn. 2.1 with \(\mathcal{W}^{e}\) expressed using Eqns. A.1, A.5 and A.6, we arrive at the following two governing equations:
\[\sigma_{ij,j}+b_{i}=\rho\ddot{u}_{i}, \tag{A.7}\] \[\left[1+\frac{4c_{w}\ell(1-k)}{G_{\text{C}}}\mathcal{W}^{e,+}\right]\phi-\ell^{2}\phi_{,ii}=\frac{4c_{w}\ell(1-k)}{G_{\text{C}}}\mathcal{W}^{e,+}, \tag{A.8}\]
where \(\sigma_{ij}=\partial\mathcal{W}^{e}/\partial\epsilon_{ij}\). We enforce the irreversible growth condition \(\dot{\phi}\geq 0\) using a strain-history field [40] over the simulation domain:
\[\mathcal{H}(x,t)=\max_{s\in[0,t]}\mathcal{W}^{e,+}\left(\epsilon(x,s)\right)\;\forall\,x\in\Omega. \tag{A.9}\]
Replacing \(\mathcal{W}^{e,+}\) with \(\mathcal{H}(x,t)\) in Eqn. A.8, we then want to solve:
\[\sigma_{ij,j}+b_{i}=\rho\ddot{u}_{i}, \tag{A.10}\] \[\left[1+\frac{4c_{w}\ell(1-k)}{G_{\rm C}}\mathcal{H}\right]\phi-\ell^{2}\phi_{,ii}=\frac{4c_{w}\ell(1-k)}{G_{\rm C}}\mathcal{H}, \tag{A.11}\]
together with the following Neumann boundary conditions (plus any existing Dirichlet boundary conditions):
\[\sigma_{ij}n_{j}=t_{i}\text{ on }\partial\Omega, \tag{A.12}\] \[\phi_{,i}n_{i}=0\text{ on }\partial\Omega. \tag{A.13}\]
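For concreteness, a standalone numpy sketch of the spectral split (Eqns. A.2-A.6) and of the history-field update (Eqn. A.9) is given below; in the actual solver these operations are evaluated at the finite-element quadrature points within the staggered scheme, so this self-contained version is only illustrative.

```python
import numpy as np

def spectral_split(eps):
    """Tensile/compressive split of a symmetric 2x2 strain tensor (Eqns. A.2-A.4)."""
    vals, vecs = np.linalg.eigh(eps)
    eps_p = sum(max(v, 0.0) * np.outer(n, n) for v, n in zip(vals, vecs.T))
    eps_m = sum(min(v, 0.0) * np.outer(n, n) for v, n in zip(vals, vecs.T))
    return eps_p, eps_m

def strain_energies(eps, lam, mu):
    """W^{e,+} and W^{e,-} of Eqns. A.5-A.6 (lam, mu are the Lame constants)."""
    eps_p, eps_m = spectral_split(eps)
    tr = np.trace(eps)
    W_p = 0.5 * lam * max(tr, 0.0) ** 2 + mu * np.sum(eps_p * eps_p)
    W_m = 0.5 * lam * min(tr, 0.0) ** 2 + mu * np.sum(eps_m * eps_m)
    return W_p, W_m

def update_history(H_old, W_p):
    """Strain-history field of Eqn. A.9, enforcing irreversibility."""
    return np.maximum(H_old, W_p)
```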
We solve Eqns. A.10 and A.11 weakly based on a standard finite element discretization and calculation procedure, using the alternating minimization (or staggered) scheme. We verify our implementation using published simulation results of the classical Kalthoff-Winkler experiment [50]. Fig. A.9(a) shows the simulation domain and boundary condition, where we also take advantage of the symmetric nature of the experiment to reduce computational cost. We model the impactor by applying the following velocity to the lower left boundary:
\[v=\begin{cases}\frac{t}{t_{0}}v_{0}&t\leq t_{0},\\ v_{0}&t>t_{0},\end{cases} \tag{A.14}\]
with \(v_{0}=16.5\) m/s and \(t_{0}=1\)\(\mu\)s. The material properties are taken from [28]: \(\rho=8000\) kg/m\({}^{3}\), \(E=190\) GPa, \(\nu=0.3\), and \(G_{\rm C}=2.213\times 10^{4}\) J/m\({}^{2}\). We use \(k=1\times 10^{-12}\) as the small constant used for preventing \(\mathcal{W}^{e,+}\) from vanishing. We model the initial crack as an explicit discontinuity that resembles a sharp wedge. We use \(\ell=3.9\times 10^{-4}\) m and \(t=0.04\)\(\mu\)s. We refine elements around where the crack is expected to propagate and ensure the element size within the refined region satisfies \(\delta<\ell/2\). Figs. A.9(b) and A.9(c) show the temporal evolution of the total elastic energy and dissipated energy obtained from our simulator, respectively. Our results agree well with those extracted from [28] considering multiple element sizes. Fig. A.10 shows the crack trajectory at four different time instants, all of which agree qualitatively with measurements from experiments [50] and simulations [28].
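For completeness, the applied boundary velocity of Eqn. A.14 is a simple linear ramp, e.g.:

```python
def impact_velocity(t, v0=16.5, t0=1.0e-6):
    """Prescribed boundary velocity (Eqn. A.14): linear ramp to v0 = 16.5 m/s
    over t0 = 1 microsecond, constant afterwards."""
    return v0 * t / t0 if t <= t0 else v0
```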
## Appendix B Introduction to our crack tip tracking algorithm
We identify the boundary of a crack using a user-defined phase field value \(\phi_{\rm c}\). We pick \(\phi_{\rm c}=0.85\) in this work. This algorithm finds what can be considered the tip of the boundary (i.e., the crack tip) in four steps (see Fig. B.11): starting from the phase field and mesh data at a particular time step, it first reconstructs the iso-curve, isolates points near the tip, resamples these points, then computes the tip by looking for symmetries in the curvature. The algorithm is efficient and parallelizable, as multiple time steps can be analyzed simultaneously.
The algorithm first constructs the iso-curve from the phase field and mesh data of a given time step. Here, the iso-curve is represented by a list of ordered pairs, \([(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})]\), where each ordered pair represents a point at which the iso-curve intersects an element edge. The algorithm computes this list by looping through all elements and stopping when encountering an element (denoted as \(\#n\)) whose nodal phase field values \(\phi_{n1},\phi_{n2},\phi_{n3}\) satisfy \(\phi_{ni}<\phi_{\rm c}<\phi_{nj}\) for at least one edge of that element. Then, starting from the edge \((n_{i},n_{j})\), it uses knowledge of mesh connectivity to look for the next edge where \(\phi_{ni}<\phi_{\rm c}<\phi_{nj}\). It stores the \((x,y)\) location of where \(\phi=\phi_{\rm c}\) on an edge \((n_{i},n_{j})\) as the next element in the list \([(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})]\). The algorithm terminates when no more adjacent edges satisfying \(\phi_{ni}<\phi_{\rm c}<\phi_{nj}\) can be found. It then travels in reverse to sample the rest of the iso-curve.
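The per-edge interpolation at the core of this construction can be sketched as follows; linear interpolation along an element edge is assumed (consistent with linear triangular elements), and the ordered traversal over adjacent elements and the reverse pass are omitted.

```python
import numpy as np

def element_crossings(coords, phi, phi_c=0.85):
    """Points where the phi = phi_c iso-curve crosses the edges of one linear
    triangular element. `coords` is a (3, 2) array of nodal coordinates and
    `phi` holds the corresponding nodal phase-field values."""
    pts = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        lo, hi = min(phi[i], phi[j]), max(phi[i], phi[j])
        if lo < phi_c < hi:
            w = (phi_c - phi[i]) / (phi[j] - phi[i])
            pts.append((1.0 - w) * np.asarray(coords[i]) + w * np.asarray(coords[j]))
    return pts
```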
From the list \([(x_{1},y_{1}),(x_{2},y_{2}),...,(x_{n},y_{n})]\), the algorithm moves to isolate points on the iso-curve that are near the approximate location of the tip. This step is necessary because the end goal is to obtain the tip location using symmetries in the curvature of the iso-curve. However, this method can easily misidentify the tip in particularly straight cracks (which have a uniform curvature of zero along the sides) or cracks with even curvature throughout. Thus, points of \(\phi=\phi_{\rm c}\) near the crack tip are first found and isolated. This is done by finding the closest vector between two points within some user-defined anti-parallel threshold. Specifically,
Figure A.10: Snapshots obtained from our simulation showing the crack trajectory at four different time instants: \(t=24.8\)\(\mu\)s, \(t=33.8\)\(\mu\)s, \(t=42.8\)\(\mu\)s, and \(t=51.8\)\(\mu\)s.
Figure A.9: (a) Geometry and boundary condition of the simulation domain. (b) Temporal evolution of the system’s kinetic energy obtained from our simulator (red circle) and from [28] using various element sizes (curves colored in black). (c) A plot similar to (b) shows the temporal evolution of the system’s dissipated energy through fracture.
let \(\mathbf{x}_{1}=(x_{i},y_{i})\), \(\mathbf{x}_{2}=(x_{j},y_{j})\) be two points on the iso-curve where \(i<j\). We wish to find the combination of \((i,j)\) such that the quantity \(j-i\) is as small as possible, and that \(\frac{\mathbf{x}_{1}\cdot\mathbf{x}_{2}}{|\mathbf{x}_{1}||\mathbf{x}_{2}|}\approx-1\). In practice, we simply stop the algorithm when \(\frac{\mathbf{x}_{1}\cdot\mathbf{x}_{2}}{|\mathbf{x}_{1}||\mathbf{x}_{2}|}\) is within some range centered at \(-1\). Following this step, an approximate envelope containing the tip can be identified by a new list containing a reduced number of points \([(x_{i},y_{i}),(x_{i+1},y_{i+1}),...,(x_{j-1},y_{j-1}),(x_{j},y_{j})]\). From this new list, the points are resampled with a greater density using linear interpolation and a Gaussian smoothing process (to remove discontinuities in the curvature). We denote the resampled curve as \([(x_{1}^{R},y_{1}^{R}),...,(x_{n_{R}}^{R},y_{n_{R}}^{R})]\), where \(n_{R}\) is the total number of points created in the resampling process. Then, a simple curvature calculation is performed on this curve using numerical differentiation. More specifically, since the points of the iso-curve are in an ordered list, the curvature \(\kappa\) at any point \(i\) can be found by
\[\kappa_{i}=\frac{|(x_{i+1}-x_{i-1})(y_{i+2}-2y_{i}+y_{i-2})-(x_{i+2}-2x_{i}+x_{i-2})(y_{i+1}-y_{i-1})|}{[(x_{i+1}-x_{i-1})^{2}+(y_{i+1}-y_{i-1})^{2}]^{3/2}},\] (B.1)
which is essentially the numerical formulation of the curvature of a pair of parameterized functions in Cartesian coordinates:
\[\kappa=\frac{|x^{\prime}y^{\prime\prime}-x^{\prime\prime}y^{\prime}|}{[(x^{\prime})^{2}+(y^{\prime})^{2}]^{3/2}}.\] (B.2)
This calculation results in a list of curvatures \((\kappa_{3},\kappa_{4},...,\kappa_{n_{R}-2})\), and the algorithm proceeds to look for the index \(i\) where the curvature profile is most symmetrical within some window \(l\). This is done by summing the quantity \((\kappa_{i-k}-\kappa_{i+k})^{2}\) where \(k=1,2,...,l\). Essentially, we compute the difference between a point with an indicial distance \(k\) on the left-hand side of \(i\) and a point
Figure B.11: A schematic demonstrating our crack tip tracking method.
with an indicial distance \(k\) on the right-hand side of \(i\). We square this difference, and we sum this value over all possible values of \(k\) from \(1\) to \(l\). This means computing:
\[\text{Error associated with point }i=\sum_{k=1}^{l}(\kappa_{i-k}-\kappa_{i+k})^{2}.\] (B.3)
The lower this error, the more symmetrical the curve is around the point \(i\). Since we need to use a window of size \(l\) to calculate this error, we do not consider points within an indicial distance of \(l\) from the end of the resampled curve to avoid issues with this calculation. Then, the point with the lowest error is denoted as the crack tip for the considered time step. The actual location of the tip is then found at index \(i+2\) in the non-smoothed version of the resampled curve. We add this 2 because the curvature calculation removed two entries from the indices.
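A minimal sketch of these last two steps (the curvature of Eqn. B.1 and the symmetry error of Eqn. B.3) is given below; the choice of window size \(l\) is left to the user as described above.

```python
import numpy as np

def curvature(x, y):
    """Discrete curvature of Eqn. (B.1), computed for interior points of the
    resampled (smoothed) iso-curve that have two neighbours on each side."""
    i = np.arange(2, len(x) - 2)
    xp = x[i + 1] - x[i - 1]
    yp = y[i + 1] - y[i - 1]
    xpp = x[i + 2] - 2.0 * x[i] + x[i - 2]
    ypp = y[i + 2] - 2.0 * y[i] + y[i - 2]
    return np.abs(xp * ypp - xpp * yp) / (xp**2 + yp**2) ** 1.5

def most_symmetric_index(kappa, l):
    """Index minimizing the symmetry error of Eqn. (B.3); points within an
    indicial distance l of either end are skipped. The tip on the resampled
    curve is this index shifted by the 2 entries dropped in curvature()."""
    best_i, best_err = None, np.inf
    for i in range(l, len(kappa) - l):
        err = sum((kappa[i - k] - kappa[i + k]) ** 2 for k in range(1, l + 1))
        if err < best_err:
            best_i, best_err = i, err
    return best_i
```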
|
2303.17835 | Improved Difference Images for Change Detection Classifiers in SAR
Imagery Using Deep Learning | Satellite-based Synthetic Aperture Radar (SAR) images can be used as a source
of remote sensed imagery regardless of cloud cover and day-night cycle.
However, the speckle noise and varying image acquisition conditions pose a
challenge for change detection classifiers. This paper proposes a new method of
improving SAR image processing to produce higher quality difference images for
the classification algorithms. The method is built on a neural network-based
mapping transformation function that produces artificial SAR images from a
location in the requested acquisition conditions. The inputs for the model are:
previous SAR images from the location, imaging angle information from the SAR
images, digital elevation model, and weather conditions. The method was tested
with data from a location in North-East Finland by using Sentinel-1 SAR images
from European Space Agency, weather data from Finnish Meteorological Institute,
and a digital elevation model from National Land Survey of Finland. In order to
verify the method, changes to the SAR images were simulated, and the
performance of the proposed method was measured using experimentation where it
gave substantial improvements to performance when compared to a more
conventional method of creating difference images. | Janne Alatalo, Tuomo Sipola, Mika Rantonen | 2023-03-31T06:57:34Z | http://arxiv.org/abs/2303.17835v2 | # Improved Difference Images for Change Detection Classifiers in SAR Imagery Using Deep Learning
###### Abstract
Satellite-based Synthetic Aperture Radar (SAR) images can be used as a source of remote sensed imagery regardless of cloud cover and day-night cycle. However, the speckle noise and varying image acquisition conditions pose a challenge for change detection classifiers. This paper proposes a new method of improving SAR image processing to produce higher quality difference images for the classification algorithms. The method is built on a neural network-based mapping transformation function that produces artificial SAR images from a location in the requested acquisition conditions. The inputs for the model are: previous SAR images from the location, imaging angle information from the SAR images, digital elevation model, and weather conditions. The method was tested with data from a location in North-East Finland by using Sentinel-1 SAR images from European Space Agency, weather data from Finnish Meteorological Institute, and a digital elevation model from National Land Survey of Finland. In order to verify the method, changes to the SAR images were simulated, and the performance of the proposed method was measured using experimentation where it gave substantial improvements to performance when compared to a more conventional method of creating difference images.
change detection · Sentinel-1 · SAR · U-Net · mapping transformation function · remote sensing
## 1 Introduction
Remote sensing change detection can be used for many purposes, such as damage assessment after a natural disaster [1, 2, 3], detection of forest damages after a storm [4, 5], and monitoring deforestation and glacier melting [6, 7], to name only a few. Change detection works by comparing two images that have been captured at different dates in the same geographical location and finding the areas that have changed during the time between the acquisitions [8]. Different platforms can be used to image the terrain, such as airplanes and satellites; however, only satellites provide the advantage of continuously monitoring the whole planet [9]. The revisit time of some satellite systems can be as short as a few days, and the images are available from anywhere on the planet. This makes satellite images a useful source of remote sensing data for change detection applications. Some space agencies, such as the European Space Agency (ESA), provide some of the satellite images for anybody to download and use [10]. The ease of acquiring the data further facilitates the development of change detection systems that are based on satellite remote sensing techniques. The
images from the satellites are captured using either optical or radar sensors, with radar having the advantage of piercing the cloud layer, thus enabling it to work in various weather conditions [9]. However, the radar satellites have their disadvantages as well. The resolution of the images is not as good as what the optical instruments can produce. The resolution of the radar images is defined by the antenna length and the frequency band of the radar signal. To enable higher resolution images, the satellites use the synthetic aperture radar (SAR) technique, where the satellite movement over the ground is utilized to synthesize a virtual aperture that is longer than the physical antenna on the satellite [11]. However, even with the SAR technique the radar images are of lower resolution when compared to the optical images. ESA has the Sentinel-1 mission with two SAR satellites that operate on the C-band and have a spatial resolution of around \(5\times 20\) meters [12]. Likewise, speckle noise reduces the quality of the SAR imagery: the images always have a grainy look from the speckle, which is random noise present in every acquisition. Despite these shortcomings, SAR images are commonly used in remote sensing change detection [13, 14, 15, 16].
One approach to implementing a change detection system, generally used in unsupervised change detection, is to proceed in steps [17]. Figure 1 illustrates this method. The images are first preprocessed to make them comparable with each other. Then, two images from the same location, captured at different times, are used to produce a difference image (DI) using an algebraic operation like subtraction, ratio, or log ratio. Finally, the DI is analysed by a classifier algorithm to produce a change map that indicates the changed regions. The preprocessing step is crucial for this method to work well. The issue with the speckle noise is a commonly recognized problem with change detection on SAR imagery [13, 15, 16], and to mitigate the issue, noise suppression algorithms are used in the image preprocessing step. However, it is impossible to remove the noise completely, thus the DI also includes noise that causes misclassifications in the classification step. Likewise, other image properties that influence the image comparability have an effect on the quality of the DI. This includes properties such as the satellite orbit direction, incidence angle, and ground moisture content. The satellite does not capture the image from the same angle during every revisit. In the case of the ESA Sentinel-1 satellites, the satellite can be flying from North to South, or from South to North, during the image acquisition, and the satellite orbit can be higher or lower with respect to the horizon from the ground perspective between the overflies. The satellite imaging angle influences how the radar signal backscatters from the ground features [18], so images taken from different imaging angles are likely to produce a lower quality DI than images taken from the same imaging angle. Likewise, ground weather conditions can influence the DI quality. Soil moisture content changes the dielectric constant of the soil, thus changing the backscatter intensity of the radar signal [19]. Images that are taken in similar weather conditions are likely to produce a better quality DI when compared to images that are taken in different weather conditions. One solution to improve the DI quality is to favour images with similar acquisition conditions when selecting the images that are used to produce the DI. However, this is not always possible.
In this paper we introduce a new method of producing better quality difference images by using a neural network-based mapping transformation function as a preprocessing step that factors in the image acquisition conditions of the SAR images, thus making the SAR images more comparable. Existing research about SAR image preprocessing has focused on removing speckle noise from the images [20, 21], or correcting the incidence angle variation [22, 23]. However, to the best of the authors' knowledge, this is the first time that the comparability of SAR images is improved by taking into account the overall image acquisition conditions using a neural network-based preprocessing step. Project code is available on GitHub 1.
Footnote 1: [https://github.com/janne-alatalo/sar-change-detection](https://github.com/janne-alatalo/sar-change-detection)
## 2 Materials and Methods
### Proposed Method
Figure 2 illustrates the overall architecture of the proposed method. It replaces the image differencing step of the conventional method that is illustrated in Figure 1. The idea of the proposed method is to improve the SAR image comparability by considering the acquisition conditions of the SAR images. The proposed method utilizes a mapping transformation function that creates artificial SAR images in the requested acquisition conditions. The mapping transformation function \(\mathcal{F}\) is a neural network model that is trained to predict the SAR image at the time \(t\) (\(I_{t}\)). The neural network output \(\hat{I}_{t}\) is the artificial SAR image that is created in the acquisition conditions of \(I_{t}\); therefore it should be more comparable to the \(I_{t}\) than previous SAR images from the location that might have been captured in different acquisition conditions. The model input consists of three distinct features: the previous SAR images from the location; the acquisition conditions of the SAR images (including those at time \(t\)); and the digital elevation map of the location. The objective of the neural network model is to learn to replicate the SAR image at the time \(t\). The only information from the time \(t\) in the model input is the image acquisition conditions of the \(I_{t}\). This means that for the model to be able to replicate the \(I_{t}\), it needs to learn to map the information contained in the previous SAR images and the digital elevation map to the image acquisition conditions of the \(I_{t}\). With an ideal model that could perfectly replicate the \(I_{t}\), the \(\hat{I}_{t}\) and \(I_{t}\)
would be identical if nothing has changed between the image acquisition of the \(I_{t-1}\) and \(I_{t}\), however the \(\hat{I}_{t}\) would be missing the change if something had changed after the previous image acquisition since the information of the change is not included in the model input data. In practice the SAR images include random noise that is impossible to replicate accurately, and the acquisition conditions are not accurate enough for perfect replication of the \(I_{t}\), therefore the \(\hat{I}_{t}\) only approximates the \(I_{t}\).
The intuitive description of \(\hat{I}_{t}\) is that the neural network-based mapping transformation function produces a prediction of how \(I_{t}\) should look based on previous information about the location and the actual imaging conditions of \(I_{t}\). The produced image \(\hat{I}_{t}\) can be used with the actual image \(I_{t}\) to create the difference image \(\hat{I}_{DI}\) by using a simple algebraic operation like subtraction, ratio, or log ratio. Generating the difference image is the standard method of conducting change detection, especially when using unsupervised methods [17].
Conventional methods of producing the difference image often use only one of the previously captured images together with the most recent image to generate the difference image, e.g. \(I_{DI}=g(I_{t},I_{t-y})\)[24]. This method has the previously discussed drawbacks of noise and imaging conditions affecting the final difference image quality. With the proposed mapping transformation function, the predicted image \(\hat{I}_{t}\) is used in place of the previously captured image to generate the difference image, e.g. \(\hat{I}_{DI}=g(I_{t},\hat{I}_{t})\). The predicted image \(\hat{I}_{t}\) does not contain noise, and the mapping transformation function can correct the acquisition condition mismatch between the images; therefore the proposed method should produce better quality difference images compared to the conventional method.
SAR imaging is sensitive to the soil moisture content of the imaged area [19]. A change in the soil moisture level changes the dielectric constant of the soil, and thereby changes the SAR backscatter intensity. Often the soil moisture changes should be ignored by the change detection system; otherwise, the system would report changes after every rainy day. This is one of the advantages of the proposed method. By adding weather to the model input acquisition condition parameters, the mapping transformation function can learn to construct \(\hat{I}_{t}\) in the actual weather conditions
Figure 1: Change detection is often implemented in three distinct steps. The first step is to make the images more comparable to each other using a preprocessing pipeline. The preprocessed images are then used to create difference images (\(I_{DI}\)) using a function \(g\) that is often an algebraic operation, such as subtraction, ratio, or log ratio. The \(I_{DI}\) is then used as an input to a change detection classifier that produces the change map displaying the changed areas. The figure illustrates the conventional method of producing the difference images by using two SAR images that are captured from the location on two different dates.
of \(I_{t}\) and should correctly model how the changes in soil moisture alter the backscatter intensity. Therefore, the false positive changes that are caused by soil moisture changes are reduced.
In addition to weather, the acquisition condition parameters also include the imaging angle and identify the satellite that captured the image. A location is imaged by one of the Sentinel satellites at an interval ranging from a few days to about a week. The satellite does not capture the image from the same angle every time. The satellite can be in ascending or descending orbit during the image acquisition, and the incidence angle can vary between the overpasses. The ascending or descending orbit changes the look direction of the satellite and thereby has a considerable effect on the resulting image. The Sentinel-1 satellites are right-looking: when the satellite is descending from North to South it is imaging in the direction of West, and on ascending passes it is imaging in the direction of East [25]. Various 3D features, like forest edges, lake banks and hills, are sensitive to the look direction, therefore the imaging angle is an important parameter when computing the difference image. When using an image differencing method where only one previous image is used for the difference image computation, the imaging angle of the most recent image can restrict which previous images can be used to produce the difference image. Seasonal changes, like foliage growth or change in snow cover, mean that the most optimal image for the differencing would be the most recent previous image; however, different imaging angles can limit the usage of the most recent images. This problem is not present with the proposed method. The model input includes \(n\) previous images and their imaging angle information. The model output image \(\hat{I}_{t}\) is produced using the actual acquisition conditions of \(I_{t}\). The model can use the information from all \(n\) input images, despite the input including images from different look directions, and the produced image \(\hat{I}_{t}\) represents an image that is acquired from the same angle as \(I_{t}\).
### Neural Network Architecture
Figure 3 illustrates the architecture of the neural network-based mapping transformation function. The architecture is based on the well-known U-Net neural network architecture [26]. The previous \(n\) SAR images and the digital elevation map (DEM) are stacked to construct the input. The previous images and the DEM are all from the same location. The images are projected to the same resolution, and the pixels across the different images are aligned to match the same geographical position. The U-Net architecture is constructed from encoder and decoder units. The encoder takes the input and compresses the input image stack to the latent space using a set of downsampler blocks that halve the input resolution using convolution layers with stride \(2\times 2\). The encoder stacks enough downsampler blocks so that the input image stack is compressed to \(1\times 1\) resolution in the image height and width dimensions. The image acquisition
Figure 2: Architectural overview of the proposed method. The neural network-based mapping transformation function fuses the information from previous image acquisitions and predicts what the scene should look like at the imaging conditions of \(I_{t}\). The model output image \(\hat{I}_{t}\) and the actual image \(I_{t}\) are used to produce the difference image \(\hat{I}_{DI}\).
conditions vector, which contains the acquisition conditions of the \(n\) input images and the target image, is concatenated to the latent vector. The resulting vector is then fed to the decoder, which decodes the vector back to the dimensions of a normal SAR image, outputting \(\hat{I}_{t}\). The decoder is constructed from upsampler blocks that double the width and height dimensions using transposed convolution layers with stride \(2\times 2\). The decoder has the same number of upsampler blocks as the encoder has downsampler blocks. The number of filters used in the upsampler and downsampler blocks can be configured for every block individually, except for the final upsampler block, which has the same number of filters as the SAR image has bands. The encoder and decoder layers are connected with skip connections that help the model produce the output by not forcing it to pack all the information into the latent vector. Instead, the information can flow from the input to the output by skipping most of the layers in the architecture. This is a standard method in U-Net style architectures.
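A minimal Keras sketch of this encoder-decoder structure is given below. It is our own simplification rather than the authors' released implementation: the filter and kernel sizes follow Table 1, but the normalization layers, activation functions, and the exact wiring of the skip connections and the output layer are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def downsample(filters, kernel_size):
    return tf.keras.Sequential([
        layers.Conv2D(filters, kernel_size, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
    ])

def upsample(filters, kernel_size):
    return tf.keras.Sequential([
        layers.Conv2DTranspose(filters, kernel_size, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
    ])

def build_mapping_model(input_shape=(512, 512, 9), cond_dim=9, sar_bands=2):
    image_stack = layers.Input(shape=input_shape)        # 4 two-band SAR images + DEM
    conditions = layers.Input(shape=(cond_dim,))         # acquisition-conditions vector

    down_cfg = [(64, 4), (128, 4), (256, 4), (512, 4), (512, 4),
                (512, 4), (512, 4), (512, 4), (512, 2)]  # Table 1, downsampler blocks
    up_cfg = [(512, 4), (512, 4), (512, 4), (512, 4),
              (512, 4), (512, 4), (256, 4), (128, 4)]    # Table 1, upsampler blocks 1-8

    x, skips = image_stack, []
    for f, k in down_cfg:                                # 512 -> 1 spatial resolution
        x = downsample(f, k)(x)
        skips.append(x)

    # Inject the acquisition conditions into the 1x1 latent representation.
    cond = layers.Reshape((1, 1, cond_dim))(conditions)
    x = layers.Concatenate()([x, cond])

    for (f, k), skip in zip(up_cfg, reversed(skips[:-1])):  # decode with skip connections
        x = upsample(f, k)(x)
        x = layers.Concatenate()([x, skip])

    # Final upsampler block: as many filters as the SAR image has bands (Table 1, block 9).
    output = layers.Conv2DTranspose(sar_bands, 4, strides=2, padding="same")(x)
    return tf.keras.Model(inputs=[image_stack, conditions], outputs=output)
```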
### Dataset
A dataset is needed for training the neural network-based mapping transformation function. As discussed previously, the mapping transformation function input is composed of the previously taken SAR images, the acquisition conditions of the previous and the most recent SAR image, and the digital elevation map of the location. The objective of the model is to learn to predict the most recent SAR image based on the input; therefore the most recent SAR image is the target in the training dataset. This means that the training dataset does not require any labelled data, making the learning process of the proposed method unsupervised and economical to implement. The dataset can be generated directly from available data sources without needing human labelling of the data.
Figure 3: The neural network architecture for the mapping transformation function. The architecture is based on the well-known U-Net neural network architecture. The image acquisition conditions are injected into the latent vector between the encoder and the decoder.
The SAR images for the dataset were acquired from the ESA Copernicus Open Access Hub [27]. The Ground Range Detected products were used in this study [28]. The images were captured between March 2020 and August 2021 from the area illustrated in Figure 4. All images from the time frame that included the area were downloaded from the Copernicus Open Access Hub. The images were preprocessed using the Sentinel-1 Toolbox from the Sentinel Application Platform (SNAP) [29], by applying the data preprocessing workflow described by Filipponi in [30]. The optional noise filtering step was applied to the dataset using the Refined Lee filter from the SNAP toolkit. The more accurate AUX_POEORB precise orbit files were used in the Apply Orbit File step. The AUX_POEORB files are available 20 days after the image acquisition [31], and since the processing was done in spring 2022, the more accurate orbit files were available for all images. The workflow proposed in [30] uses the SRTM Digital Elevation Database in the Range Doppler Terrain Correction step; however, that database does not cover the area from which the dataset was created, therefore the Copernicus 30m Global DEM, which does cover the area, was used instead. The SNAP toolkit can automatically download the required DEM files during preprocessing, and the Terrain Correction step supports multiple different DEM sources, including the Copernicus 30m Global DEM, thus the change was trivial to implement. The preprocessed images were saved as GeoTIFF files and uploaded to a PostgreSQL 2 database using the PostGIS 3 extension. Using a relational database as the storage backend simplified the dataset generation process since all the data was available in one place and queryable with SQL.
Footnote 2: [https://www.postgresql.org/](https://www.postgresql.org/)
Footnote 3: [https://postgis.net/](https://postgis.net/)
Although the Copernicus 30m Global DEM was used in the SAR image terrain correction preprocessing step, that product was not used as the mapping transformation function input. Instead, we used a more accurate DEM from the National Land Survey of Finland (NLS). NLS provides the DEM in multiple resolutions, of which the most accurate 2m grid DEM was used [32]. The data is open access and distributed under the Attribution 4.0 International (CC BY 4.0) license 4. The DEM was downloaded in GeoTIFF format and uploaded to the same PostgreSQL database as the SAR images.
Footnote 4: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
As discussed before, the image acquisition condition data includes information about the weather at the time the images were captured. This data was acquired from the Finnish Meteorological Institute (FMI), which provides daily weather observations interpolated to a \(1\times 1\) km grid [33]. The interpolation method is described by Aalto et al. in [34]. The data is distributed in NetCDF format and updated once a month. Daily mean temperature, daily precipitation sum, and snow depth data were downloaded for the time range. The daily observations were extracted from the NetCDF files, converted to daily GeoTIFF rasters, and uploaded to the same PostgreSQL database as the SAR images and the DEM.
Figure 4: The dataset was generated from images acquired from the marked area.
The final data samples were created by sampling random locations from the area and random dates from the time range. For the training dataset, the time range was limited to the time before 20th of June 2021, and for the test dataset the time was limited to after that date. The image size was set to \(512\times 512\) pixels, and the number of previous images was set to \(4\). To keep the spatial resolution of the SAR images essentially unchanged, the geographical dimensions of the images were set to \(3\times 3\) km. For each random location and date, the target SAR image \(I_{t}\) was the next SAR image from the location that was available after the date. The input SAR images \(I_{t-4},I_{t-3},I_{t-2},I_{t-1}\) were the SAR images from the four previous acquisitions from the location that were captured before \(I_{t}\). The SAR images and the DEM were queried from the PostgreSQL database, and the rasters were projected to the same projection window with the same \(512\times 512\) resolution and \(3\times 3\) km spatial dimensions using the GDAL library [35]. The gdal.Translate function was used for the projection with the nearest neighbor resampling algorithm. After the projection, all pixels were geographically aligned across all images, and the images could be stacked to construct the input image stack. The Sentinel-1 satellites use the Interferometric Wide swath mode with dual polarization over land areas, thus one SAR image has two bands [12]. That makes the input image stack have \(1+4\cdot 2=9\) channels (the DEM has one channel and every SAR image has two bands/channels).
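The projection and stacking described above could be sketched with GDAL roughly as follows. This is our own illustration: the file names and the example projection window are hypothetical, and the actual dataset generation code in the repository may differ.

```python
from osgeo import gdal
import numpy as np

def project_to_window(src_path, ulx, uly, lrx, lry, size=512):
    """Crop/resample a raster to a common projection window as a (size, size, bands) array."""
    ds = gdal.Translate(
        "/vsimem/window.tif",        # in-memory output
        src_path,
        projWin=[ulx, uly, lrx, lry],
        width=size, height=size,
        resampleAlg="nearest",
    )
    arr = ds.ReadAsArray()           # (bands, H, W) for multi-band, (H, W) for single band
    if arr.ndim == 2:
        arr = arr[np.newaxis, ...]
    return np.moveaxis(arr, 0, -1)

# Hypothetical file names: the DEM plus the four previous two-band SAR acquisitions.
window = (500000.0, 7200000.0, 503000.0, 7197000.0)   # example (ulx, uly, lrx, lry), 3 x 3 km
paths = ["dem.tif", "sar_t-4.tif", "sar_t-3.tif", "sar_t-2.tif", "sar_t-1.tif"]
input_stack = np.concatenate([project_to_window(p, *window) for p in paths], axis=-1)
assert input_stack.shape == (512, 512, 9)    # 1 DEM channel + 4 x 2 SAR bands
```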
The acquisition conditions were composed of the following features:
1. Mean temperature on the acquisition date
2. Snow depth on the acquisition date
3. Satellite orbit direction during the acquisition (Ascending/Descending)
4. Incidence angle
5. Satellite id (Sentinel-1A or Sentinel-1B)
6. Precipitation amounts on the acquisition date and the three previous dates
All features were scalar values from the acquisition date, except for precipitation, which is a vector with values for four different days. Since the moisture content of the soil has a known effect on the signal, and moisture can linger in the soil for a long time, it was decided to include the precipitation amounts from multiple days in the acquisition conditions. Taking the precipitation amounts from the previous \(4\) days was a somewhat arbitrary decision, with the reasoning that the neural network can learn to ignore the precipitation amounts from previous days if they are of no use. The features were flattened to the final vector with dimensionality \(|D|=9\).
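For concreteness, a minimal sketch of assembling this vector is shown below; the ordering of the features and the numeric encodings of the orbit direction and the satellite id are our assumptions, not something specified in the paper.

```python
import numpy as np

def acquisition_conditions(mean_temp, snow_depth, ascending, incidence_angle,
                           satellite_id, precip_4days):
    """Flatten the acquisition conditions into the |D| = 9 vector.

    ascending: 1.0 for an ascending orbit, 0.0 for descending (encoding is an assumption).
    satellite_id: 1.0 for Sentinel-1A, 0.0 for Sentinel-1B (encoding is an assumption).
    precip_4days: precipitation sums for the acquisition date and the three previous days.
    """
    assert len(precip_4days) == 4
    return np.array([mean_temp, snow_depth, ascending, incidence_angle,
                     satellite_id, *precip_4days], dtype=np.float32)
```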
### Experiment Setup
The performance of the proposed method was measured experimentally. The main contribution of this paper is to offer a new strategy for computing the difference image. Existing methods generally use a strategy where the difference image is computed as \(I_{DI}=g(I_{t-y},I_{t})\), where \(g\) is the differencing function, \(I_{t-y}\) is one of the previous images from the location captured at some earlier date, and \(I_{t}\) is the most recent image from the location. The proposed method uses the neural network output \(\hat{I}_{t}\) in place of \(I_{t-y}\) to compute the difference image \(\hat{I}_{DI}=g(\hat{I}_{t},I_{t})\). The mapping transformation function factors in the imaging conditions of \(I_{t}\) when generating \(\hat{I}_{t}\); therefore \(\hat{I}_{DI}\) should be of higher quality compared to \(I_{DI}\). The difference image is generally used further in the change detection system to detect the changes by applying a classifier to the difference image. The classifier outputs a change map indicating the pixels that contain the detected changes. By using an identical classifier to classify the difference images generated by the two different methods and comparing the classification accuracy of the resulting change maps, the quality of the two difference images can be measured.
#### 2.4.1 Change Simulation
The experiment needs a dataset with known changes so that the accuracy of the change detection classifier can be determined. This is a challenge since only a small number of datasets exist for remote sensing change detection, even for optical satellite images [36]. For SAR images there are only a few datasets, such as the ones used in the following publications [37, 38]; however, they consist of only a few SAR image pairs with a hand-labelled change map. Currently there are no SAR datasets large enough for deep learning applications available online [39].
To avoid the problem with the lack of change detection datasets for SAR images, the decision was made to use simulation to add changes to real SAR images. This technique was used by Inglada and Mercier in [40] where they measured the performance of their statistical similarity measure change detection algorithm using simulated changes. The authors used three different methods for change simulation. The techniques were: _offset change_, where the original value was shifted by a value; _Gaussian change_, where the original value was changed by adding zero mean Gaussian noise to the value; and _deterministic change_, where a value was copied from some other location in the image. Likewise,
Cui et al. used change simulation for SAR images when they introduced an evaluation benchmark for SAR change detection algorithms [41]. The change simulation methods in the paper try to replicate changes that are commonly seen in the real world using techniques that correctly resemble the statistical properties of the real world changes. Based on these papers two change simulation methods were devised for this study.
1. _Offset change_: A value is added to the original pixel value. The simulation does not try to replicate any real world change; however, it is trivial to implement, and the offset value can be changed to test different offsets.
2. _First-order statistical change_: The statistical distribution of the change area is converted to the statistical distribution of some other nearby geographical feature. This replicates real world changes more accurately.
Figure 5 illustrates the simulated change methods when applied to an example SAR image. The changes were added to the SAR images by creating a random shape mask and positioning the mask at a random location in the SAR image. The pixel values inside the mask were changed using the selected method. The location of the mask was restricted to forested geographical areas in the SAR image. If the mask location was at a forest edge, the part of the mask that landed outside of the forested area was not changed. The information about different geographical features was acquired from the NLS Topographic Database [42]. The database was also utilized in the first-order statistical change implementation, where the forest area pixel values were changed to follow the statistical distribution of some other geographical feature. The nearest areas of the desired geographical feature type were queried from the database, and the statistical distribution of their pixel values was estimated using a univariate kernel density estimator (KDE) from the statsmodels Python library [43]. A second univariate KDE model was fitted to the pixel values of all forested area pixels in the SAR image. The mapping of the pixel values was implemented using the method of modifying the first-order statistical distribution described in [41]. The change area pixel values were first mapped to a uniform distribution on the interval \([0,1]\) by using the cumulative distribution function (cdf) of the forest area KDE. After that, the inverse cdf of the desired-feature KDE is applied to the uniformly distributed values, thus mapping them to the distribution of the desired geographical feature.
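A minimal sketch of this distribution mapping, using the KDEUnivariate estimator from statsmodels, could look as follows. It is our own illustration; the kernel settings, the interpolation of the cdf and its inverse on the KDE support grid, and the function names are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def map_to_feature_distribution(change_pixels, forest_pixels, target_pixels):
    """First-order statistical change: map forest-distributed pixel values to the
    distribution of another geographical feature via the probability integral transform."""
    kde_forest = sm.nonparametric.KDEUnivariate(np.asarray(forest_pixels, dtype=float))
    kde_forest.fit()
    kde_target = sm.nonparametric.KDEUnivariate(np.asarray(target_pixels, dtype=float))
    kde_target.fit()

    # The cdf of the forest-area KDE maps the change-area values to ~Uniform[0, 1] ...
    u = np.interp(change_pixels, kde_forest.support, kde_forest.cdf)
    # ... and the inverse cdf of the desired-feature KDE maps them to the new distribution.
    return np.interp(u, kde_target.cdf, kde_target.support)
```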
#### 2.4.2 Change Classifier
The quality of the difference images was measured using two different classifiers. The first method is a simple threshold method: a threshold value is chosen, and the pixels are classified as changed or unchanged based on whether the value is greater or smaller than the threshold. This requires that the pixels have scalar values. The scalar valued difference images were produced using the following equations:
\[\hat{I}_{DI}(x,y)=\sqrt{\sum_{b}{(I_{t}(x,y,b)-\hat{I}_{t}(x,y,b))^ {2}}} \tag{1}\] \[I_{DI}(x,y)=\sqrt{\sum_{b}{(I_{t}(x,y,b)-I_{t-y}(x,y,b))^{2}}} \tag{2}\]
In the equations, \(\hat{I}_{DI}\) is the difference image computed using the proposed method, \(I_{DI}\) is the difference image computed using the conventional method, \(b\) is the band, and \(x\) and \(y\) define the pixel location. The different bands are considered as vector dimensions, and the Euclidean length of this vector is used as the value of the difference image pixel. The threshold method was used as an example of an unsupervised classifier algorithm [39]. The performance of the threshold classifiers was measured using the well-known area under the curve (AUC) metric computed from the receiver operating characteristic (ROC) curve. The metrics were computed on the test partition of the neural network mapping function dataset. The \(\hat{I}_{DI}\) and \(I_{DI}\) difference images were computed for every sample in the test dataset, and the pixels from all samples were used to generate the two datasets that were used to compute the ROC curves and AUC metrics.
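A compact sketch of this evaluation is shown below; the function names are ours, the scalar difference image follows Equations (1) and (2), and scikit-learn is assumed for the AUC computation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def scalar_difference_image(i_t, i_ref):
    """Eq. (1)/(2): per-pixel Euclidean norm over the bands of the image difference."""
    return np.sqrt(np.sum((i_t - i_ref) ** 2, axis=-1))

def threshold_classifier_auc(i_t, i_ref, change_mask):
    """AUC of the simple thresholding classifier applied to the scalar difference image.

    change_mask: boolean array marking the (simulated) changed pixels of the sample.
    """
    di = scalar_difference_image(i_t, i_ref)
    return roc_auc_score(change_mask.ravel().astype(int), di.ravel())
```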
The second classifier was the linear support vector classifier (SVC). The support vector classifier was used as an example of a supervised machine learning algorithm. Support vector models work with multidimensional data, therefore the difference images were produced using simple subtraction:
\[\hat{I}_{DI}(x,y,b)=I_{t}(x,y,b)-\hat{I}_{t}(x,y,b) \tag{3}\] \[I_{DI}(x,y,b)=I_{t}(x,y,b)-I_{t-y}(x,y,b) \tag{4}\]
The test dataset from the mapping transformation function training was used to train the classifiers. For each sample, the two difference images were computed, and the pixels from all difference image samples were used to create the two
Figure 5: Example of the two simulated change methods. The SAR images are visualized as RGB images by using the red and green channels for the two bands; the blue channel is set to zero. The offset change is \(-2.5\) dB in image c, which is close to the mean change introduced by the first-order statistical change method in image d.
datasets. The first dataset was generated using the pixels from the \(\hat{I}_{DI}\) samples, and the second dataset was generated using the pixels from the \(I_{DI}\) samples. The two datasets were further divided into train and test datasets with the rule that all pixels originating from one image sample end up on the same side of the split. The train-test split was also identical for both datasets. The datasets were used to train two instances of the classifier and measure their accuracy.
## 3 Results
### Training the Neural Network-Based Mapping Transformation Function
Different neural network parameters were experimented with, and the best results were achieved with the parameters shown in Table 1. Mean squared error was used as the loss function, and AdamW [44] was used as the optimizer. The final training dataset had around \(230,000\) samples, and the training was monitored with a test dataset of around \(9,000\) samples. The neural network architecture was implemented using the TensorFlow deep learning framework [45]. The training was conducted on one NVIDIA V100 GPU with a batch size of \(200\) and a training time of around 30 hours.
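The training configuration can be sketched as follows. This is our own reconstruction from the details above: it assumes a recent TensorFlow version in which `tf.keras.optimizers.AdamW` is available (older versions would need the add-on package), `build_mapping_model` refers to the architecture sketch in Section 2.2, and the dataset objects and the epoch count are placeholders.

```python
import tensorflow as tf

model = build_mapping_model()   # the U-Net sketch from Section 2.2
model.compile(optimizer=tf.keras.optimizers.AdamW(),
              loss=tf.keras.losses.MeanSquaredError())

# train_ds / val_ds are assumed to be tf.data.Dataset objects yielding
# ((image_stack, conditions), target_image) tuples; the paper reports a batch size of 200.
history = model.fit(train_ds.batch(200),
                    validation_data=val_ds.batch(200),
                    epochs=50)   # epoch count is a placeholder, not reported in the paper
```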
Figure 6(c) demonstrates the model performance for one of the test samples. Figure 6(a) shows the real SAR image that the model tries to predict. Figure 6(b) illustrates the difference between the real SAR image and the model output with a heat map where a lighter color indicates a greater error. The predicted image is very close to the real SAR image, except for the lack of noise, which is purely random and impossible for the model to predict. Likewise, the lower right corner of the image has an area with a greater prediction error. The error is located in a lake, therefore it can be a result of waves that are likewise impossible to predict.
The proposed method depends on the mapping transformation function adapting the predicted image \(\hat{I}_{t}\) based on the imaging conditions of \(I_{t}\). To verify that the model genuinely uses the image acquisition conditions to produce \(\hat{I}_{t}\), the model was used to produce outputs with a manually modified imaging condition vector \(D_{t}\). The image pair in Figure 6(d) and Figure 6(f) illustrates model outputs where \(D_{t}\) is modified to have opposite orbit directions. Figure 6(e) illustrates the difference between the images. The lake banks and the upper left corner of the image, where there is a small hill, have large differences between the two generated images. All locations where there are greater differences between the images are 3D features. The Sentinel-1 satellites have different look directions on ascending and descending orbits; therefore, the scattering of the radar signal is different, and the difference is most noticeable on 3D features. Since the differences are so clearly located on the 3D features in the image, the model has clearly factored in the orbit direction when generating the output. This verifies that the imaging conditions are used by the model to produce \(\hat{I}_{t}\) in the imaging conditions of \(I_{t}\).
The same experiment was conducted by modifying the precipitation amounts in Figure 6(g) and Figure 6(i). The difference between the generated images is shown in Figure 6(h). This time the difference between the generated images is focused on the swamp, meadow, and agricultural land areas in the image. The forest areas have only small differences between the images. In forest areas, the radar signal is scattered back by the forest foliage, where moisture does not affect the scattering properties as much as in open areas. In open areas, the radar signal hits the ground, where the soil moisture content is altered more by the rain, thus changing the backscatter intensity. This experiment suggests that the model uses the precipitation information correctly when generating the output image.
| **Block number** | **Downsampler (filter size, kernel size)** | **Upsampler (filter size, kernel size)** |
| --- | --- | --- |
| 1 | 64, 4 | 512, 4 |
| 2 | 128, 4 | 512, 4 |
| 3 | 256, 4 | 512, 4 |
| 4 | 512, 4 | 512, 4 |
| 5 | 512, 4 | 512, 4 |
| 6 | 512, 4 | 512, 4 |
| 7 | 512, 4 | 256, 4 |
| 8 | 512, 4 | 128, 4 |
| 9 | 512, 2 | 2, 4 |

Table 1: Parameters for the neural network architecture. The parameters configure the convolutional layers in the upsampler and downsampler blocks seen in the architectural diagram in Figure 3.
Figure 6: Mapping transformation function outputs with different imaging conditions. Image a is the original SAR image, captured at coordinates \(64.919\) lat, \(28.124\) lon on 7 July 2021. Image c shows the model output \(\hat{I}_{t}\) when it is trying to predict \(I_{t}\). Image b shows the difference between the true image \(I_{t}\) and the predicted image \(\hat{I}_{t}\). Images d, f, g and i are generated by manually modifying the imaging condition vector \(D_{t}\). Image d has an ascending and image f a descending orbit direction; image e shows the difference between the two orbit-direction images. An identical experiment was conducted by varying the precipitation amount in images g and i; image h shows the difference between the images with the different precipitation amounts.
### Identifying the Best Conventional DI Strategy
The conventional method of computing the difference image is to use one of the previous SAR images, captured at some preceding date, together with the most recent image to produce the difference image \(I_{DI}=g(I_{t-y},I_{t})\). There are multiple possible strategies for selecting the previous image. The simplest strategy is to select the image that immediately precedes the most recently captured image. This strategy has the advantage that the least amount of time has elapsed between the images, therefore the number of natural changes, like foliage growth or soil moisture changes, is minimized. However, the problem is that the previous image very likely has a different incidence angle, and it might have been captured from a different orbit direction (ascending/descending). To make sure that we compare the proposed method to the best conventional method, three different previous image selection strategies were compared to identify the best strategy. The threshold classifier was used to compare the quality of the difference images produced using the different strategies. The strategies have different trade-offs between the elapsed time and the imaging angle:
[MISSING_PAGE_POST]
that are not as densely wooded, making this a more realistic representation of real changes in the forest. The mean backscatter intensity change varied from around \(-0.5\) dB to \(-2.5\) dB in the change areas, depending on the sample. Both classifiers have considerably worse performance; however, the proposed method still performs better. The overall poor performance is to be expected with the threshold classifiers: it is the simplest possible classifier, working at the single-pixel level without any kind of visibility into the neighbouring pixels. Furthermore, the changes can be small in the simulated change dataset that is created using the statistical change method.
Figure 8: ROC curve for the two threshold classifiers when applied to the dataset with simulated changes using the offset change method.
Figure 9: ROC curve for the two threshold classifiers when applied to the dataset with simulated statistical changes.
#### 3.3.2 Support Vector Classifier
The experiments were repeated with the SVC model on the same two datasets. The linear kernel SVC implementation LinearSVC from the Scikit-learn library [46] was used to conduct the experiment. The linear kernel SVC was chosen due to the large dataset size; other kernel types were tested, however they did not scale to the large number of samples. The samples were normalized using the Scikit-learn StandardScaler to ease the model convergence. Table 2 displays the results of the experiments. The proposed method is clearly superior to the conventional method in both experiments. The performance on the statistical change dataset is considerably worse compared to the shift change dataset; however, this is to be expected given the similar loss of accuracy in the threshold classifier experiments. This experiment uses supervised learning with a labeled dataset, which should improve the results compared to the threshold classifier. However, the SVC is still a very simple classifier that performs the classification at the pixel level without any visibility into the neighbouring pixels, thus the accuracy scores are mediocre at best. Still, achieving a high accuracy score was not the goal of the experiment. Instead, the experiment compares the accuracies of the two classifiers, and the results from this experiment support the findings from the threshold classifier experiments. The proposed method clearly produces higher quality difference images.
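A sketch of this supervised experiment with scikit-learn is given below. The use of LinearSVC and StandardScaler follows the description above, while the grouped split fraction, the random seed, and the function name are our assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GroupShuffleSplit

def svc_accuracy(pixels, labels, groups, seed=0):
    """Train and evaluate a linear SVC on per-pixel difference-image features.

    pixels: (n_pixels, n_bands) difference-image values, labels: 0/1 change labels,
    groups: id of the image sample each pixel originates from, so that all pixels of
    one sample land on the same side of the split.
    """
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
    train_idx, test_idx = next(splitter.split(pixels, labels, groups))
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(pixels[train_idx], labels[train_idx])
    return clf.score(pixels[test_idx], labels[test_idx])
```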
#### 3.3.3 Model Without the Weather Data
The dataset creation for this project was a major undertaking, which complicates the adoption of the proposed methodology since the model needs to be trained for every location where it is used. The Finnish Meteorological Institute provides the interpolated weather data for the features we used in this study for locations inside the borders of Finland. However, equivalent data sources are not necessarily available in other countries. Therefore, we experimented with how the neural network-based mapping transformation function works without the weather data. The model training pipeline was modified to drop the weather data during training and inference, thus the acquisition conditions consisted only of the incidence angle, the satellite orbit direction, and the satellite id. Figure 10 illustrates the results of the experiment. The experiment used simulated changes with a \(-2.5\) dB shift and exactly the same model hyperparameters as the results illustrated in Figure 8, thus the result is directly comparable. The resulting AUC metric of \(0.83\) is higher than the conventional method at \(0.79\); however, the result is worse than that of the model that has visibility of the weather data, with an AUC metric of \(0.87\). Therefore, we can conclude that the proposed methodology can be used also without weather data, and it achieves a measurable improvement over the conventional method. However, to achieve the best performance, the model requires the weather data in addition to the other imaging condition features.
## 4 Discussion
The experiment results show that the proposed method produces higher quality difference images than the conventional method. Since the output of the proposed method is a difference image, many of the existing change classification techniques may benefit from the method without any modifications. These techniques generally use the conventional method for producing the difference image; however, it is a completely separate step from the classification, and thus could be replaced with the proposed method without changes to the classification step. Some methods do not use the difference image computation step; instead, they accept the two images directly as model input to carry out the classification. Even with these techniques, the usage of the proposed method could be beneficial. In these cases, the earlier image (\(I_{t-y}\)) is replaced with \(\hat{I}_{t}\), thus giving the classification model a better understanding of what the scene should look like in the correct image acquisition conditions.
This study did not experiment with more advanced change detection classifiers, since the simple classifiers were enough to show that the proposed method is better than the conventional method. However, the clear improvement in classification accuracy with the simple methods could indicate that a similar improvement can be achieved with more advanced methods.
The use of simulated changes to measure the performance of the method was a necessary compromise caused by the lack of existing change detection datasets suitable for training the neural network. The simulated changes are not realistic enough to draw a final conclusion about how much the proposed method would improve the change detection
| **Dataset** | **Proposed method accuracy** | **Conventional method accuracy** |
| --- | --- | --- |
| Shift change | \(0.89\) | \(0.81\) |
| Statistical change | \(0.75\) | \(0.70\) |

Table 2: Experiment results for the SVC models.
performance in a real-world application. However, the experiments with the simulated changes indicate a substantial potential for performance improvement.
The downside of the proposed method is that the mapping transformation function is a neural network model that requires a training dataset and a considerable amount of processing power for training. The dataset creation is a complex operation that combines data from multiple data sources. Some of the sources that were used in this study are available only for geographical locations inside Finland, such as the interpolated weather data from the Finnish Meteorological Institute. The model requires training data from the locations where it is used at inference time, which complicates the adoption of the method outside of Finland. However, many of the data sources very likely have equivalents available in other geographical locations, therefore the adoption is not impossible. Even a global training dataset could potentially be constructed, which could make the training of a universal model possible. The recent advances in neural network architectures for natural language processing and image generation have shown that models can learn from impressive amounts of data. The model training is unsupervised, meaning it does not require labelled data, thus the creation of such a dataset could be possible. Our experiment with a model that did not see the weather data in the input shows that the method achieves a measurable improvement over the conventional method even when the model has information only about the imaging angle and the satellite. That data is available in the SAR images when they are downloaded from the ESA open access portal, thus simplifying the dataset creation considerably. However, without the weather data the mapping transformation function cannot generate accurate enough SAR images to achieve the same accuracy metrics as the model with the weather information.
## Funding
This research was funded by the Regional Council of Central Finland/Council of Tampere Region and European Regional Development Fund as part of the _Data for Utilisation - Leveraging digitalisation through modern artificial intelligence solutions and cybersecurity_ (grant number A76982), and _coADDVA - ADDing VAlue by Computing in Manufacturing_ (grant number A77973) projects of Jamk University of Applied Sciences.
## Data Availability
The Sentinel-1 SAR imagery is available to download free of charge from Copernicus Open Access Hub [27]. The weather data is available to download free of charge from Finnish Meteorological Institute [33]. The digital elevation map and topographic database are available to download free of charge from the National Land Survey of Finland open data file download service [32, 42]. Links to the download sites are available in the references. The derived dataset
Figure 10: Threshold classifier ROC when used with mapping transformation function that is trained without the weather data.
that was used to train the neural network and supports the findings of this study can be requested from the authors. The computer code that was used to produce the results is available at [https://github.com/janne-alatalo/sar-change-detection](https://github.com/janne-alatalo/sar-change-detection).
## Acknowledgments
This work uses modified Copernicus Sentinel data 2020-2021. The authors would like to thank Mr. Eppu Heilimo for feedback on the draft manuscript.
|
2308.16859 | Information Theoretically Optimal Sample Complexity of Learning
Dynamical Directed Acyclic Graphs | In this article, the optimal sample complexity of learning the underlying
interactions or dependencies of a Linear Dynamical System (LDS) over a Directed
Acyclic Graph (DAG) is studied. We call such a DAG underlying an LDS as
dynamical DAG (DDAG). In particular, we consider a DDAG where the nodal
dynamics are driven by unobserved exogenous noise sources that are wide-sense
stationary (WSS) in time but are mutually uncorrelated, and have the same
{power spectral density (PSD)}. Inspired by the static DAG setting, a metric
and an algorithm based on the PSD matrix of the observed time series are
proposed to reconstruct the DDAG. It is shown that the optimal sample
complexity (or length of state trajectory) needed to learn the DDAG is
$n=\Theta(q\log(p/q))$, where $p$ is the number of nodes and $q$ is the maximum
number of parents per node. To prove the sample complexity upper bound, a
concentration bound for the PSD estimation is derived, under two different
sampling strategies. A matching min-max lower bound using generalized Fano's
inequality also is provided, thus showing the order optimality of the proposed
algorithm. | Mishfad Shaikh Veedu, Deepjyoti Deka, Murti V. Salapaka | 2023-08-31T17:03:34Z | http://arxiv.org/abs/2308.16859v2 | # Information Theoretically Optimal Sample Complexity of Learning _Dynamical_ Directed Acyclic Graphs
###### Abstract
In this article, the optimal sample complexity of learning the underlying interaction/dependencies of a Linear Dynamical System (LDS) over a Directed Acyclic Graph (DAG) is studied. The sample complexity of learning a DAG's structure is well-studied for static systems, where the samples of nodal states are independent and identically distributed (i.i.d.). However, such a study is less explored for DAGs with dynamical systems, where the nodal states are temporally correlated. We call such a DAG underlying an LDS as _dynamical_ DAG (DDAG). In particular, we consider a DDAG where the nodal dynamics are driven by unobserved exogenous noise sources that are wide-sense stationary (WSS) in time but are mutually uncorrelated, and have the same power spectral density (PSD). Inspired by the static settings, a metric and an algorithm based on the PSD matrix of the observed time series are proposed to reconstruct the DDAG. The equal noise PSD assumption can be relaxed such that identifiability conditions for DDAG reconstruction are not violated. For the LDS with WSS (sub) Gaussian exogenous noise sources, it is shown that the optimal sample complexity (or length of state trajectory) needed to learn the DDAG is \(n=\Theta(q\log(p/q))\), where \(p\) is the number of nodes and \(q\) is the maximum number of parents per node. To prove the sample complexity upper bound, a concentration bound for the PSD estimation is derived, under two different sampling strategies. A matching min-max lower bound using generalized Fano's inequality also is provided, thus showing the order optimality of the proposed algorithm.
## 1 Introduction
Learning the interdependency structure in a network of agents, from passive time series observations, is a salient problem with applications in neuroscience Bower and Beeman (2012), finance Kim et al. (2011), meteorology Ghil et al. (2002), etc. Reconstructing the exact structure with the dependency/causation directions has a wide range of applications. For example, the identification of causation structure among the shares helps in obtaining robust portfolio management in the stock market Kim et al. (2011). Similarly, causal graphs are useful in understanding dynamics and identifying contributing factors of a public epidemic emergency situation Yang et al. (2020).
The structure of directed interactions in a network of agents is conveniently represented using directed graphs, with the agents as nodes and the directed interactions as directed edges. If the underlying graph doesn't have cycles, it is called a Directed Acyclic Graph (DAG). In general, it is not possible to reconstruct the exact structure with the direction of different edges. Instead, in many networks it is possible to retrieve only the Markov equivalence graphs, the set of graphs satisfying the same conditional dependence property, from data without any intervention Ghoshal and Honorio (2018). In applications such as finance Kim et al. (2011), climate science Ghil et al. (2002) etc., the agent states, instead of being temporally independent, can evolve over time due to past directed interactions. Such temporal evolution can be represented by a linear dynamical system (LDS). In LDS, the interaction between agent states is captured by a linear time-invariant function. In this paper, we study the identifiability and present the first sample complexity results for learning a DAG of LDS, which we term as _Dynamical_ DAG or **DDAG**. This is distinguished from _static_ DAG, where the agent states are temporally independent and the DAG does not correspond to temporal dynamics.
### Related Work
**Static DAG Learning:** The problem of obtaining an upper bound on the sample complexity of learning static DAGs goes back twenty-five years Friedman and Yakhini (1996), Zuk et al. (2006). However, tight characterization of optimal rates for DAG learning is a harder problem compared to undirected networks Gao et al. (2022), primarily due to the order identification step. Identifiability conditions for learning static DAGs with linear interactions and excited by equal variance Gaussian noise were given in Peters and Buhlmann (2014). Several polynomial time algorithms have been proposed for static DAG reconstruction using samples of states at the graph nodes; see Ghoshal and Honorio (2017, 2018, 2019); Chen et al. (2019); Gao et al. (2022); Park (2020); Park and Raskutti (2017), and the reference therein. An information-theoretic lower bound on structure estimation was studied in Ghoshal and Honorio (2017). In Gao et al. (2022), it was shown that the order optimal sample complexity for static Gaussian graphical model with equal variance is \(n=\Theta(q\log(p/q))\), where \(p\) is the number of nodes and \(q\) is the maximum number of parents. The authors showed that the algorithm given in Chen et al. (2019) provides an upper bound that matches a min-max lower bound for the number of samples. However, similar results for DDAGs, with underlying temporal directed interaction between agent states, have not been studied, to the best of our knowledge.
**LDS Learning:** Graph reconstruction, in general, is challenging in a network of LDS (directed, undirected, or bi-directed), as it involves time-dependencies between collected samples of nodal states. Learning the conditional independence structure in LDS with independent and identically distributed (white) excitation was explored in Basu and Michailidis (2015), Loh and Wainwright (2011), Songsiri and Vandenberghe (2010), Simchowitz et al. (2018), Faradonbeh et al. (2018), and the references therein. However, the methods in the cited papers do not extend to LDS that is excited by WSS (wide-sense stationary) noise, which makes correlations in state samples more pronounced. For LDS with WSS noise, Tank et al. (2015), Dahlhaus (2000), Materassi and Salapaka (2013), and Materassi and Innocenti (2009), estimated the conditional correlation structure, which contains true edges in the network and extra edges between some two-hop neighbors Materassi and Salapaka (2012). A consistent algorithm for the recovery of exact topology in a large class of applications with WSS noise was provided in Talukdar et al. (2020), with the corresponding sample-complexity analysis developed in Doddi et al. (2022), using a neighborhood-based regression framework. However,
the developed algorithms and related sample complexity results do not extend to directed graphs and hence exclude DDAG reconstruction. DDAG reconstruction from the time-series data has been explored using the framework of directed mutual information in Quinn et al. (2015) but without rate characterization for learning from finite samples.
**Contribution:** This article presents an information-theoretically optimal sample complexity analysis for learning a dynamical DAG (DDAG, i.e., a DAG underlying an LDS excited by WSS noise of equal power spectral density), using samples of state trajectories of the corresponding LDS. To the best of our knowledge, this is the first paper to study and prove sample complexity analysis for DDAGs. We consider learning under two sampling scenarios, viz; 1) restart and record, 2) continuous sampling. While the former pertains to samples collected from multiple disjoint (independent) trajectories of state evolution, the latter includes samples from a single but longer trajectory of the state evolution (see Fig. 2). Surprisingly, the results in this article show that the estimation errors are not influenced by the sampling strategy (restart and record or continuous) as long as the collected samples are over a determined threshold given by \(n=O(q\log(p/q))\) is obtained, where \(p\) is the number of nodes and \(q\) is the maximum number of parents per node. We also provide a matching information-theoretic lower-bound, \(\max\left(\frac{\log p}{2\beta^{2}+\beta^{4}},\frac{q\log(p/q)}{M^{2}-1}\right)\), where \(\beta\) and \(M\) are system parameters (see Definition 2.4); thus obtaining an order optimal bound \(n=\Theta(q\log(p/q))\).
Our learning algorithm relies on first deriving rules for DAG estimation using the Power Spectral Density Matrix (PSDM) of nodal states, inspired by the estimator for static DAGs based on covariance matrices in Chen et al. (2019). Subsequently, the sample complexity associated with learning is derived by obtaining concentration bounds for the PSDM. In this regard, characterization of non-asymptotic bounds of PSDMs for a few spectral estimators have been obtained Fiecas et al. (2019); Veedu et al. (2021); Zhang and Wu (2021) previously. A unified framework of concentration bounds for a general class of PSDM estimators was recently presented in Lamperski (2023). Our concentration bounds of the PSDM are reached using different proof steps, based on Rademacher random variables and symmetrization argument Wainwright (2019).
The rest of the paper is organized as follows. Section 2 introduces the system model and the preliminary definitions for LDS and DDAGs. Section 3 discusses Algorithm 1 and the main results for DDAG reconstruction from PSDM. Section 4 provides a concentration bound for the error in estimating the PSDM and a sample complexity upper bound for DDAG reconstruction using Algorithm 1. Section 5 contains a sample complexity lower bound.
_Notations:_ Bold faced small letters, \(\mathbf{x}\) denote vectors; Bold faced capital letters, \(\mathbf{A}\) denote matrices; For a time-series, \(\mathbf{x}\), \(\hat{x}(t)\) denotes the value of \(\mathbf{x}\) at time \(t\), \(\mathbf{x}(\omega)\) denotes the discrete time Fourier transform of \(\mathbf{x}\), \(\mathbf{x}(\omega):=\sum_{k=-\infty}^{\infty}\hat{\mathbf{x}}(k)e^{-i\omega k}\), \(\omega\in\Omega=[0,2\pi]\); \(diag(v_{1},\ldots,v_{p})\) operator creates a diagonal matrix with diagonal entries \(v_{1},\ldots,v_{p}\); \(\Phi_{\mathbf{x}_{AB}}\) or \(\Phi_{AB}\) denotes the matrix obtained by selecting rows \(A\) and columns \(B\) in \(\Phi_{\mathbf{x}}\); \(\mathbf{A}^{*}\) denotes conjugate transpose of \(\mathbf{A}\). \(\|\mathbf{A}\|\) is the spectral norm of \(\mathbf{A}\) and \(\|\mathbf{v}\|_{2}\) is the Euclidean norm of vector \(\mathbf{v}\).
## 2 System model of DDAG
We describe different aspects of DDAG (DAG underlying an LDS) and the sampling strategies considered. We begin with some necessary DAG terminologies.
**DDAG terminology:** The _dynamical_ directed acyclic graph (DDAG) is given by \(G:=(V,E)\), where \(V=\{1,\ldots,p\}\) and \(E\) is the set of directed edges \(i\xrightarrow{}j\). A directed path from \(i\) to \(j\) is a path of the form \(i=v_{0}\xrightarrow{}v_{1}\xrightarrow{}\ldots\xrightarrow{}v_{\ell}\xrightarrow{}v_{\ell+1}=j\), where \(v_{k}\in V\) and
\((v_{k},v_{k+1})\in E\) for every \(k=0,\ldots,\ell\). A cycle is a directed path from \(i\) to \(i\), which does not exist in DDAG \(G\). For \(G\), \(pa(i):=\{j\in V:(j,i)\in E\}\) denotes the parents set and \(desc(i)\) denotes the descendants of \(i\), the nodes that have a directed path from \(i\). The set \(nd(i):=V\setminus desc(i)\) denotes the non-descendants set and the set \(an(i)\subset nd(i)\) denotes the ancestors set, the nodes that have a directed path to \(i\). A set \(C\subseteq V\) is called ancestral if for every \(i\in C\), \(pa(i)\subseteq C\). Figure 1 shows an example DDAG with the node definitions. A node set \(C\subseteq V\) is said to be a topological ordering on \(G\) if for every \(i,j\in C\), \(i\in desc(j)\) in \(G\) implies \(i>j\). \(\mathcal{G}_{p,q}\) denotes the family of DDAGs with \(p\) nodes and at most \(q\) parents per node.
Without loss of generality, we use the same terminology for the DDAG and the underlying DAG.
**LDS Model excited by equal PSD WSS noise:** For the DDAG \(G=(V,E)\in\mathcal{G}_{p,q}\), we consider a linear dynamical system (LDS) with \(p\) scalar state variables, corresponding to nodes in \(V\). Node \(i\) is equipped with time series measurements, \(\{\check{x}_{i}(k)\}_{k\in\mathbb{Z}},\ 1\leq i\leq p\). The LDS evolves according to the linear time-invariant model,
\[\check{x}_{i}(k)=\sum_{(i,j)\in E,j\neq i}^{p}(\check{h}_{ij}\star\check{x}_{ j})(k)+\check{e}_{i}(k),\ k\in\mathbb{Z}, \tag{1}\]
where transfer function \(\check{h}_{ij}\neq 0\) when directed edge \((i,j)\in E\). The exogenous noise \(\{\check{e}_{i}(k)\}_{k\in\mathbb{Z}},\ 1\leq i\leq p\), are zero mean wide sense stationary Gaussian processes, uncorrelated across nodes. Taking the discrete-time Fourier transform (DTFT) of (1) provides the frequency representation for every \(\omega\in\Omega=[0,2\pi]\),
\[\mathbf{x}_{i}(\omega)=\sum_{(i,j)\in E,j\neq i}^{p}\mathbf{H}_{ij}(\omega) \mathbf{x}_{j}(\omega)+\mathbf{e}_{i}(\omega),\ 1\leq i\leq p, \tag{2}\]
where \(\mathbf{x}_{i}(\omega)=\mathcal{F}\{\check{x}_{i}\}:=\sum_{k=-\infty}^{\infty }\check{x}_{i}(k)e^{-i\omega k}\), \(\mathbf{e}_{i}(\omega)=\mathcal{F}\{\check{e}_{i}\}\), and \(\mathbf{H}_{ij}(\omega)=\mathcal{F}\{\check{h}_{ij}\}\). The model in (2) can be represented in the matrix form to obtain the following LDS,
\[\mathbf{x}(\omega)=\mathbf{H}(\omega)\mathbf{x}(\omega)+\mathbf{e}(\omega), \ \forall\omega\in\Omega, \tag{3}\]
where \(\mathbf{e}(\omega)\) is the WSS noise. In this article, we are interested in the LDS with \(\Phi_{\mathbf{e}}(\omega)=\sigma(\omega)diag(\alpha_{1},\ldots,\alpha_{p})\), where \(\alpha_{i}\) are known and can be a function of \(\omega\). For the simplicity of analysis, henceforth it is assumed that \(\Phi_{\mathbf{e}}(\omega)=\sigma(\omega)\mathbf{I}\).
**Remark 2.1**.: _The assumption \(\Phi_{\mathbf{e}}(\omega)=\sigma(\omega)\mathbf{I}\) is a restrictive assumption. However, we would like to remark that some form of restriction is required due to the impossibility of DAG
Figure 1: An example DDAG. Node 1 is an ancestor and node 7 is a descendant of every node in the graph. The set \(\{1,2,5\}\) is an ancestral set but \(\{2,5\}\) is not. \(an(3)=\{1,2\}\), \(desc(3)=\{4,7\}\), \(nd(3)=\{1,2,5,6\}\).
reconstruction in a general setup due to identifiability issues Shimizu et al. (2006). However, the assumption can be relaxed to incorporate the identifiability conditions on \(\Phi_{e_{i}}\) and \(\mathbf{H}\) to retrieve the topological ordering, similar to Ghoshal and Honorio (2018). Furthermore, our results on DDAG reconstruction require the equal PSD to hold only at some known \(\omega\in\Omega\) which is less restrictive._
The power spectral density matrix (PSDM) of the time-series \(\mathbf{x}\) at the angular frequency \(\omega\in\Omega\) is given by
\[\Phi_{\mathbf{x}}(\omega)=\mathcal{F}\left\{R_{\mathbf{x}}(t)\right\}=\sum_{k =-\infty}^{\infty}R_{\mathbf{x}}(k)e^{-i\omega k}, \tag{4}\]
where \(R_{\mathbf{x}}(k):=\mathbb{E}[\hat{\mathbf{x}}(k)\hat{\mathbf{x}}^{T}(0)]\) is the auto-correlation matrix of the time-series \(\mathbf{x}\) at lag \(k\). The \((i,j)\)-th entry of \(\Phi_{\mathbf{x}}\) is denoted by \(\Phi_{ij}\). For the LDS (3), the PSDM is given by
\[\Phi_{\mathbf{x}}(\omega)=(\mathbf{I}-\mathbf{H}(\omega))^{-1}\Phi_{\mathbf{ e}}(\omega)((\mathbf{I}-\mathbf{H}(\omega))^{-1})^{*}. \tag{5}\]
Consider the following additional non-restrictive assumptions on the power spectral density and correlation matrix of the LDS states.
**Assumption 2.2**.: _There exists an \(M\in\mathbb{R}\) such that \(\frac{1}{M}\leq\lambda_{min}(\Phi_{\mathbf{x}})\leq\lambda_{max}(\Phi_{ \mathbf{x}})\leq M\), where \(\lambda_{min}\) and \(\lambda_{max}\) respectively denote minimum and maximum eigenvalues._
**Assumption 2.3**.: _The auto-correlation matrix of the time-series \(\mathbf{x}\) at lag \(k\), \(R_{\mathbf{x}}(k):=\mathbb{E}[\hat{\mathbf{x}}(k)\hat{\mathbf{x}}^{T}(0)]\) satisfies \(\|R_{\mathbf{x}}(k)\|\leq C\rho^{-|k|}\), for some positive constants \(C,\rho\in\mathbb{R}\), \(\rho>1\)._
In the remaining paper, following these assumptions, our interest will be limited to the following family of DDAGs and corresponding LDS.
**Definition 2.4**.: \(\mathcal{H}_{p,q}(\beta,\sigma,M)\) _denotes the family of LDS given by (3) such that the corresponding DDAG, \(G(V,E)\in\mathcal{G}_{p,q}\) (\(p\) nodes with each node having a maximum \(q\) parents), with \(|\mathbf{H}_{ij}(\omega)|\geq\beta,\ \forall(i,j)\in E,\omega\), \(\Phi_{\mathbf{e}}(\omega)=\sigma(\omega)\mathbf{I}\), and \(M^{-1}\leq\lambda_{\min}(\Phi_{\mathbf{x}}(\omega))\leq\lambda_{max}(\Phi_{ \mathbf{x}}(\omega))\leq M\), \(\forall\omega\in\Omega\)_
**Sampling Strategy for LDS states:** We consider two sampling settings (see Fig. 2 for details):
i) **restart and record:** The sampling is performed as follows: recording is started and stopped after \(N\) measurements. For the next trajectory, the procedure is restarted with an independent realization and recorded for another epoch of \(N\) samples; the process is repeated another \(n-2\) times, providing \(n\) i.i.d. trajectories of \(N\) samples each.
ii) **continuous sampling:** Here, a single trajectory of length \(n\times N\) is taken consecutively. Then, the observations are divided into \(n\) segments, each having \(N\) consecutive samples.
The data collected using either strategy is thus grouped into \(n\) trajectories of \(N\) samples each. For the finite length \(r^{\text{th}}\) trajectory, \(\{\hat{x}^{r}(t)\}_{t=0}^{N-1}\), define
\[\mathbf{x}^{r}(\omega)=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\hat{x}^{r}(k)e^{-i \omega k},r=1,\ldots,n,\ \omega\in\Omega. \tag{6}\]
Note that \(\mathbf{x}^{r}(\omega)\), if it exists, is a zero-mean random variable with covariance matrix given by
\[\widetilde{\Phi}_{\mathbf{x}}(\omega):=\mathbb{E}\left\{\mathbf{x}^{r}( \omega)[\mathbf{x}^{r}(\omega)]^{*}\right\}=\frac{1}{N}\sum_{k=-(N-1)}^{N-1}(N- |k|)R_{\mathbf{x}}(k)e^{-i\omega k},\ \forall\omega\in\Omega. \tag{7}\]
For the restart and record setting (unlike for continuous sampling), \(\{\mathbf{x}^{r}(\omega)\}_{r=1}^{n}\) are i.i.d. Further, as \(N\to\infty\), \(\widetilde{\Phi}_{\mathbf{x}}(\omega)\to\Phi_{\mathbf{x}}(\omega)\) uniformly in \(\Omega\) Stoica et al. (2005).
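To make the estimation pipeline concrete, the following numpy sketch (our own illustration; the array layout and function name are assumptions, not the authors' code) computes the finite-length DFTs of Eq. (6) for restart-and-record data and averages their outer products, which is the spectrogram-style estimate \(\widehat{\Phi}_{\mathbf{x}}(\omega)\) used later in Section 4.

```python
import numpy as np

def estimate_psdm(trajectories, omega):
    """Spectrogram-style PSDM estimate at angular frequency omega.

    trajectories: array of shape (n, N, p) -- n records of N consecutive
    samples of the p-dimensional state (restart-and-record sampling).
    Returns the p x p matrix (1/n) * sum_r x^r(omega) x^r(omega)^*.
    """
    n, N, p = trajectories.shape
    # Finite-length DFT of each trajectory, Eq. (6)
    phases = np.exp(-1j * omega * np.arange(N)) / np.sqrt(N)   # shape (N,)
    x_omega = np.einsum('rkp,k->rp', trajectories, phases)      # shape (n, p)
    # Average of the outer products over the n trajectories
    return np.einsum('rp,rq->pq', x_omega, x_omega.conj()) / n
```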
## 3 Reconstructing DDAGs from PSDM
In this section, we discuss some results on how the PSDM, \(\Phi_{\mathbf{x}}\), can be employed to completely reconstruct the DDAG, \(G\), when the time series is generated according to (3). Applying these results, inspired by the algorithm for the static setting Chen et al. (2019), we propose Algorithm 1 for reconstructing the DDAG, \(G\). First, we prove that the conditional PSD (CPSD) of \(i\) conditioned on \(C\), defined as
\[f(i,C,\omega):=\Phi_{ii}(\omega)-\Phi_{iC}(\omega)\Phi_{CC}^{-1}(\omega)\Phi_{ Ci}(\omega), \tag{8}\]
is a metric sufficient to obtain a topological ordering of the DDAG, \(G\), which aids in the reconstruction of \(G\). Notice that, unlike the static setting of Chen et al. (2019), our algorithm requires the CPSD to unveil the dependencies in the DDAG, which in turn affects the sample complexity of learning.
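For illustration, the CPSD in (8) can be computed directly from an (estimated) PSDM. The helper below is our own sketch, assuming numpy, zero-based node indices, and a well-conditioned \(\Phi_{CC}\); it is reused in the algorithm sketch later.

```python
import numpy as np

def cpsd(Phi, i, C):
    """Conditional PSD f(i, C, omega) of Eq. (8) at one fixed frequency.

    Phi: p x p (estimated) PSDM at that frequency, i: node index,
    C: iterable of conditioning node indices (may be empty).
    """
    C = list(C)
    if not C:
        return Phi[i, i].real
    Phi_CC = Phi[np.ix_(C, C)]
    Phi_iC = Phi[i, C]
    Phi_Ci = Phi[C, i]
    # f = Phi_ii - Phi_iC Phi_CC^{-1} Phi_Ci (real-valued for Hermitian Phi)
    return (Phi[i, i] - Phi_iC @ np.linalg.solve(Phi_CC, Phi_Ci)).real
```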
### CPSD and Topological Ordering
Here, we show that the CPSD is minimized for the nodes whose parents are all included in the conditioning set. We start by proving the result for the source nodes.
**Lemma 3.1**.: _Consider the LDS described by (3). For any \(\omega\in\Omega\), let \(\alpha^{*}:=\min\limits_{k\in V}\Phi_{kk}(\omega)\). Then \(\Phi_{ii}(\omega)=\alpha^{*}\) if and only if \(i\) is a source node._
Proof.: Let \(\mathbf{T}(\omega)=(\mathbf{I}-\mathbf{H}(\omega))^{-1}\) and \(\Phi_{\mathbf{e}}(\omega)=\sigma(\omega)\mathbf{I}_{n}\). Then, \(\Phi_{\mathbf{x}}(\omega)=\sigma(\omega)\mathbf{T}(\omega)\mathbf{T}^{*}(\omega)\). By the Cayley-Hamilton theorem, there exist constants \(a_{0}(\omega),\ldots,a_{n}(\omega)\) such that \((\mathbf{I}-\mathbf{H}(\omega))^{-1}=\sum_{k=0}^{n-1}a_{k}(\omega)(\mathbf{I}-\mathbf{H}(\omega))^{k}\). Using induction, it can be shown that the non-diagonal \((i,j)\)-th entries of \((\mathbf{I}-\mathbf{H}(\omega))^{k}\) are zero if and only if there is no \(k\)-hop path between \(i\) and \(j\) (almost always). Similarly, the \((i,i)\)-th entry is \(1\) if and only if there is no \(k\)-hop path between them (almost always).
Then (ignoring \((\omega)\)), \(\Phi_{ii}=(\Phi_{\mathbf{x}})_{ii}=\sigma\sum_{k=1}^{n}\mathbf{T}_{ik}\mathbf{T}_{ik}=\sigma(\mathbf{T}_{ii}^{2}+\sum_{k\neq i}\mathbf{T}_{ik}^{2})\). If \(i\) is a source node, then \(\mathbf{T}_{ik}=0\) for every \(k\neq i\) and \(\mathbf{T}_{ii}=1\), which implies \(\Phi_{ii}=\sigma\mathbf{T}_{ii}^{2}=\sigma\). For non-source nodes, \(\mathbf{T}_{ik}\neq 0\) for some \(k\neq i\), which gives \(\Phi_{ii}>\sigma\).
#### 3.1.1 Conditional PSD deficit
The following is the definition of conditional PSD deficit, which is helpful in proving the subsequent results and in retrieving the DDAG.
Figure 2: (a) shows restart and record sampling. (b) shows continuous sampling.
**Definition 3.2** (The CPSD deficit).: \[\Delta:=\min_{\omega\in[-\pi,\pi]}\min_{j\in V}\min_{\begin{subarray}{c}C\subseteq nd(j),\\ Pa(j)\setminus C\neq\emptyset\end{subarray}}f(j,C,\omega)-\sigma(\omega)\] (9)
The following lemma shows that \(f(i,C,\omega)\) from Eq. 8 can be used as a metric to obtain a topological ordering of \(G\).
**Lemma 3.3**.: _Consider the DDAG, \(G\), governed by (3). Let \(j\in V\) and let \(C\subseteq V\setminus\{j\}\) be an ancestral set. Then for every \(\omega\in\Omega\),_
\[f(j,C,\omega) =\sigma(\omega) :\text{ if }Pa(j)\subseteq C,\] \[f(j,C,\omega) \geq\Delta+\sigma(\omega)>\sigma(\omega) :\text{ if }Pa(j)\nsubseteq C.\]
Proof.: See Appendix A
**Corollary 3.4**.: _If \(Pa(i)\subseteq C\) and \(Pa(i)\nsubseteq D\), then \(f(i,C,\omega)-f(i,D,\omega)\geq\Delta\)._
**Lemma 3.5**.: \(\Delta\geq\beta^{2}\sigma\)_._
Proof.: Applying (15), \(f(j,C,\omega)-\sigma=H_{jD}(\Phi_{D}-\Phi_{DC}\Phi_{CC}^{-1}\Phi_{CD})H_{jD}^{*}\geq\sigma|D|\beta^{2}\). Then
\[\Delta\geq\min_{\omega}\min_{j}\sigma|D|\beta^{2}=\sigma|D|\beta^{2}\stackrel{(a)}{\geq}\sigma\beta^{2},\]
where \((a)\) follows because \(|D|\geq 1\) if \(Pa(j)\nsubseteq C\).
To determine the structure of the DDAG \(G\), first determine a topological ordering of the nodes as follows: beginning with an empty set \(\mathcal{S}\), we iteratively add node \(i\) to \(\mathcal{S}\), where
\[(i,C_{i}^{*})\in\operatorname*{arg\,min}_{\begin{subarray}{c}C\subseteq \mathcal{S},|C|\leq q\\ 1\leq j\leq p,\ j\notin\mathcal{S}\end{subarray}}f(j,C,\omega), \tag{10}\]
and \(f\) comes from (8). The following Lemma shows that \(\mathcal{S}\) is a valid topological ordering w.r.t. \(G\).
**Lemma 3.6**.: \(\mathcal{S}\) _is a valid topological ordering with respect to \(G\)._
Proof.: In the first step, \(C=\emptyset\) and \(f(i,C,\omega)=\Phi_{ii}\). By Lemma 3.1 and Lemma 3.3, \(\Phi_{ii}=\sigma\) if \(i\) is a source node, whereas \(\Phi_{ii}\geq\sigma+\Delta\) if \(i\) is not a source node. Thus, the first node in the ordered set \(\mathcal{S}\), \(\mathcal{S}_{1}\), is always a source node. Induction assumption: nodes \(\mathcal{S}_{1}\) to \(\mathcal{S}_{n}\) in \(\mathcal{S}\) follow a topological order. For the \((n+1)\)-th node, by Lemma 3.3, for every \(C\subseteq\mathcal{S}\), \(f(k,C,\omega)\) is minimum for \(k\in V\setminus\mathcal{S}\) if and only if \(Pa(k)\subseteq C\). Thus nodes \(\mathcal{S}_{1}\) to \(\mathcal{S}_{n+1}\) follow a topological order, which proves the result.
**Identification of the parents**: Parents of a node are identified from the ordered set by applying Corollary 3.4. Let \(D=C\setminus\{k\}\). As shown in Corollary 3.4, if \(Pa(i)\subseteq C\) and \(k\in Pa(i)\), then \(f(i,C,\omega)-f(i,D,\omega)\geq\Delta\). Thus, from the set \(\mathcal{S}\), for every node \(\mathcal{S}_{i}\), one can eliminate nodes \(\mathcal{S}_{1},\ldots,\mathcal{S}_{i-1}\) by checking if the difference is greater than \(\Delta\). If the difference is greater than \(\Delta\) for some \(\mathcal{S}_{k}\), then \(\mathcal{S}_{k}\) is a parent of \(\mathcal{S}_{i}\). That is,
**Lemma 3.7**.: _Let \((i,C_{i}^{*})\) be a solution of (10) and let_
\[P_{i}:=\left\{j\in C_{i}^{*}\mid|f(i,C_{i}^{*},\omega)-f(i,C_{i}^{*}\setminus j, \omega)|\geq\Delta\right\}.\]
_Then, \(Pa(i)=P_{i}\)._
Applying the above procedure and Lemma 3.3-Lemma 3.7, one can formulate Algorithm 1 to obtain the ordering of the DDAG, \(G\) and eventually reconstruct the DDAG exactly. In Algorithm 1, \(\widehat{f}(i,C,\omega):=\widehat{\Phi}_{ii}(\omega)-\widehat{\Phi}_{iC}( \omega)\widehat{\Phi}_{CC}^{-1}(\omega)\widehat{\Phi}_{Ci}(\omega)\), an empirical estimate of \(f(i,C,\omega)\), is employed instead of \(f\) and \(\gamma=\Delta/2\). The following lemma proves that if the empirical estimate \(\widehat{f}(\cdot)\) is close enough to the original \(f(\cdot)\), then Algorithm 1 reconstructs the DDAG exactly.
```
Input: Estimated PSDM, \(\widehat{\Phi}(\omega)\); \(\Delta\); \(z=e^{j\omega},\ \omega\in(-\pi,\pi]\)
Output: \(\widehat{G}\)
1. Initialize the ordering, \(\mathcal{S}\leftarrow()\)
2. For \(i=1,\ldots,p\)
   2.1. Compute \(\left(j^{*},C_{j}^{*}\right)\in\underset{\begin{subarray}{c}C\subseteq\mathcal{S},|C|\leq q\\ 1\leq j\leq p,\ j\notin\mathcal{S}\end{subarray}}{\arg\min}\widehat{f}(j,C,\omega)\)
   2.2. \(\mathcal{S}\leftarrow(\mathcal{S},\ j^{*})\)
3. \(\widehat{G}=(V,\widehat{E})\), \(V\leftarrow\{1,\ldots,p\}\), \(\widehat{E}\leftarrow\emptyset\)
4. For \(i=1,\ldots,p\)
   4.1. Parents of \(\mathcal{S}_{i}\): \(P_{i}:=\left\{j\in C_{i}^{*}\mid|\widehat{f}(\mathcal{S}_{i},C_{i}^{*},\omega)-\widehat{f}(\mathcal{S}_{i},C_{i}^{*}\setminus j,\omega)|\geq\gamma\right\}\)
   4.2. For \(k\in P_{i}\): \(\widehat{E}\leftarrow\widehat{E}\cup(k,i)\)
5. Return \(\widehat{G}\)
```
**Algorithm 1** Ordering algorithm
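A minimal Python sketch of Algorithm 1 is given below for illustration only (our own code, not the authors'); it reuses the `cpsd` helper sketched earlier and performs a brute-force search over conditioning sets of size at most \(q\), so its runtime is exponential in \(q\). In practice, \(\widehat{f}\) would be evaluated from \(\widehat{\Phi}(\omega)\) at a chosen frequency with \(\gamma=\Delta/2\), as in the text.

```python
from itertools import combinations

def reconstruct_ddag(Phi_hat, q, gamma):
    """Algorithm 1 sketch: topological ordering + parent identification.

    Phi_hat: p x p estimated PSDM at one fixed frequency,
    q: maximum number of parents, gamma: threshold (= Delta / 2 in the text).
    Uses the cpsd(Phi, i, C) helper sketched in Section 3.
    Returns (ordering S, dict mapping each node to its estimated parent set).
    """
    p = Phi_hat.shape[0]
    S, best_C = [], {}
    # Steps 1-2: greedily append the unordered node with the smallest CPSD
    for _ in range(p):
        best = None
        for j in range(p):
            if j in S:
                continue
            for size in range(min(q, len(S)) + 1):
                for C in combinations(S, size):
                    val = cpsd(Phi_hat, j, C)
                    if best is None or val < best[0]:
                        best = (val, j, C)
        _, j_star, C_star = best
        S.append(j_star)
        best_C[j_star] = C_star
    # Steps 3-4: k in C* is a parent of i if dropping it raises the CPSD by >= gamma
    parents = {}
    for i in S:
        C_star = best_C[i]
        f_full = cpsd(Phi_hat, i, C_star)
        parents[i] = {k for k in C_star
                      if abs(f_full - cpsd(Phi_hat, i, [c for c in C_star if c != k])) >= gamma}
    return S, parents
```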
**Lemma 3.8**.: _If \(|\widehat{f}(i,C,\omega)-f(i,C,\omega)|<\Delta/4\), for every \(i,\omega,C\), then Algorithm 1 reconstructs the DAG, \(G\) successfully. That is, \(G=\widehat{G}\)._
Proof.: See Appendix B
Therefore, it suffices to derive the conditions under which \(|f(\cdot)-\widehat{f}(\cdot)|<\Delta/4\). In the following section, we derive a concentration bound to guarantee a small error, which in turn is applied in obtaining the upper bound on the sample complexity of estimating \(G\).
## 4 Finite Sample Analysis of Reconstructing DDAGs
In Lemma 3.8, it was shown that the DDAG, \(G\), can be reconstructed exactly if the error in estimating \(f(i,C,\omega)\) (given by (8)) is small enough. In this section, a concentration bound
on the error in estimating \(\Phi_{\mathbf{x}}\) from finite data is obtained, which is used later to obtain a concentration bound on the error in estimating the metric \(f\).
Recall that we consider \(n\) state trajectories (see Fig. 2 for the two sampling strategies), each of length \(N\) samples, i.e., \(\left\{\left\{\tilde{\mathbf{x}}^{r}(k)\right\}_{k=0}^{N-1}\right\}_{r=1}^{n}\). The DFT of each trajectory, \(\mathbf{x}^{r}(\omega)\), is complex Gaussian with mean zero and covariance matrix \(\widetilde{\Phi}_{\mathbf{x}}(\omega)\), i.e., for every \(r=1,\ldots,n\), \(\mathbf{x}^{r}(\omega)\sim\mathcal{N}(0,\widetilde{\Phi}_{\mathbf{x}}(\omega))\), as given in Eq. 7. Increasing \(N\) ensures that \(\widetilde{\Phi}_{\mathbf{x}}\) is close to \(\Phi_{\mathbf{x}}\). To estimate the PSDM, we thus rely on the spectrogram method and estimate \(\widetilde{\Phi}_{\mathbf{x}}(\omega)\) using the finite number \(n\) of samples \(\mathbf{x}^{r}(\omega)\).
### Non-Asymptotic Estimation Error in Spectrogram Method
Let \(\widehat{\Phi}_{x}(\omega):=\frac{1}{n}\sum_{r=1}^{n}\mathbf{x}^{r}(\omega)[ \mathbf{x}^{r}(\omega)]^{*}\). Let the estimation error in estimating \(\Phi_{\mathbf{x}}\) by \(\widehat{\Phi}_{x}(\omega)\) be \(Q:=\widehat{\Phi}_{x}(\omega)-\Phi_{x}(\omega)\).
Applying the triangle inequality, \(\|Q\|\leq\|Q_{approx}\|+\|Q_{2}\|\), where \(Q_{approx}:=\widetilde{\Phi}_{x}(\omega)-\Phi_{x}(\omega)\) and \(Q_{2}:=\widehat{\Phi}_{x}(\omega)-\widetilde{\Phi}_{x}(\omega)\). Note that \(\widetilde{\Phi}_{x}(\omega)\) is the covariance of the DFT of each trajectory. To bound the estimation error, we bound both \(Q_{approx}\) and \(Q_{2}\).
The following Lemma shows that \(Q_{approx}\) is small if \(N\) (length of each trajectory) is large.
**Lemma 4.1** (Lemma 5.1, Doddi et al. (2022)).: _Consider an LDS given by (3) that satisfies Assumption 2.3. Let \(Q_{approx}=\widetilde{\Phi}_{x}(\omega)-\Phi_{x}(\omega)\) where \(\widetilde{\Phi}_{x}(\omega)\) is given in Eq. 7. Then \(\|Q_{approx}\|<\varepsilon_{1}\) if \(N>\frac{2C\rho^{-1}}{(1-\rho^{-1})^{2}\varepsilon_{1}}\)._
The next step is to characterize \(Q_{2}\), the error in estimating \(\widetilde{\Phi}_{\mathbf{x}}\) using \(\widehat{\Phi}_{x}(\omega)\). Since \(\mathbb{E}\left(\mathbf{x}^{r}(\omega)[\mathbf{x}^{r}(\omega)]^{*}\right)=\widetilde{\Phi}_{x}(\omega)\), \(\widehat{\Phi}_{x}(\omega)\) is an unbiased estimator of \(\widetilde{\Phi}_{x}(\omega)\). The following theorem provides a concentration bound on \(\|Q_{2}\|\). The concentration bound is applicable under two sampling scenarios: the restart and record setting and the continuous sampling setting, as shown in Fig. 2.
**Theorem 4.2**.: _Suppose \(\{\breve{x}^{r}(k)\}_{k=0}^{N-1}\), \(1\leq r\leq n\) be the time series measurements obtained from an LDS governed by (3), satisfying Assumption 2.2. Then_
\[\mathbb{P}\left(\left\|\widehat{\Phi}_{x}(\omega)-\widetilde{\Phi}_{x}(\omega)\right\|\geq\epsilon\right)\leq\exp\left(-\frac{\epsilon^{2}n}{128M^{2}}+6p\right),\forall\omega\in[-\pi,\pi]. \tag{11}\]
Proof.: See Appendix C
By combining Lemma 4.1 and Theorem 4.2, the following corollary is obtained, which gives a concentration bound on the estimation error, \(\|Q\|\).
**Corollary 4.3**.: _Consider an LDS governed by (3) that satisfies Assumptions 2.2 and 2.3. Let \(\{\breve{x}^{r}(k)\}_{k=0}^{N-1}\), \(1\leq r\leq n\) be the time series measurements obtained for the LDS. Suppose that \(N>\frac{2C\rho^{-1}}{(1-\rho^{-1})^{2}\varepsilon_{1}}\), where \(0<\varepsilon_{1}\). Let \(0<\varepsilon_{1},\varepsilon_{2}<\varepsilon\) be such that \(\varepsilon_{2}=\varepsilon-\varepsilon_{1}\). Then \(\forall\omega\in\Omega\),_
\[\mathbb{P}\left(\left\|\Phi_{xx}(\omega)-\widehat{\Phi}_{xx}(\omega)\right\|\geq\varepsilon\right) \leq\exp\left(-\frac{\varepsilon_{2}^{2}n}{128M^{2}}+6p\right), \tag{12}\] \[\mathbb{P}\left(\left\|\Phi_{CC}(\omega)-\widehat{\Phi}_{CC}(\omega)\right\|\geq\varepsilon\right) \leq\exp\left(-\frac{\varepsilon_{2}^{2}n}{128M^{2}}+6q\right),\text{ and}\] (13) \[\mathbb{P}\left(\left\|\Phi_{ii}(\omega)-\widehat{\Phi}_{ii}(\omega)\right\|\geq\varepsilon\right) \leq\exp\left(-\frac{\varepsilon_{2}^{2}n}{128M^{2}}+6\right). \tag{14}\]
### Sample Complexity Bounds: Upper Bound
In the previous subsection, concentration bounds on the estimation errors in PSDM were obtained. Here, a concentration bound on the error in estimating \(f\) is obtained, which is used to obtain a concentration bound in reconstructing the DDAG, \(G\). The following result provides a concentration bound on \(|f-\widehat{f}|\).
**Lemma 4.4**.: _Consider an LDS governed by (3) that satisfies Assumptions 2.2 and 2.3. Let \(\{\bar{x}^{r}(k)\}_{k=0}^{N-1}\), \(1\leq r\leq n\) be the time series measurements obtained for the LDS. Suppose that \(N>\frac{2C\rho^{-1}}{(1-\rho^{-1})^{2}\varepsilon_{1}}\), where \(0<\varepsilon_{1}\). Let \(0<\varepsilon_{1},\varepsilon_{2}<\varepsilon\) be such that \(\varepsilon_{2}=\varepsilon-\varepsilon_{1}\). Then there exists a \(c_{0}\in\mathbb{R}\) such that, for any \(\omega\in\Omega\),_
\[\mathbb{P}\left(|f(i,C,\omega)-\widehat{f}(i,C,\omega)|\geq\varepsilon\right)\leq c_{0}e^{\left(-\frac{\varepsilon_{2}^{2}n}{10368M^{6}}+6(q+1)\right)},\]
_where \(q\) is the maximum number of parents any node has in \(G\)._
Proof.: See Appendix D
Based on Lemma 4.4, the following upper bound on the probability of error in estimating \(G\) can be obtained.
**Theorem 4.5**.: _Suppose \(q\leq p/2\). Consider an LDS that belongs to \(\mathcal{H}_{p,q}(\beta,\sigma,M)\) (Definition 2.4) that satisfies Assumptions 2.2 and 2.3. Let \(\{\bar{x}^{r}(k)\}_{k=0}^{N-1}\), \(1\leq r\leq n\) be the time series measurements of the LDS and let \(\widehat{G}\) be the DDAG reconstructed by Algorithm 1. Suppose that \(N>\frac{2C\rho^{-1}}{(1-\rho^{-1})^{2}\varepsilon_{1}}\), where \(0<\varepsilon_{1}<\Delta/4\). Let \(0<\varepsilon_{1},\varepsilon_{2}<\varepsilon<\Delta/4\) be such that \(\varepsilon_{2}=\varepsilon-\varepsilon_{1}\). Then \(\mathbb{P}\left(G(\omega)\neq\widehat{G}(\omega)\right)\leq\delta\) if_
\[n\gtrsim\frac{M^{6}\left(q\log(p/q)-\log\delta\right)}{\epsilon_{2}^{2}}.\]
Proof.: Applying the bound in Lemma 4.4,
\[\mathbb{P}\left(G(\omega)\neq\widehat{G}(\omega)\right) =\mathbb{P}\left(\bigcup_{k\in V,C\subseteq V\setminus\{k\},|C| \leq q}\left\{|f(i,C,\omega)-\widehat{f}(i,C,\omega)|>\epsilon\right\}\right)\] \[\overset{(a)}{\leq}\sum_{k\in V,C\subseteq V\setminus\{k\},|C| \leq q}\mathbb{P}\left(\left\{|f(i,C,\omega)-\widehat{f}(i,C,\omega)|> \epsilon\right\}\right)\] \[\overset{(b)}{\leq}p\left[\binom{(p-1)}{1}+\cdots+\binom{(p-1)}{ q}\right]c_{0}e^{\left(-\frac{\epsilon^{2}n}{10368M^{6}}+6(q+1)\right)}\] \[\overset{(c)}{\lesssim}c_{1}p\times q\times(p/q)^{q}e^{\left(- \frac{\epsilon^{2}n}{10368M^{6}}+6(q+1)\right)}\] \[\approx\exp\left(\log p+\log q+q\log\left(p/q\right)+\left(- \frac{\epsilon^{2}n}{10368M^{6}}+6(q+1)\right)\right)\] \[\approx\exp\left(\log p+\log q+q\log\left(p/q\right)+6q-\frac{ \epsilon^{2}n}{M^{6}}\right)\] \[\lesssim\exp\left(q\log\left(p/q\right)-\frac{\epsilon^{2}n}{M^{6 }}\right)<\delta,\]
where \((a)\) follows by the union bound, \((b)\) follows since \(|V|=p\) and there are \(\binom{p-1}{k}\) combinations with \(|C|=k\), and \((c)\) follows by applying Stirling's approximation, \(\binom{n}{k}\leq(ne/k)^{k}\). Thus, \(n\gtrsim\frac{M^{6}(q\log(p/q)-\log\delta)}{\epsilon^{2}}\). By selecting the threshold \(\gamma\) in the algorithm appropriately, we can get a sample complexity of \(n\gtrsim\frac{M^{6}q\log(p/q)}{\Delta^{2}}\).
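For intuition, the scaling of the Theorem 4.5 bound can be evaluated numerically; the snippet below is our own illustration with the absolute constant hidden by \(\gtrsim\) suppressed, so the output indicates order of magnitude only, and the example values of \(p\), \(q\), \(M\), \(\delta\), and \(\epsilon\) are arbitrary.

```python
import numpy as np

def trajectories_needed(p, q, M, delta, eps):
    """Scaling of the Theorem 4.5 bound: n >~ M^6 (q log(p/q) - log(delta)) / eps^2."""
    return M**6 * (q * np.log(p / q) - np.log(delta)) / eps**2

# Example: p = 50 nodes, max in-degree q = 3, M = 2, failure prob. delta = 0.05, eps = 0.1
print(int(trajectories_needed(50, 3, 2.0, 0.05, 0.1)))
```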
In the following section, a matching lower bound is derived.
## 5 Sample Complexity Bounds: Lower Bound
The lower bound for reconstructing the DDAG is derived using information-theoretic techniques, in particular Fano's inequality, and by restricting the family of graphs of interest to a finite set. Notice that, except for a couple of non-trivial facts in the dynamic setup, this is a direct extension of the lower bound for the static case provided in Gao et al. (2022). The approach is to construct restricted ensembles of graphical models and then to lower bound the probability of error using the Generalized Fano's inequality. Theorem 5.1 below provides the lower bound. For completeness, the proof is provided in Appendix E.2.
**Theorem 5.1**.: _Suppose \(q\leq p/2\). If_
\[n\leq(1-2\delta)\max\left(\frac{\log p}{2\beta^{2}+\beta^{4}}, \frac{q\log(p/q)}{M^{2}-1}\right)\]
_then_
\[\inf_{\widehat{G}}\max_{H\in\mathcal{H}_{p,q}(\beta,\sigma,M)} \mathbb{P}\{(G(H)\neq\widehat{G})\}\geq\delta,\]
That is, if \(n\leq(1-2\delta)\max\left(\frac{\log p}{2\beta^{2}+\beta^{4}},\frac{q\log(p/q)}{M^{2}-1}\right)\), then any estimator fails to reconstruct the DDAG with probability at least \(\delta\). The lower bound provides the fundamental limit on reconstructing the DDAG from finite samples.
Theorem 5.1 provides the lower bound \(\Omega\left(\max\left(\frac{\log p}{2\beta^{2}+\beta^{4}},\frac{q\log(p/q)}{M^{2}-1}\right)\right)\). Notice that the upper bound in Theorem 4.5 is \(O(q\log\frac{p}{q})\). Thus we obtain a matching order bound when the lower bound is dominated by the second term.
## Conclusion
In this article, we characterized the optimal sample complexity for identifying the directed structure of linear dynamical networks. Inspired by the static setting, a metric and an algorithm based on the power spectral density matrix were proposed to exactly reconstruct the DAG. It was shown that the optimal sample complexity is \(n=\Theta(q\log(p/q))\). For the upper bound characterization, we obtained a tight concentration bound for the power spectral density matrix. An information-theoretic min-max lower bound was also provided for (sub) Gaussian linear dynamical systems. It was shown that the upper bound is order optimal with respect to the lower bound.
## Appendix A Proof of Lemma 3.3
Let \(C\subseteq V\setminus\{j\}\) be an ancestral set and let \(D=nd(j)\setminus C\). Then,
\[\mathbf{x}_{j}(\omega)=H_{jC}(\omega)X_{C}(\omega)+H_{jD}(\omega)X_{D}(\omega)+ e_{j}(\omega).\]
Applying \(\Phi_{e_{j}C}=\Phi_{e_{j}D}=0\), we obtain \(\Phi_{jC}(\omega)=H_{jC}(\omega)\Phi_{CC}(\omega)+H_{jD}(\omega)\Phi_{DC}(\omega)\) and
\[\Phi_{j}(\omega)=H_{jC}(\omega)\Phi_{C}H_{jC}(\omega)^{*}+H_{jD}(\omega)\Phi_{ DC}(\omega)H_{jC}^{*}(\omega)+H_{jC}(\omega)\Phi_{CD}(\omega)H_{jD}^{*}( \omega)+H_{jD}(\omega)\Phi_{D}(\omega)H_{jD}^{*}(\omega)+\Phi_{e_{j}e_{j}}( \omega).\]
Then
\[f(j,C,\omega)=\Phi_{j}-\Phi_{jC}\Phi_{C}^{-1}\Phi_{Cj}=\Phi_{e_{j}e_{j}}+H_{jD }(\Phi_{D}-\Phi_{DC}\Phi_{CC}^{-1}\Phi_{CD})H_{jD}^{*}.\]
Notice that when \(Pa(j)\subseteq C\), \(H_{jD}=0\), and \(f(j,C,\omega)=\Phi_{e_{j}e_{j}}\), which shows the first part.
To prove the second part, suppose \(Pa(j)\cap D\neq\emptyset\). We need to show that \(H_{jD}(\Phi_{D}-\Phi_{DC}\Phi_{CC}^{-1}\Phi_{CD})\mathbf{H}_{jD}^{*}>0\). Let \(A=nd(j)=C\cup D\) and \(B=desc(j)\cup\{j\}\). From Talukdar et al. (2018); Veedu et al. (2021),
\[\Phi_{AA}^{-1}= \mathbf{S}+\mathbf{L},\text{ where}\] \[\mathbf{S}=(\mathbf{I}_{A}-\mathbf{H}_{AA}^{*})\Phi_{e_{A}}^{-1} (\mathbf{I}_{A}-\mathbf{H}_{AA}),\] \[\mathbf{L}=\mathbf{H}_{BA}\Phi_{e_{B}}^{-1}\mathbf{H}_{BA}-\Psi^{ *}\Lambda^{-1}\Psi,\] \[\Psi=\mathbf{H}_{AB}^{*}\Phi_{e_{A}}^{-1}(\mathbf{I}-\mathbf{H}_{ AA})+(\mathbf{I}-\mathbf{H}_{BB}^{*})\Phi_{e_{B}}^{-1}\mathbf{H}_{BA},\text{ and}\] \[\Lambda=\mathbf{H}_{AB}^{*}\Phi_{e_{A}}^{-1}\mathbf{H}_{AB}+( \mathbf{I}-\mathbf{H}_{BB}^{*})\Phi_{e_{B}}^{-1}(\mathbf{I}-\mathbf{H}_{BB}).\]
Notice that since \(B\) is the set of descendants of \(j\), \(\mathbf{H}_{AB}=0\), as cycles can be formed otherwise. Then, \(\mathbf{L}=0\) and \(\Phi_{AA}^{-1}=(\mathbf{I}_{A}-\mathbf{H}_{AA}^{*})\Phi_{e_{A}}^{-1}(\mathbf{ I}_{A}-\mathbf{H}_{AA})\).
\[\Phi_{AA}^{-1}=\left[\begin{array}{cc}\Phi_{DD}&\Phi_{DC}\\ \Phi_{CD}&\Phi_{CC}\end{array}\right]^{-1}=\left[\begin{array}{cc}K_{DD}&K_{ DC}\\ K_{CD}&K_{CC}\end{array}\right]=\frac{1}{\sigma}\left(\mathbf{I}-\mathbf{H}_{AA}^ {*}\right)(\mathbf{I}-\mathbf{H}_{AA})\]
By Schur's complement, \((\Phi_{D}-\Phi_{DC}\Phi_{CC}^{-1}\Phi_{CD})^{-1}=K_{DD}=\frac{1}{\sigma}(\mathbf{I}_{D}-\mathbf{H}_{DD}^{*}-\mathbf{H}_{DD}+(\mathbf{H}_{AA}^{*}\mathbf{H}_{AA})_{D\times D})\). Moreover,
\[\mathbf{H}_{AA}=\left[\begin{array}{cc}\mathbf{H}_{DD}&\mathbf{H}_{DC}\\ \mathbf{H}_{CD}&\mathbf{H}_{CC}\end{array}\right]\text{ and }(\mathbf{H}_{AA}^{*} \mathbf{H}_{AA})_{D\times D}=\mathbf{H}_{DD}^{*}\mathbf{H}_{DD}+\mathbf{H}_{ CD}^{*}\mathbf{H}_{CD}.\]
Since \(C\) is ancestral, \(\mathbf{H}_{CD}=0\) and
\[K_{DD}=\frac{1}{\sigma}(\mathbf{I}_{D}-\mathbf{H}_{DD})^{*}(\mathbf{I}_{D}- \mathbf{H}_{DD}).\]
Since \(G\) is a DAG, the rows and columns of \(\mathbf{H}\) can be rearranged to obtain a lower triangular matrix with zeros on the diagonal. Thus the eigenvalues of \((\mathbf{I}_{D}-\mathbf{H}_{DD})\) and of its inverse are all ones. Hence the minimum eigenvalue of \(K_{DD}^{-1}\) is at least \(\sigma\). Applying the Rayleigh-Ritz theorem to \(\mathbf{H}_{jD}K_{DD}^{-1}\mathbf{H}_{jD}^{*}\), we have
\[\mathbf{H}_{jD}(\Phi_{D}-\Phi_{DC}\Phi_{CC}^{-1}\Phi_{CD})\mathbf{H}_{jD}^{*}= \mathbf{H}_{jD}K_{DD}^{-1}\mathbf{H}_{jD}^{*}\geq\sigma|D|\beta^{2} \tag{15}\]
which is strictly greater than zero if \(D\) is non-empty.
## Appendix B Proof of Lemma 3.8
The proof is done in two steps. First, we show that \(\mathcal{S}\) in Algorithm 1 is a topological ordering. Then, we show that step (4) in Algorithm 1 can identify the parents of every node in \(G\). The first step is shown via induction. Since \(|\widehat{f}(i,C,\omega)-f(i,C,\omega)|<\Delta/4\) for the empty set, \(|\Phi_{ii}-\widehat{\Phi}_{ii}|<\Delta/4\) for every \(i\). Recall from Lemma 3.3 that \(\Phi_{jj}-\Phi_{ii}\geq\Delta\) if \(i\) is a source node and \(j\) is a non-source node. Then, \(\widehat{\Phi}_{jj}\geq\Phi_{jj}-\Delta/4\geq\Phi_{ii}+3\Delta/4\geq\widehat{\Phi}_{ii}+\Delta/2\). Thus, \(i\in\arg\min\limits_{1\leq k\leq p}\widehat{\Phi}_{kk}\) if and only if \(i\in\arg\min\limits_{1\leq k\leq p}\Phi_{kk}\) and thus \(\mathcal{S}_{1}\) is always a source node.
For the induction step, assume that \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\) forms a correct topologically ordered set w.r.t. \(G\). Let \(C\subseteq\mathcal{S}(1:n)\). If \(Pa(i)\subseteq C\) and \(Pa(j)\nsubseteq C\), then by applying Lemma 3.3, \(\widehat{f}(j,C,\omega)>f(j,C,\omega)-\Delta/4\geq\sigma+3\Delta/4=f(i,C, \omega)+3\Delta/4\geq\widehat{f}(i,C,\omega)+\Delta/2\). Thus, \(i\in\arg\min\limits_{k\in V\setminus\mathcal{S}}\widehat{f}(k,C,\omega)\) if and only if \(i\in\arg\min\limits_{k\in V\setminus\mathcal{S}}f(k,C,\omega)\) and thus \((\mathcal{S},\mathcal{S}_{n+1})\) forms a topological order w.r.t. \(G\), by Lemma 3.6.
To prove the second step, let \(C\subseteq\mathcal{S}(1:i)\). Since \(\mathcal{S}(1:i)\) is a valid topological ordering, \(Pa(i)\subseteq\mathcal{S}(1:i-1)\). Let \(k\in Pa(i)\) and let \(D=C\setminus\{k\}\). Then, as shown in Corollary 3.4, \(f(i,C,\omega)-f(i,D,\omega)\geq\Delta\), and
\[\Delta\leq|f(i,C,\omega)-f(i,D,\omega)| \leq|f(i,C,\omega)-\widehat{f}(i,C,\omega)|+|\widehat{f}(i,C, \omega)-\widehat{f}(i,D,\omega)|\] \[\qquad+|\widehat{f}(i,D,\omega)-f(i,D,\omega)|\] \[<\Delta/4+\Delta/4+|\widehat{f}(i,C,\omega)-\widehat{f}(i,D, \omega)|\] \[\implies|\widehat{f}(i,C,\omega)-\widehat{f}(i,D,\omega)| >\Delta/2.\]
Suppose \(k\notin Pa(i)\) but \(k\in\mathcal{S}(1:i)\). Then, for \(D=C\setminus\{k\}\), \(f(i,C,\omega)-f(i,D,\omega)=0\). Repeating the same series of inequalities above by exchanging \(f\) and \(\widehat{f}\), we obtain \(|\widehat{f}(i,C,\omega)-\widehat{f}(i,D,\omega)|<\Delta/2\).
Thus, from the set \(\mathcal{S}\), for every node \(\mathcal{S}_{i}\), one can check nodes \(\mathcal{S}_{1},\ldots,\mathcal{S}_{i-1}\) and verify if the difference of including and excluding the node is greater than \(\Delta/2\). If the difference is greater than \(\Delta/2\) for some \(k\), then \(k\) is a parent of \(i\), and if not, then the node is not a parent of \(i\). That is, let \(C_{i}=\{\mathcal{S}_{1},\ldots,\mathcal{S}_{i-1}\}\), \(i>1\), and let
\[\widehat{P}_{i}:=\left\{j\in C_{i}\,\Big{|}\,\,|\widehat{f}(\mathcal{S}_{i},C _{i},\omega)-\widehat{f}(\mathcal{S}_{i},C_{i}\setminus\{j\},\omega)|>\Delta/ 2\,\right\}.\]
Then, \(Pa(i)=\widehat{P}_{i}\).
## Appendix C Proof of Theorem 4.2
By the variational form of spectral norm Horn and Johnson (2012),
\[\|Q\|=\sup_{v\in\mathbb{C}^{p},\|v\|=1}|v^{*}Qv|,\]
where the max is taken over the \(p\)-dimensional unit complex sphere, \(\mathbb{S}^{p}:=\{v\in\mathbb{C}^{p}:\|v\|_{2}=1\}\). The first step here is to reduce the supremum to a finite maximization using a finite cover of the unit sphere, which is done using a \(\delta\)-cover. A \(\delta\)-cover of a set \(\mathcal{A}\) is a set \(v^{1},\ldots,v^{m}\) such that for every \(v\in\mathcal{A}\), there exists an \(i\in\{1,\ldots,m\}\) such that \(\|v^{i}-v\|_{2}\leq\delta\). The following lemma is obtained by extending Example 5.8 in Wainwright (2019) to the complex field.
**Lemma C.1**.: _Let \(v^{1},\ldots,v^{m}\) be a \(\delta\)-covering of the unit sphere \(\mathbb{S}^{p}\). Then there exists such a covering with \(m\leq(1+2/\delta)^{2p}\) vectors._
Proof.: The proof follows by extending (5.9) in Wainwright (2019), to the complex field.
Let \(v\in\mathbb{S}^{p}\) and let \(v^{j}\) be such that \(v=v^{j}+\Delta\), where \(\|\Delta\|\leq\delta\). Then, \(v^{*}Qv=(v^{j})^{*}Qv^{j}+2\Re\{\Delta^{*}Qv^{j}\}+\Delta^{*}Q\Delta\). Applying triangle inequality,
\[|v^{*}Qv| \leq|(v^{j})^{*}Qv^{j}|+2\|\Delta\|\|Q\|\|v^{j}\|+\|\Delta\|^{2}\|Q\|\] \[\leq|(v^{j})^{*}Qv^{j}|+2\delta\|Q\|+\delta^{2}\|Q\|\] \[\leq|(v^{j})^{*}Qv^{j}|+\frac{1}{2}\|Q\|\text{ for }\delta\leq 0.22474.\]
Thus,
\[\|Q\| =\max_{v\in\mathbb{S}^{p}}|v^{*}Qv|\leq\max_{j=1,\ldots,m}|(v^{j} )^{*}Qv^{j}|+\frac{1}{2}\|Q\|\text{ and }\] \[\|Q\| \leq 2\max_{j=1,\ldots,m}|(v^{j})^{*}Qv^{j}|\]
Next, we find an upper bound for \(\mathbb{E}\left[e^{\lambda\|Q\|}\right]\), which is treated with a Chernoff-type bounding technique to obtain the desired result.
\[\mathbb{E}\left[e^{\lambda\|Q\|}\right] \leq\mathbb{E}\left[\exp\left(2\lambda\max_{j=1,\ldots,m}|(v^{j}) ^{*}Qv^{j}|\right)\right]\] \[\leq\sum_{j=1}^{m}\mathbb{E}\left[e^{2\lambda(v^{j})^{*}Qv^{j}} \right]+\mathbb{E}\left[e^{-2\lambda(v^{j})^{*}Qv^{j}}\right] \tag{16}\]
Next, we complete the proof for the restart and record sampling and the continuous sampling separately.
### Restart and Record Sampling
Under the restart and record sampling setting, for any given \(\omega\in\Omega\), \(\{\mathbf{x}^{r}(\omega)\}_{r=1}^{n}\) are i.i.d. Thus
\[\mathbb{E}\left[\exp\left(t(v^{j})^{*}Qv^{j}\right)\right] =\mathbb{E}\left[\exp\left(t(v^{j})^{*}(\widehat{\Phi}_{x}(\omega)-\widetilde{\Phi}_{x}(\omega))v^{j}\right)\right]\] \[=\mathbb{E}\left[\exp\left(\frac{t}{n}\sum_{r=1}^{n}\left[(v^{j})^{*}\mathbf{x}^{r}(\omega)[\mathbf{x}^{r}(\omega)]^{*}v^{j}-(v^{j})^{*}\widetilde{\Phi}_{x}(\omega)v^{j}\right]\right)\right]\] \[=\prod_{r=1}^{n}\mathbb{E}\left[\exp\left(\frac{t}{n}\left[(v^{j})^{*}\mathbf{x}^{r}(\omega)[\mathbf{x}^{r}(\omega)]^{*}v^{j}-(v^{j})^{*}\widetilde{\Phi}_{x}(\omega)v^{j}\right]\right)\right]\] \[=\left(\mathbb{E}\left[\exp\left(\frac{t}{n}\left[(v^{j})^{*}\mathbf{x}^{1}(\omega)[\mathbf{x}^{1}(\omega)]^{*}v^{j}-(v^{j})^{*}\widetilde{\Phi}_{x}(\omega)v^{j}\right]\right)\right]\right)^{n}\] \[=\left(\mathbb{E}\left[\exp\left(\frac{t}{n}\left[|v^{*}\mathbf{x}^{r}(\omega)|^{2}-v^{*}\widetilde{\Phi}_{x}(\omega)v\right]\right)\right]\right)^{n}\]
Let \(\varepsilon\in\{-1,+1\}\) be a Rademacher variable independent of \(\mathbf{x}^{r}\). It can be shown that Proposition 4.11 in Wainwright (2019) will hold for complex numbers also. Then
\[\mathbb{E}_{\mathbf{x}^{r}(\omega)}\left[\exp\left(\frac{t}{n}|v^{ *}\mathbf{x}^{r}(\omega)|^{2}-v^{*}\widetilde{\Phi}_{x}(\omega)v\right)\right] \leq\mathbb{E}_{\mathbf{x}^{r}(\omega),\varepsilon}\left[\exp \left(\frac{2t\varepsilon}{n}|v^{*}\mathbf{x}^{r}(\omega)|^{2}\right)\right] \tag{17}\] \[=\sum_{k=0}^{\infty}\frac{\left(2t/n\right)^{2k}}{2k!}\mathbb{E} \left[|v^{*}\mathbf{x}^{r}(\omega)|^{4k}\right] \tag{18}\]
Recall that \(\widetilde{\Phi}_{\mathbf{x}}\) is a positive definite matrix and \(v^{*}\mathbf{x}^{r}\sim N(\mathbf{0},\eta),\) where \(\eta=v^{*}\widetilde{\Phi}_{\mathbf{x}}v\leq\lambda_{max}(\widetilde{\Phi}_{\mathbf{x}})\leq M.\) The even moments of \(y\sim N(0,\eta)\) are given by \(\mathbb{E}\{y^{2k}\}=\eta^{2k}(2k-1)!!=\frac{(2k)!}{2^{k}k!}\eta^{2k}\). Then
\[\mathbb{E}\left[|v^{*}\mathbf{x}^{r}(\omega)|^{4k}\right]\leq\frac{(4k)!}{2^{ 2k}(2k)!}M^{2k}.\]
Therefore using the inequality \((4k)!\leq 2^{2k}[(2k)!]^{2}\),
\[\mathbb{E}_{\mathbf{x}^{r}(\omega)}\left[\exp\left(\frac{t}{n}|v ^{*}\mathbf{x}^{r}(\omega)|^{2}-v^{*}\widetilde{\Phi}_{x}(\omega)v\right)\right] \leq 1+\sum_{k=1}^{\infty}\frac{\left(2t/n\right)^{2k}}{2k!}\frac{ \left(4k\right)!}{2^{2k}(2k)!}M^{2k}\] \[\leq 1+\sum_{k=1}^{\infty}\frac{\left(2t/n\right)^{2k}}{2k!}\frac{ 2^{2k}[(2k)!]^{2}}{2^{2k}(2k)!}M^{2k}\] \[=1+\sum_{k=1}^{\infty}\left(\frac{2Mt}{n}\right)^{2k}=\frac{1}{1- \left(\frac{2Mt}{n}\right)^{2}}\] \[\leq\exp\left(\frac{8M^{2}t^{2}}{n^{2}}\right)\]
whenever \(\frac{2Mt}{n}<3/4\), where the final inequality follows by applying \(1-x\geq e^{-2x}\) for \(x\in[0,3/4]\) (to be precise 0.77). Thus,
\[\mathbb{E}\left[\exp\left(t(v^{j})^{*}Qv^{j}\right)\right]\leq\exp\left(\frac {8M^{2}t^{2}}{n}\right),\ \forall|t|\leq\frac{3n}{8M}.\]
Applying Lemma C.1 and the bound \(2m\leq 2(1+2/0.22474)^{2p}\leq 2e^{4.6p}\leq e^{5p+0.693}\leq e^{6p}\),
\[\text{From (16),}\ \ \mathbb{E}\left[e^{\lambda\|Q\|}\right] \leq\mathbb{E}\left[\exp\left(2\lambda\max_{j=1,\ldots,m}|(v^{j})^{*}Qv^{j}|\right)\right]\] \[\leq\sum_{j=1}^{m}\mathbb{E}\left[e^{2\lambda(v^{j})^{*}Qv^{j}}\right]+\mathbb{E}\left[e^{-2\lambda(v^{j})^{*}Qv^{j}}\right]\] \[\leq 2m\exp\left(\frac{32M^{2}\lambda^{2}}{n}\right)\] \[\leq\exp\left(\frac{32M^{2}\lambda^{2}}{n}+6p\right),\ \forall|\lambda|\leq\frac{3n}{16M}.\]
Applying Chernoff-type bounding approach,
\[\mathbb{P}\left(\|Q\|\geq t\right)\leq e^{-\lambda t}\mathbb{E}\left[e^{ \lambda\|Q\|}\right] \leq\exp\left(-\lambda t+\frac{32M^{2}\lambda^{2}}{n}+6p\right),\ \forall|\lambda|\leq\frac{3n}{16M}.\]
The tightest bound is given by \(g^{*}(t):=\inf\limits_{|\lambda|\leq\frac{3n}{16M}}\left\{-\lambda t+\frac{32M^{2} \lambda^{2}}{n}+6p\right\}\), where the objective is convex. Taking derivative w.r.t. \(\lambda\) and equating to zero, \(\lambda^{*}=\frac{tn}{64M^{2}}\) and \(g^{*}=-\frac{t^{2}n}{64M^{2}}+\frac{32M^{2}}{n}\frac{t^{2}n^{2}}{64^{2}M^{4}}+ 6p=6p-\frac{t^{2}n}{128M^{2}}\), if \(t\) is such that \(t\leq 12M\), which is reasonable as we can always pick \(M\geq 1\).
Thus, \(\mathbb{P}\left(\|\mathbf{Q}\|\geq t\right)\leq\exp\left(-\frac{t^{2}n}{128M^{2}}+6p\right)\). The theorem statement follows.
### Continuous Sampling
In the continuous sampling setting, the samples \(\ddot{\mathbf{x}}(0),\ldots,\ddot{\mathbf{x}}(N-1),\ddot{\mathbf{x}}(N), \ldots,\ddot{\mathbf{x}}(2N-1),\ldots,\)\(\ddot{\mathbf{x}}((n-1)N),\ldots,\ddot{\mathbf{x}}(nN-1)\) are sampled continuously and are correlated with each other. Thus, \(\mathbf{x}^{r}(\omega)\) and \(\mathbf{x}^{s}(\omega)\), \(r\neq s,\ 1\leq r,s\leq n\), can be correlated, in contrast to the restart and record (RR) setting, where the \(\mathbf{x}^{r}(\omega)\) and \(\mathbf{x}^{s}(\omega)\), \(r\neq s\) are i.i.d. For any given \(\omega\in\Omega\), let \(\mathbf{x}(\omega):=[[\mathbf{x}^{1}(\omega)]^{T},[\mathbf{x}^{2}(\omega)]^{T},\ldots,[\mathbf{x}^{n}(\omega)]^{T}]^{T}\in\mathbb{C}^{pn\times 1}\) be the vectorized form of \(\{\mathbf{x}^{r}(\omega)\}_{r=1}^{n}\) and let \(\mathcal{C}(\omega):=\mathbb{E}\{\mathbf{x}(\omega)\mathbf{x}^{*}(\omega)\}\) be the covariance matrix of \(\mathbf{x}(\omega)\). Under the RR setting, \(\mathcal{C}(\omega)\in\mathbb{C}^{pn\times nn}\) will be a block-diagonal matrix (of block size \(p\times p\)), whereas in the continuous sampling, the non-block-diagonal entries of \(\mathcal{C}(\omega)\) can be non-zero. However, the vector \(\mathbf{x}(\omega)\), with correlated entries, can be written as a linear transformation of i.i.d. vector \(\mathbf{w}\in\mathbb{C}^{pn\times 1}\) with unit variance, i.e. \(\mathbf{x}(\omega)=\mathcal{C}^{1/2}(\omega)\mathbf{w}\), where \(\mathcal{C}^{1/2}\) is the square-root of \(\mathcal{C}\). As shown in Appendix E.1, when \(\{\dot{\mathbf{e}}(k)\}_{k=1}^{n}\) in the linear time-invariant model (1) are Gaussian, \(\mathbf{x}^{r}(\omega)\) and thus \(\mathbf{x}(\omega)\) are Gaussian distributed. In this case, a candidate is \(\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{pn})\). It can be verified that \(\mathbb{E}\{\mathbf{x}(\omega)\mathbf{x}^{*}(\omega)\}=\mathcal{C}^{1/2}( \omega)\mathbb{E}\{\mathbf{w}\mathbf{w}^{*}\}\mathcal{C}^{1/2}(\omega)= \mathcal{C}(\omega)\). Notice that the covariance matrix \(\mathcal{C}(\omega)\) is a block matrix, defined as
\[\mathcal{C}=\begin{bmatrix}\mathcal{C}^{11}&\mathcal{C}^{12}&\ldots&\mathcal{ C}^{1n}\\ \mathcal{C}^{21}&\mathcal{C}^{22}&\ldots&\mathcal{C}^{2n}\\ \vdots&&\\ \mathcal{C}^{n1}&\mathcal{C}^{n2}&\ldots&\mathcal{C}^{nn}\end{bmatrix},\text{ where }\mathcal{C}^{rs}(\omega)\in\mathbb{C}^{p\times p},\,1\leq r,s\leq n,\]
where the entries of \(\mathcal{C}^{rs}(\omega)\) is given by \(\mathbb{E}\left\{\mathbf{x}^{r}(\omega)[\mathbf{x}^{s}(\omega)]^{*}\right\}\). Recall that
\[\mathbf{x}^{r}(\omega)=\frac{1}{\sqrt{N}}\sum_{\ell=0}^{N-1}\ddot{x}((r-1)N+ \ell)e^{-i\omega\ell}.\]
Let \(\mathbf{I}_{r}=\left[\mathbf{0}|\ldots|\mathbf{0}|\mathbf{I}_{p\times p}| \ldots|\mathbf{0}\right]\in\mathbb{R}^{p\times np}\) be such that \(r^{th}\) block is identity matrix. Then \(\mathbf{x}^{r}(\omega)=\mathbf{I}_{r}\mathbf{x}(\omega)\). The estimated PSDM is then given by
\[\widehat{\Phi}_{\mathbf{x}}(\omega)=\frac{1}{n}\sum_{r=1}^{n}\mathbf{x}^{r}( \omega)[\mathbf{x}^{r}(\omega)]^{*}=\frac{1}{n}\sum_{r=1}^{n}\mathbf{I}_{r} \mathbf{x}(\omega)\mathbf{x}^{*}(\omega)\mathbf{I}_{r}^{*}.\]
Substituting \(\mathbf{x}(\omega)=\mathcal{C}^{1/2}(\omega)\mathbf{w}\), and letting \(\mathbf{B}(\omega):=(\mathcal{C}^{1/2})^{*}(\omega)\sum_{r=1}^{n}\mathbf{I}_{r }^{*}uu^{*}\mathbf{I}_{r}\mathcal{C}^{1/2}(\omega)\)
\[\mathbb{E}\left[\exp\left(tu^{*}\mathbf{Q}u\right)\right]= \mathbb{E}\left[\exp\left(\frac{t}{n}\sum_{r=1}^{n}u^{*}\mathbf{x}^ {r}(\omega)[\mathbf{x}^{r}(\omega)]^{*}u-u^{*}\widetilde{\Phi}_{x}(\omega)u \right)\right]\] \[= \mathbb{E}\left[\exp\left(\frac{t}{n}\sum_{r=1}^{n}\left[\mathbf{ x}^{*}(\omega)\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r}\mathbf{x}(\omega)-\mathbb{E} \left\{\mathbf{x}^{*}(\omega)\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r}\mathbf{x}( \omega)\right\}\right]\right)\right]\] \[= \mathbb{E}\left[\exp\left(\frac{t}{n}\sum_{r=1}^{n}\left[\mathbf{ w}^{*}(\mathcal{C}^{1/2})^{*}(\omega)\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r} \mathcal{C}^{1/2}(\omega)\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}(\mathcal{C }^{1/2})^{*}(\omega)\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r}\mathcal{C}^{1/2}( \omega)\mathbf{w}\right\}\right]\right)\right]\] \[= \mathbb{E}\left[\exp\left(\frac{t}{n}\left[\mathbf{w}^{*}( \mathcal{C}^{1/2})^{*}(\omega)\sum_{r=1}^{n}(\mathbf{I}_{r}^{*}uu^{*}\mathbf{ I}_{r})\mathcal{C}^{1/2}(\omega)\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}( \mathcal{C}^{1/2})^{*}(\omega)\sum_{r=1}^{n}(\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_ {r})\mathcal{C}^{1/2}(\omega)\mathbf{w}\right\}\right]\right)\right]\] \[= \mathbb{E}\left[\exp\left(\frac{t}{n}\left[\mathbf{w}^{*}\mathbf{ B}(\omega)\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}\mathbf{B}(\omega)\mathbf{w} \right\}\right]\right)\right]\]
Notice that \(\mathbf{I}_{r}^{*}u=\left[0,\ldots,0,u^{T},0,\ldots,0\right]^{T}\) is a column vector,
\[\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r}=\begin{bmatrix}\mathbf{0}&\ldots&\mathbf{0}&\ldots&\mathbf{0}\\ \vdots&\ddots&\vdots&&\vdots\\ \mathbf{0}&\ldots&uu^{*}&\ldots&\mathbf{0}\\ \vdots&&\vdots&\ddots&\vdots\\ \mathbf{0}&\ldots&\mathbf{0}&\ldots&\mathbf{0}\end{bmatrix}\text{ (the \(uu^{*}\) block in the \(r\)-th diagonal position) and }\sum_{r=1}^{n}\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r}=\begin{bmatrix}uu^{*}&\mathbf{0}&\ldots&\mathbf{0}\\ \mathbf{0}&uu^{*}&\ldots&\mathbf{0}\\ \vdots&&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\ldots&uu^{*}\end{bmatrix},\]
i.e., \(rank(\mathbf{I}_{r}^{*}uu^{*}\mathbf{I}_{r})=1\) and \(rank(\mathbf{B}(\omega))\leq n\). Let \(\mathbf{B}(\omega)=\mathbf{U}(\omega)\Lambda(\omega)\mathbf{U}^{*}(\omega)\) be the eigenvalue decomposition of \(\mathbf{B}(\omega)\), where \(\Lambda=diag(\lambda_{1},\ldots,\lambda_{n})\). Consequently, omitting \(\omega\) from the notation,
\[\mathbb{E}\left[\exp\left(tu^{*}\mathbf{Q}u\right)\right] =\mathbb{E}\left[\exp\left(\frac{t}{n}\left[\mathbf{w}^{*}\mathbf{ B}\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}\mathbf{B}\mathbf{w}\right\} \right]\right)\right]\] \[=\mathbb{E}\left[\exp\left(\frac{t}{n}\left[\mathbf{w}^{*}\mathbf{ U}\Lambda\mathbf{U}^{*}\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}\mathbf{U} \Lambda\mathbf{U}^{*}\mathbf{w}\right\}\right]\right)\right]\] \[\overset{(a)}{=}\mathbb{E}\left[\exp\left(\frac{t}{n}\left[ \mathbf{w}^{*}\Lambda\mathbf{w}-\mathbb{E}\left\{\mathbf{w}^{*}\Lambda \mathbf{w}\right\}\right]\right)\right]\] \[=\mathbb{E}\left[\exp\left(\frac{t}{n}\sum_{i=1}^{n}\lambda_{i} \left[w_{i}^{2}-\mathbb{E}\left\{w_{i}^{2}\right\}\right]\right)\right]\] \[=\prod_{i=1}^{n}\mathbb{E}\left[\exp\left(\frac{t\lambda_{i}}{n} \left[w_{i}^{2}-\mathbb{E}\left\{w_{i}^{2}\right\}\right]\right)\right],\]
where \((a)\) follows because \(\mathbf{w}\) is invariant under unitary transformations Cui et al. (2019). Let \(\varepsilon\in\{+1,-1\}\) be a uniform random variable independent of \(\mathbf{w}\). Similar to (17), we can now
apply the Rademacher random variable trick.
\[\mathbb{E}_{w_{i}}\left[\exp\left(\lambda\left[w_{i}^{2}-\mathbb{E} \left\{w_{i}^{2}\right\}\right]\right)\right] \leq\mathbb{E}_{w_{i},\varepsilon}\left[\exp\left(2\lambda\varepsilon w _{i}^{2}\right)\right]\] \[=\sum_{k=0}^{\infty}\frac{\left(2\lambda\right)^{2k}}{(2k)!} \mathbb{E}\left[w_{i}^{4k}\right]\] \[\leq\sum_{k=0}^{\infty}\frac{\left(2\lambda\right)^{2k}}{2k!} \frac{(4k)!}{(2k)!2^{2k}}\] \[\leq\sum_{k=0}^{\infty}\left(2\lambda\right)^{2k}=\frac{1}{1-4 \lambda^{2}}\leq\exp(8\lambda^{2}),\]
for every \(|\lambda|<3/8\). Thus (with the substitution \(\lambda=t\lambda_{i}/n\) and the upper bound \(\lambda_{i}\leq\|\mathbf{B}\|\)),
\[\mathbb{E}\left[\exp\left(tu^{\star}\mathbf{Q}u\right)\right] =\prod_{i=1}^{n}\mathbb{E}\left[\exp\left(\frac{t\lambda_{i}}{n} \left[w_{i}^{2}-\mathbb{E}\left\{w_{i}^{2}\right\}\right]\right)\right]\] \[\leq\prod_{i=1}^{n}\exp\left(\frac{8t^{2}\lambda_{i}^{2}}{n^{2}}\right)\] \[=\exp\left(\frac{8t^{2}}{n^{2}}\sum_{i=1}^{n}\lambda_{i}^{2}\right)\] \[\leq\exp\left(\frac{8t^{2}}{n}\|\mathcal{C}\|^{2}\right),\ \forall\ |t|\leq \frac{3n}{8\|\mathcal{C}\|},\]
where we have used \(\|\mathbf{B}\|\leq\|\mathcal{C}\|\) in the final inequality. Now, combining this with the \(\delta\)-cover argument,
\[\mathbb{E}\left[e^{t\|Q\|}\right] \leq\sum_{j=1}^{m}\mathbb{E}\left[e^{2t(v^{j})^{\star}Qv^{j}} \right]+\mathbb{E}\left[e^{-2t(v^{j})^{\star}Qv^{j}}\right]\] \[\leq 2m\exp\left(\frac{32\|\mathcal{C}\|^{2}t^{2}}{n}\right)\] \[\leq\exp\left(\frac{32\|\mathcal{C}\|^{2}t^{2}}{n}+6p\right),\ \forall|t|\leq \frac{3n}{16\|\mathcal{C}\|}.\]
Finally, applying the Chernoff bound, \(\mathbb{P}\left(\|\mathbf{Q}\|\geq t\right)\leq\exp\left(-\frac{t^{2}n}{128\|\mathcal{C}\|^{2}}+6p\right)\).
#### c.2.1 Tight upper bound for \(\|\mathcal{C}\|\)
An explicit expression for \(\mathcal{C}\) is given as follows:
\[\mathcal{C}^{rs}(\omega):=\mathbb{E}\left\{\mathbf{x}^{r}(\omega)[ \mathbf{x}^{s}(\omega)]^{*}\right\} =\frac{1}{N}\sum_{\ell=0}^{N-1}\sum_{k=0}^{N-1}\mathbb{E}\left\{ \breve{x}((r-1)N+\ell)[\breve{x}((s-1)N+k)]^{T}\right\}e^{-i\omega(\ell-k)}\] \[=\frac{1}{N}\sum_{\ell=0}^{N-1}\sum_{k=0}^{N-1}R_{\breve{\mathbf{ x}}}((r-s)N+\ell-k)e^{-i\omega(\ell-k)}\] \[=\frac{1}{N}\sum_{\tau=-N+1}^{N-1}(N-|\tau|)R_{\breve{\mathbf{x}} }((r-s)N+\tau)e^{-i\omega\tau}\] \[=\sum_{\tau=-N+1}^{N-1}\left(1-\frac{|\tau|}{N}\right)R_{\breve{ \mathbf{x}}}((r-s)N+\tau)e^{-i\omega\tau}\] \[=\sum_{\tau=-N+1}^{N-1}\left(1-\frac{|\tau|}{N}\right)e^{-i \omega\tau}R_{\breve{\mathbf{x}}}((r-s)N+\tau).\]
Let \(\alpha_{\tau}\ =\ e^{-i\omega\tau}\left(1-\frac{|\tau|}{N}\right)\). Then
\[\mathcal{C}=\sum_{\tau=-N+1}^{N-1}\alpha_{\tau}\begin{bmatrix}R_{\mathbf{x}}( \tau)&R_{\mathbf{x}}(-N+\tau)&\ldots&R_{\mathbf{x}}((1-n)N+\tau)\\ R_{\mathbf{x}}(N+\tau)&R_{\mathbf{x}}(\tau)&\ldots&R_{\mathbf{x}}((2-n)N+\tau) \\ \vdots&\vdots&\ddots&\\ R_{\mathbf{x}}((n-1)N+\tau)&R_{\mathbf{x}}((n-2)N+\tau)&\ldots&R_{\mathbf{x}}( \tau)\end{bmatrix}.\]
Notice that \(\breve{g}(\tau):=1-|\tau|/N\) is a triangle function, and the Fourier transform of \(\breve{g}(\tau)\), \(g(\omega)\) has the property that \(|g(\omega)|\leq 1\). Then for any \(u\in\mathbb{C}^{np}\) such that \(\|u\|_{2}\leq 1\),
\[u^{*}\mathcal{C}u =\sum_{i,j=1}^{n}[u^{i}]^{*}\sum_{\tau=-N+1}^{N-1}\alpha_{\tau}R _{\breve{\mathbf{x}}}((i-j)N+\tau)u^{j}\] \[\overset{(a)}{\leq}\mathcal{F}^{\tau}\{u\}\mathcal{F}^{\tau}\{R_ {\mathbf{x}}(tN+\tau)\}\mathcal{F}^{\tau}\{u\}\] \[\overset{(b)}{\leq}\|\Phi_{\mathbf{x}}(\omega)\|\leq M,\]
where \((a)\) follows by taking Fourier transform with respect to \(\tau\) (\(\mathcal{F}^{\tau}\) denotes Fourier transform with respect to the variable \(\tau\)) and \((b)\) since \(\|\mathcal{F}^{\tau}\{u\}\|_{2}\leq 1\). Thus, \(\mathbb{P}\left(\|\mathbf{Q}\|\geq t\right)\leq\exp\left(-\frac{t^{2}n}{128M^{ 2}}+6p\right)\), similar to the restart and record case.
## Appendix D Proof of Lemma 4.4
Notice that \(\|A\mathbf{x}\|_{2}\leq\|A\|\|\mathbf{x}\|_{2}\) for every matrix \(A\) and vector \(\mathbf{x}\). Applying this inequality with \(\mathbf{x}=[1,0,\ldots,0]\), \(\|\Phi_{Ci}\|_{2}\leq\|\Phi_{AA}\|\), where \(A=[k,\ C]\). Then, applying the CBS (Cauchy-Bunyakovsky-Schwarz) inequality for complex vectors, \(|x^{*}Ay|\leq\|x\|_{2}\|Ay\|_{2}\leq\|x\|_{2}\|A\|\|y\|_{2}\), the error can be upper bounded as
\[|f(i,C,\omega)-\widehat{f}(i,C,\omega)| =|(\Phi_{ii}-\Phi_{iC}\Phi_{CC}^{-1}\Phi_{Ci})-(\widehat{\Phi}_{ii} -\widehat{\Phi}_{iC}\widehat{\Phi}_{CC}^{-1}\widehat{\Phi}_{Ci})|\] \[=|(\Phi_{ii}-\widehat{\Phi}_{ii})+(\widehat{\Phi}_{iC}\widehat{ \Phi}_{CC}^{-1}\widehat{\Phi}_{Ci}-\Phi_{iC}\Phi_{Ci}^{-1}\Phi_{Ci})|\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+|\widehat{\Phi}_{iC}(\widehat {\Phi}_{CC}^{-1}-\Phi_{C}^{-1})\widehat{\Phi}_{Ci})|\] \[\qquad+|(\widehat{\Phi}_{iC}-\Phi_{iC})\Phi_{CC}^{-1}\widehat{ \Phi}_{Ci})|+|\Phi_{iC}\Phi_{CC}^{-1}(\widehat{\Phi}_{Ci}-\Phi_{Ci})|\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{iC}\|_{2}\| \widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|\widehat{\Phi}_{Ci}\|_{2}\] \[\qquad+\|\widehat{\Phi}_{iC}-\Phi_{iC}\|_{2}\|\Phi_{CC}^{-1}\| \|\widehat{\Phi}_{Ci}\|_{2}+\|\Phi_{iC}\|_{2}\|\Phi_{CC}^{-1}\|\|\widehat{\Phi} _{Ci}-\Phi_{Ci}\|_{2}\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\|\widehat{\Phi}_{Ci}\|_{2}^{2}\] \[\qquad+M\|\widehat{\Phi}_{iC}-\Phi_{iC}\|_{2}\|\widehat{\Phi}_{ Ci}\|_{2}+M^{2}\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+\|\widehat{\Phi}_{ CC}^{-1}-\Phi_{CC}^{-1}\|\|\Phi_{Ci}\|_{2}^{2}\] \[\qquad+M\|\widehat{\Phi}_{iC}-\Phi_{iC}\|_{2}\left(\|\widehat{ \Phi}_{Ci}-\Phi_{Ci}\|_{2}+\|\Phi_{Ci}\|_{2}\right)+M^{2}\|\widehat{\Phi}_{Ci} -\Phi_{Ci}\|_{2}\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+\|\widehat{\Phi}_{CC} ^{-1}-\Phi_{CC}^{-1}\|M^{2}\] \[\qquad+M\|\widehat{\Phi}_{iC}-\Phi_{iC}\|_{2}\left(\|\widehat{ \Phi}_{Ci}-\Phi_{Ci}\|_{2}+M\right)+M^{2}\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}\] \[=|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}-\Phi_{ CC}^{-1}\|\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|M^{2}\] \[\qquad+M\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+M^{2}\|\widehat {\Phi}_{Ci}-\Phi_{Ci}\|_{2}+M^{2}\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}\] \[\leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+M^{2}\|\widehat{\Phi }_{CC}^{-1}-\Phi_{CC}^{-1}\|\] \[\qquad+M\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+2M^{2}\|\widehat {\Phi}_{Ci}-\Phi_{Ci}\|_{2}.\]
The above expression can be bounded above if we can bound the three errors, \(\|\widehat{\Phi}_{ii}-\Phi_{ii}\|=\epsilon_{i}\), \(\|\widehat{\Phi}_{AA}-\Phi_{AA}\|=\epsilon_{A}\), and \(\|\widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|=\epsilon_{Cinv}\). Simplifying the above expression,
\[|f(i,C,\omega)-\widehat{f}(i,C,\omega)| \leq|\Phi_{ii}-\widehat{\Phi}_{ii}|+\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+M^{2}\|\widehat{\Phi }_{CC}^{-1}-\Phi_{CC}^{-1}\|\] \[\qquad+M\|\widehat{\Phi}_{Ci}-\Phi_{Ci}\|_{2}^{2}+2M^{2}\|\widehat {\Phi}_{Ci}-\Phi_{Ci}\|_{2}\] \[\leq\epsilon_{i}+\epsilon_{Cinv}\epsilon_{A}^{2}+2M^{2}\epsilon_{ Cinv}+M\epsilon_{A}^{2}+2M^{2}\epsilon_{A}\] \[\leq\epsilon_{i}+\epsilon_{Cinv}(\epsilon_{A}^{2}+2M^{2})+3M^{2} \epsilon_{A}\] \[\leq\epsilon_{i}+3M^{2}\epsilon_{Cinv}+3M^{2}\epsilon_{A}\]
Pick \(\epsilon_{i}=3M^{2}\epsilon_{Cinv}=3M^{2}\epsilon_{A}=\epsilon/3\). Then \(|f(i,C,\omega)-\widehat{f}(i,C,\omega)|<\epsilon\). From Section 5.8 in Horn and Johnson (2012),
\[\|\Phi_{CC}-\widehat{\Phi}_{CC}\| \leq\|\Phi_{CC}\|\|\Phi_{CC}^{-1}\|^{-1}\|\widehat{\Phi}_{CC}^{-1}- \Phi_{CC}^{-1}\|\frac{M^{2}}{1-M^{2}\frac{\|\widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{- 1}}{\|\Phi_{CC}^{-1}\|}},\] \[\leq\frac{M^{4}\|\widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|}{1-M\| \widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|}\leq\epsilon\implies\|\widehat{\Phi}_{ CC}^{-1}-\Phi_{CC}^{-1}\|\leq\frac{\epsilon}{M^{4}+M\epsilon}\leq\frac{\epsilon}{M^{4}}.\]
Therefore, to guarantee that \(\|\widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|<\epsilon\), it is sufficient to guarantee that \(\|\widehat{\Phi}_{CC}-\Phi_{CC}\|<\epsilon\) since \(M\geq 1\). Rewriting Corollary 4.3,
\[\mathbb{P}\left(|\Phi_{ii}-\widehat{\Phi}_{ii}|\geq\epsilon\right) \leq e^{-\frac{\epsilon^{2}n}{128M^{2}}+6}, \tag{19}\] \[\mathbb{P}\left(\|\Phi_{AA}-\widehat{\Phi}_{AA}\|>\epsilon\right) \leq e^{-\frac{\epsilon^{2}n}{128M^{2}}+6(q+1)},\text{ and}\] (20) \[\mathbb{P}\left(\|\Phi_{CC}-\widehat{\Phi}_{CC}\|>\epsilon\right) \leq e^{-\frac{\epsilon^{2}n}{128M^{2}}+6q},\ \ \ \ \forall\epsilon\geq 0. \tag{21}\]
Plugging these bounds in the above expressions gives the concentration upper bound
\[\mathbb{P}\left(|f(i,C,\omega)-\widehat{f}(i,C,\omega)\geq\epsilon\right) \leq\mathbb{P}\left(|\Phi_{ii}-\widehat{\Phi}_{ii}|\geq\epsilon/3 \right)+\mathbb{P}\left(\|\widehat{\Phi}_{CC}^{-1}-\Phi_{CC}^{-1}\|\geq \epsilon/(9M^{2})\right)\] \[\qquad+\mathbb{P}\left(\|\widehat{\Phi}_{AA}-\Phi_{AA}\|\geq \epsilon/(9M^{2})\right)\] \[\leq\mathbb{P}\left(|\Phi_{ii}-\widehat{\Phi}_{ii}|\geq\epsilon/ 3\right)+\mathbb{P}\left(\|\widehat{\Phi}_{CC}-\Phi_{CC}\|\geq\epsilon M^{2}/ (9)\right)\] \[\qquad+\mathbb{P}\left(\|\widehat{\Phi}_{AA}-\Phi_{AA}\|\geq \epsilon/(9M^{2})\right)\] \[\leq e^{\left(-\frac{\epsilon^{2}n}{1152M^{2}}+6\right)}+e^{ \left(-\frac{\epsilon^{2}M^{2}n}{10368}+6q\right)}+e^{\left(-\frac{\epsilon^{2 }n}{10368M^{6}}+6(q+1)\right)}\] \[\leq c_{0}e^{\left(-\frac{\epsilon^{2}n}{10368M^{6}}+6(q+1) \right)}.\]
## Appendix E Lower bound: Proof of Theorem 5.1
### Density function in frequency domain
Consider an autoregressive (AR) model
\[\tilde{\mathbf{x}}(k)=\sum_{l=0}^{T_{1}}\tilde{\mathbf{H}}(l)\tilde{\mathbf{x }}(k-l)+\tilde{\mathbf{e}}(k),\ \forall k\in\mathbb{Z}, \tag{22}\]
where \(\tilde{\mathbf{e}}(k)=[\tilde{\mathbf{e}}_{1}(k),\ldots,\tilde{\mathbf{e}}_{n}(k)]^{T}\), \(\tilde{\mathbf{x}}(k)=[\tilde{\mathbf{x}}_{1}(k),\ldots,\tilde{\mathbf{x}}_{n}(k)]^{T}\), and \(\tilde{\mathbf{e}}_{i}\) is a stochastic process such that the Fourier transform \(\mathbf{e}(\omega)\) exists. To understand the problem, let us assume \(\tilde{\mathbf{e}}(k)\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) for \(k=0,\ldots,T_{2}\), i.i.d., and zero otherwise, i.e., \(\tilde{\mathbf{x}}(k)\) is nonzero only for \(k=0,\ldots,T_{1}+T_{2}\). Then
\[\mathbf{e}(\omega) =\sum_{k=0}^{T_{2}}\check{\mathbf{e}}(k)e^{-j\omega k}=\sum_{k=0}^{ T_{2}}\check{\mathbf{e}}(k)\cos(\omega k)-j\sum_{k=0}^{T_{2}}\check{\mathbf{e}}(k) \sin(\omega k).\] \[\implies cov(e(\omega)) =\mathbb{E}\{e(\omega)e^{*}(\omega)\}\] \[=\mathbb{E}\left\{\left(\sum_{k_{1}=0}^{T_{2}}\check{\mathbf{e}} (k_{1})e^{-j\omega k_{1}}\right)\left(\sum_{k_{2}=0}^{T_{2}}\check{\mathbf{e}} (k_{2})e^{-j\omega k_{2}}\right)^{*}\right\}\] \[=\mathbb{E}\left\{\left(\sum_{k_{1}=0}^{T_{2}}\sum_{k_{2}=0}^{T_ {2}}\check{\mathbf{e}}(k_{1})[\check{\mathbf{e}}(k_{2})]^{*}e^{-j\omega k_{1 }}e^{j\omega k_{2}}\right)\right\}\] \[=\mathbb{E}\left\{\left(\sum_{k=0}^{T_{2}}\check{\mathbf{e}}(k)[ \check{\mathbf{e}}(k)]^{*}\right)\right\}\] \[=\sum_{k=0}^{T_{2}}\mathbb{E}\left\{\check{\mathbf{e}}(k)[ \check{\mathbf{e}}(k)]^{*}\right\}\] \[=\sigma^{2}\mathbf{I}\sum_{k=0}^{T_{2}}1.\]
Thus, \(\mathbf{e}(\omega)\sim N\left(0,(T_{2}+1)\sigma^{2}\mathbf{I}\right).\)
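This covariance is easy to verify empirically; the following Monte Carlo check is our own illustration (the values of \(p\), \(T_{2}\), \(\sigma\), \(\omega\), and the number of trials are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
p, T2, sigma, omega, trials = 3, 63, 1.0, 0.7, 20000

# e(omega) = sum_{k=0}^{T2} e(k) exp(-i omega k) for i.i.d. N(0, sigma^2 I) samples
phases = np.exp(-1j * omega * np.arange(T2 + 1))
e_samples = rng.normal(scale=sigma, size=(trials, T2 + 1, p))
e_omega = np.einsum('tkp,k->tp', e_samples, phases)

# Empirical covariance; should be close to (T2 + 1) * sigma^2 * I = 64 * I
cov = np.einsum('tp,tq->pq', e_omega, e_omega.conj()) / trials
print(np.round(cov.real, 1))
```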
Now, consider the general LDS (3), with \(\check{e}(k)\sim\mathcal{N}(0,\sigma_{k}\mathbf{I}),\ k\in\mathbb{Z}\). Then as shown above, \(\mathbf{e}(\omega)\sim N\left(0,\Phi_{\mathbf{e}}(\omega)\right)\), where \(\Phi_{\mathbf{e}}(\omega)=\sum_{k\in\mathbb{Z}}\sigma_{k}\mathbf{I}\). It follows that \(\mathbf{x}(\omega)=(I-\mathbf{H}(\omega))^{-1}\mathbf{e}(\omega)\) and \(\mathbf{x}(\omega)\sim N(0,\Phi_{\mathbf{x}})\), where \(\Phi_{\mathbf{x}}(\omega)=(I-\mathbf{H}(\omega))^{-1}\Phi_{\mathbf{e}}(\omega )((I-\mathbf{H}(\omega))^{-1})^{*}\). Thus, the density function \(f^{(\omega)}\) is Gaussian, where the covariance matrix is the power spectral density matrix, a function of \(\omega\).
### Proof of lowerbound
The proof is based on Generalized Fano's inequality.
**Lemma E.1** (Generalized Fano's method).: _(Gao et al. (2022)) Consider a class of observational distributions \(\mathcal{F}\), a subclass \(\mathcal{F}^{\prime}=\{F_{1},\ldots,F_{r}\}\subseteq\mathcal{F}\) with \(r\) distributions, and estimators \(\widehat{\theta}\). Then_
\[\inf_{\widehat{\theta}}\max_{F\in\mathcal{F}}\mathbb{E}\{\mathbb{I}(\theta(F) \neq\widehat{\theta})\}\geq\frac{\alpha_{r}}{2}\left(1-\frac{n\beta_{r}+\log 2}{ \log r}\right),\]
_where \(n\) is the number of samples,_
\[\alpha_{r} :=\max_{k\neq j}\mathbb{I}(\theta(F_{k})\neq\theta(F_{j})),\] \[\beta_{r} :=\max_{k\neq j}KL(F_{k}||F_{j}),\]
_with \(KL(P||Q):=\mathbb{E}_{P}\left[\log\frac{P}{Q}\right]=\mathbb{E}_{P}\left[\log P \right]-\mathbb{E}_{P}\left[\log Q\right]\) being the KL divergence._
**Corollary E.2**.: _Consider a subclass of graphs \(\mathcal{G}^{\prime}=\{G_{1},\ldots,G_{r}\}\subseteq\mathcal{G}_{p,q}\), and let \(\mathbf{H}^{i}\) be the distribution corresponding to a distinct \(G_{i}\in\mathcal{G}^{\prime}\). Then, any estimator \(\widehat{G}:=\bigcup\limits_{\omega\in\Omega}\widehat{G}(\omega)\) of \(G\) is \(\delta\)-unreliable,_
\[\inf_{\widehat{G}}\sup_{G_{i}\in\mathcal{G}^{\prime}}\mathbb{P}\{(G(\mathbf{H}^{i} )\neq\widehat{G}\}\geq\delta,\]
_if_
\[n\leq\frac{(1-2\delta)\log r-\log 2}{\beta_{r}}\]
Therefore, building a lower bound involves finding a subclass that has 1) small \(\beta_{r}\) and 2) large \(r\). First, we can find an upper bound on \(r\) by upper bounding the number of directed graphs possible with at most \(q\) parents per node. Overall, there are \(p^{2}\) possible edge positions and at most \(pq\) edges. The number of possible ways to choose \(k\) edges is \(\binom{p^{2}}{k}\). Thus \(r=\sum\limits_{k=1}^{pq}\binom{p^{2}}{k}\leq pq\binom{p^{2}}{pq}\lesssim pq(p/q)^{pq}\). Therefore, \(\log r\lesssim\log(pq)+pq\log(p/q)\lesssim pq\log(p/q)\). Similarly, it can be shown that \(\log r\gtrsim pq\log(p/q)\) Gao et al. (2022).
Consider the LDS (3), with \(\breve{e}(k)\sim N(0,\sigma_{k}\mathbf{I})\), i.i.d. across time. Then, as shown in Appendix E.1, \(\mathbf{e}(\omega)\sim N\left(0,\Phi_{\mathbf{e}}(\omega)\right)\) and \(\mathbf{x}(\omega)\sim N(0,\Phi_{\mathbf{x}})\), where \(\Phi_{\mathbf{e}}(\omega)=\sum_{k\in\mathbb{Z}}\sigma_{k}\mathbf{I}\) and \(\Phi_{\mathbf{x}}(\omega)=(I-\mathbf{H}(\omega))^{-1}\Phi_{\mathbf{e}}(\omega)((I-\mathbf{H}(\omega))^{-1})^{*}\).
**Ensemble A:** Consider all possible DAGs in \(\mathcal{G}^{\prime}\) with i.i.d. Gaussian exogenous distribution such that \(\Phi_{\mathbf{e}}(\omega)\) exists. For the two distributions, \(F_{k}\) and \(F_{j}\) such that for any \(\omega\in\Omega\), \(F_{k}(\omega)\sim\mathcal{N}(\mathbf{0},\Phi^{(k)}(\omega))\) and \(F_{j}(\omega)\sim\mathcal{N}(\mathbf{0},\Phi^{(j)}(\omega))\),
\[KL(F_{k}(\omega)||F_{j}(\omega)) =\frac{1}{2}\left(\mathbb{E}_{F_{j}(\omega)}[\mathbf{x}^{*}( \omega)[\Phi^{(k)}(\omega)]^{-1}\mathbf{x}(\omega)]-\mathbb{E}_{F_{j}}[ \mathbf{x}^{*}(\omega)[\Phi^{(j)}(\omega)]^{-1}\mathbf{x}(\omega)]\right)\] \[=\frac{1}{2}\left(\mathbb{E}_{F_{j}(\omega)}[tr(\mathbf{x}( \omega)[\Phi^{(k)}(\omega)]^{-1}\mathbf{x}^{*}(\omega))]-p\right)\] \[=\frac{1}{2}\left(tr([\Phi^{(k)}(\omega)]^{-1}\Phi^{(j)}(\omega)) -p\right)\] \[\leq\frac{1}{2}\left(\sqrt{\|[\Phi^{(k)}(\omega)]^{-1}\|_{F}^{2} \|\Phi^{(j)}(\omega)\|_{F}^{2}}-p\right)\] \[\leq\frac{1}{2}\left(pM^{2}-p\right)\leq(M^{2}-1)p\]
Therefore one of the lower bounds is
\[\inf_{\widehat{G}}\sup_{G\in\mathcal{G}^{\prime}}\mathbb{P}\{(G\neq\widehat{ G})\}\geq\delta,\]
if
\[n \leq\frac{(1-2\delta)pq\log(p/q))-\log 2}{(M^{2}-1)p}\] \[\lesssim\frac{q\log(p/q)}{M^{2}-1}.\]
**Ensemble B:** Here, we consider graphs in \(\mathcal{H}_{p,q}(\beta,\sigma,M)\) (recall Definition 2.4) with a single edge \(u\to v\) with \(H_{vu}(\omega)=\beta\) for every \(\omega\in\Omega\), i.e., a constant matrix. For an LDS with i.i.d. Gaussian noise with PSD matrix \(\Phi_{\mathbf{x}}\) that satisfies this condition, \(\mathbf{H}\) is such that \(H_{vu}\neq 0\) and \(H_{ij}=0\) otherwise. Here, the total number of such graphs is \(p(p-1)\), so \(\log r\asymp\log p\).
Notice that \([\Phi_{\mathbf{x}}^{-1}]_{ij}=(\mathbf{I}_{ij}-H_{ij}-H_{ji}^{*}+\sum_{k=1}^{p}H_{ki}^{*}H_{kj})/\sigma\). Thus (ignoring \(\omega\)), \([\Phi_{\mathbf{x}}^{-1}]_{uv}=\frac{-H_{vu}^{*}+\sum_{k=1}^{p}H_{ku}^{*}H_{kv}}{\sigma}=\frac{-H_{vu}^{*}}{\sigma}=-\beta/\sigma\) and \([\Phi_{\mathbf{x}}^{-1}]_{ij}=0\) if \(i,j\neq u,v\). Then,
\[\mathbf{x}^{*}\Phi_{\mathbf{x}}^{-1}\mathbf{x} =\sum_{ij}x_{i}^{*}[\Phi_{\mathbf{x}}^{-1}]_{ij}x_{j}\] \[=\frac{1}{\sigma}\left[\sum_{i=1}^{p}|x_{i}|^{2}(1+\sum_{k}|H_{ki} |^{2})+x_{u}^{*}[\Phi_{\mathbf{x}}^{-1}]_{uv}x_{v}+x_{v}^{*}[\Phi_{\mathbf{x} }^{-1}]_{vu}x_{u}\right]\] \[=\frac{1}{\sigma}\left[\sum_{i\neq u}|x_{i}|^{2}+(1+|H_{vu}|^{2}) |x_{u}|^{2}-2\beta\Re\{x_{u}^{*}x_{v}\}\right]\] \[=\frac{1}{\sigma}\left[\sum_{i}|x_{i}|^{2}+\beta^{2}|x_{u}|^{2}-2 \beta\Re\{x_{u}^{*}x_{v}\}\right]\] \[=\frac{1}{\sigma}\left[\sum_{i\neq v}|x_{i}|^{2}+|x_{v}-\beta x_{ u}|^{2}\right]\]
Therefore,
\[KL(F^{uv}||F^{jk}) =\mathbb{E}_{F^{uv}}\left[\log F^{uv}-\log F^{jk}\right]\] \[=\frac{1}{2\sigma}\mathbb{E}_{F^{uv}}\left[|x_{v}|^{2}+|x_{v}- \beta x_{u}|^{2}-|x_{k}|^{2}-|x_{k}-\beta x_{j}|^{2}\right]\] \[=\frac{1}{2\sigma}\left[\beta^{2}\sigma+\mathbb{E}_{F^{uv}}\left( \beta^{2}|x_{j}|^{2}-2\beta\Re\{x_{j}^{*}x_{k}\}\right)\right]\]
Considering all the cases of \((u,v)\) vs \((j,k)\) it can be shown that \(KL(F^{uv}||F^{jk})\leq\beta^{2}+\beta^{4}/2\) Gao et al. (2022). Thus, \(n\gtrsim\frac{\log p}{\beta^{2}+\beta^{4}/2}\) gives the second lower bound. The lower bound follows by combining ensembles \(A\) and \(B\).
|
2309.08796 | Towards Robust and Efficient Communications for Urban Air Mobility | For the realization of the future urban air mobility, reliable information
exchange based on robust and efficient communication between all airspace
participants will be one of the key factors to ensure safe operations.
Especially in dense urban scenarios, the direct and fast information exchange
between drones based on Drone-to-Drone communications is a promising technology
for enabling reliable collision avoidance systems. However, to mitigate
collisions and to increase overall reliability, unmanned aircraft still lack a
redundant, higher-level safety net to coordinate and monitor traffic, as is
common in today's civil aviation. In addition, direct and fast information
exchange based on ad hoc communication is needed to cope with the very short
reaction times required to avoid collisions and to cope with the high
traffic densities. Therefore, we are developing a D2D communication and
surveillance system, called DroneCAST, which is specifically tailored to the
requirements of a future urban airspace and will be part of a multi-link
approach. In this work we discuss challenges and expected safety-critical
applications that will have to rely on communications for UAM and present
our communication concept and necessary steps towards DroneCAST. As a first
step towards an implementation, we equipped two drones with hardware prototypes
of the experimental communication system and performed several flights around
the model city to evaluate the performance of the hardware and to demonstrate
different applications that will rely on robust and efficient communications. | Dennis Becker, Lukas Schalk | 2023-09-15T22:39:59Z | http://arxiv.org/abs/2309.08796v1 | # Towards Robust and Efficient Communications for Urban Air Mobility
###### Abstract
For the realization of the future urban air mobility, reliable information exchange based on robust and efficient communication between all airspace participants will be one of the key factors to ensure safe operations. Especially in dense urban scenarios, the direct and fast information exchange between drones based on Drone-to-Drone communications is a promising technology for enabling reliable collision avoidance systems. However, to mitigate collisions and to increase overall reliability, unmanned aircraft still lack a redundant, higher-level safety net to coordinate and monitor traffic, as is common in today's civil aviation. In addition, direct and fast information exchange based on ad hoc communication is needed to cope with the very short reaction times required to avoid collisions and to cope with the high traffic densities. Therefore, we are developing a D2D communication and surveillance system, called DroneCAST, which is specifically tailored to the requirements of a future urban airspace and will be part of a multi-link approach. In this work we discuss challenges and expected safety-critical applications that will have to rely on communications for UAM and present our communication concept and necessary steps towards DroneCAST. As a first step towards an implementation, we equipped two drones with hardware prototypes of the experimental communication system and performed several flights around the model city to evaluate the performance of the hardware and to demonstrate different applications that will rely on robust and efficient communications.
unmanned aviation, urban air mobility, drone-to-drone communications, collision avoidance, measurements, flight demonstration
## Nomenclature
AGC - Automatic Gain Control
CNPC - Control and Non Payload Communication
COTS - Commercially off the Shelf
DAA - Detect and Avoid
DroneCAST - Drone Communication and Surveillance Technology
D2D - Drone-to-Drone
GBAS - Ground Based Augmentation System
GPSDO - GPS Disciplined Oscillator
LOS - Line of Sight
SDR - Software Defined Radio
SNR - Signal to Noise Ratio
UAM - Urban Air Mobility
UTM - Unmanned Aircraft System Traffic Management
## 1 Introduction
In the near future, the urban airspace will be shared by piloted as well as unpiloted and autonomous aircraft, so-called drones. Current airspace management concepts, such as SESAR U-Space [1] and NASA UTM [2], rely on a reliable exchange of information between all participants for a safe integration of the new participants in urban airspace. In particular, unpiloted aircraft such as drones depend on this data exchange. Although robust communication is a central aspect of all concepts, there is currently no communication system that has been adapted to the specific challenges of this environment. In addition, due to the high density of drones, the management of urban airspace, called Unmanned Aircraft System Traffic Management (UTM), will be fundamentally different from the way it is currently handled in civil aviation. Continuous remote control of all the drones by a remote pilot in communication with UTM will not be possible due to the high traffic density and short reaction times needed to avoid collisions. Instead, UTM will heavily rely on pre-planned and conflict-free trajectories as well as continuous monitoring. Drones will fly these trajectories in an automated or autonomous manner. The implementation of this UTM concept will rely, at least in part, on existing communications infrastructure, such as mobile communication in order to connect the drones to the UTM [3]. Under ideal conditions, this approach may seem sufficient. However, upon closer inspection, weaknesses quickly become apparent, such as a lack of redundancy or a lack of an overarching safety net, as is common in civil aviation and shipping, or as is envisioned for future autonomous driving [4, 5, 6, 7].
But the urban environment is very challenging from a physical layer point of view, with rich multipath signal propagation as well as shadowing and diffraction events when flying close to surrounding objects such as tall buildings. Therefore, we are developing an ad hoc communication concept that is adapted to the specific challenges of the urban environment and takes into account the requirements of the potential applications. The ad hoc communication concept refers to the technical communication on the air interface between different nodes and is designed as a redundant data link in addition to other communication options in the context of a multi-link approach.
## 2 Challenges and Applications for Communication Systems for Urban Air Mobility
Communication systems for use in urban airspace face unique challenges that must be considered when selecting an appropriate system. The expected high density of drones must
be considered along with high mobility in three-dimensional space and rapidly changing topologies. Communication resources are limited and must be shared by all participants, whether airborne or ground based. The efficient use of resources and scalability is critical. In urban environments, the transmitted electromagnetic signals are reflected, scattered and diffracted by many surrounding objects such as buildings, vegetation and cars, as illustrated in Fig. 1. The multipath propagation of the signal can cause unfavorable overlap at the receiver and must be taken into account during reception to allow reconstruction of the transmitted signal. In addition, such interference can also be expected between different signals of the participants, especially in the air, where there is high visibility between the vehicles and possible communication infrastructure in a dense airspace [8]. Furthermore, direct signal propagation can be expected to be shadowed by larger objects such as buildings at lower altitudes, so that only reflected and diffracted components can be received. Influences from the aircraft itself, such as shadowing by its frame and electrical and mechanical sources of interference, must also be considered.
The design of the aircraft may also impose limitations in terms of size, weight and power consumption, known as SWaP constraints. This must then be taken into account in the choice of the transceiver performance. In addition, the requirements of various applications and future regulations in the field of urban air mobility are not yet well known. Figure 2 provides an overview of several other categories that interact with communications in the urban airspace and may need to be considered. In addition to the specific signal propagation effects already mentioned, any application that requires information exchange with the airborne vehicles may also place direct requirements on the communication system. For example, a certain amount of data must be transmitted within a certain time, the data link must be highly available, or a minimum number of subscribers, i.e., scalability, must be ensured. There may also be indirect requirements or influences on the communication system. For example, a command or important information for an aircraft to avoid an obstacle may need to be provided in a more timely manner if the aircraft is traveling faster, is very sluggish in evading obstacles, or has limited mobility. Also, if position accuracy is degraded, increased separation and other separation rules may need to be applied, requiring more timely or frequent information exchanges. An on-board autonomy system may need to send information back to UTM, depending on the level of autonomy, or require certain clearances that may not be automated. Required security measures also mean increased data exchange and data volume. More broadly, various environmental factors place demands on the communications system. Weather, for example, can degrade signal propagation conditions or affect flight performance, such as in the case of strong wind gusts. In addition, events such as bird strikes may require active detection and transmission of critical information to the aircraft.
However, for communication in urban environments, collision avoidance in densely populated airspace will be a key application, as reliable and decentralized exchange of position data and trajectories between individual drones will be required. In this context, there is a high demand for the lowest possible transmission latency to enable the shortest possible reaction times.
## 3 Multi-link Approach for Robust Communication
To realize the upcoming UAM, a wide variety of applications will be used, each with different requirements on communication. It is therefore very difficult for a single communication system to cover the wide range of requirements. Therefore, we pursue a multi-link approach, i.e. a combination of different data links, as is also pursued in other concepts [9, 10, 11, 2]. A multi-link approach combining different data link technologies has many advantages over a single data link. The increase in redundancy and the increase in the performance of the overall system are the key aspects here. In addition, the initial effort required for the step-by-step implementation of applications such as U-Space Services is reduced. Existing communication systems can be used first, even if they have not been adapted for the application, and then future adapted data links can be added or the existing systems can be adapted according to the requirements. We distinguish different systems in the categories of infrastructure communication and ad hoc communication.
For most use cases, the already existing communication infrastructure in the urban area will be sufficient, since for a large part the exchange of information is not safety-critical and the required amounts of data can be transmitted over it with a certain delay. Since collision avoidance is a particularly safety-critical application in urban areas, we consider a specifically tailored and redundant safety network based on an ad hoc communication system to be the most important element for this application. In this case, important information for collision avoidance should be exchanged primarily via a direct and adapted data link between vehicles.
We consider the construction of a combined communication and monitoring system, which shall have the following characteristics.
Fig. 1: Major signal propagation effects to consider in the urban D2D communications channel.
1. **Cooperative Collision Avoidance** Cooperative collision avoidance based on ad hoc communication between drones will be implemented that creates an additional, decentralized safety net without having to rely on communication infrastructure.
2. **Redundant Monitoring and Tracking of Aircraft** Not only can ad hoc communications be used to establish a direct link between drones, but redundant monitoring of drone movements can be established using appropriate ground stations to support the UTM. This can be done using the position messages of all drones in range, which are already broadcast for collision avoidance.
3. **Backup Datalink** For most applications, non-critical information can be sent over existing communication infrastructure. Nevertheless, it may be useful to have a redundant bi-directional data link available for this as well to increase reliability. Furthermore, it is not yet possible to estimate which other possible critical applications will require a reliable data link or low latencies in addition to collision avoidance. Therefore, it makes sense to build a basic backup data link that goes beyond a conventional pure beaconing system for collision avoidance. Here it would be possible to establish only direct links between the airborne participants and possible ground stations or to allow links over multiple "hops". A "multihop" communication across multiple participants would significantly increase the capability of a backup data link and create a new communication infrastructure, but implies additional effort in implementation and overhead in the communication itself. For example, routing algorithms would need to be implemented to route messages to the recipient via the correct path. The challenge here lies primarily in the rapidly changing network topology due to the high mobility of participants and changing signal propagation conditions. Thus, an initially preferred connection path between two nodes as part of a route can quickly become unfavorable or even fail completely if the communication characteristics deteriorate or the connection is quickly disrupted by shadowing.
Figure 3 illustrates such a multi-link approach, considering mobile communication and satellite-based communication as available infrastructure in urban environments.
Fig. 3: Multi-link approach as communication concept in DLR project HorizonUAM.
Fig. 2: Overview of categories that have direct or indirect mutual relationships with communication for Urban Air Mobility.
## 4 Ad-hoc Communication as a Solution for Collision Avoidance in Urban Airspace
In road traffic, drivers avoid collisions with other vehicles by using their eyes to monitor their surroundings and braking or swerving as soon as they detect that another vehicle is on a collision course. In today's vehicles, optional assistance systems help the driver detect potential collisions. For example, adaptive cruise control systems use on-board sensors such as RADAR, LIDAR, or cameras to adjust the vehicle's speed to maintain a safe distance from vehicles ahead.
Beyond that, a variety of other sensors can be used to detect collision courses in road traffic, air traffic, rail traffic, or maritime traffic. Basically, sensors can be divided into two types: Cooperative obstacle detection sensors and non-cooperative obstacle detection sensors. Figure 4 provides an overview of the different sensor types for detect-and-avoid (DAA) systems.
Cooperative obstacles actively attract attention, for example by emitting a signal. Non-cooperative obstacles do not call attention to themselves. Sensors that detect non-cooperative systems can be further divided into active and passive systems. Active systems emit a signal and detect the reflection of the obstacle. An example of an active system is a RADAR. Passive systems detect obstacles by detecting unintentionally emitted signals, such as thermal radiation. Cooperative systems are widely used in all traffic domains to create situational awareness among vehicles. Therefore, every vehicle is required to periodically transmit its own position and intent to nearby vehicles via an ad-hoc communications system. Popular systems are 1090 Extended Squitter for air traffic, IEEE 802.11p for road traffic, RCAS for rail traffic and AIS for maritime traffic.
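The practical substance of such a cooperative system is the periodically broadcast message itself. The following is a minimal sketch of what a position-and-intent beacon could contain; the field names, byte layout and example values are our own illustrative assumptions and are not taken from 1090 Extended Squitter, IEEE 802.11p, RCAS or AIS.

```python
import struct
import time
from dataclasses import dataclass


@dataclass
class PositionBeacon:
    """Hypothetical cooperative DAA beacon; layout chosen only for illustration."""
    vehicle_id: int       # 32-bit identifier of the broadcasting vehicle
    lat_deg: float        # WGS84 latitude
    lon_deg: float        # WGS84 longitude
    alt_m: float          # altitude above ground
    vel_north_mps: float  # velocity, north component
    vel_east_mps: float   # velocity, east component
    vel_down_mps: float   # velocity, down component
    timestamp_s: float    # time of applicability (UNIX time)

    _FMT = "<I7d"  # little-endian: one uint32 followed by seven doubles (60 bytes)

    def pack(self) -> bytes:
        return struct.pack(self._FMT, self.vehicle_id, self.lat_deg, self.lon_deg,
                           self.alt_m, self.vel_north_mps, self.vel_east_mps,
                           self.vel_down_mps, self.timestamp_s)

    @classmethod
    def unpack(cls, data: bytes) -> "PositionBeacon":
        return cls(*struct.unpack(cls._FMT, data))


if __name__ == "__main__":
    beacon = PositionBeacon(42, 48.0836, 11.2833, 35.0, 2.0, -1.0, 0.0, time.time())
    assert PositionBeacon.unpack(beacon.pack()) == beacon
```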
The main advantage of cooperative systems is the fact that cooperative systems typically provide very accurate information about position, direction, and speed, as well as additional information that non-cooperative systems cannot easily provide, such as the type of target and its state. Moreover, ad-hoc communication is a way of transmitting data without relying on any fixed infrastructure. It has several advantages for collision avoidance, especially in critical situations where every millisecond counts. Ad-hoc communication can achieve lower latency than infrastructure communication, which means that the data can be delivered faster and more reliably. Ad-hoc communication can also serve as a backup in case the infrastructure communication fails or is not available. Compared to RADAR and LIDAR, ad-hoc communication can cover higher ranges and works even in case of signal shadowing, which is when the signal is blocked or weakened by objects or weather conditions. Furthermore, ad-hoc communication can enable extended information exchange between the communicating vehicles, such as evasion instructions and information about trajectories. This can help to coordinate the actions and avoid conflicts. Finally, ad-hoc communication can have lower power consumption than RADAR, which means that it can save energy and reduce costs.
Cooperative detect and avoid via ad-hoc communication systems also has some disadvantages. First, the information that is transmitted must be trusted, since there is no guarantee that it is accurate or authentic. Therefore, security measures are needed to ensure the reliability and integrity of the communication. Second, ad-hoc communication requires an additional communication module, which adds costs, weight and power consumption to the vehicles or participants. This may affect their performance and efficiency. However, safety should not be compromised for the sake of saving resources. Third, ad-hoc communication cannot detect non-cooperative participants, who do not share their information or intentions with others. This means that there may be some hidden or unexpected threats that are not accounted for by the communication.
## 5 Development of a Drone-to-Drone Channel Model for Urban Environments based on Measurements
In 2019, the German Aerospace Center (DLR) conducted a wideband channel sounding measurement campaign with two small hexacopters to measure drone-to-drone (D2D) propagation characteristics in an urban environment. The campaign took place at the DLR site in Oberpfaffenhofen, Germany, in three different environments with different flight trajectories, including critical scenarios where two communicating drones are not always in LOS to each other and are on a collision course. A channel sounding signal was transmitted in the C-band at 5.2 GHz with a bandwidth of 100 MHz and a transmit power of 30 dBm using omnidirectional and vertically polarized radiating antennas mounted underneath the drones. The measurement setup, hardware equipment and flight scenarios are described in detail in [12, 13].
Based on these measurements, we proposed a wideband channel model for D2D scenarios in urban environments in [14] to help evaluate and validate different communication concepts and datalink candidates via simulations without
Fig. 4: Overview and classification of Detect-and-Avoid systems.
the need to perform complex and time-costly measurement campaigns. By considering the underlying signal propagation effects in urban environments, the robustness of datalink candidates can be improved. The model follows a geometrical-statistical channel modeling (GSCM) approach and incorporates coarse-grained knowledge about realistic locations and shapes of buildings to model the propagation effects close to their physical cause in the real world. It is antenna independent and considers the identified dominant signal propagation effects from our measurements, but can easily incorporate further statistics. A more detailed discussion of the propagation characteristics and preliminary steps is presented in [15] and [16]. Figure 5 gives an overview of the model elements and Fig. 6 shows the steps in the simulation chain.
First, coarse-grained abstract building shapes are placed according to the scenario under investigation. The locations and shapes of buildings in the surrounding environment very much influence the propagation characteristics of the urban D2D channel and therefore this initial placement helps to achieve realistic distribution of all other model elements. For this, statistical descriptions for urban environments like in the ITU-R Rec. P.1410 model [17] for example or direct 3D geometries from land surveying offices or similar can be used.
After the definition of the building shapes, the flight trajectories are defined and the properties of two different elements are drawn from statistical distributions. There are point scatterers with certain opening angles and scattering losses placed at different positions on the surfaces as well as reflection surfaces with certain dimensions and reflection losses. After this initialization phase the scenario is fully defined and the communication channel properties are generated in a snapshot based manner given the targeted time resolution. Then different propagation effects are calculated for the simplified resulting signal paths for all elements. Finally, the superimposed signal is calculated at the receiver.
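A minimal, narrowband sketch of such a snapshot-based simulation chain is given below. The building, scatterer and reflector placements, the loss values and the single-bounce path handling are placeholder assumptions for illustration; they do not reproduce the fitted statistics of the published model [14].

```python
import numpy as np

C = 3e8     # speed of light in m/s
FC = 5.2e9  # carrier frequency of the measurement campaign in Hz

# 1) Coarse building shapes (a single axis-aligned box with a made-up position);
#    in this sketch it only motivates where scatterers and reflectors are placed.
buildings = [{"center": np.array([20.0, 10.0, 6.0]), "size": np.array([10.0, 8.0, 12.0])}]

# 2) Point scatterers and reflection surfaces drawn on the building surfaces;
#    the positions and losses below are illustrative values, not fitted statistics.
scatterers = [{"pos": np.array([15.0, 10.0, 6.0]), "loss_db": 15.0}]
reflectors = [{"pos": np.array([25.0, 14.0, 5.0]), "loss_db": 6.0}]

def path_gain(tx, rx, via=None, extra_loss_db=0.0):
    """Complex gain of a direct or single-bounce path: free-space loss plus phase."""
    d = np.linalg.norm(rx - tx) if via is None else \
        np.linalg.norm(via - tx) + np.linalg.norm(rx - via)
    fspl = (4.0 * np.pi * d * FC / C) ** 2
    return 10 ** (-extra_loss_db / 20) / np.sqrt(fspl) * np.exp(-2j * np.pi * FC * d / C)

def channel_snapshot(tx, rx):
    """Superimpose LOS, scattered and reflected components for one snapshot
    (LOS blockage by the buildings is omitted in this sketch)."""
    h = path_gain(tx, rx)
    for s in scatterers:
        h += path_gain(tx, rx, via=s["pos"], extra_loss_db=s["loss_db"])
    for r in reflectors:
        h += path_gain(tx, rx, via=r["pos"], extra_loss_db=r["loss_db"])
    return h

# 3) Snapshot-based evaluation along a flight trajectory of the transmitting drone.
tx_traj = np.stack([np.linspace(0.0, 50.0, 100), np.zeros(100), np.full(100, 15.0)], axis=1)
rx_pos = np.array([40.0, 30.0, 20.0])
gain_db = [20 * np.log10(abs(channel_snapshot(tx, rx_pos))) for tx in tx_traj]
print(f"mean channel gain along the trajectory: {np.mean(gain_db):.1f} dB")
```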
## 6 Development of DroneCAST
As an essential part of the multi-link approach described in Sec. 3, we aim to develop a data link tailored to the special requirements and challenges for the safe operation of UAVs in urban areas. As a redundant data link based on the direct exchange between the vehicles, it supplements the communication alongside the available communication infrastructure, such as mobile or satellite communication. The concept for this ad hoc communication first takes into account the requirements imposed by the main application, collision avoidance, since this is the most safety-critical application and its information exchange is given priority. However, we expect the following three applications within UAM to rely on radio communication with airspace participants and to be safety-critical for ensuring safe operation. First,
Fig. 5: Overview of elements for the D2D channel model.
Fig. 6: Overview of the simulation chain for the D2D channel model.
cooperative collision avoidance based on direct communications will be needed to resolve potential conflicts as a last course of action, as shown in Fig. 8a. In order to meet the high requirements for navigation, such as high position accuracy and high availability, the navigation concept will also rely on a redundant design of onboard sensors such as GNSS, IMU or cameras and on the fusion of these sensor data. In addition to this, broadcasting GNSS correction data from Ground Based Augmentation System (GBAS) ground stations to airborne vehicles may support their navigation. For this, information must be transmitted from the ground stations to the drones, as shown in Fig. 8b. As a third possible application, broadcasting important information from vertiports to all vehicles in close range, for example in emergency cases, might be needed, as illustrated in Fig. 8c.
For this, we proposed the Drone Communication and Surveillance Technology (DroneCAST) [18] in order to establish an additional, decentralized and robust safety layer for the UTM concept as the third of the general levels of deconfliction shown in Fig. 7. We discussed first design decisions and analyzed requirements for DroneCAST. It is supposed to work reliably up to a density of 100 drones per square kilometer while using no more than 5 MHz of frequency bandwidth in the C-band between 5030 MHz and 5091 MHz, which is already foreseen for drone communications. Major identified challenges are the severe multipath propagation environment and sudden shadowing events from a physical layer point of view, and the high expected drone densities as well as limited communication resources from a medium access control point of view.
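To relate the density and bandwidth requirements to each other, the following back-of-the-envelope calculation estimates the channel load generated by periodic beaconing alone. Apart from the drone density, all numbers (beacon size, update rate, framing overhead and PHY rate) are assumptions we choose for illustration and are not DroneCAST design values.

```python
# Rough channel-load estimate for periodic beaconing in a 5 MHz channel.
drones_per_km2 = 100    # target density from the DroneCAST requirement
beacon_rate_hz = 10     # assumed position update rate per drone
payload_bytes = 125     # assumed beacon payload (the size used in our link tests)
overhead_bytes = 60     # assumed PHY/MAC framing overhead per packet
phy_rate_bps = 3e6      # assumed robust PHY data rate in a 5 MHz channel

airtime_per_beacon_s = (payload_bytes + overhead_bytes) * 8 / phy_rate_bps
channel_load = drones_per_km2 * beacon_rate_hz * airtime_per_beacon_s

print(f"airtime per beacon: {airtime_per_beacon_s * 1e3:.2f} ms")
print(f"offered channel load from one square kilometre: {channel_load:.0%}")
```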
Fig. 8: Expected safety-critical applications for urban air mobility relying on communications over radio interface.
Fig. 7: Three general levels of deconfliction.
## 7 Experimental Platform and Flight Demonstrations towards DroneCAST
As a first step towards an implementation of DroneCAST, we equipped two drones with hardware prototypes of the experimental communication system and performed several flights around the model city to evaluate the performance of the hardware in comparison to commercial off-the-shelf (COTS) hardware and to demonstrate different applications that will rely on robust and efficient communications. The flight tests are meant to show whether the hardware is suitable for a later implementation and can be carried by our drones.
### Experimental Radio
The base hardware for our experimental radio is a Software Defined Radio (SDR) together with a software implementation of the IEEE 802.11p WiFi standard for vehicular communication [7] on a small companion computer. The software implementation, which consists of different building blocks for a transmission system, runs in GNU Radio, a signal processing framework, and was developed as an open source stack in [19]. In order to increase the transmitted signal power of the SDR, we added a signal amplifier, and for time synchronization we are using a GPS disciplined oscillator that can be accessed by the SDR.
The following list gives an overview of the hardware elements of the experimental radio.
* **Software Defined Radio:** Ettus USRP B210
* **Companion Computer:** Intel NUC (Ubuntu 20.04.6 LTS, GNU Radio)
* **Amplifier:** Coaxial ZX60-83LN 21 dB gain
* **Time Synchronization:** Board Mounted GPSDO (TCXO)
* **Power Supply:** 100 W DC Converter for 19V, 100 W DC Converter for 6V
The elements are also illustrated in the payload setup shown in fig. 10.
The GNU Radio implementation uses a TAP interface, a virtual Ethernet device, on the companion computer, which enables using the SDR as an IP-based data link device. Thus, the experimental radio is able to transmit different application data via the IP interface. As first modifications towards DroneCAST, we changed the center frequency to 5050 MHz, which is within the foreseen frequency band of 5030 MHz to 5091 MHz, halved the bandwidth from 10 MHz to 5 MHz, and tested these modifications.
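Because the stack exposes the SDR as an ordinary IP device, application data can be generated with plain sockets. The snippet below is a minimal sketch of the kind of periodic test traffic used in our measurements (125-byte payloads at 10 Hz); the peer address and port are placeholders, not values from the actual setup.

```python
import socket
import time

PEER_ADDR = ("192.168.123.2", 5005)  # placeholder address of the receiving radio on the TAP link
PAYLOAD_BYTES = 125                  # payload size used in the measurements
RATE_HZ = 10                         # transmission rate used in the measurements

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0
try:
    while True:
        # A sequence number plus zero padding, so that losses can be counted at the receiver.
        payload = seq.to_bytes(4, "big") + bytes(PAYLOAD_BYTES - 4)
        sock.sendto(payload, PEER_ADDR)
        seq += 1
        time.sleep(1.0 / RATE_HZ)
except KeyboardInterrupt:
    sock.close()
```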
We first evaluated the performance of our setup under ideal laboratory conditions by connecting two experimental radios with defined attenuators. For each attenuation value, we transmitted \(25\cdot 10^{3}\) packets of 125-byte payload data at a transmission rate of 10 Hz, with and without amplifying the signal. Without amplification the USRP B210 is able to transmit with up to 10 dBm. With the amplifier we set the resulting transmission power to about 23 dBm in order to achieve a transmission power similar to that of the COTS radio, a Cohda Wireless Mk5. The amplifier has a gain of 21 dB, which means that with the same settings the SDR itself transmits at about 2 dBm. We repeated the measurement with and without the amplifier in order to evaluate whether the amplifier causes signal distortions that would lead to increased packet errors. Figure 9 shows the resulting packet error rates over different attenuation values. Furthermore, it shows the Signal to Noise Ratio (SNR) values at packet reception indicated by the SDR. These values help us to evaluate the in-flight measurement results described in Section 7.3.
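The packet error rate for each attenuation setting follows directly from the sequence numbers embedded in the test packets. A minimal sketch of this evaluation is shown below; the received sequence sets are invented for illustration.

```python
def packet_error_rate(num_sent: int, received_seq: set) -> float:
    """Packet error rate from the number of sent packets and the received sequence numbers."""
    received = sum(1 for s in received_seq if 0 <= s < num_sent)
    return 1.0 - received / num_sent

# 25,000 packets were sent per attenuation setting in the lab measurement;
# the received sets below are made up to show the computation.
per_by_attenuation_db = {
    70: packet_error_rate(25_000, set(range(25_000))),         # every packet received
    100: packet_error_rate(25_000, set(range(0, 25_000, 2))),  # every second packet lost
}
print(per_by_attenuation_db)  # {70: 0.0, 100: 0.5}
```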
We can clearly see that the two measurements with and without amplifier only differ according to the gain of 21 dB. Therefore, we assume a low-distortion amplification, and the amplifier is suitable for our setup. Furthermore, the measurements reveal that the radios are only able to receive packets within a certain SNR range. At attenuations of 110 dB and 90 dB, respectively, the received signal power is too low, and starting at 90 dB and 70 dB, respectively, the signal power begins to overdrive the radio until no packet can be decoded anymore. The dynamic range is relatively small because there is no active automatic gain control (AGC). The hardware offers a built-in AGC, but the hardware driver does not activate it when using the GNU Radio framework.
### Integrated Payload on Hexacopter and Flighttrial Setup
We integrated our experimental radio as payload on our hexacopters. We are using two custom-built hexacopters
Fig. 9: Measured packet error rates and received SNR values under ideal lab conditions with and without signal amplification.
based on DJI S900 airframes with upgraded E1200 propulsion system. They are equipped with two 10 Ah batteries at nominal voltage of 22.2 V and are able to carry up to 3 kg of payload with a flight time of approximately 15 to 20 minutes. Furthermore, they are using a Pixhawk 2 flight controller and have Raspberry Pi 4 companion computers in order to communicate with the flight controller via MAVLink messages over a serial interface. Figure 10 shows the main elements of our flight trial setup used for all in-flight measurements. Thereby we switched between the experimental radio and the COTS radio, but we were also able to carry both payloads for the flight demonstrations.
This setup enables the transmission of flight controller data over the radios for different applications and also the sending of commands to the flight controller.
### In-flight Measurements
We equipped our two hexacopters with the given hardware prototypes for collecting in-flight measurement data in and around the model city. In order to assess the experimental radio, we first performed three different flight missions and compared the performance of the experimental radio and the COTS hardware. Then we measured the performance when flying close to and in between the buildings of the model city, providing non-LOS scenarios. Figure 11 illustrates the overall flight trial setup for the measurements, consisting of our two hexacopters equipped with one of the two payload options.
The measurements and results are discussed in the following sections. Overall, the measurements and demonstrations showed feasible bidirectional information exchange for the experimental radio setup. We increased the transmission power by using a signal amplifier in order to achieve a transmission power similar to that of the COTS radio, and we used additional GPSDO extension boards in order to synchronize all radios to a GPS time reference. However, the performance of our experimental radio was slightly worse than that of the COTS hardware due to a missing AGC and weaker SNR values. In all line of sight (LOS) measurement scenarios, no packet losses occurred for the COTS radio, whereas the experimental radio showed packet error rates between \(4\%\) and \(8\%\).
#### 7.3.1 Flight Mission 1
For Mission 1 the transmitting drone was hovering at a defined height of 15 m and the receiving drone was flying four circles with a radius of 30 m around the hovering drone at three different heights of 10 m, 15 m and 20 m. Figure 12 illustrates the flight mission and Figure 13 shows the flight heights as well as the distances between the drones, both as direct three-dimensional distance and as two-dimensional distance above the ground. The flying drone always headed towards the next mission waypoint, indicated as dots in the figure. Due to limited navigation accuracy, the flown trajectories were not always the same but closely followed the path given by the waypoints for all measurement scenarios. In this scenario, the distance between the drones stays more or less the same and only changes slightly with the drone heights. Therefore, this scenario enables analyzing the impact of airframe shadowing without changing the fading due to multipath propagation.
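The two distance measures used in the following plots follow directly from the waypoint geometry. A small sketch of this computation for the Mission 1 geometry is given below; the number of waypoints per circle is an arbitrary choice for illustration.

```python
import numpy as np

# Mission 1 geometry: one drone hovers at 15 m height, the other flies circles of
# 30 m radius around it at 10 m, 15 m and 20 m height.
hover = np.array([0.0, 0.0, 15.0])
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)  # 36 waypoints per circle (arbitrary)

for height in (10.0, 15.0, 20.0):
    circle = np.stack([30.0 * np.cos(angles), 30.0 * np.sin(angles),
                       np.full_like(angles, height)], axis=1)
    d3 = np.linalg.norm(circle - hover, axis=1)           # direct three-dimensional distance
    d2 = np.linalg.norm((circle - hover)[:, :2], axis=1)  # two-dimensional distance above ground
    print(f"height {height:4.1f} m: 3D distance {d3.mean():.1f} m, 2D distance {d2.mean():.1f} m")
```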
#### 7.3.2 Results Mission 1
Figure 14 shows the measured received SNR values of the experimental radio for the whole flight and the instants in time when packets were not successfully received, together with the distances. It can be seen that packet errors mostly occur at low SNR values, and that the SNR values reveal a repeating pattern, varying between less than 10 dB and more than 30 dB. Figure 15 again shows the received SNR values and the packet errors overlaid on the trajectories in two- and three-dimensional layouts. On the map we can clearly see that the repeating pattern of the SNR values results from different height-independent viewing angles between the drones and is caused by the airframe shadowing of the hovering drone. The overall packet error rate for this measurement was about \(4\%\), and the main reason was an SNR that was too low for the experimental radio.
For comparison with the experimental radio, we performed this measurement with the COTS hardware. Figure 16 shows the received SNR values and the distances, but this time no packet errors occurred. The repeating pattern is again recognizable and the indicated values are higher compared to the values of the experimental radio, but the differences are similar. This result shows that the COTS radio can achieve higher SNR values for the same received signal powers.
Fig. 11: Overview of flight trial setup for the in-flight measurements.
Fig. 10: Overview of elements for integrated payload on drones.
Figure 17 illustrates the results again on a map. We again can clearly see the influence of the airframe shadowing.
#### 7.3.3 Flight Mission 2
For Mission 2 the receiving drone flew the same flight trajectory as in Mission 1, but the transmitting drone was hovering at a different position outside the circles. In this scenario, the distance between the drones changes in order to analyze the influence of fading in comparison to the results of Mission 1. Figure 18 illustrates the mission plotted on a map and fig. 19 shows the drones heights and the distances between them.
#### 7.3.4 Results Mission 2
Figure 20 shows the measured received SNR values of the experimental radio for the whole flight and the instants in time when packets were not successfully received, together with the distances. It can be seen that packet errors mostly occur at low SNR values, and that the SNR values reveal a repeating pattern, varying between less than 10 dB and more than 30 dB. Figure 21 again shows the received SNR values and the packet errors overlaid on the trajectories in two- and three-dimensional layouts. On the map we can clearly see that the repeating pattern of the SNR values results from different height-independent viewing angles between the drones and is caused by airframe shadowing, but this time of both the hovering drone and the flying drone. In comparison to Mission 1, slightly more packet errors occur. The overall packet error rate for this measurement was about \(8\%\), and the main reason was an SNR that was too low for the experimental radio, caused by airframe shadowing.
For comparison with the experimental radio, we performed this measurement with the COTS hardware. Figure 22 shows the received SNR values and the distances, but no packet errors occurred. The repeating pattern is again recognizable and the indicated values are higher compared to the values of the experimental radio, but the differences are similar. Figure 23 illustrates the results again on a map. We can again clearly see the influence of the airframe shadowing.
#### 7.3.5 Flight Mission 3
For Mission 3 both drones were flying at heights of about 20 m above ground on parallel trajectories, repeatedly approaching each other to a distance of about 10 m and moving apart again to a distance of about 60 m. In this scenario the viewing angles only changed when flying forwards or backwards along the trajectory, which allows analyzing the influence of fading and distance without airframe shadowing.
#### 7.3.6 Results Mission 3
Figure 26 shows the measured received SNR values of the experimental radio for the whole flight and the instants in time when packets were not successfully received, together with the distances. This time we additionally see packet errors at high SNR values, when the receiver was not able to handle the high signal power. Figure 27 again shows the received SNR values and the packet errors overlaid on the trajectories in two- and three-dimensional layouts. On the map we can clearly see that the repeating pattern of the SNR values mostly results from the distances between the drones. When
Fig. 16: Mission 1: Measurement results for COTS hardware setup.
Fig. 17: Mission 1: Measurement results on map for COTS hardware setup.
they come close to each other at half of the mission path, the received signal power gets too high and starts to overdrive the receiver, and when they are at the furthest distance away from each other, the received signal power gets too low.
The overall packet error rate for this measurement was about \(6\%\).
For comparison with the experimental radio, we performed this measurement with the COTS hardware. Figure 28 shows the received SNR values and the distances, but no packet errors occurred. The repeating pattern is again recognizable and the indicated values are higher compared to the values of the experimental radio, but the differences are similar.
Figure 28: Mission 3: Measurement results for COTS hardware setup.
Figure 26: Mission 3: Measurement results for SDR setup.
Figure 27: Mission 3: Measurement results on map for SDR setup.
Figure 31 shows a video screenshot of the collision avoidance scenario demonstrated with our experimental setup at the model city. It shows the live monitoring screen with the received trajectories of both drones at the moment when the collision avoidance application stopped the drones' flight and sent an emergency message down to the monitoring ground station.
For the GBAS transmission to the drones, we used one drone as a broadcasting station on the ground, as shown in Fig. 32, and let the other drone fly in and around the model city. In order to secure the transmission, we used a software implementation of the TESLA protocol that had already been demonstrated in our group at a flight trial with a piloted aircraft to secure the broadcast of GBAS correction data via LDACS [21].
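The core idea behind TESLA is to authenticate broadcast messages with a MAC whose key is taken from a one-way hash chain and disclosed only after a delay, so that receivers can verify buffered messages without a shared secret. The following is a minimal, self-contained sketch of that mechanism under these assumptions; it is not the implementation used in the flight trial and omits the time synchronization and key scheduling details.

```python
import hashlib
import hmac

def hash_chain(seed: bytes, length: int) -> list:
    """Key chain with k_i = H(k_{i+1}); the first element is the public commitment."""
    keys = [seed]
    for _ in range(length):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys[::-1]  # keys[0] is the commitment, keys[i] authenticates interval i

# Sender side: publish the commitment, MAC the message of interval i with chain[i],
# and disclose chain[i] only after the disclosure delay has passed.
chain = hash_chain(b"secret-seed", length=4)
commitment = chain[0]
i = 2
message = b"GBAS correction data for interval 2"
mac = hmac.new(chain[i], message, hashlib.sha256).digest()

# Receiver side, after the delayed key disclosure:
disclosed_key = chain[i]
# 1) Verify the disclosed key against the commitment by hashing it i times.
check = disclosed_key
for _ in range(i):
    check = hashlib.sha256(check).digest()
assert check == commitment
# 2) Verify the MAC of the buffered message with the now-disclosed key.
assert hmac.compare_digest(mac, hmac.new(disclosed_key, message, hashlib.sha256).digest())
print("broadcast message authenticated")
```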
## 8 Conclusion and Outlook
In this work, we presented a multi-link approach with a focus on an ad-hoc communication concept that will help to reduce the probability of mid-air collisions and thus increase social acceptance of urban air mobility. As an essential part
Fig. 31: Flight demonstration of collision avoidance with experimental setup at model city.
Fig. 30: Overview of major elements of flight trial setup.
Fig. 32: Flight demonstration of secured GBAS transmission to drone around model city.
Fig. 29: Mission 3: Measurement results on map for COTS hardware setup.
of the described multi-link approach, we aim to develop DroneCAST, a data link tailored to the special requirements and challenges in urban airspace, in order to establish an additional, decentralized and robust safety layer for the UTM concept. For the development of DroneCAST, we make use of our measurement-based Drone-to-Drone channel model for urban environments in order to increase robustness and efficiency. As a first step towards an implementation, we equipped two drones with hardware prototypes of an experimental communication system and performed several flights around a model city to evaluate the performance of the hardware and to demonstrate different applications that will rely on robust and efficient communications. The results showed the feasibility of the experimental hardware setup. However, a missing automatic gain control in this setup resulted in a weaker performance compared to a COTS radio. Therefore, in the next steps we aim to develop a next-level hardware prototype for a DroneCAST radio and will also target physical layer robustness and security topics.
|
2309.08782 | Stein Variational Gradient Descent-based Detection For Random Access
With Preambles In MTC | Traditional preamble detection algorithms have low accuracy in the
grant-based random access scheme in massive machine-type communication (mMTC).
We present a novel preamble detection algorithm based on Stein variational
gradient descent (SVGD) at the second step of the random access procedure. It
efficiently leverages deterministic updates of particles for continuous
inference. To further enhance the performance of the SVGD detector, especially
in a dense user scenario, we propose a normalized SVGD detector with momentum.
It utilizes the momentum and a bias correction term to reduce the preamble
estimation errors during the gradient descent process. Simulation results show
that the proposed algorithm performs better than Markov Chain Monte Carlo-based
approaches in terms of detection accuracy. | Xin Zhu, Hongyi Pan, Salih Atici, Ahmet Enis Cetin | 2023-09-15T22:02:20Z | http://arxiv.org/abs/2309.08782v1 | # Stein Variational Gradient Descent-Based Detection for Random Access With Preambles in MTC
###### Abstract
Traditional preamble detection algorithms have low accuracy in the grant-based random access scheme in massive machine-type communication (mMTC). We present a novel preamble detection algorithm based on Stein variational gradient descent (SVGD) at the second step of the random access procedure. It efficiently leverages deterministic updates of particles for continuous inference. To further enhance the performance of the SVGD detector, especially in a dense user scenario, we propose a normalized SVGD detector with momentum. It utilizes the momentum and a bias correction term to reduce the preamble estimation errors during the gradient descent process. Simulation results show that the proposed algorithm performs better than Markov Chain Monte Carlo-based approaches in terms of detection accuracy.
Xin Zhu\({}^{*}\), Hongyi Pan\({}^{\dagger}\), Salih Atici\({}^{*}\), Ahmet Enis Cetin\({}^{*}\)
\({}^{*}\)Department of Electrical and Computer Engineering, University of Illinois Chicago, USA
\({}^{\dagger}\)Machine & Hybrid Intelligence Lab, Northwestern University, USA
Footnote †: This work was supported by NSF IDEAL 2217023.
Preamble detection, Stein variational gradient descent, grant-based random access, massive machine-type communication (mMTC).
## 1 Introduction
Against the background of massive machine-type communication (mMTC) [1, 2], future wireless communication systems need to support a large number of devices. However, the preamble resources are limited. As the number of devices increases, the preamble collision problem [3, 4, 5] becomes more serious, which increases the difficulty of communication system design. To reduce power consumption and achieve massive connectivity in future Internet of Things (IoT) scenarios [1], communication systems need a random access scheme that can alleviate the preamble collision problem. At present, such random access schemes are mainly divided into two categories: grant-based random access (GBRA) and grant-free random access (GFRA) [6, 7, 8].
In this paper, we use the GBRA scheme. GBRA [9, 10, 11, 12] includes a four-step handshake random access protocol: (1) Each active user randomly selects a preamble from the preamble pool. (2) The base station (BS) delivers random access responses to users. (3) Users return MSG3 to the BS. (4) The BS performs random access conflict resolution. In the dense user scenario, a large number of users access the BS at the same time. Therefore, preamble collision can not be avoided [3], which significantly increases uplink transmission signaling overhead. In addition, it also increases the access delay and reduces the throughput of the system [13]. In the GBRA, colliding users can only be recognized by the base station in the final step. However, the base station will still allocate time-frequency resources to the colliding users in the second step, which causes a waste of resources. Hence, efficient preamble detection schemes are necessary for random access to detect preamble collisions in advance.
A preamble detection random access scheme based on Markov Chain Monte Carlo (MCMC) was proposed in [14]. This scheme establishes the maximum a posteriori (MAP) estimation model [15] to detect preambles. However, the MAP estimation needs the prior distribution of the estimated variables. Additionally, the MCMC method increases the diversity of particles through the randomness of sampling, which decreases the accuracy of preamble detection.
To detect preamble collision early and improve the utilization of wireless resources, we establish a preamble detection model based on maximum likelihood estimation [16] without using any prior distribution in the second step of handshaking. Furthermore, we propose two efficient algorithms based on Stein variational gradient descent (SVGD) [17, 18, 19, 20] to find an approximate solution to the maximum likelihood estimation problem.
The contributions are summarized as follows: (1) We propose a maximum likelihood estimation model based on the SVGD detector to detect preambles in the dense user scenario. (2) Through error analysis, we propose the normalized SVGD (NSVGD) detector with momentum. It has better robustness and a higher preamble detection accuracy than the SVGD detector and MCMC-based methods.
## 2 Background
### Stein Variational Gradient Descent (SVGD)
SVGD is a novel particle-based variational inference algorithm [17]. It efficiently utilizes gradient information to approximate the target distribution through deterministic particle updates. As shown in Algorithm 1, \(n\) particles are generated from the uniform distribution or some other distribution. Then, the particles are updated using an optimized gradient \(\varphi\). Liu _et al._ [17] proved that the perturbation direction given by \(\varphi\) is optimal because it corresponds to the steepest descent on the Kullback-Leibler divergence. After sufficient iterations, the obtained particles follow the target distribution. In this paper, we will apply the SVGD algorithm to draw samples from a complicated distribution.
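For reference, a compact sketch of one SVGD update with the commonly used RBF kernel and the median bandwidth heuristic is given below. The two-dimensional Gaussian target, the bandwidth rule, the step size and the particle count are our own illustrative choices, not parameters of the detector developed later.

```python
import numpy as np

def rbf_kernel(x, h):
    """RBF kernel matrix k(x_j, x_i) and its gradient with respect to x_j."""
    diff = x[:, None, :] - x[None, :, :]          # shape (n, n, d): x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)
    k = np.exp(-sq / h)
    grad_k = -2.0 / h * diff * k[..., None]       # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd_step(x, grad_log_p, step=0.1, h=None):
    """One SVGD update: x_i <- x_i + step * phi(x_i)."""
    n = x.shape[0]
    if h is None:  # median heuristic for the kernel bandwidth
        sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        h = np.median(sq) / np.log(n + 1) + 1e-8
    k, grad_k = rbf_kernel(x, h)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Illustrative target: standard 2-D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
particles = rng.uniform(-3.0, 3.0, size=(100, 2))
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print("sample mean:", particles.mean(axis=0), "sample variance:", particles.var(axis=0))
```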
### System Model
Assume that there are \(N\) active users in the cell. Each user is equipped with an antenna and the BS is equipped with \(K\) antennas. The number of preambles is \(M\), and the length of each preamble is
\(S\). Each active user randomly selects a preamble in the pool. Then, the signal received by the BS through antenna \(j\) can be expressed as:
\[\mathbf{y}_{j}=\sum_{i=1}^{N}\mathbf{p}_{i0}H_{i,j}e_{i}+\mathbf{n}_{j}, \tag{1}\]
for \(j=1,\ldots,K\), where \(\mathbf{n}_{j}\in\mathbb{C}^{S}\) is the background noise at the \(j\)-th antenna. \(H_{i,j}\) is the channel coefficient from \(i\)-th active device to \(j\)-th antenna. \(e_{i}\) indicates the data symbol sent by the active user to the base station. \(\mathbf{p}_{i0}\in\mathbb{C}^{S}\) stands for the preamble sequence selected by \(i\)-th active user.
Suppose \(x_{m}\) indicates the number of active users who select the \(m\)-th preamble. The numbers of users selecting each preamble are represented by a vector \(\mathbf{x}=[x_{1},\ldots,x_{m},\ldots,x_{M}]^{\mathrm{T}}\), where \(x_{m}\in[0,N]\). Since our model is designed to detect preamble collision as mentioned in Section 1, we focus on estimating \(\mathbf{x}\) in the following steps: Firstly, we define the likelihood of \(\mathbf{x}\):
\[\mathrm{Lik}(\mathbf{x})=f(\mathbf{y}_{j}\mid\mathbf{x}), \tag{2}\]
where \(f(\mathbf{y}_{j}\mid\mathbf{x})\) represents the Likelihood function of \(\mathbf{x}\) given for \(\mathbf{y}_{j}\).
Then, let \(\mathbf{w}_{j}=[w_{1,k},w_{2,j},\ldots,w_{m,j},\ldots,w_{M,j}]^{\mathrm{T}}\), and
\[w_{m,j}=\sum_{i\in N_{m}}H_{i,j}e_{i}, \tag{3}\]
where \(\mathcal{N}_{m}\) stands for the index set of active users who select the \(m\)-th preamble. We assume that the signal transmitted by the active device experiences independent Rayleigh fading:
\[H_{i,j}\sim\mathcal{CN}(0,\delta^{2}) \tag{4}\]
Next, let \(\mathbf{P}=[\mathbf{p}_{1},\ldots,\mathbf{p}_{M}]\). The Eq. (1) can be rewritten as:
\[\mathbf{y}_{j}=\mathbf{P}\mathbf{w}_{j}+\mathbf{n}_{j}, \tag{5}\]
for \(j=1,\ldots,K\).
According to Eq. (3) and Eq. (4), \(\mathbf{w}_{j}\) is a circularly-symmetric-complex-Gaussian (CSCG) vector for a given \(\mathbf{x}\). Furthermore, from Eq. (4) and Eq. (5), \(\mathbf{y}_{j}\) is also a CSCG vector for a given \(\mathbf{x}\). Its mean value is \(0\) and its covariance matrix is:
\[\mathbb{E}[\mathbf{y}_{j}\mathbf{y}_{j}^{\mathrm{H}}\mid\mathbf{x}]=\delta^{ 2}\mathbf{PV}_{\mathbf{x}}\mathbf{p}^{\mathrm{H}}+\beta\mathbf{I}=\phi( \mathbf{x}), \tag{6}\]
where \(\mathbf{V}_{\mathbf{x}}=\mathrm{diag}(x_{1}\ldots x_{M})\). (\(\cdot\))\({}^{\mathrm{H}}\) represents conjugate transpose, and \(\beta\) is the noise power. According to Eq. (6), we have
\[\mathbf{y}_{j}\mid\mathbf{x}\sim CN(\mathbf{0},\phi(\mathbf{x})), \tag{7}\]
After that, \(\ln f(\mathbf{y}_{j}\mid\mathbf{x})\) can be computed as:
\[\ln f(\mathbf{y}_{j}\mid\mathbf{x})=-\mathbf{y}_{j}^{\mathrm{H}}(\phi( \mathbf{x}))^{-1}\mathbf{y}_{j}-\ln\det(\phi(\mathbf{x}))+\xi, \tag{8}\]
where \(\xi\) represents a constant. \(\det(\cdot)\) stands for the determinant of the matrix. According to \(f(\mathbf{y}_{j}\mid\mathbf{x})=\prod_{j=1}^{K}f(\mathbf{y}_{j}\mid\mathbf{x})\), log-likelihood function\(f(\mathbf{y}_{j})\mid\mathbf{x})\) can be represented as:
\[\ln f(\{\mathbf{y}_{j}\mid\mathbf{x}\}=\sum_{j=1}^{K}\ln f(\mathbf{y}_{j}\mid \mathbf{x}), \tag{9}\]
Finally, the maximum likelihood estimation model can be derived by using the log-likelihood function:
\[\mathbf{\bar{x}}=\arg\max f(\mathbf{y}_{j}\mid\mathbf{x}). \tag{10}\]
The computational complexity can be expressed as \((N+1)^{M}\), which increases exponentially with \(M\). Therefore, the computational complexity of the maximum likelihood detection increases significantly when \(M\) becomes large.
## 3 Methodology
Since maximum likelihood detection is computationally prohibitive, the SVGD is employed to find an approximate solution. Specifically, we use SVGD to take samples of \(\mathbf{x}\) from the distribution with the density function \(q(\mathbf{x})=f(\mathbf{y}_{j})\mid\mathbf{x})\).
### SVGD Detector
Eq. (8) shows that for any \(\mathbf{x}\), \(f(\mathbf{y}_{j}\mid\mathbf{x})\geq 0\), which satisfies the SVGD requirement that the density function is positive [17]. Then, we approximate \(q(\mathbf{x})\) with a set of particles \(\{\mathbf{x}_{i}\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{M}\) and \(n\) is the number of particles. Next, \(\{\mathbf{x}_{i}^{0}\}_{i=1}^{n}\) is used to initialize particles and SVGD updates the particles iteratively by
\[\mathbf{x}_{i}^{i+1}\leftarrow\mathbf{x}_{i}^{i}+\varepsilon\varphi(\mathbf{x}_ {i}^{i}), \tag{11}\]
where \(\varepsilon\) is a step size. \(\varphi(\cdot)\) is a velocity field that deterministically drives the distribution of particles toward the target. It is computed as:
\[\varphi(\mathbf{x})=\frac{1}{n}\sum_{l=1}^{n}[k(\mathbf{x}_{i}^{l},\mathbf{x}) \ \forall_{i}\log q(\mathbf{x}_{i}^{l})+\forall_{\mathbf{x}_{i}^{l}}k(\mathbf{x}_{i }^{l},\mathbf{x})], \tag{12}\]
where \(k(\mathbf{x}_{i}^{l},\mathbf{x}_{i}^{l})\) is the kernel function. The selection of \(k(\mathbf{x}_{j}^{l},\mathbf{x}_{i}^{l})\) will be introduce in section 4. After a number of iterations, we obtain a set of points \(\{\mathbf{\bar{x}}_{i}\}_{i=1}^{n}\) that approximate our target distribution with the density function \(q(\mathbf{x})\), so we can estimate the number of users selecting each preamble using \(\{\mathbf{\bar{x}}_{i}\}_{i=1}^{n}\). For example, as shown in Fig. 1, given that \(M=2,S=2,N=4,K=20,n=50\), the purpose is to estimate \(\mathbf{x}=[x_{1},x_{2}]\). At the beginning, fifty particles are initialized with a random distribution \(q_{0}(\mathbf{x})\). Then, the particles are updated using Eq. (12). Repeating this iteration, a path of distributions \(\{\mathbf{x}_{i}\}_{i=1}^{50}\) between the \(q_{0}(\mathbf{x})\) and \(q(\mathbf{x})\) is constructed. Finally, the particles converge to the ground truth.
### Error Analysis of the SVGD Detector
Given a noisy environment, there will be a large noisy power \(\beta\) in the system. It is observed that if \(\beta\) is significantly larger than each entry of matrix \(\psi(\mathbf{x})=\delta^{2}\mathbf{PV}_{\mathbf{x}}\mathbf{P}^{\mathrm{H}}\), \(\phi(\mathbf{x})=\delta^{2}\mathbf{PV}_{\mathbf{x}}\mathbf{P}^{\mathrm{H}}+\beta \mathbf{I}\) will be independent
to the change of \(\psi(\mathbf{x})\). Then, \(\phi(\mathbf{x})\approx\beta\mathbf{I}\). Therefore, Eq. (9) can be computed as:
\[\log q(\mathbf{x}_{i}^{\prime})=-\sum_{j=1}^{K}\mathbf{y}_{j}^{\mathrm{H}}( \beta\mathbf{I})^{-1}\mathbf{y}_{j}-K\ln\det(\beta\mathbf{I})+\xi, \tag{13}\]
thus, \(\triangledown_{\mathbf{x}^{\prime}}\log q(\mathbf{x}_{i}^{\prime})=0\). Then \(\varphi(\mathbf{x})\) can be computed as:
\[\varphi(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}[\triangledown_{\mathbf{x}}k( \mathbf{x}_{i}^{\prime},\mathbf{x})], \tag{14}\]
from Eq. (14), only \(\triangledown_{\mathbf{x}}k(\mathbf{x}_{i}^{\prime},\mathbf{x})\) can be used to update particles. However, \(\triangledown_{\mathbf{x}}k(\mathbf{x}_{i}^{\prime},\mathbf{x})\) does not include any information about the target density function \(q(\mathbf{x})\). As a consequence, errors are introduced during the update process.
Fig. 2 shows the estimation of active users in a dense user scenario with \((M,S,N,K)=(20,10,20,30)\). When the signal-to-noise ratio (SNR) is low, there is a large error between the estimated value \(\|\mathbf{x}\|_{1}\) and the true value \(\|\mathbf{x}\|_{1}\). With SNR increasing, \(\triangledown_{\mathbf{x}^{\prime}}\log q(\mathbf{x}_{i}^{\prime})\) becomes non-negligible. Therefore, the SGVD detector can extract effective gradient information to update particles, which improves the detection accuracy.
### Normalized SVGD Detector with Momentum
The irregular gradients and noise have a negative influence on the update of particles. To overcome these limitations of the SVGD detector, we propose a normalized SVGD (NSVGD) detector with momentum. Firstly, as is shown in Algorithm 2, a bias correction term \(\Delta\) is added to the updating process to improve the anti-interference performance. This bias term \(\Delta\) calculates the error between the estimated and actual number of active users using
\[\Delta\leftarrow\mu\left(nN-\sum_{i=1}^{n}\|\mathbf{x}\|_{1}\right). \tag{15}\]
This error corrects the updating direction of particles. In a noisy environment, although \(\triangledown_{\mathbf{x}^{\prime}}\log q(\mathbf{x}_{i}^{\prime})\) will disappear during the update process, the bias term \(\Delta\) can still provide useful information for the update of particles. Therefore, the bias term \(\Delta\) can improve the robustness of the detector in different environments.
Next, we compute the gradient using Eq. (12). After that, we accumulate the history gradients and normalize the accumulated gradient:
\[\varphi(\mathbf{x}_{i}^{\prime})\leftarrow\frac{\varphi(\mathbf{x}_{i}^{ \prime})}{\epsilon+\sqrt{\mathbf{\xi}}}, \tag{16}\]
where \(\sqrt{(\cdot)}\) represents element-wise square root. These operations solve the problem of continuous decline in learning rate in comparison to the AdaGrad method [21]. Additionally, the weight decay [22] is applied to adjust the gradient to find an optimal result. Moreover, we apply momentum, a strategy that assists in navigating high error and low curvature regions [23]. Finally, the particles are updated according to the gradient and estimated error:
\[\mathbf{x}_{i}^{\prime+1}\leftarrow\mathbf{x}_{i}^{\prime}+\varepsilon\varphi (\mathbf{x}_{i}^{\prime})+\Delta. \tag{17}\]
## 4 Experimental Results
In this section, we compare our methods with the MCMC method using Gibbs sampling [14]. We consider non-orthogonal preambles. The elements of \(\mathbf{P}\) are independent circularly-symmetric-complex-Gaussian (CSCG) random variables with zero-mean and variance \(\frac{1}{S}\), _i.e._, \([\mathrm{P}]_{\mathbf{x},\nu}\sim\mathcal{CN}(0,\frac{1}{S})\). We initialize vector \(\mathbf{x}\) in a uniform distribution on \([1,1.1]\) in both the SVGD detector and the normalized SVGD detector with momentum. In addition, we use Gaussian radial basis function (RBF) kernel [24]. It is defined as \(k(\mathbf{x},\mathbf{x}^{\prime})=\exp(-\frac{1}{2}\|\mathbf{x}-\mathbf{x}^{ \prime}\|_{2}^{2})\), where bandwidth \(h=\frac{\mathrm{m}\mathrm{e}\mathrm{i}\mathrm{e}\mathrm{i}\mathrm{e}\mathrm{i} \mathrm{e}\mathrm{i}}\), and med is the median of the pairwise distance between the current points \([\mathbf{x}_{i}]_{i=1}^{n}\). The parameters \(n,\mu,\alpha,\epsilon\), \(\gamma\), \(\varepsilon\), and \(\lambda\) are chosen as 6, 0.01, 0.9, 1, 0.9, 0.01, and 0.1, respectively. We use \(N_{iteration}\) to represent the number of iterations to obtain stable particles. The sample mean can be obtained by all particles:
\[\mathbf{\bar{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{\bar{x}}_{i}=[\bar{x}_{0}, \dots,\bar{x}_{m},\dots,\bar{x}_{M}]. \tag{18}\]
Figure 1: Updating process of particles
Figure 2: Estimation of active users in a dense user scenario
We use the rounded sample mean to estimate \(x_{m}\), _i.e._, \(\hat{x}_{m}=\lfloor\bar{x}_{m}\rceil\in\{0,\ldots,N\}\), where \(\lfloor x\rceil\) represents the nearest integer of \(x\).
We consider two performance measures. The first one is the mean square error (MSE). The second one is to test the accuracy of different methods named the probability of activity detection error:
\[\text{P}_{\text{ADE}}=\text{Pr}(x_{m}\neq\hat{x}_{m}). \tag{19}\]
In Fig. 3 and Fig. 4, with \(\text{P}_{\text{ADE}}\) and MSE, we compare the performance of the NSVGD detector with momentum, the SVGD detector, and the MCMC-based detector for different values of SNR in a dense user scenario. The NSVGD detector with momentum outperforms the SVGD detector and the MCMC approach. The improvement of performance at a low SNR shows that the NSVGD detector can reduce the impact of external environmental noise on the SVGD detector. The improvement of performance at a high SNR proves that the NSVGD detector has better robustness and self-adaptation than the SVGD detector. In addition, with the number of active users decreasing from 30 to 20, the possibility of preamble collision decreases gradually and the detection accuracy of the three algorithms also improves.
The computation complexities of the MCMC-based detector, SVGD detector and NSVGD detector are \(O(MS(S+K))\), \(O(MKS^{3})\) and \(O(M(KS^{3})+16M)\) respectively. It is noticed that the computational complexities of the three detectors are all linearly proportional to the number of preambles \(M\) and do not change when the number of active users increases. Therefore, three detectors have a similar computation complexity.
## 5 Conclusion
In this paper, we propose a novel preamble detection algorithm based on SVGD, which efficiently leverages deterministic updates of particles to complete preamble detection. Through the error analysis, the performance of the SVGD detector degrades due to the noise. To solve this problem, we propose the NSVGD detector with momentum, which adds a bias correction term to enhance the robustness. Simulation results show that the proposed algorithm performs better than MCMC-based approaches in a dense user scenario. Moreover, the NSVGD detector has a low computation complexity, which is linearly proportional to the number of preambles.
Figure 4: MSE for different SNR when \(M=20,S=10,K=30\).
Figure 3: Probability of ADE for different SNR when \(M=20,S=10,K=30\). |
2308.16695 | A Hölder-type inequality for the Hausdorff distance between
Lagrangians | We prove a H\"older-type inequality for the Hausdorff distance between
Lagrangians with respect to the Lagrangian spectral distance or the
Hofer-Chekanov distance in the spirit of Joksimovi\'c-Seyfaddini
[arXiv:2207.11813]. This inequality is established via methods developped by
the first author [arXiv:2204.02468, arXiv:2108.00555] in order to understand
the symplectic geometry of certain collections of Lagrangians under metric
constraints. | Jean-Philippe Chassé, Rémi Leclercq | 2023-08-31T13:00:17Z | http://arxiv.org/abs/2308.16695v2 | # A Holder-type inequality for the Hausdorff distance between Lagrangians
###### Abstract
We prove a Holder-type inequality (in the spirit of Joksimovic-Seyfaddini [1]) for the Hausdorff distance between Lagrangians with respect to the Lagrangian spectral distance or the Hofer-Chekanov distance. This inequality is established via methods developped by the first author [1, 2] in order to understand the symplectic geometry of certain collections of Lagrangians under metric constraints.
## 1 Introduction
Let \((M,\omega)\) be a symplectic manifold with an \(\omega\)-compatible almost complex structure \(J\). If \(M\) is noncompact, we assume that \(J\) is convex at infinity. We equip \(M\) with the Riemannian metric \(g=g_{J}=\omega(\cdot,J\cdot)\) -- we may assume it is complete and geometrically bounded (cf. [1]).
### Main result
On one hand, Joksimovic and Seyfaddini [1] proved a Holder-type inequality for the \(C^{0}\) distance on Hamiltonian diffeomorphism groups and deduced interesting applications to Anosov-Katok pseudo-rotations.
Namely, the inequality is the following:
\[d_{C^{0}}(\mathds{1},\varphi)\leq C\sqrt{\gamma(\varphi)}\,||d\varphi||, \tag{1}\]
where \(||d\varphi||:=\sup\{|d\varphi_{x}|^{\mathrm{op}}\,|\,x\in M\}\). This inequality holds for any Hamiltonian diffeomorphism \(\varphi\) of a closed symplectic manifold for which one can define Floer spectral invariants. These invariants and their properties are reviewed in Section 2. They induce, on the Hamiltonian diffeomorphism group, the _spectral pseudonorm_\(\gamma\) which appears in the inequality. The constant \(C\) only depends on the choice of a Riemannian metric on the ambient manifold.
On the other hand, the first author [14, 15] initiated the study of the symplectic geometry of certain sets of Lagrangians under (Riemannian) metric constraints, such as the Hofer geometry of Hamiltonian isotopic Lagrangians with uniformly bounded curvature. Holder inequalities on such sets between the Hausdorff distance and a large class of metrics were also obtained. This class of metrics contains, in particular, the _Lagrangian spectral distance_, also defined via spectral invariants and denoted \(\gamma\) as well. This version of the spectral distance was defined for weakly exact Lagrangians in [13] and for monotone Lagrangians with nonvanishing fundamental in [12].
The upshot of this note is a Holder-type inequality for the Hausdorff distance \(\delta_{H}\) between Lagrangians in the spirit of Joksimovic and Seyfaddini's inequality, whose proof is based on the methods of [14].
**Theorem 1**: _Let \(L\) and \(L^{\prime}\) be Hamiltonian isotopic, closed, connected Lagrangian submanifolds of \(M\). Suppose that \(L\) -- and thus \(L^{\prime}\) -- is either weakly exact or monotone with \(N_{L}\geq 2\) and has nonvanishing quantum homology. Let \(\psi\) be a symplectomorphism of \(M\) such that \(\psi(L)=L^{\prime}\). There exist constants \(\delta=\delta(M,J,L)>0\) and \(C=C(M,J,L)>0\) such that whenever \(\gamma(L,L^{\prime})<\delta\), then_
\[\delta_{H}(L,L^{\prime})\leq C\sqrt{\gamma(L,L^{\prime})}\left|\left|d\psi \right|\right|. \tag{2}\]
_Furthermore, when \(M\) is compact, we may take \(\delta=+\infty\)._
This obviously yields the nondegeneracy of \(\gamma\).
**Corollary 2**: _Let \(L\) be a Lagrangian as in Theorem 1. Then, the Lagrangian pseudodistance \(\gamma\) is nondegenerate on the set of Lagrangians which are Hamiltonian isotopic to \(L\)._
Thus, this provides a third proof of this result, after Kawasaki's proof [13] via Poisson bracket invariants _a la_ Polterovich-Rosen [10] and Kislev and Shelukhin's proof [12] via energy-capacity inequalities. However, through the use of the methods of [14] in Section 3 or with the more direct approach of the alternative proof in Section 4.1, the proof does ultimately relies on the same existence result for certain \(J\)-holomorphic curves as Kislev and Shelukhin [12]. The innovation here is how we estimate the area of those \(J\)-holomorphic curves.
Before describing in more details how this result relates to the aforementioned previous works, let us make several quick remarks.
_Remark 3_ (Hofer's geometry): _It is well known that \(\gamma\) is bounded from above by the Hofer-Chekanov distance -- see the properties of the spectral distance in Section 2. Hence, Theorem 1 also holds when \(\gamma\) is replaced by the Hofer-Chekanov distance._
_Remark 4_ (Symplectomorphisms): _Let us emphasize the fact that Theorem 1 holds for any -- even noncompactly-supported -- symplectomorphism \(\psi\); \(L\) and \(L^{\prime}\) are required to be Hamiltonian diffeomorphic only for \(\gamma(L,L^{\prime})\) to be defined._
_Remark 5_ (Variant with the norm of the inverse diffeomorphism): _When \(M\) is compact, there is also an inequality involving \(||d\psi^{-1}||\):_
\[\delta_{H}(L,L^{\prime})\leq C\sqrt{\gamma(L,L^{\prime})}\left|\left|d\psi^{-1 }\right|\right|^{2}. \tag{3}\]
_This variant of inequality (2) will be proved in Section 4.2._
### Main techniques and relations to previous work
Theorem 1 is a specialization of the first author's inequality from [10], which we now recall. For any metric \(D\) in a large class of metrics, said of Chekanov type and which includes \(\gamma\), if \(D(L,L^{\prime})<\delta=\delta(g,g|_{L},g|_{L^{\prime}})\), then
\[\delta_{H}(L,L^{\prime})\leq C(g,g|_{L},g|_{L^{\prime}})\sqrt{D(L,L^{\prime})}\,. \tag{4}\]
By the above notation, we mean that \(\delta\) and \(C\) depend only on Riemannian bounds of \(M\), \(L\), and \(L^{\prime}\), e.g. the sectional curvature of the first and the \(L^{\infty}\)-norm of the second fundamental form of the two latter. The improvement in this note is that we get rid of the dependance of \(C\) on metric invariants of \(L^{\prime}\) at the price of an extra \(||d\psi||\) term.
Note that the first author (Lemma 5 in [10]) partially improved (4) to
\[s(L;L^{\prime})\leq C(g,g|_{L})\sqrt{\gamma(L,L^{\prime})} \tag{5}\]
whenever \(\gamma(L,L^{\prime})<\delta=\delta(g,g|_{L})\), where
\[s(L;L^{\prime}):=\sup_{x\in L}d_{M}(x,L^{\prime})=\sup_{x\in L}\inf_{y\in L^{ \prime}}d_{M}(x,y)\,. \tag{6}\]
Since \(\delta_{H}(L,L^{\prime})=\max\{s(L;L^{\prime}),s(L^{\prime};L)\}\), the left-hand side in (5) is in general smaller than the one in (4). It is through this inequality that Theorem 1 is proved (see Section 3).
### Relations to Joksimovic and Seyfaddini's inequality
Theorem 1 is a Lagrangian generalization of Joksimovic and Seyfaddini's aforementioned inequality (1) for Hamiltonian diffeomorphisms \(\varphi\) on closed symplectic manifolds.
Note that their inequality directly implies
\[\delta_{H}(L,L^{\prime})\leq\inf_{\varphi(L)=L^{\prime}}d_{C^{0}}(\mathds{1},\varphi)\leq C\inf_{\varphi(L)=L^{\prime}}\left(\sqrt{\gamma(\varphi)}\;||d \varphi||\right).\]
However, in general, the inequality
\[\inf_{\varphi(L)=L^{\prime}}\sqrt{\gamma(\varphi)}\;||d\varphi||\geq\inf_{ \varphi(L)=L^{\prime}}\sqrt{\gamma(\varphi)}\,\cdot\inf_{\varphi(L)=L^{ \prime}}||d\varphi||=\sqrt{\gamma(L,L^{\prime})}\cdot\inf_{\varphi(L)=L^{ \prime}}||d\varphi||\]
is strict. Therefore, our inequality gives a better bound in the Lagrangian case, even when \(\varphi\) is a Hamiltonian diffeomorphism1.
Footnote 1: Recall indeed that inequality (2) also holds when \(\varphi\) is a non-Hamiltonian symplectomorphism.
One notable exception to this is when \(L\) is the diagonal in \(M\times M\), and \(L^{\prime}\) is the graph of \(\varphi\). Then, by work of the second author and Zapolsky [11, 12], we know that \(\gamma(L,L^{\prime})=\gamma(\varphi)\), so that equality follows. The constant we get here is however hard to compare to theirs.
On the other hand, we present below a different proof of a variant of (2), based on the method from [11] which gives a less natural, but more easily comparable, constant see Section 4.1.
### Organization and acknowledgements
After reviewing necessary preliminaries in Section 2, we prove Theorem 1 in Section 3. Finally, Section 4 presents the proofs of two inequalities similar to (2), the first one based on Joksimovic and Seyfaddini's method in Section 4.1, the second one involving the inverse norm of \(\psi\) as mentioned in Remark 5; see Section 4.2.
The main lines of this project were drawn during a stay of the first author at the Institut Mathematique d'Orsay. We thank the Laboratoire Mathematique d'Orsay for making that stay possible. The first author is partially supported by the SNF grant 200021_204107, and the second author is partially supported by the ANR grant 21-CE40-0002 (CoSy).
## 2 Preliminaries
We fix a symplectic manifold \((M,\omega)\) and consider different types of Lagrangian submanifolds. They are characterized by two functions defined on the second homotopy group of \(M\) relative to \(L\), i.e. the symplectic area and the Maslov class of disks in \(M\) with boundary in \(L\):
\[\omega_{L}:\pi_{2}(M,L)\to\mathds{R}\quad\text{ and }\quad\mu_{L}:\pi_{2}(M,L) \to\mathds{Z}\,.\]
A Lagrangian submanifold \(L\) is called _weakly exact_ if \(\omega_{L}\) and \(\mu_{L}\) vanish identically. Otherwise, \(L\) is called (positively) monotone whenever there exists a positive constant \(\kappa_{L}>0\) such that \(\omega_{L}=\kappa_{L}\cdot\mu_{L}\). In that case, \(\kappa_{L}\) is called the monotonicity constant of \(L\).
When \(L\) is monotone, we define its _minimal Maslov number_\(N_{L}\) to be the positive generator of \(\langle\mu_{L},\pi_{2}(M,L)\rangle=N_{L}\,\mathds{Z}\), and we require \(N_{L}\geq 2\).
In what follows, we fix a Lagrangian as above and consider the set \(\mathcal{L}^{\mathrm{Ham}}(L)\) of all Lagrangian submanifolds which are Hamiltonian isotopic to \(L\). We now recall how the two metrics on \(\mathcal{L}^{\mathrm{Ham}}(L)\) of interest in this note are defined.
### The Hofer-Chekanov distance
The Hofer norm was introduced by Hofer [10] on Hamiltonian diffeomorphism groups and extended as a distance to sets of the type \(\mathcal{L}^{\mathrm{Ham}}(L)\) by Chekanov [11].
First define the energy of a Hamiltonian function \(H:[0,1]\times M\to\mathds{R}\) as its \(L^{(1,\infty)}\)-norm:
\[\mathrm{E}(H)=\int_{0}^{1}\left(\max_{M}H_{t}-\min_{M}H_{t}\right)\,dt, \tag{7}\]
where \(H_{t}:=H(t\,,\cdot)\). Then, define the Hofer norm of a Hamiltonian diffeomorphism as
\[\|\varphi\|_{\mathrm{Hof}}=\inf\left\{\mathrm{E}(H)\,\middle|\,\varphi_{H}^{1} =\varphi\right\}\,.\]
Here, \(\{\varphi_{H}^{t}\}_{t\in[0,1]}\) is the Hamiltonian flow of \(H\), i.e. \(\varphi_{H}^{0}=\mathds{1}_{M}\) and \(\frac{d}{dt}\varphi_{H}^{t}=X_{H}^{t}\circ\varphi_{H}^{t}\), where \(X_{H}^{t}\) is the unique time-dependent vector field of \(M\) such that \(\iota(X_{H}^{t})\omega=-dH_{t}\).
Hofer's norm then yields a distance on \(\mathcal{L}^{\mathrm{Ham}}(L)\) by setting
\[d_{\mathrm{Hof}}(L,L^{\prime})=\inf\left\{\left\|\varphi\right\|_{\mathrm{Hof}} \left|\,\varphi(L)=L^{\prime}\right\}=\inf\left\{\mathrm{E}(H)\left|\,\varphi_{ H}^{1}(L)=L^{\prime}\right.\right\}\]
for any \(L^{\prime}\in\mathcal{L}^{\mathrm{Ham}}(L)\).
_Remark 6_: _As noted by Usher [11], replacing the Hofer energy \(\mathrm{E}(H)\) in the above expression by the smaller quantity \(\mathrm{E}_{L}(H)\), defined as in (7) but with oscillations taken only on \(L\) rather than on the whole ambient manifold \(M\), yields the same distance._
### The Lagrangian spectral distance
This distance is based on the theory of spectral invariants initiated by Viterbo [12] via generating functions and adapted to Floer homology theories by Schwarz [10] and Oh [11] in the case of Hamiltonian diffeomorphism groups. The Lagrangian version which is of interest to us here was developed by the second author [10] in the weakly exact setting and by Zapolsky and the second author [12] in the monotone case -- see also work by Fukaya, Oh, Ohta, and Ono [11], which is based on more advanced techniques such as virtual fundamental cycles and Kuranishi structures.
Lagrangian spectral invariantsThe Lagrangian spectral invariants \(\ell(\alpha;H)\) associated to \(L\) are defined for any nonzero quantum homology class \(\alpha\in\mathrm{QH}_{*}(L)\) -- see [1] for the construction of this homology. Since the Lagrangian spectral _distance_ only relies on spectral invariants corresponding to the quantum fundamental class of \(L\), we do not review the construction of the quantum homology of a Lagrangian, nor define spectral invariants in full generality. Instead, we assume that the quantum fundamental class of \(L\), denoted \([L]\), is nontrivial and only present the properties of \(\ell_{+}:=\ell([L],\,\cdot\,)\).
The function \(\ell_{+}:C^{0}([0,1]\times M)\to\mathbb{R}\) satisfies the following properties.
1. Continuity. For any Hamiltonians \(H\) and \(K\), we have that \[\int_{0}^{1}\min_{M}(K_{t}-H_{t})\,dt\leq\ell_{+}(K)-\ell_{+}(H)\leq\int_{0}^{ 1}\max_{M}(K_{t}-H_{t})\,dt\,.\]
2. Triangle inequality. For all \(H\) and \(K\), \(\ell_{+}(H\sharp K)\leq\ell_{+}(H)+\ell_{+}(K)\).
3. Lagrangian control. If \(H_{t}|_{L}=c(t)\in\mathbb{R}\) (resp. \(\leq\), \(\geq\)) for all \(t\), then \[\ell_{+}(H)=\int_{0}^{1}c(t)\,dt\qquad(\text{resp. }\leq\), \(\geq).\]
4. Non-negativity. For all \(H\), \(\ell_{+}(H)+\ell_{+}(\overline{H})\geq 0\).
5. Homotopy invariance. If \(H\) is normalized, \(\ell_{+}(H)\) only depends on the homotopy class relative to endpoints of the isotopy \(\{\varphi_{H}^{t}\}_{t\in[0,1]}\), i.e. the class \([\{\varphi_{H}^{t}\}_{t\in[0,1]}]\in\widetilde{\mathrm{Ham}}(M,\omega)\).
6. Symplectic invariance. For all \(H\) and all \(\psi\in\mathrm{Symp}(M,\omega)\), \(\ell_{+}(H)=\ell_{+}^{\prime}(H\circ\psi^{-1})\).
Let us make a few comments about these properties and the notation used above.
* In Properties 2 and 4 respectively, \(H\sharp K\) denotes the Hamiltonian function \(H_{t}(x)+K\big{(}(\varphi_{H}^{t})^{-1}(x)\big{)}\) which generates the isotopy \(\{\varphi_{H}^{t}\varphi_{K}^{t}\}_{t\in[0,1]}\), and \(\overline{H}\) is the Hamiltonian function \(\overline{H}_{t}(x)=-H_{t}\big{(}(\varphi_{H}^{t})^{-1}(x)\big{)}\) which generates \(\big{\{}(\varphi_{H}^{t})^{-1}\big{\}}_{t\in[0,1]}\). Properties 1 to 4 are part of Theorem 3 in [11].
* Property 3 directly implies that for all \(H\), \[\int_{0}^{1}\min_{L}H_{t}\,dt\leq\ell_{+}(H)\leq\int_{0}^{1}\max_{L}H_{t}\,dt\,.\]
* In Property 5, the _normalization_ refers to the fact that for all \(t\), \(\int_{0}^{1}H_{t}\omega^{n}=0\). This property appears as Proposition 4 in [11].
* Finally, concerning Property 6, note that any symplectomorphism \(\psi\) induces an isomorphism \(\psi_{*}:\operatorname{QH}_{t}(L)\to\operatorname{QH}_{t}(L^{\prime})\) with \(L^{\prime}=\psi(L)\). The fundamental class of \(L\) is mapped to that of \(L^{\prime}\) through this action (up to possible multiplication by a unit of the coefficient field). The notation \(\ell_{+}^{\prime}\) denotes the Lagrangian spectral invariant associated to \(L^{\prime}\) (and its fundamental class). Now, Symplectic invariance only expresses the fact that spectral invariants agree with the action of \(\psi\) by conjugation on the Hamiltonian diffeomorphism group: for any Hamiltonian function \(H\), \(\varphi_{H\circ\psi}^{t}=\psi^{-1}\varphi_{H}^{t}\psi\). This result is part of Theorem 35 in [11].
The Lagrangian spectral distanceThe properties of \(\ell_{+}\) above show that not only \(\ell_{+}\) defines a function on \(\widetilde{\operatorname{Ham}}(M,\omega)\) with similar properties (see Theorem 41 in [11]), but also a pseudodistance on \(\mathcal{L}^{\operatorname{Ham}}(L)\).
Indeed, following [12], first define the length of a Hamiltonian isotopy \(\{\varphi_{H}^{t}\}_{t\in[0,1]}\) by \(\gamma_{L}(H)=\ell_{+}(H)+\ell_{+}(\overline{H})\), then take the infimum over all Hamiltonian isotopies which map \(L\) to \(L^{\prime}\):
\[\gamma(L,L^{\prime})=\inf\{\gamma_{L}(H)\,|\,\varphi_{H}^{1}(L)=L^{\prime}\}\,.\]
The Non-negativity property of \(\ell_{+}\) ensures that \(\gamma(L,\cdot)\) takes non-negative values. Symplectic invariance ensures that for any symplectomorphism \(\psi\) and any Lagrangian \(L^{\prime}\in\mathcal{L}^{\operatorname{Ham}}(L)\), \(\gamma(L,L^{\prime})=\gamma(\psi(L),\psi(L^{\prime}))\). Combined with Triangle inequality, this shows that for all \(L^{\prime}\) and \(L^{\prime\prime}\) in \(\mathcal{L}^{\operatorname{Ham}}(L)\),
\[\gamma(L,L^{\prime\prime})\leq\gamma(L,L^{\prime})+\gamma(L^{\prime},L^{ \prime\prime})\,.\]
Finally, note that if \(L^{\prime}=\varphi_{H}^{1}(L)\), then \(L=\varphi_{\overline{H}}^{1}(L^{\prime})\) and
\[\gamma_{L^{\prime}}(\overline{H}) =\ell_{+}^{\prime}(\overline{H})+\ell_{+}^{\prime}(H)\] \[=\ell_{+}(\overline{H}\circ\varphi_{H}^{1})+\ell_{+}(H\circ \varphi_{H}^{1})\] \[=\ell_{+}(\overline{H\circ\varphi_{H}^{1}})+\ell_{+}(H\circ \varphi_{H}^{1})\] \[=\gamma_{L}(H\circ\varphi_{H}^{1}),\]
where the second line follows from Symplectic invariance, whilst the third one is a direct computation using the fact that \(\varphi^{1}_{H\circ\varphi^{1}_{H}}=(\varphi^{1}_{H})^{-1}\varphi^{1}_{H}\varphi^ {1}_{H}=\varphi^{1}_{H}\). Since \(\gamma(L^{\prime},L)\) is defined by taking the infimum over all possible Hamiltonians whose diffeomorphism sends \(L^{\prime}\) to \(L\), this implies symmetry for \(\gamma\). This justifies the following definition.
**Definition**: _Let \(L\) be a weakly exact Lagrangian or a monotone Lagrangian with \(N_{L}\geq 2\) and nonzero quantum fundamental class. The Lagrangian spectral distance between \(L_{0}\) and \(L_{1}\in\mathcal{L}^{\mathrm{Ham}}(L)\) is \(\gamma(L_{0},L_{1})\)._
The fact that this actually defines a _nondegenerate_ distance is, as usual, the "hard" part. This was proven fairly simultaneously in [11] (via Poisson bracket invariants) and [12] (via energy-capacity inequality). This is also a consequence of the main result of the present note.
Finally, let us emphasize the fact that the Continuity property of \(\ell_{+}\) obviously yields the well-known fact that
\[\text{for all }L^{\prime}\in\mathcal{L}^{\mathrm{Ham}}(L),\quad\gamma(L,L^{ \prime})\leq d_{\mathrm{Hof}}(L,L^{\prime})\,.\]
## 3 Proof of Theorem 1
Fix a Lagrangian submanifold \(L\) which satisfies the assumptions of Theorem 1. Let \(L^{\prime}=\varphi^{1}_{H}(L)\in\mathcal{L}^{\mathrm{Ham}}(L)\) for some Hamiltonian function \(H\), and let \(\psi\in\mathrm{Symp}(M,\omega)\) be such that \(L^{\prime}=\psi(L)\). Notice that \(\psi^{-1}(L)\in\mathcal{L}^{\mathrm{Ham}}(L)\) since the Hamiltonian function \(H\circ\psi\) generates the isotopy \(\{\psi^{-1}\varphi^{t}_{H}\psi\}\) which maps \(\psi^{-1}(L)\) to \(L\) at time \(1\).
Recall from (6) that the Hausdorff distance between \(L\) and \(L^{\prime}\) is defined as \(\delta_{H}=\max\{s(L;L^{\prime}),s(L^{\prime};L)\}\), where \(s(A;B)\) is the supremum of the distance to \(B\) of a point in \(A\).
From (5), i.e. Lemma 5 of [10], we get some constants \(\delta=\delta(g,g|_{L})>0\) and \(C=C(g,g|_{L})>0\) such that
\[s(L;\psi(L))\leq C\sqrt{\gamma(L,\psi(L))}\quad\text{ and }\quad s(L;\psi^{-1}(L))\leq C\sqrt{\gamma(L,\psi^{-1}(L))}\]
whenever \(\gamma(L,\psi(L))\) and \(\gamma(L,\psi^{-1}(L))\) are smaller than \(\delta\).
Let \(\ell(c)\) denote the length of a smooth path \(c:[0,1]\to M\). Then,
\[s(\psi(L);L) =\max_{y\in\psi(L)}\min_{\begin{subarray}{c}c(0)=y\\ c(1)\in L\end{subarray}}\ell(c)\] \[=\max_{x\in L}\min_{\begin{subarray}{c}c(0)=x\\ c(1)\in\psi^{-1}(L)\end{subarray}}\ell(\psi\circ c)\] \[\leq||d\psi||\max_{x\in L}\min_{\begin{subarray}{c}c(0)=x\\ c(1)\in\psi^{-1}(L)\end{subarray}}\ell(c)\] \[=||d\psi||\;s(L;\psi^{-1}(L))\,.\]
From this, we immediately get that
\[\delta_{H}(L,L^{\prime}) \leq\max\left\{s(L;\psi(L)),||d\psi||\;s(L;\psi^{-1}(L))\right\}\] \[\leq C||d\psi||\max\left\{\sqrt{\gamma(L,\psi(L))},\sqrt{\gamma(L, \psi^{-1}(L))}\right\} \tag{8}\]
since \(||d\psi||\geq 1\) for any symplectomorphism \(\psi\). Indeed, a symplectic matrix must always have an eigenvalue with absolute value at least \(1\).
By Symplectic invariance, we know that \(\gamma(L,\psi^{-1}(L))=\gamma(\psi(L),L)\), and (8) gives us the expected inequality (2):
\[\delta_{H}(L,L^{\prime})\leq C\|d\psi\|\sqrt{\gamma(L,L^{\prime})}\]
under the condition \(\gamma(L,L^{\prime})<\delta\).
To get rid of this condition when \(M\) is compact, we use Joksimovic and Seyfaddini's trick [11]: take \(C\) large enough so that
\[C\geq\frac{\operatorname{Diam}(M)}{\sqrt{\delta}}.\]
Then, if \(\gamma(L,L^{\prime})\geq\delta\), we trivially get
\[C\sqrt{\gamma(L,L^{\prime})}\;||d\psi||\geq\operatorname{Diam}(M)\geq\delta_{ H}(L,L^{\prime}),\]
since \(||d\psi||\geq 1\). Here, we have made use of the fact that the distance between two closed subsets of \(M\) is at most the diameter of \(M\).
This ends the proof of Theorem 1.
## 4 Alternative versions of inequality (2)
We conclude with two alternative versions of inequality (2): the first one is established by adapting to the Lagrangian setting Joksimovic and Seyfaddini's proof from [11], and the other one by using methods explored by Chasse and leading to inequality (4) from [10], rather than using directly inequality (5) from [10].
### Joksimovic and Seyfaddini's approach
We could have adapted Joksimovic and Seyfaddini's [11] proof of (1) to the Lagrangian context to get an analogous inequality. We give here the broad idea on how such an inequality is proven.
For each \(x\in L\), take a Darboux chart \(\psi_{x}:U_{x}\to\mathbb{R}^{2n}\) sending \(L\cap U_{x}\) to \(\mathbb{R}^{n}\times\{0\}\). Take also compact neighborhoods \(K_{x}\) and \(K_{x}^{\prime}\) of \(x\) in \(M\) such that
\[K_{x}\subseteq\operatorname{int}(K_{x}^{\prime})\subseteq K_{x}^{\prime} \subseteq U_{x}.\]
By compactness of \(L\), we may take a finite subset \(\{\psi_{i}\}_{1\leq i\leq k}\) of these charts, so that \(\{\mathrm{int}(K^{\prime}_{i})\}_{1\leq i\leq k}\) still covers \(L\). Then, setting \[\varepsilon:=\min_{1\leq i\leq k}\min_{\begin{subarray}{c}x\in\partial K_{i}\\ x^{\prime}\in\partial K^{\prime}_{i}\end{subarray}}d(x,x^{\prime})\quad\text{ and }\quad A:=\max_{1\leq i\leq k}\left\|d\psi_{i}^{-1}\right\|_{\psi_{i}(K^{ \prime}_{i})}\right\|,\] we get the inclusion \[\psi_{i}^{-1}(B_{r}^{2n}(\psi_{i}(x)))\subseteq B_{Ar}(x)\] for \(r=2\sqrt{\frac{\gamma(L,L^{\prime})}{\pi}}\) if \(\gamma(L,L^{\prime})<\delta=\frac{\pi\varepsilon^{2}}{4A^{2}}\). Here, \(B^{2n}\) denotes the Euclidean ball in \(\mathbb{R}^{2n}\), whilst \(B\) is the metric ball in \(M\). But then, if \(Ar<d(x,L^{\prime})\), the map \(\psi_{i}^{-1}|_{B_{r}^{2n}(\psi_{i}(x))}\) would be a symplectic embedding of a ball of radius \(r\) with real part along \(L\) not crossing \(L^{\prime}\), so that \[\gamma(L,L^{\prime})\geq\frac{\pi}{2}r^{2}=2\gamma(L,L^{\prime})\] by the proof of Theorem E of [10], which is of course a contradiction. Therefore, we must have \[d(x,L^{\prime})\leq 2A\sqrt{\frac{\gamma(L,L^{\prime})}{\pi}}.\] This gives an inequality analogous to (5) -- but with a constant depending on local charts -- by taking the maximum over all \(x\in L\).
### An inequality with the inverse norm
When \(M\) is compact, there is also an inequality with \(\|d\psi^{-1}\|\):
\[\delta_{H}(L,L^{\prime})\leq C\sqrt{\gamma(L,L^{\prime})}\,\|d\psi^{-1}\|^{2}. \tag{9}\]
The proof of (9) follows the scheme of the proof of (4) appearing in [11]. We thus recall the idea of said proof.
1. From the proof of Theorem E of [10], we know that there exist, for any \(x\in L\) and any \(x^{\prime}\in L^{\prime}\), \(J\)-holomorphic strips \(u_{x}\) and \(u_{x^{\prime}}\) with boundary along \(L\) and \(L^{\prime}\) -- modulo arbitrarily small Hamiltonian perturbations -- and passing through \(x\) and \(x^{\prime}\), respectively. Furthermore, their area is bounded from above by \(2\gamma(L,L^{\prime})\).
2. Using a version of the monotonicity lemma, we get that \[\omega(u_{x})\geq A(g,g|_{L})r^{2}\] if the closed metric ball \(B_{r}(x)\) does not intersect \(L^{\prime}\) and \(r\) is smaller than some \(\delta=\delta(g,g|_{L})>0\). There is an analogous result for \(u_{x^{\prime}}\) and \(L^{\prime}\). In particular, if \(\gamma(L,L^{\prime})\) is small enough, the inequality holds for all \(r<d_{M}(x,L^{\prime})\). Therefore, it holds for \(r=d_{M}(x,L^{\prime})\).
3. Taking the supremum over all \(x\in L\) of the inequalities for \(L\), we essentially get (5). Taking the supremum over all \(x^{\prime}\in L^{\prime}\) of the inequalities for \(L^{\prime}\) gives an analogous inequality for the pair \((L^{\prime},L)\). Taking the maximum of these two inequalities, we get (4). We thus see that the dependence of \(C\) in (4) on metric invariants of \(L^{\prime}\) comes from the constant \(A\) in Step 2. Therefore, proving Theorem 1 reduces to proving the following proposition.
**Proposition 7**: _There exist constants \(\delta\) and \(A\) depending only on metric invariants of \(M\) and \(L\) with the following property._
_Let \(L^{\prime}\in\mathcal{L}^{\mathrm{Ham}}(L)\) and let \(\psi\) be a symplectomorphism such that \(\psi(L)=L^{\prime}\). Let \(\Sigma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Consider a nonconstant \(J\)-holomorphic curve \(u^{\prime}:(\Sigma,\partial\Sigma)\to(B_{r}(x^{\prime}),\partial B_{r}(x^{ \prime})\cup L^{\prime})\) for some \(x^{\prime}\in L^{\prime}\) and \(r\leq\frac{\delta}{\|d\psi^{-1}\|}\) such that \(x^{\prime}\in u^{\prime}(\Sigma)\). Suppose that \(u^{\prime}\) sends the corners of \(\Sigma\) to \(\partial B_{r}(x^{\prime})\cap L^{\prime}\). Then,_
\[\omega(u^{\prime})\geq\frac{A}{\|d\psi^{-1}\|^{2}}r^{2}\,.\]
Indeed, Proposition 2.1 of [10], Proposition 7, and Step (1) above yield
\[\min\left\{\delta,d_{M}(x,L^{\prime})\right\}\leq C\sqrt{\gamma(L,L^{\prime})}\]
for all \(x\in L\) and
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C \sqrt{\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
for all \(x^{\prime}\in L^{\prime}\) (with \(C=\frac{1}{\sqrt{2A}}\)). In particular, if we suppose that \(\gamma(L,L^{\prime})<C^{-2}\delta^{2}\|d\psi^{-1}\|^{-4}\leq C^{-2}\delta^{2}\), we get that
\[d_{M}(x,L^{\prime})\leq C\sqrt{\gamma(L,L^{\prime})}\]
for all \(x\in L\) and
\[d_{M}(x^{\prime},L)\leq C\sqrt{\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
for all \(x^{\prime}\in L^{\prime}\). Taking the maximum over all \(x\) and all \(x^{\prime}\), we get \(\delta_{H}(L,L^{\prime})\leq C\sqrt{\gamma(L,L^{\prime})}\max\{1,\|d\psi^{-1} \|\}\) as long as \(\gamma(L,L^{\prime})<C^{-2}\delta^{2}\|d\psi^{-1}\|^{-4}\). This yields (9) -- with the additional \(\gamma\)-smallness assumption -- since \(\|d\psi^{-1}\|\geq 1\).
If \(\gamma(L,L^{\prime})\geq C^{-2}\delta^{2}\|d\psi^{-1}\|^{-4}\), take \(C^{\prime}\geq C\delta^{-1}\operatorname{Diam}(M)\), so that
\[C^{\prime}\sqrt{\gamma(L,L^{\prime})}\|d\psi^{-1}\|^{2}\geq\operatorname{Diam} (M)\geq\delta_{H}(L,L^{\prime}),\]
which gives the desired result.
Only Proposition 7 is thus now left to prove. In order to do so, we first need a new version of the isoperimetric inequality. For an arc \(\gamma^{\prime}:([0,\pi],\{0,\pi\})\to(M,L^{\prime})\) whose image in contained in the metric ball \(B_{\delta/\|d\psi^{-1}\|}(x^{\prime})\) for some \(x^{\prime}\in L^{\prime}\), we get
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
for all \(x^{\prime}\in L^{\prime}\).
**Lemma 8**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C \sqrt{\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 9**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 10**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 11**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 12**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 13**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 14**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x\in L\) and_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
_for all \(x^{\prime}\in L^{\prime}\)._
Proof.: We first prove the claim.
**Lemma 15**: _Let \(\gamma\) be a compact Riemann surface with boundary \(\partial\Sigma\) with corners. Then,_
\[\min\left\{\frac{\delta}{\|d\psi^{-1}\|},d_{M}(x^{\prime},L)\right\}\leq C\sqrt {\gamma(L,L^{\prime})}\|d\psi^{-1}\|\]
\(L^{\prime}\), set \(a(\gamma^{\prime})\) to be the symplectic area \(\omega(u^{\prime})\) of any map \(u^{\prime}:\mathbb{D}\cap\{\operatorname{Im}z\geq 0\}\to M\) such that \(u^{\prime}(e^{i\theta})=\gamma^{\prime}(\theta)\) and \(u^{\prime}(\mathbb{D}\cap\mathbb{R})\subseteq L^{\prime}\). Here, \(\mathbb{D}\) is the unit disk in \(\mathbb{C}\).
First note that this definition is independent of the choice of extension \(u^{\prime}\). To see this, take
\[\delta=\min\left\{\varepsilon,\frac{\varepsilon}{2}r_{\operatorname{inj}}(L),\frac{\varepsilon}{2}r_{0},\frac{\pi}{4\sqrt{K_{0}}}\right\}. \tag{10}\]
if \(L\) is \(\varepsilon\)-tame (see [10] for the definition) and \(M\) has injectivity radius bounded away from zero by \(r_{0}\) and sectional curvature takes values in \([-K_{0},K_{0}]\). Here, \(r_{\operatorname{inj}}(L)\) is the injectivity radius of \(L\) with the Riemannian metric induced by \(M\). Then, for all \(z\in\mathbb{D}\cap\{\operatorname{Im}z\geq 0\}\), we have that
\[d_{M}\left(\psi^{-1}(u(z)),\psi^{-1}(x^{\prime})\right)\leq\|d\psi^{-1}\|\,d_{ M}(u(z),x^{\prime})\leq\delta,\]
i.e. \(u:=\psi^{-1}\circ u^{\prime}\) has image in the metric ball \(B_{\delta}(x)\) with \(x:=\psi^{-1}(x)\in L\). Take two extensions \(u_{0}^{\prime}\) and \(u_{1}^{\prime}\) of an arc \(\gamma^{\prime}\) as above, and denote \(\alpha_{i}^{\prime}:=u_{i}^{\prime}|_{\mathbb{R}}\) and \(\alpha_{i}:=\psi^{-1}\circ\alpha_{i}^{\prime}\). Then, \(u_{0}\#\overline{u_{1}}\) is a disk whose boundary \(\alpha_{0}\#\overline{\alpha_{1}}\) lies in \(L\). Here, \(\overline{f}(a+ib):=f(-a+ib)\) for any map \(f:U\subseteq\mathbb{C}\to M\). But by \(\varepsilon\)-tameness of \(L\), \(\alpha_{0}\#\overline{\alpha_{1}}\) must be a loop in a metric ball of \(L\) (in the intrinsic metric) of radius \(\frac{2\delta}{\varepsilon}\leq r_{\operatorname{inj}}(L)\), and must thus be contractible in the same ball. This nullhomotopy extends to a homotopy in a metric ball of \(M\) of radius \(\frac{2\delta}{\varepsilon}\) of \(u_{0}\#\overline{u_{1}}\) to a topological sphere. Since \(\frac{2\delta}{\varepsilon}\leq r_{0}\), this topological sphere must be itself contractible, so that
\[0=\omega(u_{0}\#\overline{u_{1}})=\omega(u_{0})-\omega(u_{1})=\omega(u_{0}^{ \prime})-\omega(u_{1}^{\prime}),\]
where the last inequality follows from the fact that \(\psi\) is a symplectomorphism. In other words, \(a(\gamma^{\prime})\) is indeed well defined.
We can now prove the following isoperimetric inequality.
**Lemma 8**: _There exist constants \(\delta\) and \(B\) depending only on metric invariants of \(M\) and \(L\) such that, for all arcs \(\gamma^{\prime}:([0,\pi],\{0,\pi\})\to(M,L^{\prime})\) with image in \(B_{\delta/\|d\psi^{-1}\|}(x^{\prime})\) for some \(x^{\prime}\in L^{\prime}\), we have that_
\[a(\gamma^{\prime})\leq B\|d\psi^{-1}\|^{2}\ell(\gamma^{\prime})^{2}.\]
Proof.: As noted above, we know that \(\psi^{-1}\circ\gamma\) has image in the metric ball \(B_{\delta}(\psi^{-1}(x^{\prime}))\). Therefore, by said Lemma 2.1 of [10], we know that
\[a(\psi^{-1}\circ\gamma)\leq B(g,g|_{L})\,\ell(\psi^{-1}\circ\gamma)^{2}.\]
However, \(a(\psi^{-1}\circ\gamma)=a(\gamma)\), since \(\psi\) is a symplectomorphism, and \(\ell(\psi^{-1}\circ\gamma)\leq\|d\psi^{-1}\|\ell(\gamma)\), which give the desired inequality.
The proof of Proposition 7 then follows the same scheme as the proof of Proposition 2.1 of [10], excepts that we use Lemma 8 above -- instead of Lemma 2.1 of [10] -- to estimate the local action \(a(\gamma^{\prime})\) for arcs \(\gamma^{\prime}\) with boundary on \(L^{\prime}\). |
2307.16776 | Global Compactness, subcritical approximation of the Sobolev quotient,
and a related concentration result in the Heisenberg group | We investigate some effects of the lack of compactness in the critical
Sobolev embedding in the Heisenberg group. | Giampiero Palatucci, Mirco Piccinini, Letizia Temperini | 2023-07-31T15:43:32Z | http://arxiv.org/abs/2307.16776v1 | Global compactness, subcritical approximation of the Sobolev quotient, and a related concentration result in the Heisenberg group
###### Abstract.
We investigate some effects of the lack of compactness in the critical Sobolev embedding in the Heisenberg group.
Key words and phrases:Sobolev embeddings, Heisenberg group, CR Yamabe, Global compactness, Profile decompositions, Green's Function 2010 Mathematics Subject Classification: 35R03, 46E35, 35J08, 35A15
## 1. Critical Sobolev embeddings in the Heisenberg group
Let \(\mathds{H}^{n}:=(\mathds{C}^{n}\times\mathds{R},\circ,\delta_{\lambda})\) be the usual Heisenberg-Weyl group, endowed with the group multiplication law \(\circ\),
\[\xi\circ\xi^{\prime}:=\Big{(}x+x^{\prime},\,y+y^{\prime},\,t+t^{\prime}+2 \langle y,x^{\prime}\rangle-2\langle x,y^{\prime}\rangle\Big{)}\]
for \(\xi:=(x+iy,t)\) and \(\xi^{\prime}:=(x^{\prime}+iy^{\prime},t^{\prime})\in\mathds{R}^{n}\times \mathds{R}^{n}\times\mathds{R}\), whose group of non-isotropic _dilations_\(\{\delta_{\lambda}\}_{\lambda>0}\) on \(\mathds{R}^{2n+1}\) is given by
\[\xi\mapsto\delta_{\lambda}(\xi):=(\lambda x,\,\lambda y,\,\lambda^{2}t). \tag{1.1}\]
Consider the standard Folland-Stein-Sobolev space \(S^{1}_{0}(\mathds{H}^{n})\) defined as the completion of \(C^{\infty}_{0}(\mathds{H}^{n})\) with respect to the homogeneous subgradient norm \(\|D_{H}\cdot\|_{L^{2}}\), where the horizontal (or intrinsic) gradient \(D_{H}\) is given by
\[D_{H}u(\xi):=\big{(}Z_{1}u(\xi),\dots,Z_{2n}u(\xi)\big{)},\]
with \(Z_{j}:=\partial_{x_{j}}+2y_{j}\partial_{t}\), \(Z_{n+j}:=\partial_{y_{j}}-2x_{j}\partial_{t}\) for \(1\leq j\leq n\), and \(T:=\partial_{t}\) being the Jacobian base of the Heisenberg Lie algebra.
As well known, the following Sobolev-type inequality holds for some positive constant \(S^{*}\),
\[\|u\|_{L^{2^{*}}}^{2^{*}}\leq S^{*}\|D_{H}u\|_{L^{2}}^{2^{*}},\quad\forall u \in S^{1}_{0}(\mathds{H}^{n})\,, \tag{1.2}\]
where \(2^{*}=2^{*}(Q):=2Q/(Q-2)\) is the Folland-Stein-Sobolev critical exponent, depending on the _homogeneous dimension_\(Q:=2n+2\) of the Heisenberg group \(\mathds{H}^{n}\).
The validity of (1.2) is equivalent to show that the constant \(S^{*}\) defined in the following maximization problem,
\[S^{*}:=\sup\left\{\int_{\mathds{H}^{n}}|u(\xi)|^{2^{*}}\,\mathrm{d}\xi\,:\,u \in S^{1}_{0}(\mathds{H}^{n}),\int_{\mathds{H}^{n}}|D_{H}u(\xi)|^{2}\mathrm{d} \xi\leq 1\right\}, \tag{1.3}\]
Introduction
Let \(\Omega\subseteq\mathds{H}^{n}\) be a bounded domain, and denote by \(\mathcal{M}(\overline{\Omega})\) the set of nonnegative Radon measures in \(\Omega\). Let \(X=X(\Omega)\) be the space_
\[X:=\Big{\{}(u,\mu)\in S^{1}_{0}(\Omega)\times\mathcal{M}(\overline{\Omega}):\mu \geq|D_{H}u|^{2}\mathrm{d}\xi,\,\mu(\overline{\Omega})\leq 1\Big{\}},\]
_endowed with the product topology \(\mathcal{T}\) such that_
\[(u_{k},\mu_{k})\stackrel{{\mathcal{T}}}{{\to}}(u,\mu)\, \stackrel{{\text{def}}}{{\Leftrightarrow}}\,\,\begin{cases}u_{k} \rightharpoonup u\text{ in }L^{2^{*}}(\Omega),\\ \mu_{k}\stackrel{{*}}{{\to}}\mu\text{ in }\mathcal{M}( \overline{\Omega}).\end{cases} \tag{2.1}\]
_Let us consider the following family of functionals,_
\[\mathcal{F}_{\varepsilon}(u,\mu):=\int_{\Omega}|u|^{2^{*}-\varepsilon} \mathrm{d}\xi\quad\forall(u,\mu)\in X\,.\]
_Then, as \(\varepsilon\to 0\), the \(\Gamma^{+}\)-limit of the family of functionals \(\mathcal{F}_{\varepsilon}\) with respect to the topology \(\mathcal{T}\) given by (2.1) is the functional \(\mathcal{F}\) defined by_
\[\mathcal{F}(u,\mu)=\int_{\Omega}|u|^{2^{*}}\mathrm{d}\xi+S^{*}\sum_{j=1}^{ \infty}\mu_{j}^{\frac{2^{*}}{2}}\quad\forall(u,\mu)\in X.\]
_Here \(S^{*}\) is the best Sobolev constant in \(\mathds{H}^{n}\), \(2^{*}=2Q/(Q-2)\) is the Folland-Stein-Sobolev critical exponent, and the numbers \(\mu_{j}\) are the coefficients of the atomic part of the measure \(\mu\)._
In order to prove such a result in the very general situation considered here, and thus requiring no additional regularity assumptions nor special geometric features on the domains, we attack the problem pursuing a new approach and for this we rely on De Giorgi's \(\Gamma\)-convergence techniques. This is in the same spirit of previous results regarding the classical Sobolev embedding in the Euclidean framework, as seen in [1, 12, 13], though the core of the proof in [17] goes in a very different line because the optimal recovery sequences have been concretely constructed whereas in all the aforementioned Euclidean papers such an existence result has been proven via compactness and locality properties of the \(\Gamma\)-limit energy functional. In this respect, the adopted strategy is surprisingly close to that in the _fractional Sobolev spaces_ framework ([18, 20]), but various differences evidently arose because of the natural discrepancy between the involved frameworks.
It could be interesting to investigate whether or not the techniques introduced in [17] and [20] could be combined with the estimates involving the "nonlocal tail" in the Heisenberg framework firstly introduced in [14] in order to prove a similar result for fractional Folland-Stein-Sobolev spaces; see also [11, 21, 15].
As a corollary of Theorem 2.1, one can deduce that the sequences of maximizers \(\{u_{e}\}\) for the subcritical Sobolev quotient \(S^{*}_{\varepsilon}\) concentrates energy at one point \(\xi_{\mathrm{o}}\in\overline{\Omega}\), and this is in clear accordance with the analogous result in the Euclidean case.
**Theorem 2.2** (See Theorem 1.2 in [17]).: _Let \(\Omega\subset\mathds{H}^{n}\) be a bounded domain and let \(u_{\varepsilon}\in S^{1}_{0}(\Omega)\) be a maximizer for \(S^{*}_{\varepsilon}\). Then, as \(\varepsilon=\varepsilon_{k}\to 0\), up to subsequences, we have that there exists \(\xi_{\mathrm{o}}\in\overline{\Omega}\) such that_
\[u_{k}=u_{\varepsilon_{k}}\rightharpoonup 0\text{ in }L^{2^{*}}(\Omega),\]
_and_
\[|D_{H}u_{k}|^{2}\mathrm{d}\xi\overset{*}{\rightharpoonup}\delta_{\xi_{ \mathrm{o}}}\text{ in }\mathcal{M}(\overline{\Omega}),\]
_with \(\delta_{\xi_{\mathrm{o}}}\) being the Dirac mass at \(\xi_{\mathrm{o}}\)._
## 3. Struwe's Global Compactness in the Heisenberg group
Since the seminal paper [23] by Struwe, the celebrated Global Compactness in the Sobolev space \(H^{1}\) have become a fundamental tool in Analysis which have been proven to be crucial in order to achieve various existence results, as e. g. for ground states solutions for nonlinear Schrodinger equations, for prescribing \(Q\)-curvature problems, for solutions of Yamabe-type equations in conformal geometry, for harmonic maps from Riemann surfaces into Riemannian manifolds, for Yang-Mills connections over four-manifolds, and many others. The involved literature is
really too wide to attempt any reasonable account here. In Theorem 3.1 below, we will state the counterpart of Struwe's Global Compactness in the Heisenberg framework.
In order to precisely state such a result, consider for any fixed \(\lambda\in\mathds{R}\) the problem,
\[-\Delta_{H}u-\lambda u-|u|^{2^{*}-2}u=0\qquad\text{in }(S^{1}_{0}(\Omega))^{ \prime},\] ( \[P_{\lambda}\] )
together with its corresponding Euler-Lagrange energy functional \(\mathcal{E}_{\lambda}:S^{1}_{0}(\Omega)\to\mathds{R}\) given by
\[\mathcal{E}_{\lambda}(u)=\frac{1}{2}\int_{\Omega}|D_{H}u|^{2}\,\mathrm{d} \xi-\frac{\lambda}{2}\int_{\Omega}|u|^{2}\,\mathrm{d}\xi-\frac{1}{2^{*}}\int_{ \Omega}|u|^{2^{*}}\,\mathrm{d}\xi.\]
Consider also the following limiting problem,
\[-\Delta_{H}u-|u|^{2^{*}-2}u=0\qquad\text{in }(S^{1}_{0}(\Omega_{\mathrm{o}}))^{ \prime},\] ( \[P_{0}\] )
where \(\Omega_{\mathrm{o}}\) is either a half-space or the whole \(\mathds{H}^{n}\); i. e., the Euler-Lagrange equation which corresponds to the energy functional \(\mathcal{E}^{*}:S^{1}_{0}(\Omega_{\mathrm{o}})\to\mathds{R}\),
\[\mathcal{E}^{*}(u)=\frac{1}{2}\int_{\Omega_{\mathrm{o}}}|D_{H}u|^{2}\, \mathrm{d}\xi-\frac{1}{2^{*}}\int_{\Omega_{\mathrm{o}}}|u|^{2^{*}}\,\mathrm{d}\xi.\]
**Theorem 3.1** (See Theorem 1.3 in [17]).: _Let \(\{u_{k}\}\subset S^{1}_{0}(\Omega)\) be a Palais-Smale sequence for \(\mathcal{E}_{\lambda}\); i. e., such that_
\[\mathcal{E}_{\lambda}(u_{k})\leq c\quad\text{for all }k,\] \[d\mathcal{E}_{\lambda}(u_{k})\to 0\quad\text{as }k\to\infty\quad \text{in }(S^{1}_{0}(\Omega))^{\prime}.\]
_Then, there exists a (possibly trivial) solution \(u^{(0)}\in S^{1}_{0}(\Omega)\) to (\(P_{\lambda}\)) such that, up to a subsequence, we have_
\[u_{k}\rightharpoonup u^{(0)}\quad\text{as }k\to\infty\quad\text{in }S^{1}_{0}(\Omega).\]
_Moreover, either the convergence is strong or there is a finite set of indexes \(I=\{1,\ldots,J\}\) such that for all \(j\in I\) there exist a nontrivial solution \(u^{(j)}\in S^{1}_{0}(\Omega^{(j)}_{\mathrm{o}})\) to (\(P_{0}\)) with \(\Omega^{(j)}_{\mathrm{o}}\) being either a half-space or the whole \(\mathds{H}^{n}\), a sequence of nonnegative numbers \(\{\lambda^{(j)}_{k}\}\) converging to zero and a sequence of points \(\{\xi^{(j)}_{k}\}\subset\Omega\) such that, for a renumbered subsequence, we have for any \(j\in I\)_
\[u^{(j)}_{k}(\cdot):=\big{(}\lambda^{(j)}_{k}\big{)}^{-\frac{Q-2}{2}}\,u^{(j)}\Big{(}\delta_{1/\lambda^{(j)}_{k}}\big{(}\tau_{\xi^{(j)}_{k}}(\cdot)\big{)}\Big{)},\]
_such that, as \(k\to\infty\),_
\[u_{k}-u^{(0)}-\sum_{j\in I}u^{(j)}_{k}\to 0\ \text{ in }S^{1}_{0}(\mathds{H}^{n}),\qquad\mathcal{E}_{\lambda}(u_{k})\to\mathcal{E}_{\lambda}(u^{(0)})+\sum_{j\in I}\mathcal{E}^{*}(u^{(j)}).\]
In the display above, given \(\xi^{\prime}\in\mathds{H}^{n}\), we denoted by \(\tau_{\xi^{\prime}}\)_the left translation_ defined by \(\tau_{\xi^{\prime}}(\xi):=\xi^{\prime}\circ\xi\) for all \(\xi\in\mathds{H}^{n}\).
The original proof by Struwe in [23] consists of a subtle analysis of how the Palais-Smale condition fails for the functional \(\mathcal{E}^{*}\), based on rescaling arguments, used in an iterated way to extract convergent subsequences with nontrivial limit, together with some slicing and extension procedures on the sequence of approximate solutions to \((P_{\lambda})\). Such a proof turned out to be very difficult to extend to different frameworks, and the aforementioned strategy seems even more cumbersome to adapt to the Heisenberg framework considered here. For this reason, we completely changed the approach to the problem, and we showed how to deduce the results in Theorem 3.1 in quite a simple way by means of the so-called _Profile Decomposition_, first proven by Gérard for bounded sequences in the fractional Euclidean space \(H^{s}\), and extended to the Heisenberg framework in [2]. This is in clear accordance with the strategy in [19]; see the related result in the fractional Heisenberg framework in [7].
**Remark 3.2**.: _The limiting domain \(\Omega_{\circ}\) in Theorem 3.1 can be either the whole \(\mathds{H}^{n}\) or a half-space. On the contrary, in the original proof in the Euclidean case by Struwe ([23]) one can exclude the existence of nontrivial solutions to the limiting problem in the half-space via Unique Continuation and Pohozaev's Identity. Such a possibility cannot be excluded a priori in the sub-Riemannian setting, even in the very special case when a complete characterization of the limiting set is possible under further regularity assumptions on \(\Omega\). Indeed, in the Heisenberg framework, very few nonexistence results are known, basically only in the case when the domain reduces to a half-plane parallel or perpendicular to the group center; see [4]. We also refer to the last paragraphs in [17, Section 5] for further details._
## 4. Asymptotics of the optimal functions
We present an asymptotic control of the maximizing sequence \(u_{\varepsilon}\) for \(S^{*}_{\varepsilon}\) in (1.5) via the Jerison & Lee extremals. This is shown in Theorem 4.1 below, which is one of the key ingredients in the proof of the localization of the concentration result presented in Section 5 below, and it could also be useful to investigate further properties related to subcritical Folland-Stein-Sobolev embeddings.
**Theorem 4.1** (See Theorem 1.2 in [16]).: _Let \(\Omega\subset\mathds{H}^{n}\) be a smooth bounded domain such that_
\[\liminf_{\rho\to 0}\frac{|(\mathds{H}^{n}\setminus\Omega)\cap B_{\rho}(\xi)|}{ |B_{\rho}(\xi)|}>0\;\;\forall\xi\in\partial\Omega.\]
_Then, for each \(0<\varepsilon<2^{*}-2\), letting \(u_{\varepsilon}\in S^{1}_{0}(\Omega)\) be a maximizer for \(S^{*}_{\varepsilon}\), there exist \(\{\eta_{\varepsilon}\}\subset\Omega\), \(\{\lambda_{\varepsilon}\}\subset\mathds{R}^{+}\) such that, up to choosing \(\varepsilon\) sufficiently small, we have that_
\[u_{\varepsilon}\lesssim\,U_{\lambda_{\varepsilon},\eta_{\varepsilon}}\text{ on }\Omega,\]
_where \(U_{\lambda_{\varepsilon},\eta_{\varepsilon}}=U\left(\delta_{1/\lambda_{ \varepsilon}}\big{(}\tau_{\eta_{\varepsilon}}(\xi)\big{)}\right)\) are the Jerison & Lee extremal functions, and the sequences \(\{\eta_{\varepsilon}\}\) and \(\{\lambda_{\varepsilon}\}\) satisfy_
\[\eta_{\varepsilon}\sim\,\xi_{\circ}\quad\text{and}\quad\lambda_{\varepsilon} ^{\varepsilon}\sim 1\quad\text{as }\varepsilon\searrow 0,\]
_with \(\xi_{\circ}\) being the concentration point given in Theorem 2.2._
The result in Theorem 4.1 above is reminiscent of the literature following the pioneering work in the Euclidean framework due to Aubin and Talenti, and in such a framework it is fundamental in the proof by Han [8] of a precise conjecture about the localization of the concentration point \(\xi_{\circ}\) given in Corollary 2.2. In the proof of Theorem 4.1 in [16], in the sub-Riemannian framework we are dealing with, one also has to face the fact that, in strong contrast with the Euclidean setting, the Jerison & Lee extremals cannot be reduced to functions depending only on the standard Koranyi gauge. For this reason, the proof requires a delicate strategy which makes use of and refines the concentration result obtained via the \(\Gamma\)-convergence result in Theorem 2.1 in order to detect the right scalings \(\eta_{\varepsilon}\) and \(\lambda_{\varepsilon}\). Also the Global Compactness-type result presented in Section 3 is needed.
## 5. Localization of the energy concentration
A natural question arises: can the blow-up be localized; i. e., is the concentration point \(\xi_{\circ}\) in Theorem 2.2 in Section 2 related in a specific way to the geometry of the domain \(\Omega\)?
In the Euclidean framework, under standard regularity assumptions, Han ([8]) and Rey ([22]) proved the connection with the Green function associated to the domain \(\Omega\) by answering a famous conjecture by Brezis and Peletier ([3]), who had previously investigated the setting of spherical domains. The involved proofs strongly rely on the regularity of Euclidean domains, which is in clear contrast with the complexity of the underlying sub-Riemannian geometry here; as is well known, even if the domain \(\Omega\) is smooth, the situation is drastically different because of the possible presence of characteristic points on the boundary \(\partial\Omega\). On the one hand, near those characteristic points - as first discovered by Jerison - even harmonic functions on the Heisenberg group can encounter a sudden loss of regularity; on the other hand, one does not want to work in the restricted class of domains having no characteristic points. In order to deal with those specific difficulties, it is thus quite natural to work under the assumption that the domain \(\Omega\) is _geometrical regular near its characteristic set_ as given by Definition 5.2 below. In the forthcoming Theorem 5.3 we state the expected localization result for the concentration point \(\xi_{\circ}\) of the maximizing sequence \(u_{\varepsilon}\) in terms of the Green function associated with the domain \(\Omega\), in turn establishing the validity of the aforementioned Brezis-Peletier conjecture in the Heisenberg group.
As customary, denote by \(\mathcal{D}\) the infinitesimal generator of the one-parameter group of non-isotropic dilations \(\{\delta_{\lambda}\}_{\lambda>0}\) in (1.1); that is,
\[\mathcal{D}:=\sum_{j=1}^{n}\big{(}x_{j}\partial_{x_{j}}+y_{j}\partial_{y_{j}} \big{)}+2t\partial_{t}. \tag{5.1}\]
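For orientation, recalling that the non-isotropic dilations act as \(\delta_{\lambda}(x,y,t)=(\lambda x,\lambda y,\lambda^{2}t)\), a direct computation shows that (5.1) is indeed their infinitesimal generator: for every smooth function \(u\),

\[\frac{d}{d\lambda}\,u\big{(}\delta_{\lambda}(\xi)\big{)}\Big{|}_{\lambda=1}=\sum_{j=1}^{n}\big{(}x_{j}\partial_{x_{j}}u+y_{j}\partial_{y_{j}}u\big{)}(\xi)+2t\,\partial_{t}u(\xi)=\mathcal{D}u(\xi).\]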
**Definition 5.1** (\(\delta_{\lambda}\)**-starlike sets)**.: _Let \(\Omega\) be a \(C^{1}\) connected open set of \(\,\mathds{H}^{n}\) containing the group identity \(\mathfrak{e}\). We say that \(\Omega\) is \(\delta_{\lambda}\)-starlike\((\)with respect to the identity \(\mathfrak{e})\) along a subset \(K\subseteq\partial\Omega\) if_
\[\langle\mathcal{D},\mathfrak{n}\rangle(\eta)\geq 0,\]
_at every \(\eta\in K\); in the display above \(\mathfrak{n}\) indicates the exterior unit normal to \(\partial\Omega\)._
_We say that \(\Omega\) is uniformly \(\delta_{\lambda}\)-starlike \((\)with respect to the identity \(\mathfrak{e})\) along \(K\) if there exists \(\alpha_{\Omega}>0\) such that, at every \(\eta\in K\),_
\[\langle\mathcal{D},\mathfrak{n}\rangle(\eta)\geq\alpha_{\Omega}.\]
_A domain \(\Omega\) as above is \(\delta_{\lambda}\)-starlike (uniformly \(\delta_{\lambda}\)-starlike, respectively) with respect to one of its points \(\zeta\in\Omega\) along \(K\) if \(\tau_{\zeta^{-1}}(\Omega)\) is \(\delta_{\lambda}\)-starlike (uniformly \(\delta_{\lambda}\)-starlike, respectively) with respect to the origin along \(\tau_{\zeta^{-1}}(K)\)._
Given a domain \(\Omega\subset\mathds{H}^{n}\), we recall that its _characteristic set_\(\Sigma_{\Omega,D_{H}}\), the collection of all its characteristic points, is given by
\[\Sigma_{\Omega,D_{H}}:=\Big{\{}\xi\in\partial\Omega\,|\,Z_{j}(\xi)\in T_{\xi} (\partial\Omega),\,\text{for $j=1,\ldots,2n$}\Big{\}}.\]
We now recall the definition of regular domains in accordance with the by-now classical paper [5].
**Definition 5.2** (See Definition 2.2 in [16]).: _A smooth domain \(\Omega\subset\mathds{H}^{n}\) such that \(\partial\Omega\) is an orientable hypersurface is "geometrical regular near its characteristic set" if the following conditions hold true,_
1. _There exist_ \(\varPhi\in C^{\infty}(\mathds{H}^{n})\)_,_ \(c_{\Omega}>0\) _and_ \(\rho_{\Omega}\in\mathds{R}\) _such that_ \[\Omega:=\big{\{}\varPhi<\rho_{\Omega}\big{\}},\quad\text{and}\quad|D\varPhi| \geq c_{\Omega}.\]
2. _For any_ \(\xi\in\partial\Omega\) _it holds_ \[\liminf_{\rho\to 0^{+}}\frac{|(\mathds{H}^{n}\smallsetminus\Omega)\cap B_{ \rho}(\xi)|}{|B_{\rho}(\xi)|}>0.\]
3. _There exists_ \(M_{\Omega}\) _such that_ \[\Delta_{H}\varPhi\geq\frac{4|z|}{M_{\Omega}}\langle D_{H}\varPhi,D_{H}|z|\rangle\quad\text{in $\omega$},\] _where_ \(\omega\) _is an interior neighborhood of_ \(\Sigma_{\Omega,D_{H}}\)_._
4. \(\Omega\) _is_ \(\delta_{\lambda}\)_-starlike with respect to one of its points_ \(\zeta_{\mathrm{o}}\in\Omega\) _and uniformly_ \(\delta_{\lambda}\)_-starlike with respect to_ \(\zeta_{\mathrm{o}}\) _along_ \(\Sigma_{\Omega,D_{H}}\)_._
We are finally in the position to state the localization result.
**Theorem 5.3** (See Theorem 1.3 in [16]).: _Consider a bounded domain \(\Omega\subset\mathds{H}^{n}\) geometrical regular near its characteristic set, and let \(u_{\varepsilon}\in S^{1}_{0}(\Omega)\) be a maximizer for \(S^{*}_{\varepsilon}\). Then, up to subsequences, \(u_{\varepsilon}\) concentrates at some point \(\xi_{\mathrm{o}}\in\Omega\) such that_
\[\int_{\partial\Omega}\!\!|D_{H}G_{\Omega}(\cdot,\xi_{\mathrm{o}})|^{2}\langle \mathcal{D},\mathfrak{n}\rangle\,\mathrm{d}\mathscr{H}^{Q-2}=0, \tag{5.2}\]
_with \(G_{\Omega}(\cdot;\xi_{\mathrm{o}})\) being the Green function associated to \(\Omega\) with pole in \(\xi_{\mathrm{o}}\), and \(\mathcal{D}\) being the infinitesimal generator of the one-parameter group of non-isotropic dilations in the Heisenberg group defined in (5.1)._
The proof can be found in Section 7 in [16]; it involves all the results stated in the preceding sections together with other general tools in the sub-Riemannian framework, e.g., maximum principles, Caccioppoli-type estimates, the \(H\)-Kelvin transform, boundary Schauder-type regularity estimates, as well as a fine boundary analysis of the solutions to subcritical Yamabe equations. We refer also
to the interesting related result in [10] in the case of domains with no characteristic points.
## Acknowledgements
The authors are members of Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of Istituto Nazionale di Alta Matematica "F. Severi" (INdAM), whose support is acknowledged. The authors are also supported by INdAM Project "Fenomeni non locali in problemi locali", CUP E55F22000270001. The second author is also supported by the Project "Local vs Nonlocal: mixed type operators and nonuniform ellipticity", CUP D91B21005370003.
|
2306.17741 | Perturbing Chaos with Cycle Expansions | Due to existence of periodic windows, chaotic systems undergo numerous
bifurcations as system parameters vary, rendering it hard to employ an analytic
continuation, which constitutes a major obstacle for its effective analysis or
computation. In this manuscript, however, based on cycle expansions we found
that spectral functions and thus dynamical averages are analytic, if symbolic
dynamics is preserved so that a perturbative approach is indeed possible. Even
if it changes, a subset of unstable periodic orbits (UPOs) can be selected to
preserve the analyticity of the spectral functions. Therefore, with the help of
cycle expansions, perturbation theory can be extended to chaotic regime, which
opens a new avenue for the analysis and computation in chaotic systems. | Huanyu Cao, Yueheng Lan | 2023-06-30T15:42:35Z | http://arxiv.org/abs/2306.17741v1 | # Perturbing Chaos with Cycle Expansions
###### Abstract
Due to the existence of periodic windows, chaotic systems undergo numerous bifurcations as system parameters vary, rendering it hard to employ analytic continuation, which constitutes a major obstacle to their effective analysis or computation. In this manuscript, however, based on cycle expansions we find that spectral functions, and thus dynamical averages, are analytic if the symbolic dynamics is preserved, so that a perturbative approach is indeed possible. Even if it changes, a subset of unstable periodic orbits (UPOs) can be selected to preserve the analyticity of the spectral functions. Therefore, with the help of cycle expansions, perturbation theory can be extended to the chaotic regime, which opens a new avenue for analysis and computation in chaotic systems.
## I Introduction
Turbulent systems often exhibit characteristic recurrent patterns, which are routinely observed in both numerical simulations and wet experiments and are termed coherent structures. These recurrent patterns are compact invariant sets with relatively simple topology in phase space [1; 2; 3] and dominate dynamics of fluid systems [3; 4]. Intuitively, at finite resolution, the spatiotemporal evolution can be regarded as a walk through the labyrinth of finitely many unstable periodic orbits (UPOs, also called cycles). Such a view enables a hierarchical description of the fluid motion as demonstrated in cycle expansions [5]. These cycles are locally well organized and accessible through analytical approximation or numerical computation, which provides the desired skeleton of the irregular dynamics as mentioned above.
The importance and properties of UPOs have been emphasized ever since Poincaré's work on dynamical systems, for they carry both topological and dynamical information [5]. From the perspective of physical intuition, a trajectory in a chaotic system always evolves adjacent to one UPO for some time, then sticks to another UPO for a while, and so on [6]. The UPOs act as the "skeleton" of the system, which could be organized in a hierarchical manner [7]. Periodic orbit theory (POT) supplies a formalism relating dynamical averages to the spectra of appropriate evolution operators, with the natural measure as the eigenstate corresponding to the leading eigenvalue. The trace and the spectral determinant of the evolution operator are defined so that they can be evaluated with UPOs, and dynamical averages are expressible in terms of their eigenvalues [5]. As a result, an average over the natural measure is expressed in terms of the corresponding quantity evaluated on UPOs. Cycle expansion is an efficient way to reveal the shadowing property embedded in the system dynamics, and it efficiently procures the spectrum of evolution operators with cycles. For nice hyperbolic systems, the spectral determinant and the dynamical zeta function turn out to be analytic in a neighborhood of \(z=0\), and the cycle expansion technique re-expresses them in terms of a convergent sum over UPOs ordered in a hierarchical way, with corrections from long cycles declining rapidly.
All the previous discussions are concentrated on unperturbed systems whose state evolution is governed by given maps or differential equations. Nevertheless, in realistic experiments, deterministic behaviour with fixed dynamics is only an idealization since noise and perturbations are inevitable. Under perturbation, chaotic systems may retain chaoticity or enter different regimes of dynamics [8]. During this process bifurcations are very often observed, and small perturbations may induce qualitative changes in the global dynamics, which is best exemplified in chaos control [9; 10; 11] or transient chaos maintenance [12; 13]. However, the quantitative analysis of chaotic systems subject to perturbations is hard to carry out due to these "unpredictable bifurcations", which exist densely in the parameter space.
Essentially, the basis for performing a quantitative analysis with a perturbation scheme is the analyticity or continuity of the sought solutions. A specific cycle changes smoothly upon parameter variation until it disappears at some bifurcation point. However, infinitely many unstable cycles exist in a chaotic system, part of which always change qualitatively when system parameters shift. Therefore, the average of an observable is in general not an analytic function of the parameters, since all the cycles have to be included in its computation based on cycle expansions. Nevertheless, for a computation with finite accuracy, only finitely many cycles are used. If their creation or annihilation can be
tracked during the whole process, analyticity may be recovered on the relevant subset of cycles. In this paper, we focus on a perturbative computation of observable averages with these cycles based on cycle expansions. In one or two dimensions, symbolic dynamics could be used to monitor the existence of cycles. If no bifurcation occurs during parameter shift, the observable average is an analytic function of the parameter and a simple Taylor expansion may be employed. If bifurcations do occur, only a subset of cycles may be used to compute expansion coefficients. Several examples are utilized to demonstrate the validity of the current scheme. It turns out that this combination of qualitative analysis based on symbolic dynamics and quantitative computation with cycle expansion is indeed able to provide a new tool to cross bifurcations and recovers a perturbative investigation of chaotic systems.
The paper is organized as follows: in Sect. II, we briefly review the related contents of POT and symbolic dynamics. In Sect. III, our perturbation scheme based on cycle expansions is introduced in detail for predicting observable averages. In addition, some necessary pruning rules and algorithms are discussed. In Sect. IV, several 1- and 2-dimensional models are applied to demonstrate the effectiveness of our scheme, and then we conclude with a summary and a vision of future developments in Sect. V. Some details are included in the appendix.
## II Periodic orbit theory and symbolic dynamics
### Periodic Orbit Theory
In chaotic systems, it is difficult to track long-term evolution of individual trajectories due to sensitivity to initial conditions so we focus on statistical properties of chaotic systems instead, i.e. the averages of certain observables. The phase space of a chaotic system is densely covered with UPOs, which could be conveniently used to compute these averages. Here, we do not pursue mathematical rigor but rather emphasize physical intuitions and practical applications. From a statistical physics perspective, POT actually provides a method to accurately extract the information we are interested in with a series of UPOs and is powerful for reliable and accurate analysis in hyperbolic chaotic systems.
Although the method is applicable to both continuous and discrete time evolution, here we only discuss discrete dynamics for brevity without loss of generality. Very often, the time average of an observable \(a(x)\) can be evaluated [5] along a trajectory from an arbitrary typical initial point \(x_{0}\) in phase space \(\mathcal{M}\)
\[\bar{a}_{x_{0}}=\lim_{n\to\infty}\frac{A^{n}}{n}=\lim_{n\to\infty}\frac{1}{n} \sum_{k=0}^{n-1}a(f^{k}(x_{0}))\,, \tag{1}\]
where \(x_{n+1}=f(x_{n})\) describes the dynamics of the given system and \(A^{n}(x_{0})=\sum_{k=0}^{n-1}a(f^{k}(x_{0}))\) is defined as the integrated observable [5]. In practical computation, however, a time average can only be approximated with a finite number of iterations. An alternative scheme is to compute the weighted spatial average [5]. If a normalized measure \(\omega(x)\) exists in the phase space \(\mathcal{M}\), the weighted spatial average could be defined as
\[\langle a\rangle_{\omega}=\int_{\mathcal{M}}a(x)\omega(x)dx\,. \tag{2}\]
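As a concrete illustration (a minimal sketch, not taken from the paper), the time average in Eq. (1) can be estimated by direct iteration; for the logistic map \(f(x)=4x(1-x)\) used later in Sect. II.2, with observable \(a(x)=x\), different typical initial points give the same value \(\langle x\rangle=1/2\):

```python
# Minimal sketch: the time average Eq. (1) for the logistic map f(x) = 4x(1-x)
# with a(x) = x.  For almost every initial point it converges to <x> = 1/2.
def f(x):
    return 4.0 * x * (1.0 - x)

def time_average(x0, n=10**6):
    total, x = 0.0, x0
    for _ in range(n):
        total += x          # a(x) = x
        x = f(x)
    return total / n

for x0 in (0.123, 0.4567, 0.789):
    print(x0, time_average(x0))   # all close to 1/2
```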
As time tends to infinity, any typical initial measure evolves to an asymptotic measure \(\rho(x)\), called the natural measure [5; 14]. If the dynamics is ergodic, the natural measure \(\rho(x)\) is such that the two averages are equal, _i.e._\(\langle a\rangle_{\rho}=\bar{a}_{x_{0}}\) for almost all initial points \(x_{0}\). Generally, it is difficult to obtain an explicit expression for the natural measure, which is defined on the fractal set characteristic of a strange attractor in chaotic dynamics. Fortunately, POT provides new insight into capturing this generally elusive natural measure. In brief, dynamical features can be extracted through a well-designed evolution operator \(\mathcal{L}^{n}\)[5], which is defined as
\[\mathcal{L}^{n}\circ\omega(y)=\int_{\mathcal{M}}dx\delta(y-f^{n}(x))e^{\beta A ^{n}(x)}\omega(x)\,, \tag{3}\]
where \(n=1,2,\cdots\) for discrete mappings. The kernel function \(\mathcal{L}^{n}(y,x)=\delta(y-f^{n}(x))e^{\beta A^{n}}\) depends on the integrated quantity \(A^{n}\) and an auxiliary variable \(\beta\). That is, the evolution operator is able to describe the evolution of the measure \(\omega(x)\) and to record the integrated observable along an orbit. If we set \(\beta=0\), \(\mathcal{L}\) is the famous Perron-Frobenius operator [5].
Denote the spectrum of \(\mathcal{L}\) by \(\{s_{m}\}_{m\in\mathbb{N}}\) with \(\mathrm{Re}(s_{m})>\mathrm{Re}(s_{m+1})\). From the spectral point of view, high powers of the linear operator \(\mathcal{L}\) are dominated by the leading eigenvalue \(s_{0}\); specifically,
\[\mathcal{L}^{n}\circ I(x)=\sum_{m}b_{m}\phi_{m}(x)e^{ns_{m}}\to b_{0}\phi_{0}(x )e^{ns_{0}},\,n\to\infty\,, \tag{4}\]
where \(I(x)\equiv 1/\langle 1\rangle_{I}\) is the identity function and expressed as an expansion of the eigenfunctions \(\phi_{m}(x)\) of \(\mathcal{L}\), _i.e._, \(I(x)=\sum_{m}b_{m}\phi_{m}(x)\). Thus, in terms of the evolution operator, we have
\[\langle e^{\beta A^{n}}\rangle_{I}=\int_{\mathcal{M}}dx[\mathcal{L}^{n}\circ I ](x)\to b_{0}e^{ns_{0}},\,n\to\infty\,, \tag{5}\]
where \(s_{0}\) is a function of \(\beta\) and thus we have
\[s_{0}(\beta)=\lim_{n\to\infty}\frac{1}{n}\ln(\langle e^{\beta A^{n}}\rangle)_{ I}\,. \tag{6}\]
If the system is ergodic, the average
\[\langle a\rangle=\lim_{n\to\infty}\frac{1}{n}\langle A^{n}\rangle_{I}=\frac{ds _{0}(\beta)}{d\beta}|_{\beta=0} \tag{7}\]
is directly related to the leading eigenvalue \(s_{0}(\beta)\). So, all we need to do is extract the spectrum of \(\mathcal{L}\), especially the leading eigenvalue \(s_{0}\).
The spectrum of the linear operator \(\mathcal{L}\) is determined by solving the resolvent equation \(\det(\mathbf{1}-z\mathcal{L})=0\). Borrowing the identity between the determinant and trace of an arbitrary square matrix \(M\): \(\ln\det M=\mathrm{tr}\ln M\), we have the spectral determinant [15; 5; 16]
\[\det(\mathbf{1}-z\mathcal{L}) =\exp(\mathrm{tr}\ln(\mathbf{1}-z\mathcal{L}))=\exp\left(-\sum_{ n=1}^{\infty}\frac{z^{n}}{n}\mathrm{tr}\mathcal{L}^{n}\right)\] \[=\exp\left(-\sum_{p}\sum_{r=1}^{\infty}\frac{1}{r}\frac{z^{rn_{p}}e^{r\beta A_{p}}}{|\mathrm{det}(\mathbf{1}-M_{p}^{r})|}\right)\,, \tag{8}\]
where \(p\) denotes prime cycles which are not repeats of shorter ones and \(n_{p}\) is the length of the cycle \(p\). \(A_{p}\) and \(M_{p}\) are the integrated physical quantity and the Jacobian matrix along the prime cycle \(p\). The trace \(\mathrm{tr}\,\mathcal{L}^{n}\) in the above equation has been computed with the trace formula [16; 5]
\[\mathrm{tr}\mathcal{L}^{n}= \int_{\mathcal{M}}dx\mathcal{L}^{n}(x,x)=\int_{\mathcal{M}}dx \delta(x-f^{n}(x))e^{\beta A^{n}}\] \[= \sum_{f^{n}(x_{i})=x_{i}}\frac{e^{\beta A^{n}(x_{i})}}{|\mathrm{ det}(\mathbf{1}-M_{n}(x_{i}))|},\forall n\in\mathbb{Z}^{+}\,, \tag{9}\]
where \(x_{i}\) is a periodic point of period \(n\) and \(M_{n}(x_{i})\) is the Jacobian matrix of \(f^{n}(x)\) evaluated at \(x_{i}\). Based on the hyperbolicity assumption [5] that the stabilities of all cycles included in Eq.(8) are exponentially bounded away from unity, we make the approximation \(1/|\mathrm{det}(\mathbf{1}-M_{p}^{r})|\approx 1/|\Lambda_{p}|^{r}\), where \(\Lambda_{p}=\prod_{e}\Lambda_{p,e}\) is the product of expanding eigenvalues of the matrix \(M_{p}\). Carrying out the sum over \(r\), the spectral determinant Eq.(8) then reduces to the dynamical zeta function [17; 5]
\[\frac{1}{\zeta}=\prod_{p}(1-t_{p}), \tag{10}\]
where \(t_{p}=\frac{z^{n_{p}}e^{\beta A_{p}}}{|\Lambda_{p}|}\) denotes the weight of prime cycle \(p\). It can be proved that the dynamical zeta function is the 0th-order approximation of the spectral determinant and they have identical leading eigenvalue but different analytic properties [15; 5].
For a chaotic system satisfying the hyperbolicity assumption, a long cycle is often well approximated with several shorter ones, which is indicated by the shadowing lemma in nonlinear dynamics [5]. Based on this property, cycle expansion is designed to efficiently deal with the spectral functions Eq.(8) or Eq.(10), with short periodic orbits capturing the major part of the natural measure and longer cycles delivering systematic curvature corrections. For maps with binary symbolic dynamics [16], Eq.(10) is expanded as
\[\begin{split}\frac{1}{\zeta}=& 1-\sum_{f}t_{f}-\sum_{p}c_{p}=1-t_{0}-t_{ 1}-[(t_{01}-t_{0}t_{1})]\\ -&[(t_{001}-t_{01}t_{0})+(t_{011}-t_{01}t_{1})]-...,\end{split} \tag{11}\]
where the fundamental terms \(t_{f}\) include all unbalanced, non-shadowed prime cycles, and the remaining terms \(c_{p}\), called curvature corrections, consist of longer prime cycles and the pseudo-cycles that shadow them. Cycle expansions are dominated by the fundamental terms, with long-orbit contributions cancelled by short ones, so that curvature corrections decay exponentially or even super-exponentially if uniform hyperbolicity is assumed [15]. The cancellation between prime cycles and pseudo-cycles reflects the smoothness of the underlying dynamics [6].
Very often in practical computation, a good truncation of the spectral functions is a crucial operation to restrict the computation to finitely many unstable cycles of a chaotic system. The usually adopted truncation by cycle length corresponds to a geometric envelope approximation of the original map [4]. Compared with the retained terms, the magnitude of the discarded terms decreases exponentially with the topological length, and higher-order truncations lead to a more accurate evaluation.
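The shadowing cancellations behind Eq. (11) can be made explicit in a toy setting. The following minimal sketch (not from the paper) evaluates the first few terms of the expansion for a complete binary piecewise-linear map, the skew tent map with branch slopes \(1/a\) and \(-1/(1-a)\), whose prime-cycle weights at \(\beta=0\) are \(t_{p}=z^{n_{p}}a^{m_{0}}(1-a)^{m_{1}}\), with \(m_{0},m_{1}\) counting the symbols 0 and 1:

```python
# Minimal illustration of the cancellations in Eq. (11) for the skew tent map
# (branch slopes 1/a and -1/(1-a)); cycle weight t_p = z^{n_p} a^{m_0}(1-a)^{m_1}.
a, z = 0.4, 0.9

def t(word):
    return z ** len(word) * a ** word.count("0") * (1 - a) ** word.count("1")

print(1 - t("0") - t("1"))                        # fundamental part: 1 - z = 0.1
print(t("01") - t("0") * t("1"))                  # curvature terms vanish exactly...
print(t("001") - t("01") * t("0"), t("011") - t("01") * t("1"))
# ...so here 1/zeta = 1 - z and the leading zero z_0 = 1 (no escape) is exact;
# for a generic smooth map the curvatures are nonzero but decay rapidly with length.
```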
However, most physical systems are not uniformly hyperbolic, so the cancellation is poor. One case of non-hyperbolicity is marked by strong contraction at specific locations of an attractor, such as critical points in 1-d maps or homoclinic tangencies in the Henon map [15; 16]. As a consequence, there are singularities in the natural measure which undermine the shadowing and slow down the convergence of cycle expansions. Several accelerating schemes have been proposed, among which stability ordering is a good choice [18]. It retains all the cycles or pseudo-cycles that have stability eigenvalues smaller than a threshold in the cycle expansion. The method is based on analyticity of the spectral functions [16; 19]: it identifies and removes the poles near the origin and thus expands the radius of convergence. With appropriate coordinate transformations, dynamical conjugacy can be used to remove the singularities in the natural measure and accelerate the convergence [20]. In intermittent systems, the dynamics could alternate between regular and chaotic motion, which results in non-hyperbolicity. The spectrum of the evolution operator is no longer discrete and the dynamical zeta function exhibits a branch cut [21]. Geometrically, the UPOs which have a stability eigenvalue close to 1 possess an unusually large weight and cannot be efficiently shadowed by shorter cycles [6]. A dynamics-splitting algorithm has been proposed to take advantage of the partial integrability of intermittent systems, which analytically estimates the natural measure near the singularities but employs cycle expansions to treat the rest [22]. In the situations of interest in this paper, upon parameter change some UPOs may disappear and lead to poor shadowing. Or, there may be a quenched disorder in the dynamics, so that cycle expansions have to be carried out for many different parameter values. It will be shown below that the analytic dependence of the cycles on the parameters can be used to set up a perturbative treatment of the spectral functions.
### Symbolic Dynamics
Symbolic dynamics [23] is a very effective tool to partition and encode the phase space when searching for orbits or exploring the topological structure of the dynamics. We introduce the basic notions with the logistic map \(x\mapsto f(x)=4x(1-x),x\in[0,1]\). We partition the phase space with the critical point \(x_{c}=1/2\), and label the two non-overlapping intervals \([0,1/2)\) and \([1/2,1]\) with "0" and "1" respectively, so that a trajectory is uniquely associated with a binary symbol sequence \(x_{0}x_{1}x_{2}x_{3}...,x_{i}\in\{0,1\}\), called its itinerary, according to the intervals which the trajectory consecutively visits. A good partition ensures that two different unstable trajectories have distinct itineraries. A family of orbits can be denoted as \(x_{0}x_{1}x_{2}...x_{k-2}x_{k-1}\), which visit the same intervals within \(k\) iterations. A period-\(m\) prime cycle is denoted as \(\overline{x_{0}x_{1}...x_{m-1}}\) and is not a repeat of a shorter cycle. For example, the period-2 cycle in Fig. 1 is described by the infinite sequence \(010101...\), which may be denoted as \(\overline{01}\) and has a topological length of 2. Combined with geometric considerations, it is feasible to establish criteria to identify inaccessible itineraries and thus detect all short prime cycles in a given system. In other words, we can rely on symbolic dynamics to sort the spatial order of the prime cycles and search for admissible UPOs. In 1-dimensional cases, the kneading theory [5] (detailed in App. 1) provides a precise and definitive criterion of admissibility which eliminates all itineraries and UPOs that cannot occur for a given map. In 2-dimensional cases, the kneading theory can be generalised to the so-called pruning front conjecture [24] (detailed in App. 1), which offers a complete description of the symbolic dynamics of orientation-reversing once-folding maps, in the same sense as the kneading sequence does for a 1-dimensional unimodal map. In some cases, we may still find all the short admissible UPOs even without an elaborate pruning rule. Based on a good partition of the phase space and the associated mapping relation, admissible UPOs are found directly with cycle-detecting algorithms, although many symbol sequences do not match any admissible orbits.
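A minimal sketch (not from the paper) of the encoding just described: the itinerary of a point under the logistic map with the partition at \(x_{c}=1/2\), together with a check that the period-2 points \(x=(5\pm\sqrt{5})/8\) indeed carry the itinerary \(\overline{01}\) of Fig. 1:

```python
# Minimal sketch: binary itineraries of the logistic map with the partition at 1/2.
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def itinerary(x, length):
    symbols = []
    for _ in range(length):
        symbols.append("0" if x < 0.5 else "1")
        x = logistic(x)
    return "".join(symbols)

x2 = (5.0 - math.sqrt(5.0)) / 8.0      # one point of the period-2 cycle
print(itinerary(0.3, 12))              # itinerary of a generic point
print(itinerary(x2, 8))                # '01010101', i.e. the cycle 01-bar
```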
## III Perturbation scheme
### Perturbed Model
As introduced in [6], in locally well organized flows, coherent structures interact weakly with each other except at some discrete space-time points where they are annihilated or created. Similar cellular subsystems may be simplified as a series of low-dimensional models with parameters selected from a given distribution [25; 26] and are ready for thorough analytical or numerical investigation. On this occasion, it is essential to study a series of chaotic systems with similar structure which may be treated in batch with a specifically designed perturbation theory.
Without loss of generality, we consider a model \(f(x)\) defined in \(\mathcal{M}\) under a given perturbation being expressed as
\[f_{\epsilon}(x)=f(x)+\epsilon g(x),x\in\mathcal{M},|\epsilon|\ll\mathcal{O}( 1)\,, \tag{12}\]
where \(g(x)\) defines the form of the perturbation and \(\epsilon\) indicates its strength. Obviously, as long as \(f_{\epsilon}(x)\) retains hyperbolicity, a fast convergence of cycle expansion results no matter what form \(g(x)\) is. Thus we have the perturbed dynamical zeta function
\[\frac{1}{\zeta}_{\epsilon}=\prod_{p}(1-t_{p,\epsilon}),t_{p,\epsilon}=\frac{z ^{n_{p}}e^{\beta A_{p,\epsilon}}}{|\Lambda_{p,\epsilon}|}. \tag{13}\]
For convenience, we denote \(\frac{1}{\zeta}_{\epsilon}\) as \(F_{\epsilon}(s_{0,\epsilon}(\beta),\beta)\) where \(s_{0,\epsilon}\) is the leading eigenvalue of perturbed evolution operator \(\mathcal{L}_{\epsilon}\) and \(F_{0}\equiv F(s_{0,0}(\beta),\beta)\) is the unperturbed dynamical zeta function. According to Eq.(7), observable averages can be computed through the derivatives of \(F_{\epsilon}(s_{0,\epsilon}(\beta),\beta)\)[5]
\[\langle a\rangle_{\epsilon}=\frac{ds_{0,\epsilon}}{d\beta}\Big{|}_{\beta=0}=-\frac{\partial F_{\epsilon}/\partial\beta}{\partial F_{\epsilon}/\partial s_{0,\epsilon}}\Big{|}_{\beta=0}\,, \tag{14}\]
where \(\langle a\rangle_{\epsilon}\) is the perturbed observable average and \(\langle a\rangle_{\epsilon=0}\equiv\langle a\rangle\) is the original one.
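For the unperturbed case, the whole chain Eq. (10)–(14) can be illustrated in a few lines. The sketch below (not the paper's implementation) uses the full tent map \(f(x)=1-|1-2x|\) with \(a(x)=x\): the prime cycles are enumerated symbolically, the truncated \(1/\zeta\) is expanded as a polynomial in \(z\), its leading zero gives \(s_{0}(\beta)\), and a finite difference in \(\beta\) recovers, to good accuracy, the exact average \(\langle x\rangle=1/2\) of the invariant (Lebesgue) measure:

```python
# Minimal sketch of Eqs. (7)/(14) at eps = 0 for the full tent map f(x)=1-|1-2x|
# with observable a(x) = x; the exact answer is <x> = 1/2 (Lebesgue measure).
import numpy as np
from itertools import product
from scipy.optimize import brentq

def prime_cycles(nmax):
    """One representative symbol string per binary prime cycle, lengths 1..nmax."""
    reps = []
    for n in range(1, nmax + 1):
        seen = set()
        for w in product((0, 1), repeat=n):
            shifts = {w[i:] + w[:i] for i in range(n)}
            rep = min(shifts)
            if len(shifts) == n and rep not in seen:
                seen.add(rep)
                reps.append(rep)
    return reps

def integrated_x(word):
    """A_p = sum of x over the tent-map cycle with itinerary `word`."""
    a, b = 1.0, 0.0                         # composed affine branch map x -> a x + b
    for s in word:
        a, b = (2 * a, 2 * b) if s == 0 else (-2 * a, 2 - 2 * b)
    x = b / (1.0 - a)                       # periodic point starting the cycle
    total = 0.0
    for s in word:
        total += x
        x = 2 * x if s == 0 else 2 - 2 * x
    return total

def zeta_coeffs(beta, cycles, nmax):
    """Polynomial coefficients (in z) of the truncated 1/zeta of Eq. (10)."""
    c = np.zeros(nmax + 1)
    c[0] = 1.0
    for w in cycles:
        n_p = len(w)
        w_p = np.exp(beta * integrated_x(w)) / 2.0 ** n_p   # |Lambda_p| = 2^{n_p}
        new = c.copy()
        new[n_p:] -= w_p * c[: nmax + 1 - n_p]
        c = new
    return c

nmax = 10
cycles = prime_cycles(nmax)

def s0(beta):
    c = zeta_coeffs(beta, cycles, nmax)
    z0 = brentq(lambda z: np.polynomial.polynomial.polyval(z, c), 0.5, 1.5)
    return -np.log(z0)

h = 1e-4
print((s0(h) - s0(-h)) / (2 * h))           # close to the exact value <x> = 1/2
```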
The idea of perturbing chaotic systems may seem unreliable in view of the dense set of periodic windows in the parameter space, but POT provides us with an intuitive theoretical framework to evaluate various perturbations. From Eqs. 13 and 14, it is clear that the continuous deformation of an individual cycle \(p\) changes \(F_{\epsilon}\) and its derivatives smoothly, and thus the observable average \(\langle a\rangle_{\epsilon}\) is an analytic function of \(\epsilon\) for a finite truncation if all the involved cycles continue existing. Weak perturbations may be classified into two basic types: those that maintain the symbolic dynamics and those that result in the birth or death of cycles. Both types lead to displacement and deformation of the UPOs, while the latter further leads to creation or annihilation of UPOs and loss of analyticity of Eq.(13) as an infinite product. Nevertheless, if the pruning rule is known as \(\epsilon\) varies, the analyticity could still be utilized for each cycle that continues to exist throughout, which still results in a good approximation as we will see in the following. Hence, with different \(\epsilon\)'s, the amount of calculation will be greatly reduced if we have qualitative knowledge of the influence of the perturbation on the existence of cycles. From this standpoint, cycle expansions provide a way to quantify the effect of perturbations on chaos.
Figure 1: The logistic map \(f(x)\) (blue line) is defined on the unit interval \([0,1]\) and the critical point (blue dot) divides the phase space into two halves denoted by the alphabet \(\{0,1\}\). The fixed points \(x=0\) and \(x=3/4\) (red dots) are period-1 orbits and are represented by \(\bar{0}\) and \(\bar{1}\) respectively. The period-2 orbit (red line) has the sequence \(0101...\) and is denoted by \(\overline{01}\).
### Perturbations in the Complex Plane
As discussed in Sect. III.1, we evaluate the observable average with a given perturbation \(\epsilon g(x)\). To accomodate the continuous change of \(\epsilon\), a natural and efficient approach is to perform a series expansion. For a prefixed \(g(x)\), \(\langle a\rangle_{\epsilon}\) can be viewed as a function of \(\epsilon\). With a proper selection of cycles, the average at a "target" \(\hat{\epsilon}\) based on the values at \(\epsilon=\epsilon_{0}\) could be written as
\[\langle\hat{a}\rangle_{\hat{\epsilon}}=\langle a\rangle_{\epsilon=\epsilon_{0} }+(\hat{\epsilon}-\epsilon_{0})\frac{d\langle a\rangle_{\epsilon}}{d\epsilon} |_{\epsilon=\epsilon_{0}}+\frac{(\hat{\epsilon}-\epsilon_{0})^{2}}{2!}\frac{d^ {2}\langle a\rangle_{\epsilon}}{d\epsilon^{2}}|_{\epsilon=\epsilon_{0}}+\frac {(\hat{\epsilon}-\epsilon_{0})^{3}}{3!}\frac{d^{3}\langle a\rangle_{\epsilon}} {d\epsilon^{3}}|_{\epsilon=\epsilon_{0}}+...\,, \tag{15}\]
where the accuracy of \(\langle\hat{a}\rangle_{\hat{\epsilon}}\) depends on the order and accuracy of its derivatives we evaluate. It has to be emphasized that now we assume that \(\epsilon\) changes in a direction that maintains or reduces cycles and the series expansion of \(\langle\hat{a}\rangle_{\epsilon}\) is only performed with the cycles that continue to exist at \(\hat{\epsilon}\), which requires extra effort to judge. The derivatives could be conveniently evaluated with parameter values slightly different from \(\epsilon_{0}\) on the complex plane, to be explained below.
For hyperbolic maps with complete binary symbolic dynamics, Eq.(13) as a 0th-order approximation of Eq.(8) is an exponentially convergent infinite product over UPOs and the observable average \(\langle a\rangle_{\epsilon}\) related to the leading eigenvalue \(s_{0,\epsilon}\) can be viewed as an analytic function in the complex-\(\epsilon\) plane. Thus, an effective approach is to evaluate the derivative of \(\langle a\rangle_{\epsilon}\) through the Cauchy integral formula [27]
\[\frac{d^{k}\langle a\rangle_{\epsilon}}{d\epsilon^{k}}|_{\epsilon=\epsilon_{0} }=\frac{k!}{2\pi i}\oint_{|r|<|\hat{\epsilon}-\epsilon_{0}|}\frac{\langle a \rangle_{\epsilon}}{(\epsilon_{0}-\epsilon)^{k+1}}d\epsilon\,, \tag{16}\]
where we evaluate all the \(\langle a\rangle_{\epsilon}\) along the circular integration path which encircles \(\epsilon_{0}\) in the anti-clockwise direction on the complex plane and \(|r|=|\epsilon-\epsilon_{0}|\) is a chosen integration radius which is usually smaller than \(|\hat{\epsilon}-\epsilon_{0}|\). Of course, on the \(\epsilon\)-complex plane, the dynamics \(f_{\epsilon}\) and the periodic points of the UPOs are all extended from the original ones (detailed in App. II). It should be noted that if a UPO is pruned at the chosen \(\hat{\epsilon}\), it will not be included in the computation of \(\langle a\rangle_{\epsilon}\) in Eq.(16). Therefore, rigorously, the expansion Eq.(15) holds only when there is no creation or annihilation of cycles. Otherwise, it has to be taken into account as just noted.
### Perturbation While Pruning
In certain cases, some symbol sequences that correspond to non-existing orbits have to be pruned (e.g., Fig. 3(a)). To maintain the consistency and analyticity of the formulas and evaluate the derivatives of \(\langle a\rangle_{\epsilon}\) reliably, before our computation the prime cycles need to be judged for admissibility (discussed in Sect. II.2) and all the inadmissible ones at a chosen \(\hat{\epsilon}\) have to be eliminated. In one dimension, this could be done by the kneading theory, while in two dimensions the pruning front is a useful tool [24]. Nevertheless, at different points along the integration path, cycle expansions only involve those prime cycles that continue to exist up to the "target" perturbation. This judgement step does increase our computational effort, but it is clearly much more advantageous than the possibly huge amount of computation involved in a direct application of cycle expansions in the presence of continuously varying parameters.
Next, we introduce the concept of a covering map. A covering map covers the whole phase space and admits the full symbolic dynamics. That is, all the symbolic prime cycles may be matched with admissible UPOs. Even if the map is covering at a particular \(\epsilon_{0}\), on the complex-\(\epsilon\) plane it is possible that some cycles get pruned along the integration path around \(\epsilon=\epsilon_{0}\), which should be avoided. Very often, the problem could be fixed with a new choice of the perturbation center \(\epsilon_{0}^{\prime}\) or a new integration path. One good thing about the current scheme is that once the covering map with the complex parameters is found, it can be utilized throughout the whole computation. If some cycles are pruned at \(\epsilon=\hat{\epsilon}\), we simply do not include them in the calculation of \(\langle a\rangle_{\epsilon}\) when evaluating the derivatives with Eq.(16). Thus, if the pruning rule can be figured out while the parameters vary, the covering map could be conveniently used to compute dynamical averages of any smooth observables. Of course, if the symbolic dynamics remains unchanged during the parameter variation, the average \(\langle a\rangle_{\epsilon}\) becomes truly analytic and Eq.(15) holds for all parameters on the variation path. All the involved derivatives then need to be evaluated only once.
## IV Examples
Based on cycle expansions, we demonstrate the perturbation scheme when varying a parameter in chaotic systems. In view of the two types of perturbation proposed in Sect. III.1, we apply the scheme to compute the observable averages of the following perturbed models to verify its effectiveness. Before doing that, some details in numerical computation need to be noted.
### Some Notes on Numerical Computation
To integrate along the path in Eq.(16), it is necessary to introduce a feasible discretization scheme. In the following computation, the circular integration path is sampled regularly and the \(m\) lattice points are named sequentially as \(\{\epsilon_{r,i}\},i=1,2,3,...m\). Further, \(d\epsilon\) is replaced by \(\Delta\epsilon_{i}=\epsilon_{r,i+1}-\epsilon_{r,i}\) for \(i=1,2,3,...m-1\) and \(\Delta\epsilon_{m}=\epsilon_{r,1}-\epsilon_{r,m}\). Then we approximate Eq.(16) with a summation
\[\frac{d^{k}\langle a\rangle_{\epsilon}}{d\epsilon^{k}}\Big{|}_{\epsilon=\epsilon_{0}}=\frac{k!}{2\pi i}\sum_{i=1}^{m}\alpha_{i}\frac{\langle a\rangle_{\epsilon_{mid,i}}}{(\epsilon_{mid,i}-\epsilon_{0})^{k+1}}\Delta\epsilon_{i}=\frac{k!}{2\pi i}\sum_{i=1}^{m}\frac{\alpha_{i}\langle a\rangle_{\epsilon_{mid,i}}\Delta\epsilon_{i}}{(\epsilon_{mid,i}-\epsilon_{0})^{k+1}}\,, \tag{17}\]
where the point \(\epsilon_{mid,i}=\frac{\epsilon_{r,i}+\epsilon_{r,i+1}}{2}\) is the midpoint of the \(i\)-th edge and the weights \(\alpha_{i}=1+\frac{(-1)^{i}}{3}\) are set according to Simpson's rule. All the \(\langle a\rangle_{\epsilon_{mid,i}}\)'s are obtained with Eq.(13) and the corresponding complex UPOs. The radius \(r=|\epsilon-\epsilon_{0}|\) of the chosen integration path should not be too small, which could lead to large errors in the evaluation of high-order derivatives, but it usually needs to be small enough to ensure that all prime cycles exist along the integration path. As shown in Fig. 2.(d), the larger \(m\) is, the more accurate this approximation is. The averages \(\langle a\rangle_{\hat{\epsilon}}\) obtained by a direct application of Eq.(13) serve as "target values" against which the predicted averages \(\langle\hat{a}\rangle_{\hat{\epsilon}}\) are compared to assess the accuracy of our scheme. Due to the limited precision, our computation yields complex values with small imaginary parts, which are also an indicator of the calculation accuracy. In addition, the values obtained through the Monte Carlo method [28] are used as a benchmark to compare with the "target values". In some cases, the direct calculation tends to converge slowly and is not as accurate as the Monte Carlo one, but we still use the "target values" for comparison to evaluate the results of the new scheme.
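The discretization above is straightforward to implement. The following minimal sketch (not the paper's code) applies Eq. (17) and the Taylor extrapolation Eq. (15) to a test function \(g(\epsilon)=e^{\epsilon}\), whose derivatives are known exactly; in the actual scheme \(g\) would be replaced by the cycle-expansion average \(\langle a\rangle_{\epsilon}\) evaluated at each midpoint:

```python
# Minimal sketch of Eq. (17) (discretized Cauchy derivative) and Eq. (15)
# (Taylor extrapolation), tested on g(eps) = exp(eps).
from math import factorial
import numpy as np

def contour_derivative(g, eps0, k, r=0.1, m=500):
    theta = 2.0 * np.pi * np.arange(m) / m
    pts = eps0 + r * np.exp(1j * theta)                # regular lattice on the circle
    d_eps = np.roll(pts, -1) - pts                     # Delta eps_i
    mid = 0.5 * (pts + np.roll(pts, -1))               # midpoints of the edges
    alpha = 1.0 + (-1.0) ** np.arange(1, m + 1) / 3.0  # Simpson-like weights
    return factorial(k) / (2j * np.pi) * np.sum(
        alpha * g(mid) * d_eps / (mid - eps0) ** (k + 1))

g = np.exp                                             # stand-in for <a>_eps
eps0, target = 0.0, 0.5
derivs = [contour_derivative(g, eps0, k) for k in range(7)]
taylor = sum(d.real * (target - eps0) ** k / factorial(k) for k, d in enumerate(derivs))
print(derivs[:3])                # all close to 1 (derivatives of exp at 0)
print(taylor, np.exp(target))    # 6th-order prediction vs the exact value
```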
In the following examples, we will specify the chosen circular integration path and its regular sampling, and predict the observable averages with the Taylor expansion Eq.(15), where the series is kept up to the 6th order for good accuracy. Both computational accuracy and efficiency are considered when setting the truncation length \(L_{max}\) for cycle expansions. Different \(L_{max}\) are set in different examples to adapt to different convergence rates; a large \(L_{max}\) is employed when the convergence is slow. With a good Markov partition of the phase space, the symbolic dynamics could be used to mark admissible cycles [23; 29]. The multiple shooting method [5] is very effective in this case and is thus used to search for cycles. In addition, it is useful to note that in different examples \(\epsilon\) appears in different locations to indicate different types of perturbations, while the current scheme applies to all these cases.
### Perturbations Maintaining Symbolic Dynamics
If a perturbation slightly deforms the UPOs but maintains the symbolic dynamics, we may directly apply the perturbation expansion Eqs.(15) and (16) for different values of \(\hat{\epsilon}\). Furthermore, if the distribution of \(\hat{\epsilon}\) is known in this case, a further average with respect to \(\hat{\epsilon}\) could be done easily. The famous tent map is not a good choice for a demonstration here, because its natural measure is uniform and the observable averages do not depend on the position of the critical point. Instead, we use a slightly altered tent-like model to validate our method in a simple case
\[f_{\epsilon}(x)=\begin{cases}-2x^{2}+\frac{\epsilon^{2}+10\epsilon+75}{25+5 \epsilon}x&x\in[0,x_{c,\epsilon}]\\ -2x^{2}-\frac{\epsilon^{2}+10\epsilon-25}{25-5\epsilon}x+\frac{\epsilon^{2} +25}{25-5\epsilon},&x\in(x_{c,\epsilon},1]\end{cases}\,, \tag{18}\]
where \(\epsilon\) controls the degree of deformation and the critical point \(x_{c,\epsilon}=\frac{5+\epsilon}{10}\) moves as \(\epsilon\) varies while the function value is always \(1\) at this point (the perturbed tent-like maps with \(\epsilon=-0.8,0\) and \(0.8\) are shown in Fig. 2.(a)). We choose the perturbation center at \(\epsilon_{0}=0\), the number of sampling points along the integration contour is \(m=500\), with the integration radius \(r=0.1\) and the truncation length \(L_{max}=10\). According to Eqs. 15 and 17, we have
\[\langle\hat{a}\rangle_{\hat{\epsilon}}=\langle a\rangle_{\epsilon=0}+\frac{\hat{\epsilon}}{2\pi i}\sum_{i=1}^{500}\frac{\alpha_{i}\langle a\rangle_{\epsilon_{r,i}}\Delta\epsilon_{i}}{\epsilon_{r,i}^{2}}+\frac{\hat{\epsilon}^{2}}{2\pi i}\sum_{i=1}^{500}\frac{\alpha_{i}\langle a\rangle_{\epsilon_{r,i}}\Delta\epsilon_{i}}{\epsilon_{r,i}^{3}}+\frac{\hat{\epsilon}^{3}}{2\pi i}\sum_{i=1}^{500}\frac{\alpha_{i}\langle a\rangle_{\epsilon_{r,i}}\Delta\epsilon_{i}}{\epsilon_{r,i}^{4}}+...\,, \tag{19}\]
where the "target" \(\hat{\epsilon}\) actually refers to each point in the "target interval" \([-2,2]\) in this example and \(\{\epsilon_{r,i}\}\) are uniformly distributed on the integration path. Fortunately, the prediction \(\langle\hat{x}\rangle_{\epsilon}\) can match the "target values" and the Monte Carlo results very well in the effective interval \(\epsilon\in[-0.9,0.8]\) while the prediction is less accurate beyond this range. Actually, an increase in error outside the effective range is a reasonable phenomenon for perturbation approximation. For higher accuracy, Eq.(15) needs to be extended to higher orders. The comparison among different methods is made in Fig. 2.(b). The errors \(\log[\langle\hat{x}\rangle_{\epsilon}-\langle x\rangle_{\epsilon}]\) are evaluated (Fig. 2.(c)) to show the accuracy of our
predictions, which can reach \(10^{-5.4}\) for reasonable perturbations. The improvement in accuracy at the two endpoints of the effective interval is due to the fact that the systematic error is accidentally compensated by the increase of the predicted values of the observables away from \(\epsilon_{0}\). As we can see in Fig. 2.(d), the computation gains accuracy as \(m\) increases while the effective range narrows. This example illustrates the effectiveness of our scheme under a weak perturbation which keeps the symbolic dynamics invariant, and the ability to adjust the computational parameter (_i.e._, the number of sampling points \(m\)) to meet the accuracy requirements.
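For completeness, a quick numerical check (an illustrative sketch, not from the paper) that the family Eq. (18) behaves as described: the left branch evaluated at \(x_{c,\epsilon}=(5+\epsilon)/10\) equals 1 and the right branch at \(x=1\) equals 0, for real as well as complex \(\epsilon\), which is what permits the extension to the complex-\(\epsilon\) plane used above:

```python
# Quick check of the perturbed tent-like family Eq. (18) for real and complex eps.
def left_branch(x, eps):
    return -2 * x**2 + (eps**2 + 10 * eps + 75) / (25 + 5 * eps) * x

def right_branch(x, eps):
    return -2 * x**2 - (eps**2 + 10 * eps - 25) / (25 - 5 * eps) * x \
           + (eps**2 + 25) / (25 - 5 * eps)

for eps in (-0.8, 0.0, 0.8, 0.05 + 0.1j):
    xc = (5 + eps) / 10                     # critical point moves with eps
    print(eps, left_branch(xc, eps), right_branch(1.0, eps))   # always 1 and 0
```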
### Perturbation that Induces Pruning
Perturbations that maintain the symbolic dynamics of a chaotic system are rare; the annihilation or creation of UPOs almost invariably occurs. For instance, we consider a simple tent map whose peak height changes as \(\epsilon\) varies
\[f_{\epsilon}(x)=\begin{cases}(2-0.2\epsilon)x&x\in[0,1/2]\\ (2-0.2\epsilon)(1-x),&x\in(1/2,1]\end{cases}\,, \tag{20}\]
where \(\epsilon\) marks the strength of the perturbation. When \(\epsilon<0\), the peak is beyond the phase space \([0,1]\) which leaves some trajectories quickly escaping but this situation has complete symbolic dynamics and thus could be treated in a way similar to the previous example. Here the real concern is the pruning case with \(\epsilon>0\) and the perturbed tent
Figure 2: A perturbed tent-like map which maintains the complete symbolic dynamics. (a) The original map (blue solid line, \(\epsilon=0\)) and the perturbed ones (red dashed lines, \(\epsilon=-0.8\) and \(0.8\)). (b) the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) (blue line) are in good agreement with the directly calculated values \(\langle x\rangle_{\epsilon}\) (red dots) and the Monte Carlo results \(\langle x\rangle_{MC}\) (green dashed line) for given \(\epsilon\)-perturbations. (c) the errors between the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) and the directly calculated values \(\langle x\rangle_{\epsilon}\) could reach \(10^{-5.4}\) in the effective interval (roughly the interval between the black dashed lines). (d) the approximation Eq.(17) gains accuracy (red star) as the number \(m\) of sampling points increases while the applicability range (the blue dashed lines indicate the change of the interval endpoints) of our method narrows.
maps with \(\epsilon=0\) and \(1\) are shown in Fig. 3(a). As discussed in Sect. III.3, we need to evaluate in advance which UPOs should be pruned before predicting the observable averages at the chosen \(\hat{\epsilon}\). Intuitively speaking, any UPO that visits the pruning interval \((f_{\epsilon}(1/2),1]\) is inadmissible. Within the set truncation length \(L_{max}=12\), for instance, there are 747 UPOs at \(\epsilon=0\) while 504 UPOs are pruned at \(\epsilon=1\). In the practical computation, importantly, we must ensure that the set of admissible cycles along the integration path includes the set of admissible cycles at the chosen \(\hat{\epsilon}\), so that all the prime cycles at \(\hat{\epsilon}\) also exist on the path. Thus, we expand \(\langle x\rangle_{\epsilon}\) at \(\epsilon_{0}=-0.15\) and compute the derivatives along a new integration path \(|\epsilon-\epsilon_{0}|=0.1\) with \(m=500\), and the remaining steps are unchanged. As shown in Figs. 3.(b) and (c), the observable averages \(\langle\hat{x}\rangle_{\epsilon}\) in the interval \(\epsilon\in[0,3]\) match the "target values" and the Monte Carlo results with an expected and further improvable accuracy.
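One simple admissibility test, sketched below (not the paper's code), builds each candidate cycle of Eq. (20) from its symbol string via the affine branches \(x\mapsto sx\) and \(x\mapsto s(1-x)\), \(s=2-0.2\epsilon\), and keeps it only if the resulting points reproduce the assumed itinerary; at \(\epsilon=0\) every symbol string should survive (the 747 prime cycles up to length 12 quoted above), while at \(\epsilon=1\) a large fraction is pruned:

```python
# Minimal admissibility check for the perturbed tent map Eq. (20).
from itertools import product

def prime_cycles(nmax):
    reps = []
    for n in range(1, nmax + 1):
        seen = set()
        for w in product((0, 1), repeat=n):
            shifts = {w[i:] + w[:i] for i in range(n)}
            rep = min(shifts)
            if len(shifts) == n and rep not in seen:
                seen.add(rep)
                reps.append(rep)
    return reps

def admissible(word, eps):
    s = 2.0 - 0.2 * eps
    a, b = 1.0, 0.0                          # composed affine branch map x -> a x + b
    for sym in word:
        a, b = (s * a, s * b) if sym == 0 else (-s * a, s * (1.0 - b))
    x = b / (1.0 - a)                        # candidate periodic point
    for sym in word:                         # the itinerary must be self-consistent
        if not (0.0 <= x <= 1.0) or (x >= 0.5) != (sym == 1):
            return False
        x = s * x if sym == 0 else s * (1.0 - x)
    return True

cycles = prime_cycles(12)
for eps in (0.0, 1.0):
    kept = sum(admissible(w, eps) for w in cycles)
    print(eps, kept, len(cycles) - kept)     # number of admissible vs pruned cycles
```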
### Perturbing 2-dimensional Model
The previous two examples demonstrate the effectiveness of our scheme in dealing with the two types of perturbations in one-dimensional maps. We further validate our scheme in a well-known two-dimensional model, the Lozi map [30]
\[(x,y)\mapsto f^{(a,b)}(x,y)=(1-a|x|+by,x)\,, \tag{21}\]
where \(a\) and \(b\) are adjustable parameters controlling the folding and stretching in the phase space. By varying the parameters, the structure of the invariant manifolds keeps changing and so does the strange attractor. A partition of
Figure 3: The tent map with a perturbation that induces pruning. (a) The original map (blue solid line, \(\epsilon=0\)) and a perturbed one (red dashed line, \(\epsilon=1\)). (b) the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) (blue line) match the “target values” \(\langle x\rangle_{\epsilon}\) (red dots) and the Monte Carlo results \(\langle x\rangle_{MC}\) (green dotted line) with an expected and improvable accuracy in the effective interval. (c) the errors between the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) and the “target values” \(\langle x\rangle_{\epsilon}\) which could reach \(10^{-5.5}\) in the interval of applicability (roughly between the black dashed lines).
the plane by the y-axis determines the UPOs uniquely through the binary symbolic sequences. The parameters \(a,b\) on the crisis line \(a=2-b/2\)[31] are the largest values for which a strange attractor exists [24] and there is a heteroclinic tangency at the intersection of the unstable manifolds of the "1" and "0" fixed point. As \(b\) increases, some UPOs covering the phase space are gradually pruned and no new UPOs appear, so that our perturbation scheme can be applied. Here we set up a perturbed model
\[(x,y)\mapsto f_{\epsilon}(x,y)=(1-\epsilon)f^{(a_{1},b_{1})}(x,y)+\epsilon f^{ (a_{2},b_{2})}(x,y)\,, \tag{22}\]
where \((a_{1},b_{1},a_{2},b_{2})\) is set to \((1.85,0.3,1.8,0.4)\) defining a perturbation direction that we select along the crisis line. Eq.(22) allows us to control the values of both \(a\) and \(b\) in Eq.(21) with a single parameter \(\epsilon\). The attractors corresponding to \(\epsilon=0\) and \(1\) are plotted in Fig. 4. As \(\epsilon\) increases, \(b\) increases along the crisis line and \(a\) decreases accordingly. In this computation, we expand \(\langle x\rangle_{\epsilon}\) at \(\epsilon_{0}=0.1\) and compute the derivatives along the integration path \(|\epsilon-\epsilon_{0}|=0.1\) with an approximation \(m=100\), and the cycle expansion is truncated at \(L_{max}=17\). It is worth clarifying that the perturbation center and the integration path are not the only choice here. We just need to ensure that all the prime cycles at the target \(\hat{\epsilon}\) exist on the integration path. As before, for any selected target \(\hat{\epsilon}\), we should determine which prime cycles need to be pruned in advance as described in Sect. II.2. The \(\epsilon\)-values are sampled from the interval \([0.2,1.6]\) and the results are plotted in Figs. 4.
As shown in Figs. 4 and (c), the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) and the directly calculated ones \(\langle x\rangle_{\epsilon}\) match well as expected. However, they do not agree very well with the Monte Carlo results. The discrepancy originates from the slow convergence of cycle expansion at some parameter values. Of course \(\langle x\rangle_{MC}\) itself may not be so accurate. An increase of the truncation length may reduce the discrepancy. Actually, accelerating conver
Figure 4: Computation on the perturbed lozi map: (a) the attractor images of the lozi map with parameters \((1.8,0.4)\) (blue line) or \((1.85,0.3)\) (red line). (b) the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) (blue line) agree with the “target values” \(\langle x\rangle_{\epsilon}\) (red dots) with expected accuracy while slight error exists between the target values and the Monte Carlo results (green dotted line). (c) the logarithmic errors between the predicted values \(\langle\hat{x}\rangle_{\epsilon}\) and the “target values” \(\langle x\rangle_{\epsilon}\) which prove the validity of the perturbation scheme in this 2-dimensional model.
this paper while the good agreement between the predictions and the "target values" already tells the validity of our perturbation scheme with cycle expansion in 2-dimensional models.
## V Summary
The main body of work in this paper is to verify our proposed perturbation calculation for chaotic systems. While the evolution of a single trajectory of a chaotic system is difficult to track, the POT states that the global behaviour of the system can be computed with UPOs which densely cover the phase space. In view of the smooth change of the UPOs before bifurcation, the dynamical zeta function (Eq.(13)) varies analytically in a finite approximation based on these cycles so that the observable averages are amenable to simple Taylor expansion if the parameters do not change too much and the system remains chaotic.
We propose a feasible scheme combining cycle expansions, analyticity and some necessary approximations to quantify the impact of perturbations on the statistical behaviour of chaotic systems. The scheme is detailed both in the presence and in the absence of pruning upon parameter changes. Its effectiveness is demonstrated with several 1- or 2-dimensional models in Sect. IV. Of course, the computation has limitations; for example, the accuracy of our prediction depends on the convergence rate of the cycle expansion. In fact, accelerating convergence is not the focus of this paper, and a few acceleration schemes have been proposed in the literature [16; 18; 20; 22], but how to integrate them into our scheme still needs to be investigated. Furthermore, the admissibility criterion of UPOs in more complex systems also requires more discussion.
During parameter changes, chaotic attractors may lose stability to periodic motions and the chaotic trajectory becomes transient. The current scheme is still valid, but the obtained result is an average over the transient chaotic set. It is not the dynamical average obtained in a long-term simulation of the system and supported on the stable periodic orbit. Another complication is associated with the convergence rates of the Taylor and the cycle expansion. Although in numerical computation the valid range of the perturbation parameter seems quite broad, we do not have a quantitative estimate of what it should be. On the other hand, even if chaotic motion is maintained, some cycles may be on the edge of losing hyperbolicity, requiring more cycles to achieve high accuracy [15]. The good news is that if the system is nearly uniformly hyperbolic, this property persists under a small perturbation of parameters and the current algorithm should work well.
It is fair to say that the study of cycle expansions in perturbed chaotic systems has just started and is far from complete. Therefore, we hope that this paper will give a taste of a new approach and provide a new tool to cope with perturbations in chaotic systems. Of course, extending and promoting our scheme, both in theory and in applications to higher-dimensional or even real systems, requires further discussion and careful reflection.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China under Grant No. 11775035, by the BUPT Excellent Ph.D. Students Foundation, and also by the Key Program of the National Natural Science Foundation of China (No. 92067202).
## Conflict of Interest
The authors declare that they have no conflict of interest.
## Data Availability Statement
All data, models, generated or used during the study appear in the submitted article, code generated during the study are available from the corresponding author by request.
## Appendices
### Pruning Algorithms in Sect. II.2
In 1-dimensional maps, the spatial ordering of a binary symbolic future itinerary \(S^{+}=.s_{1}s_{2}s_{3}s_{4}...\), where \(s_{i}\in\{0,1\}\), is converted to a binary number \(\gamma(S^{+})\), called the future topological coordinate, by the conversion algorithm [5]
\[\gamma(S^{+})=\sum_{n=1}^{\infty}\frac{c_{n}}{2^{n}}\,, \tag{23}\]
where \(c_{n+1}=s_{n+1}+(-1)^{s_{n+1}}c_{n}\) and \(c_{1}=s_{1}\). The itinerary of the critical point \(x_{c}\), the kneading sequence, is denoted as \(S^{+}(x_{c})\) and represents the upper bound of the spatial order in the phase space, that is, any prime cycle whose spatial order exceeds that of \(S^{+}(x_{c})\) is inadmissible. Thus, an applicable admissibility criterion can be expressed as: all the realized prime cycles must satisfy the discriminant condition
\[\hat{\gamma}(p)\leq\gamma(S^{+}(x_{c}))\,, \tag{24}\]
where \(\hat{\gamma}(p)\) is the maximal topological coordinate of the prime cycle \(p\), _e.g._, \(\hat{\gamma}(\overline{011})=\max\{\gamma(011011011...),\gamma(101101101...),\gamma(110110110...)\}\). When extended to 2-dimensional cases, the algorithm also takes into account the past itinerary. The spatial ordering of a binary symbolic past itinerary \(S^{-}=...s_{-3}s_{-2}s_{-1}s_{0}.\) is converted to a binary number, the past topological coordinate [5], as
\[\delta(S^{-})=\sum_{n=1}^{\infty}\frac{d_{1-n}}{2^{n}}\,, \tag{25}\]
where \(d_{n-1}=1-s_{n}+(-1)^{s_{n}+1}d_{n}\) and \(d_{0}=s_{0}\). Thus, we can construct a symbol square \([\delta,\gamma]\) in which the admissible and the forbidden motions are separated by a 'pruning front' in the two-dimensional phase space, which is usually fractal and consists of the set of all primary turning points. All the realized prime cycles must be located in the admissible zones. Certainly, in physical or numerical experiments, only finite precision can be achieved and it is reasonable to choose an n-bit precision approximation (subshift of finite type) [32].
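For illustration, a minimal sketch of the conversion (23) and the admissibility check (24) for finite binary itineraries is given below; the truncation of the itineraries and the handling of the kneading sequence are our own simplifications, and the 2-dimensional coordinate (25) can be implemented along the same lines.

```python
def gamma(symbols):
    """Future topological coordinate, Eq. (23), for a finite truncation
    s_1 s_2 ... s_N of a binary itinerary (list of 0s and 1s)."""
    value, c_prev = 0.0, 0
    for n, s in enumerate(symbols, start=1):
        c = s if n == 1 else s + (-1) ** s * c_prev   # recursion for c_n
        value += c / 2.0 ** n
        c_prev = c
    return value

def gamma_hat(cycle, repeats=20):
    """Maximal topological coordinate of a prime cycle, e.g. gamma_hat('011'):
    maximize over all cyclic rotations, each repeated to approximate the
    infinite periodic itinerary."""
    bits = [int(b) for b in cycle]
    rotations = [bits[i:] + bits[:i] for i in range(len(bits))]
    return max(gamma(rot * repeats) for rot in rotations)

def admissible(cycle, kneading, repeats=20):
    """Admissibility criterion, Eq. (24): a prime cycle is realized only if
    its maximal coordinate does not exceed that of the kneading sequence."""
    return gamma_hat(cycle, repeats) <= gamma(kneading)
```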
In some cases, we are able to locate all the short admissible UPOs even without knowing the pruning rule, as long as a symbolic partition of the phase space is achieved. We simply try all possible sequences, which provide different initial guesses for cycle-searching algorithms such as the multiple shooting method in Sect. IV.1. Certainly, many symbol sequences do not match any admissible orbit because we have not used the pruning rule. However, this sort of search covers all possible cases and will not miss any existing cycle.
### Details of the extension of dynamics to complex domains
If Eq.(13) is analytic in \(\epsilon\), it can be extended to the complex \(\epsilon\)-domain. Thus, the observable average \(\langle a\rangle_{\epsilon}\) related to the leading eigenvalue \(s_{0,\epsilon}\) can be viewed as an analytic function in the complex \(\epsilon\)-plane, which is the core of the perturbation scheme and used to compute coefficients of the Taylor expansion. Correspondingly, the dynamics of the system \(f_{\epsilon}\) and the periodic points should all be extended to the complex domain according to the following rules:
* The formula of the dynamics remains unchanged, except that the critical points on the real axis become critical lines perpendicular to the real axis in the complex domain.
* The periodic points of all the UPOs become complex and strictly follow the dynamics of the system. We search for UPOs in the complex domain in the same way as in the real domain, but admissibility is checked using the real part of the coordinates.
* To ensure the consistency and analyticity, Eq.(13) should be modified slightly in the complex plane. The denominator of \(t_{p}\) denotes the stability of each UPO and should change analytically with \(\epsilon\) to preserve the weight of each UPO in cycle expansion, _e.g._, for the tent map with binary symbolic dynamics, \(|\Lambda_{p,\epsilon}|\) in Eq.(13) should be modified to \((-1)^{s_{1}+s_{2}+...+s_{n_{p}}}\Lambda_{p,\epsilon}\) (\(s_{i}\in\{0,1\}\)) and the sign of \(t_{p,\epsilon}\) corresponds to the topological property of cycle \(p\). |
2307.00114 | A Personalized Household Assistive Robot that Learns and Creates New
Breakfast Options through Human-Robot Interaction | For robots to assist users with household tasks, they must first learn about
the tasks from the users. Further, performing the same task every day, in the
same way, can become boring for the robot's user(s), therefore, assistive
robots must find creative ways to perform tasks in the household. In this
paper, we present a cognitive architecture for a household assistive robot that
can learn personalized breakfast options from its users and then use the
learned knowledge to set up a table for breakfast. The architecture can also
use the learned knowledge to create new breakfast options over a longer period
of time. The proposed cognitive architecture combines state-of-the-art
perceptual learning algorithms, computational implementation of cognitive
models of memory encoding and learning, a task planner for picking and placing
objects in the household, a graphical user interface (GUI) to interact with the
user and a novel approach for creating new breakfast options using the learned
knowledge. The architecture is integrated with the Fetch mobile manipulator
robot and validated, as a proof-of-concept system evaluation in a large indoor
environment with multiple kitchen objects. Experimental results demonstrate the
effectiveness of our architecture to learn personalized breakfast options from
the user and generate new breakfast options never learned by the robot. | Ali Ayub, Chrystopher L. Nehaniv, Kerstin Dautenhahn | 2023-06-30T19:57:15Z | http://arxiv.org/abs/2307.00114v1 | A Personalized Household Assistive Robot that Learns and Creates New Breakfast Options through Human-Robot Interaction
###### Abstract
For robots to assist users with household tasks, they must first learn about the tasks from the users. Further, performing the same task every day, in the same way, can become boring for the robot's user(s), therefore, assistive robots must find creative ways to perform tasks in the household. In this paper, we present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users and then use the learned knowledge to set up a table for breakfast. The architecture can also use the learned knowledge to create new breakfast options over a longer period of time. The proposed cognitive architecture combines state-of-the-art perceptual learning algorithms, computational implementation of cognitive models of memory encoding and learning, a task planner for picking and placing objects in the household, a graphical user interface (GUI) to interact with the user and a novel approach for creating new breakfast options using the learned knowledge. The architecture is integrated with the Fetch mobile manipulator robot and validated, as a proof-of-concept system evaluation in a large indoor environment with multiple kitchen objects. Experimental results demonstrate the effectiveness of our architecture to learn personalized breakfast options from the user and generate new breakfast options never learned by the robot.
## I Introduction
With a rapid increase in the aging population worldwide [1, 2], research is being conducted to develop autonomous robots that can assist older adults in their homes. These assistive robots are being designed for various roles, such as caretakers, cleaning robots, and home assistants [3, 4, 5, 6]. To create robots that can assist users with household tasks, the robots will first need to learn the preferences of the users related to the assistive tasks. For example, for the task of setting up a table for breakfast, the robot must first learn the different kinds of breakfasts that the user likes. Further, after learning the user preferences, the robot must find creative ways to perform the assistive tasks, because performing the same task every day can become boring for the user. For example, setting up the same breakfast option for the user over multiple days could become boring and the user might want to try new things. Therefore, in this paper, our goal is to develop a computational architecture that can allow a household assistive robot to learn different breakfast options from its user, use the learned knowledge to set up a table for breakfast, and also create new breakfast options for the user.
For a household assistive robot to perform tasks, it needs the semantic knowledge of the household i.e. objects (e.g. bowl, spoon) and related contexts (e.g. kitchen). The robot must also be able to reason on the semantic knowledge to perform tasks using the objects in the household. Extensive research has been conducted in recent years to create semantic reasoning architectures for performing assistive tasks in household environments [7, 8]. Most of these works use a pre-specified knowledge base to perform household tasks. However, in the real world, different users can have different preferences about the tasks that they need assistance with. Therefore, for such cases, we need to develop personalized household robots [9] that can learn about the tasks that the users need assistance with, from the users. Research has also been conducted on creativity for robots. Most research in this field has been on developing cognitive architectures for social robots to create new artistic drawings [10], or for humanoid robots to perform creative dance moves [11, 12]. However, these works are not directly applicable to household assistive robots for completing tasks in creative ways.
In this paper, we develop a cognitive architecture that allows a robot to learn different breakfast options using the objects in the household from its user, set up the learned breakfast options on a table upon request from the user, and create new breakfast options for the user over the long term. The architecture allows the robot to interact with its user using a graphical user interface (GUI) and learn different breakfast options. Inspired by the dual memory theory of mammalian memory [13], the breakfast options taught by the user, grounded in the processed sensory data of the robot, are stored in the long-term episodic memory. The architecture also keeps track of different breakfasts eaten by the user over multiple days and stores them in short-term memory (STM). The architecture can access the learned knowledge from the episodic memory and plan lower-level actuator commands for the robot to set up a table for the learned breakfasts. The architecture can further reason on the knowledge stored in the episodic memory to generate a semantic knowledge graph which can be used to create new breakfast options. The user can ask the robot to set up a previously learned breakfast or create a new breakfast option through the GUI. We integrate the proposed architecture on the Fetch mobile manipulator robot [14] and test it in a large indoor space with 9 common kitchen objects. Experimental results confirm that the robot can accurately learn different breakfast options from the user and set them up on a table. The results also show that the robot can create various new breakfast options that were never observed by the robot in its experience in the household context.
## II Related Work
Socially assistive robots have been developed in recent years that can be interactive meal partners for older adults in long-term care homes [15, 16]. These robots, however, only interact with older adults to suggest different meal options and do not physically perform the task of setting up the table for a meal. Various cognitive architectures have been developed that can use the semantic knowledge of a household environment and physically perform tasks in the household, such as fetching an object, setting up a table for breakfast, cleaning a table [17, 7, 8]. Although these robots can perform different tasks in a household environment, they perform only a pre-programmed set of tasks, and they do not adapt to the preferences of their users. For example, the mobile manipulator robot in [7] can set up a table for only one type of breakfast. This can also get boring for the users if the robot sets up the same breakfast every single day over multiple weeks. In such cases, the robot must create new breakfast options for its users.
Research for developing creative robots has been limited to creating artistic drawings or dancing robots. For example, Augello et al. [10] develop a cognitive architecture for social robots that can create a new drawing while collaborating with a human. Infantino et al. [11] and Manfre et al. [12] develop cognitive architectures to enable creativity in humanoid robots so that they can dance in pleasant manners. These works, however, are not applicable to household assistive robots that can perform household tasks in creative ways. Research has also been conducted on developing cognitive architectures that can allow social robots to stimulate creativity in children [18, 19]. These architectures, however, do not allow a robot to be creative but rather stimulate creativity in children.
With the advent of deep learning, generative adversarial networks (GANs) have been developed that can generate new data the model never learned [20, 21, 22]. These networks can learn general semantic representations about different household contexts (e.g. bedroom) from a large amount of training data, and then generate new images that were never seen by the model. One of the main limitations of these models is that they can generate many random images which do not belong to any context, such as creating random images that do not look like a bedroom context. Therefore, they cannot be applied to make assistive robots creative, as the robot would make many mistakes, which can hurt the trust of its user towards the robot [23]. Further, GANs also require a large amount of training data to learn, which might be infeasible in real-world situations where the robot learns from the supervision provided by its users. Real users (especially older adults) would be unwilling to provide hundreds and thousands of examples of a single task to teach the robot. In this paper, we use Gaussian processes [24] as generative models to create new breakfast options, as these models have been shown to work with limited data [25].
## III Contextual Memory System for a Creative Robot
Figure 1 shows our cognitive architecture for a creative breakfast setting robot. Different computational modules in the architecture were integrated using ROS on the Fetch mobile manipulator robot. Note that all the modules are stand-alone, therefore they can be reused as blocks in different frameworks. These modules are described below:
### _Robot's Sensors_
The Fetch mobile manipulator robot was used for this project [14]. Fetch consists of a mobile base and a 7 DOF arm. The robot also contains an RGB camera, a depth sensor and a Lidar sensor. These sensors can be used for 3D perception, slam mapping, and obstacle detection in the robot's environment. In our architecture, the mobile base, the 7 DOF arm, and all three sensors are used for perception, manipulation, mapping, and navigation in an indoor environment.
Fig. 1: Our complete architecture for learning and setting up breakfast options in a household. Sensory inputs from the Fetch robot are processed through the perceptual system and encoded into latent variables, which are stored in the episodic memory during the learning phase, and stored in STM to track breakfasts eaten by the user over multiple days. The breakfast creation module can use the data in the episodic memory to create new breakfast options. The task planner can plan lower-level commands for the robot’s actuators to set up a table for breakfast. The wide, dark blue line indicates that all three outputs from the perceptual system are passed on to the task planner. Circled numbers show the flow of information in the architecture, with pink-colored numbers for the learning process and yellow-colored numbers for the breakfast setup. Processes that run in parallel are tagged with the same number.
### _Perceptual System_
The perceptual system of the architecture takes an RGB image and point cloud data as input from the robot's sensors, and parses this data into separate objects. We use the YOLOv2 object detector [26] for the detection of objects in the RGB images. The 2D bounding boxes from YOLO are converted into 3D coordinates using the point cloud data. We collected \(\sim\)5000 images of 9 household objects used in our experiments and trained the YOLO object detector on the collected data. The perceptual system, thus, parses the input images and outputs the object categories, 2D bounding boxes and 3D coordinates for all the objects in the image.
### _Memory Encoding_
The data obtained from the robot's sensors or the perceptual system must be encoded into a low-dimensional feature space (also called a _latent variable_) before it can be used to reason about the entities in the world (e.g. objects in the household). In this paper, we encode the sensory inputs processed by the perceptual system using _conceptual spaces_[27, 28]. In cognitive science, a Conceptual Space is a metric space in which entities are characterized by quality dimensions. Conceptual spaces have mostly been used in cognitive science for category learning, where the dimensions of a latent variable (LV) in a conceptual space represent the category features. In this paper, we use a conceptual space LV to represent different breakfast setups (such as {cereal, milk, bowl, spoon} making up a breakfast setup), where the features of the LV represent the collection of objects in the breakfast setup represented by the LV. Further, as each breakfast setup contains food items such as cereal and milk, and utensils such as spoon and bowl, we also encode this information about the objects in another LV. We term this LV a food-context LV to differentiate it from the object LV for the breakfast options. This information can help the architecture generate creative breakfast setups (Section III-F).
### _Short-Term Memory (STM)_
Once an input image of a breakfast setup is encoded into a latent variable, it is stored in the short-term memory (STM) of the architecture. The size (\(k\)) of STM is set as a hyperparameter to allow the architecture to store encoded images for a certain number of days. Once STM is full, data stored from earlier days is removed to make room for more data.
STM tracks the breakfasts eaten by the user over multiple days. Using the data stored in STM, the architecture can suggest breakfast options that the user has not eaten in previous days. Formally, consider \(n\) breakfast options stored in the episodic memory as LVs \(X=\{x_{1},x_{2},...,x_{n}\}\). Over the course of \(k\) days (the STM hyperparameter), the user eats different breakfast options, where \(M=\{m_{1},m_{2},...,m_{n}\}\) represents the number of times each of the \(n\) breakfast options was eaten by the user. From this set, the robot can find the breakfast option that has been eaten least often by the user over the \(k\) days as \(\arg\min M\) and set it up on the table. If multiple breakfast options were eaten the least number of times, the robot randomly chooses one of them.
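A minimal sketch of this selection step is shown below; the option names and the eating history are illustrative and not taken from the experiments.

```python
import random
from collections import Counter

def choose_breakfast(learned_options, stm_history):
    """Pick one of the least-eaten learned breakfast options, i.e. argmin over
    the eat counts m_i tracked in short-term memory; ties are broken randomly."""
    counts = Counter(stm_history)
    eat_counts = {name: counts.get(name, 0) for name in learned_options}
    fewest = min(eat_counts.values())
    least_eaten = [name for name, m in eat_counts.items() if m == fewest]
    return random.choice(least_eaten)

# Illustrative usage with made-up option names:
options = ["cereal_bowl", "fruit_plate", "milk_cup"]
history = ["cereal_bowl", "milk_cup", "cereal_bowl", "milk_cup", "cereal_bowl"]
print(choose_breakfast(options, history))  # -> "fruit_plate" (never eaten)
```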
### _Episodic Memory_
The episodic memory stores different breakfast options taught by the robot's user. As different users can have different breakfast preferences, it is not possible to store a general set of breakfast options. Therefore, the robot must learn about these preferences by interacting with the user.
In our architecture, a user can initiate a learning session using a GUI (details in Section III-H) and provide examples of different breakfast setups. The robot captures the breakfast setups as images using its sensors. The perceptual system (Section III-B) processes the training images which are then encoded into latent variables (Section III-C). The encoded LVs (both object LVs and food-context LVs) are then stored in the episodic memory, which can be accessed later to set up a table for breakfast.
### _Creating New Breakfast Options_
The user can also ask the robot to surprise them (see Figure 3) by creating a new breakfast option that the user never taught the robot i.e. such a breakfast option does not exist in the episodic memory. We define a creative breakfast as a new combination of food and utensil items that were never directly learned by the robot from the user. To achieve this, we use the object LVs stored in the episodic memory to find the mean \(\mu\) and covariance matrix \(\Sigma\) for a Gaussian distribution in a Gaussian process. We generate a pseudo-LV1 after sampling the Gaussian distribution. However, the pseudo-LV can be the same as one of the object LVs stored in the episodic memory i.e. it is not a new breakfast option. Therefore, if the pseudo-LV is the same as any of the object LVs in the episodic memory, we continue to resample the Gaussian distribution until we get a pseudo-LV that is different from the object LVs in the episodic memory.
Footnote 1: The sampled LVs are termed as pseudo-LVs because they are not real LVs learned from the user.
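A minimal sketch of this sampling loop is given below; it assumes binary object LVs and a simple thresholding step to map a Gaussian sample back to a set of objects, which is our own assumption since the discretization is not spelled out above.

```python
import numpy as np

def create_pseudo_lv(object_lvs, max_tries=1000, seed=None):
    """Fit a Gaussian (mean and covariance) to the stored object LVs, then
    sample candidate pseudo-LVs and resample until one differs from every
    LV already stored in the episodic memory."""
    rng = np.random.default_rng(seed)
    X = np.asarray(object_lvs, dtype=float)          # shape: (n_options, n_objects)
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    known = {tuple(row) for row in X.astype(int)}
    for _ in range(max_tries):
        sample = rng.multivariate_normal(mu, sigma)
        candidate = tuple((sample > 0.5).astype(int))  # objects present/absent
        if candidate not in known:
            return np.array(candidate)
    raise RuntimeError("could not generate a novel pseudo-LV")
```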
The new pseudo-LV, however, can be an invalid breakfast setup. For example, {cereal, milk, spoon} is an invalid breakfast setup as it does not contain any container (such as a bowl) to pour cereal and milk. To fix such cases, we find the conditional relationships among various objects that are used in different breakfast setups stored as LVs in the episodic memory. Using these conditional relationships we infer logic-based rules to generate a knowledge graph, which can be used to fix invalid breakfast setups.
We use the food-context LVs in the episodic memory to determine the dependency of different food items on a combination of other food items and utensils. To achieve this, let's consider \(n\) LVs in the episodic memory, and consider a food object represented by dimension \(i\) in the LVs. For each \(i\)th food item, we consider all the breakfast setups (say \(r\) LVs) where this food item exists. Among the \(r\) LVs, we first calculate the probability \(\mathrm{P}(i|no\_utensil)\), i.e. if the \(i\)th food item does not require a utensil to be present in a breakfast setup. If \(\mathrm{P}(i|no\_utensil)>0\), there is at least one breakfast
setup where the food item is not accompanied by a utensil, therefore the food item can be a part of a breakfast setup without a utensil. Otherwise, if \(\mathrm{P}(i|no\_utensil)=0\), the food item requires at least one utensil. In this case, we go through all \(r\) LVs to find different combinations of utensils that the food item depends on, as a food item could depend on multiple utensils, e.g. cereal would depend on a spoon and a bowl. For this, let's consider that there are a total of \(m\) utensils present in the \(r\) LVs. For each \(j\)th utensil present in the \(r\) LVs, we find the conditional probability \(\mathrm{P}(j|l)\) with all the other \(l=\{1,...,m\}\) utensils in the \(r\) LVs. \(\mathrm{P}(j|l)\) represents the probability that \(j\)th utensil exists given that \(l\)th utensil exists in the same LV. \(\mathrm{P}(j|l)\) is determined as follows:
\[\mathrm{P}(j|l)=\frac{\sum_{q=1}^{r}z_{q}^{j}}{\sum_{q=1}^{r}z_{q}^{l}}\text{ such that }z_{q}^{j}z_{q}^{l}>0, \tag{1}\]
where \(z_{q}^{l}\) represents the value of \(l\)th utensil item in the \(q\)th food-context LV. If \(\mathrm{P}(j|l)=1\), utensil \(j\) must exist when utensil \(l\) exists in an LV accompanied by the \(i\)th food item. As a result, we get an \(m\times m\) matrix representing the dependency of utensils on other utensils that accompany the \(i\)th food item in \(r\) LVs. Using this dependency matrix, we find all the utensil items that are independent of other utensils or that are interdependent with other utensils i.e. for two utensils \(j\) and \(l\), \(\mathrm{P}(j|l)\)=1 and \(\mathrm{P}(l|j)\)=1. The resulting set represents combinations of different utensils that the \(i\)th food item depends on. These sets are then used to generate a logic-based knowledge graph based on \(is\_required\) relationships (see Figure 2 for an example). Note that the food item requires only one of the dependent utensil combinations to be present in the breakfast setup, not all the combinations. For example, milk must either be accompanied by a cup for drinking or {bowl, spoon} in a cereal breakfast. Figure 2 shows a simple example of generating a logic-based knowledge graph from the learned breakfast options in memory.
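A minimal sketch of how the conditional probabilities in Eq. (1) and the resulting \(m\times m\) dependency matrix could be computed from binary food-context LVs is given below; the indexing conventions are ours.

```python
import numpy as np

def utensil_dependency_matrix(food_context_lvs, food_idx, utensil_idx):
    """P[a, b] approximates P(j|l) of Eq. (1) for utensils j = utensil_idx[a]
    and l = utensil_idx[b], computed over the r LVs that contain the food item
    food_idx; entries equal to 1 mark utensils that must co-occur."""
    X = np.asarray(food_context_lvs, dtype=int)
    R = X[X[:, food_idx] > 0]                  # the r LVs containing the food item
    m = len(utensil_idx)
    P = np.zeros((m, m))
    for a, j in enumerate(utensil_idx):
        for b, l in enumerate(utensil_idx):
            denom = np.sum(R[:, l] > 0)        # LVs in which utensil l appears
            both = np.sum((R[:, j] > 0) & (R[:, l] > 0))
            P[a, b] = both / denom if denom > 0 else 0.0
    # Utensils j, l with P(j|l) = P(l|j) = 1 are interdependent and enter the
    # knowledge graph together as an is_required combination for the food item.
    return P
```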
After finding the dependencies on utensils, the same process is repeated to determine if a food item depends on other food items. After this process, we can find a combination of objects (foods or utensils) that each food item in the LVs depends on for valid breakfasts. We do not find a separate list of dependent objects for utensils as these items are only needed to accompany the food items in breakfast setups. Note that the knowledge graph is generated based on the breakfast options taught by the user, so the dependency rules encoded in the graph are personalized to the user. Experimental results in Section IV confirm this.
Using the logic-based knowledge graph, we can determine if a feature dimension in a pseudo-LV satisfies its dependency on other items. If a feature dimension is not accompanied by its dependent items, we manually add the dependent items in the pseudo-LV (see Figure 2 for an example). Finally, after the dependency check, the pseudo-LV is decoded using the inverse of the procedure in Section III-C to get the objects in the new breakfast option. The object names/labels are then passed on to the task planner.
### _Task Planner_
The task planner gets the decoded breakfast option from the creativity module (Section III-F), and plans lower-level actions to be taken by the robot to set up a table for breakfast. The task planner passes lower-level commands to the mobile base and the arm of the robot to move and fetch objects from the kitchen to the dining table.
### _Graphical User Interface_
A simple graphical user interface (GUI) is integrated with the architecture to allow the robot to communicate with the
Fig. 3: The graphical user interface (GUI) used to interact with the robot. The head camera output of the Fetch robot with objects detected through YOLO is in the window to the right. The window to the left shows three buttons that can be used to teach the robot a new breakfast option, ask the robot to bring a previously learned breakfast option and ask the robot to create a new breakfast option.
Fig. 2: An example of logic-based knowledge graph generation using the three breakfast options learned from a user. Digits next to visualizations correspond to the rule generation process for each food object. For _Apple_ and _Banana_, we show the final probabilities only. Note that the logic-based rules are learned solely from the data shown by the user to the robot, therefore, some of the rules could be imperfect or unconventional, such as the requirement of _Apple_ and _Banana_ with each other. (Bottom) An example of fixing the generated breakfast option using the logic-based rules from the knowledge graph.
user. The GUI allows the user to initiate a teaching session with the robot where the user can show the robot different breakfast setups on a table. The user physically places the set of objects in a breakfast setup on the table in front of the robot's camera (see Figure 3). The user can provide the name for the breakfast option by typing it in a textbox. The robot captures the breakfast data using the RGB camera and the depth sensor and then encodes and stores the breakfast option in the episodic memory (Section III-E).
The GUI also allows the user to ask the robot to set up a table for breakfast. The user can type in the name of the breakfast that they want, ask the robot to set up the table for breakfast without typing any particular breakfast name or ask the robot to surprise them by creating a new breakfast option. After getting the input from the user, the architecture can use a combination of all the modules to allow the robot (Section III-G) to set up the table for breakfast.
## IV Experiments
In this section, we first describe the experimental setup and the implementation details. We then describe two experiments to evaluate the performance of our architecture for learning different breakfast options from the user, setting them up on the table, and creating new breakfast options. For all the experiments reported in this section, the experimenters take the role of a user.
### _Experimental Setup_
We use the Fetch robot [14] and its associated ROS packages for all the experiments. We performed experiments in a large indoor space where we set up the kitchen and the dining area with realistic household objects. The indoor space is mapped using the Lidar sensor on the Fetch robot and an existing SLAM algorithm available from Fetch Robotics. Navigation in the environment was achieved using ROS packages provided by Fetch Robotics. Common household items/objects belonging to 9 categories (see Table II for a list of graspable objects) are placed on three tables in the kitchen. Out of the 9 objects, 3 (_Banana_, _Bowl_, and _Spoon_) were not graspable by the robot. Therefore, for breakfast setups that required these 3 objects, the user had to fetch the objects themselves. Manipulation of objects (pick and place) was achieved using ROS packages for gripper, arm, and torso control provided by Fetch Robotics. The RGB camera and depth sensors on the Fetch robot were used for visual sensing of the environment. RGB images from the camera are passed through the perception module of the architecture which uses YOLOv2 [26] to detect and localize objects in the images (see Section III-B for details).
For all the experiments (unless mentioned otherwise), the user (experimenter) first teaches the Fetch robot different breakfast options on the dining table using the GUI (Section III-H). The robot learns the breakfast options and stores them in episodic memory. As the user would eat breakfast once every day, we can 'simulate' multiple days by asking the robot to set up a table for breakfast multiple times a day. For the short-term memory (STM) in the architecture, we set the hyperparameter \(k\) to 5 days. Examples of teaching breakfast options to the robot and testing the robot on setting up known and new breakfast options are shown in the supplementary video.
### _Experiment 1: Setting Up Known Breakfast Options_
In this experiment, we tested if the robot can learn breakfast options from the user and then set up the learned breakfast options when asked by the user. We taught the robot 7 different breakfast options as shown in Table I. Figure 4 shows examples of 2 out of 7 breakfast options learned by the robot. The robot was then asked to set up a table for breakfast 15 times. In each of the 15 turns (except for the 5th, 10th, and 15th run), the user typed the name of the breakfast in the GUI to ask the robot to set up the table for a particular breakfast. We randomly chose a breakfast option to be typed in each turn. For the 5th, 10th, and 15th turn, the user did not type any breakfast name, therefore the robot used the data stored in STM to choose the least eaten breakfast to set up. For each breakfast setup, the robot moved all the graspable objects in the breakfast setup from the kitchen to the dining table. On average, it took the robot \(\sim\)4 minutes to set up a breakfast option on the table.
Table I shows the results of setting up 7 breakfast options learned by the robot. All the breakfast options were learned correctly by the robot, and there was no learning error. The robot was able to correctly set up breakfast options in 10 out of 15 runs. As each breakfast setup required multiple objects, failing to fetch even a single object would result
| Breakfast Options | Accuracy | STM | LE |
| --- | --- | --- | --- |
| milk, cup | 2/3 | 1 | 0 |
| milk, cup, banana | 3/3 | 0 | 0 |
| milk, cereal, spoon, bowl | 1/2 | 1 | 0 |
| banana, milk, cereal, spoon, bowl | 1/1 | 0 | 0 |
| honey, milk, cereal, spoon, bowl | 1/2 | 0 | 0 |
| honey, milk, cup | 1/2 | 1 | 0 |
| apple, orange, banana | 1/2 | 0 | 0 |

TABLE I: Results of learning and setting up 7 breakfast options by the robot over 15 runs. Accuracy represents the ratio of the number of times a breakfast was correctly set up and the number of times it was chosen. STM shows the number of times a breakfast option was chosen by tracking data in short-term memory, without explicitly being asked by a user. LE represents the learning error.
Fig. 4: Examples of two different breakfast options learned and set up by the Fetch robot.
in an incorrect breakfast setup. Most of the breakfast setup failures happened because of a single object in the breakfast setup (more details below). There were three runs when the robot was asked to set up a breakfast option using STM. The robot correctly chose one of the least-eaten breakfast options in all three runs.
Table II shows the results of the experiment in 15 runs, with three different kinds of errors for each graspable object used in the 7 breakfast setups. The most common error type was the manipulation error (ME), which occurred for two reasons: (1) the motion planner could not find a path to reach the goal, (2) the perceptual system provided an incorrect pose estimate to pick the object (perceptual error (PE)). There were no object detection failures during the experiment because even if the robot failed to detect an object, it moved its head up and down until it found the correct object. Therefore, all of the perceptual errors happened during the 3D pose estimation of objects. Finally, there was only one grasping error, for _Orange_. The robot's arm was not low enough, and because the orange is round, it was not captured by the gripper of the robot. Objects with sharper edges, such as _Milk_, did not face this issue. These results confirm that our architecture can allow a robot to learn most breakfast options from the user and set up the learned breakfast options on a table.
### _Experiment 2: Creating New Breakfast Options_
#### Iv-C1 Experiment with a Robot
In this experiment, we tested the ability of our architecture to allow a robot to create new breakfast options that were never learned by the robot. The experimental setup was the same as in experiment 1. The robot was initialized with the same 7 breakfast options as in experiment 1. After that, we tested the robot 5 times to create and set up new breakfast options.
Table III shows the five new breakfast options created by the robot. All five breakfast setups were valid setups because each food object was accompanied by the correct set of utensils. Two out of five breakfast options generated by the Gaussian process were invalid. For example, breakfast option 2 had bowl missing, and breakfast option 5 had cup missing. However, these objects were added by the architecture using the logic-based rules encoded in the knowledge graph for the food items (Section III-F). Finally, note that there was no learning error (LE) encountered for these breakfast options because they were not learned from any example provided by the user to the robot. The values for perceptual and manipulation errors were consistent with the previous experiments.
#### Iv-C2 Simulated Experiments
To further evaluate the effectiveness of our breakfast creativity algorithm, we tested the architecture in simulation to create 50 breakfast options. Note that in this case the architecture was only asked to suggest the breakfast option and the robot did not physically set up the generated breakfast option on the table. Out of the 50 breakfast options, 27 were the same setups as the ones stored in the episodic memory and were thus discarded. Out of the other 23 options, 7 were invalid options generated by the Gaussian process. However, these invalid options were corrected using the logic-based knowledge graph for the food items. Overall, out of the 23 new breakfast options, 6 were duplicates, so there were 17 distinct new options. These results confirm that our architecture can allow the robot to create and set up new breakfast options that were not learned by the robot. Further, our architecture was able to create more than double the breakfast options (17) it had learned by interacting with the user (7). However, the robot cannot generate a significantly large number of distinct breakfast options when learning from a few examples.
We further test our approach on a larger scale with a total of 25 objects and an initial set of 20 breakfasts, and ask the creativity module to generate 200 breakfast options. Out of the 200 generated breakfasts, 65 were the same as the ones stored in the episodic memory and were therefore discarded. Of the remaining 135 breakfasts, 113 were invalid options, but they were corrected by the logic-based knowledge graph for the food items. Finally, out of the 135 new breakfast options, 36 were duplicates. Therefore, the architecture was able to generate 99 distinct new breakfast options from only 20 initial breakfast setups. These results confirm the scalability of our approach to larger datasets learned over the long term.
Finally, we tested our approach with some unconventional breakfast setups. For example, we added a breakfast setup {cereal, bowl}, as some users might eat cereal without any milk. Other examples of unconventional setups were {peanut_butter, bowl, spoon}, {yogurt, spoon}, etc. For this experiment, we had the same total of 25 objects as in the previous experiment, and 12 breakfast setups of which 6 were unconventional. The creativity module generated 50 breakfast options, with 25 out of 50 being distinct new options. Interestingly, we noticed that the creative breakfast setups followed the dependency of food
| Object | PE | ME | GE |
| --- | --- | --- | --- |
| milk | 1/12 | 1/12 | 0 |
| cup | 2/8 | 2/8 | 0 |
| cereal | 0 | 1/5 | 0 |
| apple | 0 | 0 | 0 |
| orange | 1/2 | 0 | 1/2 |
| honey | 1/4 | 2/4 | 0 |

TABLE II: Results of setting up 7 breakfast options in 15 runs in terms of perceptual errors (PE), manipulation errors (ME), and grasping errors (GE) for graspable objects. Each column represents the ratio between the number of errors for manipulation of an object and the total number of times the object occurred in 15 runs.
| Breakfast Options | Valid? | LE | PE | ME |
| --- | --- | --- | --- | --- |
| milk, banana, honey, cup | Yes | 0 | 1/1 | 1/1 |
| apple, milk, cereal, spoon, bowl | Yes | 0 | 0 | 1/1 |
| apple, honey, milk, cereal, spoon, bowl | Yes | 0 | 1/1 | 1/1 |
| milk, cereal, bowl, cup, spoon | Yes | 0 | 0 | 0 |
| apple, milk, banana, orange, cup | Yes | 0 | 0 | 0 |

TABLE III: Five breakfast options created by the robot that were not taught by the user. LE, PE, and ME represent the learning error, perceptual error, and manipulation error, respectively.
2309.15923 | $p$-form electrodynamics as edge modes of a topological field theory | $p$-form electrodynamics in $d\geq 2$ dimensions is shown to emerge as the
edge modes of a topological field theory with a precise set of boundary
conditions, through the Hamiltonian reduction of its action. Electric and
magnetic charges correspond to Noether ones in the topological field theory.
For chiral $p$-forms, the topological action can be consistently truncated, so
that the Henneaux-Teitelboim action is recovered from a pure Chern-Simons
theory, with a manifestly covariant stress-energy tensor at the boundary.
Topologically massive $p$-form electrodynamics as well as axion couplings are
also shown to be described through this mechanism by considering suitable
(self-)interaction terms in the topological theory. | Oscar Fuentealba, Ricardo Troncoso | 2023-09-27T18:01:47Z | http://arxiv.org/abs/2309.15923v2 | # \(p\)-form electrodynamics as edge modes of a topological field theory
###### Abstract
\(p\)-form electrodynamics in \(d\geq 2\) dimensions is shown to emerge as the edge modes of a topological field theory with a precise set of boundary conditions, through the Hamiltonian reduction of its action. Electric and magnetic charges correspond to Noether ones in the topological field theory. For chiral \(p\)-forms, the topological action can be consistently truncated, so that the Henneaux-Teitelboim action is recovered from a pure Chern-Simons theory, with a manifestly covariant stress-energy tensor at the boundary. Topologically massive \(p\)-form electrodynamics as well as axion couplings are also shown to be described through this mechanism by considering suitable (self-)interaction terms in the topological theory.
## 1 Introduction
The massless Klein-Gordon field, Maxwell electrodynamics and the Kalb-Ramond field are well-known to be described in a unified way through \(p\)-form electrodynamics with \(p=0,1,2\), respectively (see e.g., [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]). \(p\)-form electrodynamics also plays a very relevant role in the description of supergravity and string theory in diverse dimensions [15; 16; 17; 18].
One of the main aims of our work is to show that \(p\)-form electrodynamics in \(d\geq 2\) dimensions emerges as the edge modes of a topological field theory of BF-type once endowed with a very precise set of boundary conditions, defined in Section 2, which requires the presence of a metric only at the boundary. The form of the stress-energy tensor at the boundary naturally suggests the link with \(p\)-form electrodynamics, which is made explicit through the Hamiltonian reduction of the topological action in Section 3. It is worth stressing that electric and magnetic charges can then be seen as Noetherian ones in the BF theory. The case of even \(p\)-forms in \(d=2p+2\) spacetime dimensions is discussed in Section 4, where it is shown that the topological action can be consistently truncated to a pure Chern-Simons theory devoid of boundary terms, whose Hamiltonian reduction precisely yields the Henneaux-Teitelboim action for chiral \(p\)-forms. We conclude in Section 5, where (self-)interactions of the topological field theory are considered, which allow one to reproduce topologically massive \(p\)-form electrodynamics, extensions of it, as well as axion couplings as edge modes for the same set of boundary conditions.
## 2 Topological field theory of BF-type
Let us consider the action principle of an Abelian BF theory (see e.g., [19; 20; 21; 22; 23; 24; 25; 26; 27; 28]) on a manifold \(\Omega\) of \(d+1\) dimensions, given by
\[I=\int\limits_{\Omega}B\wedge dC+\mathfrak{B}\,, \tag{1}\]
where \(B\) and \(C\) correspond to \(p+1\) and \(\left(d-p-1\right)\)-forms, respectively. The boundary term \(\mathfrak{B}\) is defined on \(M=\partial\Omega\), being generically required in order to have a well-defined variational principle once the boundary conditions are specified. Its precise form can be obtained as follows. The variation of (1) reads
\[\delta I=\int\limits_{\Omega}\left(\delta B\wedge dC+(-1)^{p}dB \wedge\delta C\right)-(-1)^{p}\int\limits_{M}B\wedge\delta C+\delta\mathfrak{B }\,, \tag{2}\]
so that the bulk terms vanish when the field equations, \(dB=0\) and \(dC=0\), hold. Thus, the action attains an extremum provided that the variation of the boundary term is given by
\[\delta\mathfrak{B}=(-1)^{p}\int\limits_{M}B\wedge\delta C\,, \tag{3}\]
which requires a precise choice of boundary conditions to be integrated.
### Boundary conditions
In order to specify our boundary conditions we assume the existence of a metric structure at the boundary, so that \(g_{\mu\nu}\) is defined only at \(M=\partial\Omega\). The boundary conditions are then defined by choosing the \(C\)-field to be the Hodge dual of the \(B\)-field at the boundary, i.e.,1
Footnote 1: Note that \(M\) is also assumed to be orientable, so that the Hodge dual of an \(r\)-form \(\omega\) is defined as \(*\omega=\frac{\sqrt{-g}}{r!(d-r)!}\omega^{\mu_{1}\cdots\mu_{r}}\epsilon_{\mu_{1}\cdots\mu_{r}\nu_{r+1}\cdots\nu_{d}}dx^{\nu_{r+1}}\wedge\cdots\wedge dx^{\nu_{d}}\).
\[(C-*B)\,\Big{|}_{M=\partial\Omega}=0\,. \tag{4}\]
The boundary condition (4) then allows to integrate the variation of the boundary term \(\delta\mathfrak{B}\) in (3), so that it is given by
\[\mathfrak{B}=\frac{(-1)^{p}}{2}\int\limits_{M}B\wedge*B\,. \tag{5}\]
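Indeed, since the boundary condition (4) must be preserved by the variations and the metric at \(M\) is a fixed background, \(\delta C=*\,\delta B\) on \(M\); using also \(\alpha\wedge*\beta=\beta\wedge*\alpha\) for forms of equal degree, one can check that

\[(-1)^{p}\int\limits_{M}B\wedge\delta C=(-1)^{p}\int\limits_{M}B\wedge*\delta B=(-1)^{p}\int\limits_{M}\delta B\wedge*B=\delta\left(\frac{(-1)^{p}}{2}\int\limits_{M}B\wedge*B\right)\,,\]

in agreement with (5).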
In sum, the action principle
\[I=\int\limits_{\Omega}B\wedge dC+\frac{(-1)^{p}}{2}\int\limits_{M}B\wedge*B\,. \tag{6}\]
becomes well-defined for our choice of boundary conditions in (4).
### Stress-energy tensor at the boundary
Note that the topological field theory under discussion has no local notion of energy. Nevertheless, since the boundary conditions incorporate a metric, it is possible to define a stress-energy tensor at the boundary \(M=\partial\Omega\) along the lines of Brown and York [29], given by
\[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta I}{\delta g^{\mu\nu}}\,, \tag{7}\]
which for the action in (6) reads
\[T_{\mu\nu}=\frac{(-1)^{(p+1)(d-p)}}{(p+1)!}\left((p+1)\,B_{\mu\mu_{1}\cdots\mu _{p}}{B_{\nu}}^{\mu_{1}\cdots\mu_{p}}-\frac{1}{2}g_{\mu\nu}B_{\mu_{1}\cdots\mu _{p+1}}B^{\mu_{1}\cdots\mu_{p+1}}\right)\,. \tag{8}\]
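For illustration, specializing (8) to \(p=1\), for which \((-1)^{(p+1)(d-p)}=1\) and \((p+1)!=2\), gives the familiar Maxwell form

\[T_{\mu\nu}\Big{|}_{p=1}=B_{\mu\alpha}B_{\nu}{}^{\alpha}-\frac{1}{4}\,g_{\mu\nu}B_{\alpha\beta}B^{\alpha\beta}\,,\]

which is traceless precisely in \(d=4\).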
The explicit form of the stress-energy tensor in (8) then naturally suggests a link with \(p\)-form electrodynamics at the boundary, which is discussed in the next section.
## 3 \(p\)-form electrodynamics from Hamiltonian reduction
The Hamiltonian reduction of the topological field theory described by (6) can be readily performed due to its simplicity. Assuming the topology of \(\Omega\) to be of the form \(\mathbb{R}\times\Sigma\), indices can be split in space and time so that the action reads
\[I=\frac{1}{\alpha}\int\limits_{\Omega}dtd^{d}x\Big{[}\,(-1)^{\,p}\epsilon^{0i _{1}\cdots i_{d}}\dot{B}_{i_{1}\cdots i_{p+1}}C_{i_{p+2}\cdots i_{d}}+B_{0i_{1 }\cdots i_{p}}G_{C}^{i_{1}\cdots i_{p}}+C_{0i_{1}\cdots i_{d-p-2}}G_{B}^{i_{1} \cdots i_{d-p-2}}\Big{]}+\tilde{\mathfrak{B}}\,, \tag{9}\]
with \(\alpha=(p+1)!(d-p-1)!\), and the boundary term is given by
\[\tilde{\mathfrak{B}}=\mathfrak{B}+\frac{(-1)^{p(d-p-1)}}{(p+1)!(d-p-2)!}\int \limits_{\Omega}dtd^{d}x\epsilon^{0i_{1}\cdots i_{d}}\partial_{i_{d}}\Big{(}C_ {0i_{1}\cdots i_{d-p-2}}B_{i_{d-p-1}\cdots i_{d-1}}\Big{)}\,. \tag{10}\]
Note that \(B_{0i_{1}\cdots i_{p}}\) and \(C_{0i_{1}\cdots i_{d-p-2}}\) stand for Lagrange multipliers, and hence the corresponding constraints fulfilling
\[G_{C}^{i_{1}\cdots i_{p}} :=(p+1)\epsilon^{0i_{1}\cdots i_{d}}\partial_{i_{p+1}}C_{i_{p+2} \cdots i_{d}}=0\,, \tag{11}\] \[G_{B}^{i_{1}\cdots i_{d-p-2}} :=-\,(-1)^{p(d-p-1)}\,(d-p-1)\epsilon^{0i_{1}\cdots i_{d}}\partial _{i_{d}}B_{i_{d-p-1}\cdots i_{d-1}}=0\,, \tag{12}\]
are locally solved by
\[B_{i_{1}\cdots i_{p+1}}\,=\,(p+1)\partial_{[i_{1}}A_{\cdots i_{p+1}]}\quad, \quad C_{i_{1}\cdots i_{d-p-1}}=\partial_{[i_{1}}\tilde{A}_{\cdots i_{d-p-1}]}\,. \tag{13}\]
Thus, replacing the solution of the constraints in (13) back into (10), after suitable integration by parts, the full action reduces to a boundary term that can be written as
\[I=-\frac{(-1)^{p}(p+1)}{\alpha}\int\limits_{M}dtd^{d-1}x\epsilon^{\mu_{1} \cdots\mu_{d}}\partial_{\mu_{1}}A_{\mu_{2}\cdots\mu_{p+1}}C_{\mu_{p+2}\cdots \mu_{d}}+\mathfrak{B}\,. \tag{14}\]
It is then useful to fix the gauge according to
\[B_{0i_{1}\cdots i_{p}}=(p+1)\partial_{[0}A_{i_{1}\cdots i_{p}]}\,, \tag{15}\]
so that \(B=dA\). Besides, the boundary condition (4) allows to trade the \(C\)-field by the Hodge dual of \(B\), and hence, the action (14) becomes that of \(p\)-form electrodynamics, given by
\[I[A]=-\frac{(-1)^{p}}{2}\int\limits_{M}B\wedge*B\,, \tag{16}\]
where the dynamical field turns out to be the \(p\)-form \(A\), whose field strength is \(B=dA\).
One interesting direct consequence of the equivalence of the topological action (6) with that of \(p\)-form electrodynamics at the boundary is that electric and magnetic charges can both be seen to emerge from Noether ones in the topological theory. Indeed, the suitably normalized conserved charges associated to the gauge transformations of the topological action (\(\delta B=d\lambda_{B}\), \(\delta C=d\lambda_{C}\)), once evaluated at the boundary, are given by
\[Q_{B}\Big{|}_{\partial\Omega}=\int C=\int*B\quad,\quad Q_{C}\Big{|}_{\partial \Omega}=\int B\,, \tag{17}\]
corresponding to the electric and magnetic charges of \(p\)-form electrodynamics, respectively.
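For instance, for \(p=1\) and \(d=4\), writing \(B=F\) for the Maxwell field strength at the boundary, these reduce to the usual charges obtained by integrating over closed 2-surfaces,

\[Q_{B}=\oint_{S^{2}}*F\,,\qquad Q_{C}=\oint_{S^{2}}F\,,\]

measuring the electric charge and the magnetic flux, respectively.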
## 4 Consistent truncation: chiral \(p\)-forms from a pure Chern-Simons theory
Let us consider the case of even \(p\)-forms for \(d=2p+2\), so that one can perform the following change of basis in the fields of the topological theory
\[B^{\pm}=\frac{1}{2}\left(B\pm C\right)\,. \tag{18}\]
In terms of \(B^{\pm}\), the boundary condition (4) then reads
\[\left(B^{\pm}\mp*B^{\pm}\right)\Big{|}_{M=\partial\Omega}=0\,, \tag{10}\]
which amounts to require the fields to be (anti-)selfdual at the boundary, and the topological action (6) becomes just the difference of two pure Chern-Simons forms, given by
\[I=I^{+}-I^{-}=\int\limits_{\Omega}B^{+}\wedge dB^{+}-\int\limits_{\Omega}B^{- }\wedge dB^{-}\,. \tag{11}\]
Note that the simple change of basis (10) yields the action (11), which is devoid of boundary terms and leads to a well-defined variational principle for (anti-)chiral fields at the boundary by construction.
It is worth highlighting that the action (11) can be consistently truncated to describe a single chiral field at the boundary, either for vanishing \(B^{+}\) or \(B^{-}\). The link between chiral \(p\)-forms and pure Chern-Simons theories has also been explored in [30].
### Chiral \(p\)-form action from Hamiltonian reduction
The Hamiltonian reduction of the action
\[I^{\pm}=\int\limits_{\Omega}B^{\pm}\wedge dB^{\pm}\,, \tag{12}\]
for an odd \((p+1)\)-form \(B^{\pm}\) in \(2p+3\) spacetime dimensions that is (anti-)chiral at the boundary, is performed along the lines of Section 3. Splitting the indices in space and time, the action (12) can be written as
\[I^{\pm}=\frac{1}{(p+1)!^{2}}\int\limits_{\Omega}dtd^{d}x\epsilon^{0i_{1}\cdots i _{2p+2}}\Big{[}\dot{B}^{\pm}_{i_{1}\cdots i_{p+1}}B^{\pm}_{i_{p+2}\cdots i_{2 p+2}}+2(p+1)B^{\pm}_{0i_{1}\cdots i_{p}}\partial_{i_{p+1}}B^{\pm}_{i_{p+2} \cdots i_{2p+2}}\Big{]}+\hat{\mathfrak{B}}^{\pm}\,, \tag{13}\]
with a boundary term \(\hat{\mathfrak{B}}^{\pm}\) that reads
\[\hat{\mathfrak{B}}^{\pm}=\frac{(p+1)}{(p+1)!^{2}}\int\limits_{\Omega}dtd^{d}x \epsilon^{0i_{1}\cdots i_{2p+2}}\partial_{i_{2p+2}}\Big{(}B^{\pm}_{0i_{1} \cdots i_{p}}B^{\pm}_{i_{p+1}\cdots i_{2p+1}}\Big{)}\,. \tag{14}\]
The constraint associated to the Lagrange multiplier \(B^{\pm}_{0i_{1}\cdots i_{p}}\) in (13) is then exactly solved as
\[B^{\pm}_{i_{1}\cdots i_{p+1}}=\partial_{[i_{1}}A^{\pm}_{\cdots i_{p+1}]}\,, \tag{15}\]
so that the full action (13) reduces to a boundary term given by
\[I^{\pm}=\frac{1}{(p+1)!^{2}}\int\limits_{M}dtd^{d-1}x\epsilon^{0a_{1}\cdots a _{2p+1}}\left(-\dot{A}^{\pm}_{a_{1}\cdots a_{p}}B^{\pm}_{a_{p+1}\cdots a_{2p+ 1}}+(p+1)B^{\pm}_{0a_{1}\cdots a_{p}}B^{\pm}_{a_{p+1}\cdots a_{2p+1}}\right)\,, \tag{16}\]
where latin indices \(a_{1},\ldots,a_{2p+1}\) stand for the spacelike ones at \(M=\partial\Omega\). The (anti-)self-duality condition (4.2) fixes the Lagrange multiplier in terms of the field strength in (4.7), according to
\[B^{\pm}_{0a_{1}\cdots a_{p}}=\pm\frac{1}{(p+1)!}\sqrt{-g}B^{\mu_{1}\cdots\mu_{p +1}}_{\pm}\epsilon_{\mu_{1}\cdots\mu_{p+1}0a_{1}\ldots a_{p}}\,, \tag{4.9}\]
and hence the action (4.8) reduces to the Henneaux-Teitelboim action for (anti-)chiral \(p\)-forms [31, 32], given by
\[I^{\pm}[A^{\pm}]=\mp\frac{p!}{(p+1)!^{2}}\int\limits_{M}dtd^{d-1}x\left[\pm\mathcal{B}^{a_{1}\ldots a_{p}}_{\pm}\dot{A}^{\pm}_{a_{1}\cdots a_{p}}-\left(N\mathcal{H}^{\pm}+N^{a}\mathcal{H}^{\pm}_{a}\right)\right]\,, \tag{4.10}\]
written in terms of the magnetic field
\[\mathcal{B}^{a_{1}\ldots a_{p}}_{\pm}=\frac{1}{p!}\epsilon^{0a_{1}\ldots a_{2 p+1}}B^{\pm}_{a_{p+1}\ldots a_{2p+1}}\,. \tag{4.11}\]
Here, we have also made use of the ADM decomposition of the metric
\[ds^{2}=-N^{2}dt^{2}+\gamma_{ab}(dx^{a}+N^{a}dt)(dx^{b}+N^{b}dt)\,, \tag{4.12}\]
so that \(N\) and \(N^{a}\) stand for the lapse and shift functions, respectively, while the energy and momentum densities explicitly read
\[\mathcal{H}^{\pm}=\frac{1}{\sqrt{\gamma}}\mathcal{B}^{a_{1}\ldots a_{p}}_{\pm }\mathcal{B}^{\pm}_{a_{1}\ldots a_{p}}, \tag{4.13}\]
\[\mathcal{H}^{\pm}_{a}=\pm\frac{1}{p!}\epsilon^{0}{}_{a_{1}\ldots a_{2p}a} \mathcal{B}^{a_{1}\ldots a_{p}}_{\pm}\mathcal{B}^{a_{p+1}\ldots a_{2p}}_{\pm}. \tag{4.14}\]
It is worth pointing out that although covariance is not manifest in the action (4.10), invariance under diffeomorphisms that preserve the background metric holds by virtue of the fact that the energy and momentum densities fulfill the Dirac-Schwinger algebra [31, 32].
### Manifestly covariant stress-energy tensor for chiral \(p\)-forms
One of the advantages of obtaining the Henneaux-Teitelboim action for chiral \(p\)-forms (4.10) as an edge mode of the pure Chern-Simons action in (4.4) is that a manifestly covariant stress-energy tensor can be readily obtained as in Section 2.2. Indeed, the Brown-York stress-energy tensor in (2.7) evaluated at the boundary \(M=\partial\Omega\) is given by
\[T^{\pm}_{\mu\nu}=\pm\frac{2}{p!}B^{\pm}_{\mu\mu_{1}\cdots\mu_{p}}B^{\pm}_{\nu }\,{}^{\mu_{1}\cdots\mu_{p}}\,, \tag{4.15}\]
which is traceless, manifestly covariant, and conserved by virtue of the Bianchi identity and the (anti-)self-dual boundary condition (4.2). Its components relate to the energy and momentum densities by projecting along the normal vector to the spacelike hypersurface \(n_{\mu}=(N,0)\), so that
\[T^{\pm}_{\perp\perp} =T^{\mu\nu}_{\pm}n_{\mu}n_{\nu}=\pm\frac{2}{\left(p-1\right)^{2}p! \sqrt{\gamma}}\mathcal{H}^{\pm}\,, \tag{4.16}\] \[T^{\pm}_{\perp a} =T^{\mu}_{\pm\;a}n_{\mu}=\pm\frac{2}{\left(p-1\right)^{2}p!\sqrt{ \gamma}}\mathcal{H}^{\pm}_{a}\,, \tag{4.17}\]
where \(\mathcal{H}^{\pm}\) and \(\mathcal{H}^{\pm}_{a}\) are given by (4.13) and (4.14), respectively.
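For completeness, the tracelessness of (4.15) can be checked in two lines (a short sketch, assuming the boundary condition (4.2) takes the standard form \(*B^{\pm}=\pm B^{\pm}\)):

\[g^{\mu\nu}T^{\pm}_{\mu\nu}=\pm\frac{2}{p!}\,B^{\pm}_{\mu_{1}\cdots\mu_{p+1}}B_{\pm}^{\mu_{1}\cdots\mu_{p+1}}\,,\qquad B^{\pm}\wedge *B^{\pm}=\frac{1}{(p+1)!}\,B^{\pm}_{\mu_{1}\cdots\mu_{p+1}}B_{\pm}^{\mu_{1}\cdots\mu_{p+1}}\,\mathrm{vol}\,.\]

Since \(B^{\pm}\) is a form of odd degree, \(B^{\pm}\wedge B^{\pm}=0\), so \(B^{\pm}\wedge *B^{\pm}=\pm B^{\pm}\wedge B^{\pm}=0\); the full contraction therefore vanishes and the stress-energy tensor is traceless.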
## Extensions and final remarks
Axion-like couplings between diverse \(p\)-forms as well as topologically massive extensions can also be seen to emerge as edge modes of topological field theories that include suitable (self-)interaction terms deforming the gauge symmetries without breaking them.
### Topologically massive \(p\)-form electrodynamics from a BF theory with a "cosmological term"
One possibility is to endow the topological theory in (6) with a "cosmological term", extending that in [21] for even \((p+1)\)-forms in higher dimensions, so that our action in (6) is deformed as
\[I_{(\mu)}=I+\frac{\mu}{2}\int\limits_{\Omega}B^{m}\,, \tag{10}\]
being clearly well-defined for the same boundary conditions in (4), provided that \(B\) stands for an even \((p+1)\)-form in \(d+1=m(p+1)\) dimensions. Note that the integer \(m\) ranges as \(2\leq m\leq(d+1)/2\), and the gauge symmetries now become
\[\delta B=d\lambda_{B}\quad,\quad\delta C=d\lambda_{C}-\frac{m(m-1)}{2}\mu B^{ m-2}\lambda_{B}\,. \tag{11}\]
The Hamiltonian reduction can then be carried out as in Section 3, so that once the indices are split in space and time, the action is given by (10) with a deformed constraint \(G_{C}^{i_{1}\cdots i_{p}}\to G_{(\mu)C}^{i_{1}\cdots i_{p}}\), with
\[G_{(\mu)C}^{i_{1}\cdots i_{p}}=G_{C}^{i_{1}\cdots i_{p}}+\frac{(d-p-1)!m}{2p! [(p+1)!]^{m-2}}\mu\left(B^{m-1}\right)^{i_{1}\cdots i_{p}}=0\,, \tag{12}\]
where \(\left(B^{m-1}\right)^{i_{1}\cdots i_{p}}=\epsilon^{0i_{1}\cdots i_{p}i_{p+1} \cdots i_{d}}B_{i_{p+1}\cdots i_{2p+1}}\cdots B_{i_{d-p}\cdots i_{d}}\). Thus, the constraints can also be locally solved, and once the solution is replaced back into the action, fixing the gauge as in (11), it reduces to a boundary term given by
\[I_{(\mu)}[A]=-\frac{(-1)^{p}}{2}\int\limits_{M}\left(B\wedge*B+\mu A\wedge B^ {m-1}\right)\,, \tag{13}\]
where the \(p\)-form \(A\) is the dynamical field with field strength \(B=dA\), precisely reproducing topologically massive \(p\)-form electrodynamics for \(m=2\), and extending it otherwise. In particular, for a standard \(U(1)\) gauge field (\(p=1\)), the original topologically massive electrodynamics of Deser, Jackiw and Templeton [33] is recovered in \(d=3\) (\(m=2\)), while the graviphoton of five-dimensional supergravity is obtained in \(d=5\) (\(m=3\)) for a precise value of the deformation parameter \(\mu\). More possibilities arise in higher dimensions, as in the case of \(d=11\), where three different theories can be obtained, for \(p=1,3,5\) (with \(m=6,3,2\), respectively). Note that the eleven-dimensional supergravity 3-form field [34] is described by (13), where the value of \(\mu\) becomes fixed by local supersymmetry.
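As a quick check of the counting quoted above for \(d=11\) (a worked restatement, using \(d+1=m(p+1)\) with \(p+1\) even and \(2\leq m\leq(d+1)/2\)):

\[d+1=12=m(p+1)\,,\qquad p+1\in\{2,4,6\}\;\Longrightarrow\;(p,m)\in\{(1,6),\,(3,3),\,(5,2)\}\,,\]

each with \(m\) inside the allowed range \(2\leq m\leq 6\). The \(d=3\) and \(d=5\) cases quoted before correspond to the factorizations \(4=2\cdot 2\) and \(6=3\cdot 2\), respectively.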
### Axion-like couplings from interacting topological theories
A class of couplings between \(p\)-form fields for diverse values of \(p\) can also be described through the edge modes of a topological theory of BF-type with suitable interaction terms.
As a precise example let us consider a five-dimensional action of the form
\[I_{(\lambda)}=\int\limits_{\Omega}\left(B_{[2]}\wedge dC_{[2]}-B_{[1]}\wedge dC _{[3]}+\frac{\lambda}{2}B_{[2]}^{2}\wedge B_{[1]}\right)-\frac{1}{2}\int \limits_{M}\left(B_{[2]}\wedge*B_{[2]}+B_{[1]}\wedge*B_{[1]}\right)\,, \tag{10}\]
which is clearly well-defined for boundary conditions as in (4), regardless of the value of the coupling \(\lambda\). Following the same lines as in the previous cases, the Hamiltonian reduction of (10) yields a four-dimensional boundary term describing the axion coupling of Maxwell electrodynamics with the massless Klein-Gordon field, given by
\[I_{(\lambda)}[A,\phi]=\frac{1}{2}\int\limits_{M}\left(B_{[2]}\wedge*B_{[2]}+B_ {[1]}\wedge*B_{[1]}+\lambda\phi B_{[2]}\wedge B_{[2]}\right)\,, \tag{11}\]
where \(B_{[2]}=dA\) and \(B_{[1]}=d\phi\).
As a closing remark, it is certainly worth exploring how topological invariants such as the Ray-Singer torsion and the generalized linking number, known to be deeply connected with topological field theories of BF-type [19; 20; 21; 22; 23; 24; 25; 26; 27; 28], reflect themselves in the context of \(p\)-form electrodynamics.
**Note added**. This is a slightly updated version of our unpublished preprint [35] that was presented at the XVIII Chilean Symposium of Physics in November 2012. Our results have some overlap with those recently reported in [36].
###### Acknowledgements.
We would like to thank Claudio Bunster, Marcela Cardenas, Hernan A. Gonzalez, Marc Henneaux, Javier Matulich, Alfredo Perez, Miguel Pino, David Tempo and Cedric Troessaert for many useful discussions over the years. O.F. wishes to thank the organizers of the XVIII Chilean Symposium of Physics, hosted by Universidad de La Serena and Sociedad Chilena de Fisica (SOCHIFI) in November 2012, for the opportunity to present this work. This research has been partially supported by ANID FONDECYT grants N\({}^{\circ}\) 1211226, 1220910, 1221624. The work of O.F. was partially supported by a Marina Solvay Fellowship, FNRS-Belgium (conventions FRFC PDRT.1025.14 and IISN 4.4503.15), as well as by funds from the Solvay Family.
|
2309.12094 | RadYOLOLet: Radar Detection and Parameter Estimation Using YOLO and
WaveLet | Detection of radar signals without assistance from the radar transmitter is a
crucial requirement for emerging and future shared-spectrum wireless networks
like Citizens Broadband Radio Service (CBRS). In this paper, we propose a
supervised deep learning-based spectrum sensing approach called RadYOLOLet that
can detect low-power radar signals in the presence of interference and estimate
the radar signal parameters. The core of RadYOLOLet is two different
convolutional neural networks (CNN), RadYOLO and Wavelet-CNN, that are trained
independently. RadYOLO operates on spectrograms and provides most of the
capabilities of RadYOLOLet. However, it suffers from low radar detection
accuracy in the low signal-to-noise ratio (SNR) regime. We develop Wavelet-CNN
specifically to deal with this limitation of RadYOLO. Wavelet-CNN operates on
continuous Wavelet transform of the captured signals, and we use it only when
RadYOLO fails to detect any radar signal. We thoroughly evaluate RadYOLOLet
using different experiments corresponding to different types of interference
signals. Based on our evaluations, we find that RadYOLOLet can achieve 100%
radar detection accuracy for our considered radar types up to 16 dB SNR, which
cannot be guaranteed by other comparable methods. RadYOLOLet can also function
accurately under interference up to 16 dB SINR. | Shamik Sarkar, Dongning Guo, Danijela Cabric | 2023-09-21T14:09:23Z | http://arxiv.org/abs/2309.12094v1 | # RadYOLOLet: Radar Detection and Parameter Estimation Using YOLO and WaveLet
###### Abstract
Detection of radar signals without assistance from the radar transmitter is a crucial requirement for emerging and future shared-spectrum wireless networks like Citizens Broadband Radio Service (CBRS). In this paper, we propose a supervised deep learning-based spectrum sensing approach called RadYOLOLet that can detect low-power radar signals in the presence of interference and estimate the radar signal parameters. The core of RadYOLOLet is two different convolutional neural networks (CNN), RadYOLO and Wavelet-CNN, that are trained independently. RadYOLO operates on spectrograms and provides most of the capabilities of RadYOLOLet. However, it suffers from low radar detection accuracy in the low signal-to-noise ratio (SNR) regime. We develop Wavelet-CNN specifically to deal with this limitation of RadYOLO. Wavelet-CNN operates on continuous Wavelet transform of the captured signals, and we use it only when RadYOLO fails to detect any radar signal. We thoroughly evaluate RadYOLOLet using different experiments corresponding to different types of interference signals. Based on our evaluations, we find that RadYOLOLet can achieve 100% radar detection accuracy for our considered radar types up to 16 dB SNR, which cannot be guaranteed by other comparable methods. RadYOLOLet can also function accurately under interference up to 16 dB SINR.
Spectrum Sensing, Radar Detection, Deep Learning, YOLO, Wavelet Transform, Spectrum Sharing.
## I Introduction
_Motivation:_ Radar bands are increasingly being shared by mobile broadband systems for better radio spectrum utilization via dynamic spectrum access [1]. One such well-known spectrum-sharing paradigm in the United States is CBRS [2]. Hence, robust spectrum sensing methods for detecting radar signals are of prime importance. In such spectrum sensing problems, the sensor does not have a priori knowledge of the radar transmitters' signal parameters, transmission activities, and location. Under these restrictions, prior works have shown that machine learning (ML) based spectrum sensing methods can detect radar with high accuracy when the peak radar signal to average interference and noise ratio (SINR)1 per MHz at the sensor is above 20 dB [3].
Footnote 1: For brevity, throughout the paper, we will use ‘SINR (and SNR)’ to imply peak-to-average SINR (and SNR) per MHz.
_Goals_: Our goal is to push the minimum required radar SINR limit to below 20 dB, at which the radar signals can be detected by the spectrum-sensing sensor (henceforth sensor) with high accuracy. Specifically, we investigate three fundamental aspects of spectrum sensing for radar signals. First, we aim to develop a method to detect low SNR radar signals. Second, we aim to have the capability of detecting radar signals in the presence of interference. Third, while aiming for the above goals, we also want to estimate the parameters of radar and interference signals, e.g., bandwidth, pulse width, pulse interval, etc. These capabilities will be instrumental in designing intelligent and efficient radar-communication spectrum-sharing systems in the future.
For our investigation, we rely on the CBRS framework. CBRS is a complex spectrum-sharing ecosystem with many details [4]. Hence, we enumerate the main features of CBRS that are relevant to our problem. i) We consider five types of radar signals relevant to CBRS. ii) The sensor, known as environmental sensing capability (ESC) in CBRS, must detect the radar signals without assistance from the radar transmitter. iii) The interference at the sensor originates from a cellular network that shares the spectrum with the radar signals. The interference source has no coordination with the radar transmitter. More details about these features are discussed later in Section II. In the current CBRS rules, the sensor must detect the radar signals with high accuracy when the SINR is above 20 dB. However, as mentioned before, we want to go below the 20 dB radar SINR requirement in CBRS without compromising the radar detection accuracy.
_Challenges_: To achieve our ambitious goals, we must address several important challenges.
* First, our considered radar signals have a low duty cycle, which is the ratio of ON time to OFF time. Thus, it is difficult to detect radar signals using their ON times, which are much smaller than their OFF times. This challenge becomes more critical for low SNR radar.
* Second, the sensor must detect different types of radar signals that are dissimilar from one another and have unknown signal parameters within a range. Hence, it is challenging to have a method that can achieve high detection accuracy on all relevant types of radar signals.
* Third, due to the dissimilarity of the different radar types, their parameters belong to very wide ranges. For example, some radar signals have narrow bandwidth, while some have narrow pulses. Hence, it is difficult to estimate the radar signal parameters accurately.
* Fourth, the interference signals from communication systems do not have a low duty cycle. Consequently, their
presence can significantly degrade the sensor's ability to detect ephemeral radar signals, especially when the interference-to-noise ratio (INR) is high.
* Fifth, if the interference signals have certain transmission activity patterns, then the sensor must be capable of distinguishing such patterns from those of the radar signals. Otherwise, there will be spurious false alarms, which can hinder the overall goal of spectrum sharing.
_Approach_: To address the above-mentioned challenges, we propose a supervised deep-learning-based spectrum sensing method called RadYOLOLet. To deal with the first challenge, RadYOLOLet uses two different CNNs. While we develop the first CNN, which we call RadYOLO, to have different necessary capabilities, the second one is built specifically to deal with low SNR radar signals. RadYOLO operates on spectrograms and simultaneously detects the radar signals and estimates their parameters using the YOLO framework [5]. We carefully design the formation of the spectrograms to assist RadYOLO in fulfilling its objectives. By virtue of being a data-driven supervised deep learning method, RadYOLO has the capability of detecting different types of radar signals using the same neural network and, thus, overcomes the second challenge. In RadYOLO, we take the ambitious step of treating each radar pulse as a different object, detecting, and localizing them. This enables us to deal with the third challenge. This approach also helps us counter the fourth challenge as it provides robustness against interference signals that switch between the ON and OFF phases. We develop several strategies to tackle the small-sized radar pulse objects in RadYOLO. RadYOLO treats radar and interference signals as different classes and learns to distinguish between their patterns, which is crucial for tackling the fifth challenge. Finally, RadYOLO can also extract the parameters of the interference signals.
However, RadYOLO is not robust in detecting radar in the low SNR regime. Hence, we use the second CNN, which operates on images generated by Wavelet transform of the captured signals. We call this CNN Wavelet-CNN. Here, our intuition is to leverage the Wavelet transform, which has been used as a robust method for detecting low SNR radar echoes in traditional radar signal processing [6], where the receiver is aware of the radar signal parameters. However, in our case, the radar signal detection problem is more complex as the sensor is unaware of the transmitted radar signals. Hence, we use a neural network for the detection task. Instead of directly using a Wavelet transformed signal as input to the CNN, we carefully design a preprocessing step before the Wavelet transforms that improves our chances of detecting radar signals. Wavelet-CNN acts as a binary classifier that distinguishes between radar and non-radar signals. Thus, Wavelet-CNN lacks the diversity (multi-class classification and signal parameter estimation) of RadYOLO. For this reason, we use Wavelet-CNN only when RadYOLO does not detect any radar. While Wavelet-CNN provides robustness to low SNR radar, it does not provide robustness to interference inherently. Hence, we develop several strategies in Wavelet-CNN and its associated preprocessing such that it does not misclassify interference signals as radar.
### _Related Work_
In traditional monostatic radar, the radar transmitter emits pulses that are reflected by objects, received by the radar receiver, and processed for detecting the objects. An essential signal processing tool in this scheme is matched filtering of the received signal for improving the SNR of the reflected pulses [8]. However, matched filtering-based techniques are not applicable in scenarios where the spectrum sensing sensor is unaware of the transmitted signal parameters.
To deal with interference, MIMO radars use beamforming methods like sampled matrix inversion (SMI) based minimum variance distortionless response (MVDR) [9] or ML-based MVDR for suppressing interference [10]. However, we cannot directly apply such beamforming techniques to our problem as the radar and interference signals arrive at the sensor antenna from opposite directions [11]2. Additionally, having an antenna array on the sensor can impact the location privacy (via the direction of arrival estimate) of navy radar transceivers [4].
Footnote 2: The source of interference at the sensors in CBRS is unique due to the deployment factors, and it is different from the traditional interference models in radar signal processing. More details can be found in [11].
Electronic support and electronic intelligence (ES/ELINT) is a broad area where the task is to detect radar signals that have a low probability of intercept (LPI) [12]. In such problems, the radar signals are designed to be challenging to detect. The radar detection problem in CBRS differs from ES/ELINT as the radar transmitter is not trying to hide its signals from the sensor. However, LPI radar detection techniques can be leveraged in our problem.
Since the inception of the idea of ESC in CBRS, there have been several works on detecting radar signals [13, 7, 14, 15, 16]. Most of these works have relied on ML-based spectrum sensing. These methods generally frame the radar detection problem as a classification task. Several feature representations and learning methods have been proposed for this problem in the literature. For example, a combination of signal amplitude and phase difference can be used as input to a CNN for predicting the presence of radar signals [13]. Instead of signal amplitude and phase, the classification task can be performed using spectrograms [15]. Computer vision-inspired object detection methods can be applied to spectrograms for detecting radar signals and estimating their bandwidth [3] or detecting non-radar signals that might be present on the spectrograms [7]. Instead of deep learning, support vector machines (SVM) based classifiers can also be used for the classification task using features like higher-order and peak statistics [16]. As mentioned before, matched filtering is not directly applicable to our problem. However, if only one type of radar signal is considered and the radar pulse shape is assumed to be known at the sensor, then matched filtering can be applied [14]. Unlike RadYOLOLet, none of these works aim at detecting low-power radar signals, estimating their parameters, and tolerating higher interference.
An important component of RadYOLOLet is YOLO-based radar detection. Similar ideas have been explored in a couple of prior works [3, 7]. Hence, we point out the differences between RadYOLOLet and other YOLO-based radar detection
works in Table I. We see that our proposed approach has several capabilities that are absent in other YOLO-based radar detection works.
Another vital component of our work is using the Wavelet transform for radar detection. Hence, we briefly review the relevant works in this domain. Continuous Wavelet transform (CWT) can be used for low SNR radar target detection and can have better processing gain (input to output SNR ratio) than matched filtering [6]. While this work serves as an important motivation, it cannot be directly applied to our problem as it assumes knowledge of the transmitted radar signal and does not consider different radar types or interference. The Wavelet transform also has the capability of reducing noise, and that can be used to denoise the radar returns [17]. The multi-scale product of the discrete Wavelet transform can be applied to the power spectral densities (PSD) of the captured signals for noise reduction [18]. However, these approaches differ from ours as we use the Wavelet transform to mimic the operation of matched filtering without the cognizance of radar signal parameters. Our idea of using a CNN on images generated via Wavelet transform has similarities with the work in [19]. However, our approach and objectives differ from those in [19]. Specifically, the work in [19] aims at modulation recognition and does not consider the presence of interference. In contrast, our focus is on signal detection rather than modulation classification. Additionally, an important focus of our approach is to deal with interference.
### _Contributions_
Our main contributions to this paper are the following.
1. We develop a deep learning-based object detection method, RadYOLO, that simultaneously detects radar and interference signals and estimates their parameters. For radar, RadYOLO estimates the center frequency, bandwidth, number of pulses, pulse width, and pulse interval. For interference, RadYOLO estimates center frequency, bandwidth, and ON times.
2. We develop a deep learning-based binary classifier, Wavelet-CNN, that distinguishes between radar and non-radar signals, especially in the low radar SNR regime. Wavelet-CNN uses a CNN as its core, operating on the Wavelet transform of the captured signals.
3. Our overall design, RadYOLOLet, is a tight integration of RadYOLO and Wavelet-CNN. Wavelet-CNN strives to succeed when RadYOLO fails. At the same time, Wavelet-CNN relies on RadYOLO for signal parameter estimation as Wavelet-CNN cannot do that. Importantly, for both CNNs, we design several preprocessing and postprocessing steps for the inputs and outputs, respectively, to achieve robustness to noise and interference.
4. We thoroughly evaluate RadYOLOLet using a diverse set of experiments involving different radar SNR and interference INR scenarios. Our evaluations show that, when interference signals are not present, RadYOLOLet can achieve 100% radar detection accuracy for all five radar types up to 16 dB SNR. In the presence of different types of interference signals, RadYOLOLet can detect radar signals with 100% accuracy up to 16 dB SINR.
### _Organization_
In Section II, we present our system model, the details of the radar signals relevant to our problem, and the problem statement. Then, in Section III, we describe our methodology in RadYOLOLet. In Section IV, we explain our evaluation setup and present our results. Finally, Section V provides conclusions and future work.
## II System Model
_Radar characteristics:_ As in CBRS, we consider five radar types whose characteristics are shown in Table II. Radar types 1 and 2 are pulse-modulated, and the remaining ones are frequency-chirping. Thus, the bandwidth of radar types 3-5 in Table II are their chirp width.
_Sensor details:_ We consider a sensor whose task is to detect the above-described radar signals with high accuracy and estimate their parameters. For all radar types, the parameters are in a range that is known to the sensor. The radar pulse parameters do not change within a burst (a set of pulses). However, at a particular time, the exact values of these parameters, along with the radar type, are unknown to the sensor. The sensor's task is to detect, at most, one radar signal at a time. I.e., we assume multiple radar signals do not appear simultaneously. Due to the reasons described in Section I-A, we consider the sensor equipped with a single antenna. The instantaneous bandwidth of the sensor is \(B\) MHz, and the sampling rate is also \(B\) MS/sec. As the radar signals can appear anywhere on a 100 MHz portion of the CBRS band (3550-3650 MHz), ideally, we should have \(B\geq 100\). Such sensors have been considered in [3, 7]. An alternative approach is to use \(B=10\), which is what we consider in this work. In this approach, the 100 MHz band can be broken into 10 different non-overlapping 10 MHz sub-bands, and the spectrum sensing algorithm can be applied to each sub-band individually. Then, the decisions for all sub-bands can be combined into a single decision for the whole 100 MHz band. Accordingly, without loss of generality, we focus on monitoring a \(B=10\) MHz band for our sensor. Dividing the 100 MHz
| Methods | Radar detection | All CBRS radar types | Interference detection | Radar bandwidth | Radar pulse parameters | Interference parameters | Tolerance to low radar SNR | Tolerance to high-power interference | Tolerance to different interference types |
|---|---|---|---|---|---|---|---|---|---|
| RadYOLOLet | YES | YES | YES | YES | YES | YES | YES | YES | YES |
| DeepRadar [3] | YES | YES | NO | YES | NO | NO | NO | NO | NO |
| Waldo [7] | YES | NO | YES | YES | NO | NO | NO | YES | NO |

TABLE I: Comparison of YOLO-based radar detection methods.
| Radar type | Pulse width (\(\mu\)sec) | Inter-pulse interval (msec) | Number of pulses per burst | Burst duration (msec) | Bandwidth (MHz) |
|---|---|---|---|---|---|
| 1 | 0.5 - 2.5 | 0.9 - 1.1 | 15 - 40 | 13 - 44 | 1 |
| 2 | 13 - 52 | 0.3 - 33.3 | 5 - 20 | 1 - 66 | 1 |
| 3 | 3 - 5 | 0.3 - 33.3 | 8 - 24 | 2 - 80 | 50 - 100 |
| 4 | 10 - 30 | 0.3 - 3.3 | 2 - 8 | 0.6 - 26 | 1 - 10 |
| 5 | 50 - 100 | 0.3 - 3.3 | 8 - 24 | 2 - 80 | 50 - 100 |

TABLE II: Radar signal parameters [20]
band into smaller sub-bands and making independent decisions is computationally more expensive than looking at the whole 100 MHz band altogether and making a single decision. We choose to use the computationally more expensive approach as \(B=10\) suits RadYOLOLet better in achieving high radar detection accuracy, as explained later in Section III-C. There is another implication of using \(B=10\). For radar types 3 and 5 (refer to Table II), the signal bandwidth can be larger than 10 MHz. Hence, for these two radar types, we consider only the portion of the radar band that overlaps the sensor's monitoring bandwidth as the radar bandwidth.
_Interference model:_ The sensor should be able to operate in the presence of interference. However, the interference signals are not adversarial. As in CBRS, the source of interference is a cellular network that shares the spectrum with the radar signals. In our system model, we consider that the source of interference is a cellular base station (BS) whose downlink signals appear at the sensor as interference. Following the most popular channel occupancy of 10 MHz by the cellular operators in CBRS [21], we assume the interference signal occupies the whole 10 MHz band on which the sensor operates. As part of our research in this work, we examine different types of interference signals and their impact on detecting radar signals. Considering the LTE sub-frame duration of 1 msec [22], for all the interference signals considered in this paper, we assume that their ON time is at least 1 msec.
_Decision granularity:_ The sensor must continuously monitor the \(B\) MHz band to detect radar signals. The sensor uses the sampled I, Q values from the RF frontend as its observations and makes decisions based on them. We discretize the decisions in contiguous, non-overlapping time windows of duration \(T\) msec. I.e., for every batch of \(N=(T\times 10^{-3}\times B\times 10^{6})\) samples, the sensor makes a decision. The decisions of multiple time windows can be combined to make a single decision over a time duration longer than \(T\) msec. However, without loss of generality, we focus on the sensor's performance on \(T\) msec time windows and do not consider combining decisions of multiple time windows. Finally, the sensor has enough computing capabilities to make decisions at a rate that is faster than the sampling rate.
_Problem Statement:_ Our problem is to develop a method that has the following attributes:
* Capability to make accurate radar (possibly low SNR) detection decisions for each time window of duration \(T\) msec, both in the presence and absence of interference.
* When a radar signal is detected, the capability to estimate radar center frequency, bandwidth, pulse width, pulse interval, and the number of pulses (within a time window of \(T\) msec).
* Capability of detecting interference signals, both in the presence and absence of radar.
* When interference is detected, the capability to estimate its center frequency, bandwidth, and ON times.
While radar center frequency and bandwidth estimation is an essential requirement in CBRS, the remaining capabilities can serve as general tools of importance in spectrum sensing.
## III Description of RadYOLOLet
In this section, first, we present the overall flow diagram of RadYOLOLet and then explain its components.
_Overview of RadYOLOLet_: Fig. 1 shows the flow diagram of RadYOLOLet. The input to RadYOLOLet is a set of I, Q values corresponding to a time window of \(T\) msec. We denote the I, Q values as a complex vector s of size \(N\). There are two interdependent flows in Fig. 1.
In the first flow, the I, Q values go through the spectrogram preprocessing block, which generates a spectrogram using the Short-Term Fourier transform (STFT) of the I, Q values. The details of generating the spectrogram are described in Section III-A. The generated spectrogram is used as the input to the RadYOLO block. The RadYOLO block is essentially a CNN inspired by the YOLO framework. RadYOLO takes the spectrogram as input and produces as output its detection decisions, i.e., whether radar and/or interference signals are present or not. If it detects a signal, it also estimates the signal parameters as shown in Fig. 1. The details of the RadYOLO are presented in Section III-B. When RadYOLO detects a radar signal, the second flow comprising of Wavelet preprocessing and Wavelet-CNN blocks in Fig. 1 are not used.
If RadYOLO predicts the absence of radar, the switch in Fig. 1 is closed, and the input I, Q values go through the second flow. When the second flow is used, its output overrides the output of the first flow. The buffer block in Fig. 1 shows
Fig. 1: Overall flow diagram of RadYOLOLet. The switch is closed when the branch labeled ‘NO’ is triggered. For each block, we show on top the form of the signal on which the blocks operate.
that both the flows operate on the same set of I, Q values, but the second flow is triggered only after the first flow makes its decision. In our second flow, the Wavelet preprocessing block performs frequency domain filtering on the input signal and performs continuous Wavelet transform on each filtered signal. The output of this block gives us three different images, which we stack along the depth dimension and form a 3-D tensor that is fed to the Wavelet-CNN block. The details of the Wavelet preprocessing block are presented in Section III-C. The Wavelet-CNN block, which is a CNN, takes the 3-D tensor as input, acts as a binary classifier, and produces its detection decision regarding the presence of radar signals as output. The details of the Wavelet-CNN block are presented in Section III-D.
### _Spectrogram Preprocessing_
The processing in this block is shown in Fig. 2.
_Selection of \(T\)_: First, we form the complex vector \(\mathbf{s}\) of size \(N\), which, as defined earlier, is the set of I, Q values corresponding to a time window of \(T\) msec. However, we must decide the value of \(T\), which is the time unit at which RadYOLOLet repeats its operations. Based on the duration of radar signals (burst duration in Table II), we choose \(T=16\) msec. Accordingly, \(N=16\times 10^{4}\). \(T=16\) msec is a reasonable choice as it is not too high compared to the shorter radar signals and large enough to capture a significant portion of longer radar signals.
_Formation of spectrogram_: Next, we perform STFT on the I, Q samples corresponding to a time window. For the STFT, we reshape \(\mathbf{s}\) to a matrix, \(\mathbf{S}\), of size \(R\times C\). Then, we perform a \(C\)-point Fast Fourier Transform (FFT) on each of the rows of \(\mathbf{S}\) and obtain a new complex matrix \(\mathbf{F}\), which has the same dimension as \(\mathbf{S}\). Finally, we take the logarithm of the magnitude of each element of \(\mathbf{F}\) and multiply by 20 to obtain the spectrogram, \(\mathbf{X}_{u}\). The columns of \(\mathbf{X}_{u}\) represent the different frequency bins, and the rows correspond to the different short terms (we will use the phrase 'time slots' to imply the short terms) in our STFT. There is no temporal overlap of the different time slots in our STFT, and we use rectangular windowing in our FFTs.
_Dimensions of spectrogram_: Since we have fixed the value of \(T\) (in turn, \(N\)), finding the dimensions of \(\mathbf{S}\) requires fixing either \(R\) or \(C\). Based on the guidelines for selecting \(C\) in [3], we use \(C\) such that each of the rows in \(\mathbf{S}\) corresponds to a time duration of 1.624 \(\mu\)sec. The intuitive reasoning behind using a very small time duration for the rows is that the radar pulses are of very short duration, whereas noise or interference is not. Hence, a longer time duration for a row would not increase the amount of radar energy, whereas the amount of energy from non-radar signals would increase significantly and hamper our chances of detecting radar. Selecting the time duration of a row to be 1.624 \(\mu\)sec implies \(C=1.624\times 10^{-6}\times B\times 10^{6}=1.624\times 10^{-6}\times 10^{7}=16.24\). Since \(C\) must be an integer, we use \(C=16\). Consequently, \(R\) becomes \(\frac{N}{16}=10^{4}\). This implies the number of rows in \(\mathbf{X}_{u}\) is huge and much higher than the number of columns in \(\mathbf{X}_{u}\).
_Compression of spectrogram_: To deal with the large number of rows in \(\mathbf{X}_{u}\), we use a compression method. We reshape \(\mathbf{X}_{u}\) of size \((10^{4}\times 16)\) to \((312\times 32\times 16)\), as shown in Fig. 2. \(\mathbf{X}_{u}\) can be seen as 312 matrices, each of size \((32\times 16)\). Next, we compress each of these \((32\times 16)\) sized matrices to vectors of size 16 by selecting the maximum of each column. This way, we obtain a new matrix \(\mathbf{X}\) of size \((312\times 16)\). Fig. 3 shows \(\mathbf{X}\) for a set of realizations of the five different types of radar signals listed in Table II. Compression of the spectrogram helps the RadYOLO block in several ways, as explained in the next section. In spectrogram compression, we collapse 32 consecutive rows, which correspond to \(32\times 1.624\approx 52\ \mu\)sec, to a single row. Referring to the inter-pulse interval column in Table II, we see the inter-pulse interval of all the radar types is much higher than 52 \(\mu\)sec. Hence, the ON-OFF patterns created by radar on the spectrograms are not lost by the compression. Additionally, since we assume that the ON time of interference signals is at least 1 msec, our compression technique would not make the interference signal patterns look like radar signal patterns and impact the detectability of radar.
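To make the above concrete, here is a minimal NumPy sketch of the preprocessing (variable names and the handling of the leftover rows in the reshape are our choices; the paper quotes the approximate sizes \(312\times 32\times 16\)):

```python
import numpy as np

def spectrogram_preprocess(s, R=10_000, C=16, block=32):
    """Form the compressed spectrogram X from one window of I/Q samples.

    s : complex vector of N = R * C = 160000 samples (T = 16 msec at 10 MS/sec).
    Returns X of shape (R // block, C) = (312, 16).
    """
    S = s.reshape(R, C)                        # each row spans ~1.6 usec (16 samples)
    F = np.fft.fft(S, axis=1)                  # C-point FFT per row, rectangular window
    X_u = 20.0 * np.log10(np.abs(F) + 1e-12)   # log-magnitude spectrogram (dB)
    # Compression: collapse every `block` rows (~52 usec) by a column-wise maximum,
    # which preserves short radar pulses while shrinking the time dimension.
    n_rows = (R // block) * block              # 312 * 32 = 9984; remaining rows dropped
    X = X_u[:n_rows].reshape(-1, block, C).max(axis=1)
    return X
```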
### _RadYOLO_
The input to this block is the spectrogram, \(\mathbf{X}\), obtained after spectrogram preprocessing. The core idea of this block is to pass \(\mathbf{X}\) through a CNN and apply an object detection algorithm based on YOLO [5] for jointly making radar and interference detection decisions and estimating their parameters. However, using the YOLO framework for signal parameter estimation is challenging. We develop several strategies to deal with these challenges, as described throughout this section.
_Definition of objects_: We formulate the object detection task as shown in Fig. 4. We take the ambitious step of treating each radar pulse as an object. This way, the number of detected objects can estimate the number of radar pulses, and their separation can estimate the radar pulse interval. Additionally, the height and width of the detected objects can estimate the radar pulse width and bandwidth, respectively. Finally, the \(x\) parameter (see Fig. 4(a)) of the detected objects can estimate the radar center frequency. However, as we consider the presence of interference, we must not confuse interference signals with radar objects. Hence, we treat radar and interference as objects belonging to different classes, as shown in Fig. 4(b). This enables us to detect radar and interference signals simultaneously. Similar to the radar pulses, we define each interference ON duration as an object (see Fig. 4(b)). Using the detected interference objects, we can estimate their ON times, center frequency, and bandwidth, in the same way as described for radar objects.
_Object detection framework_: Now we explain the object detection framework in RadYOLO. We divide each spectrogram
Fig. 2: Operations inside the spectrogram preprocessing block.
in a grid of size \(K_{C}\times 1\) (can be visualized as a set of \(K_{C}\) grid cells stacked vertically). The number of grid cells along the frequency axis is 1 because, based on our system model, we do not anticipate multiple signals with nonoverlapping bands on the same spectrogram. Then for each grid cell, we define a bounding box, \((x_{i},y_{i},w_{i},h_{i});i=1,...,K_{C}\), that specifies the location of the object within that grid cell. An object is associated with a grid cell if that object's center lies within the grid cell. The size of an object can be bigger than the size of a grid cell. However, not every grid cell contains an object. Hence, for every bounding box, we define confidence, \(c_{i};i=1,...,K_{C}\) (as we use one bounding box per grid cell), which defines the confidence that an object is present in that box. Finally, we must associate the objects with classes. For each grid cell, we use two probabilities, \((p_{i}^{R},p_{i}^{I});i=1,...,K_{C}\), that are the probabilities that the object in grid cell \(i\) belongs to the radar class or interference class, respectively. These probabilities are conditioned on the presence of an object in grid cell \(i\). Note that, with our formulation, we can have only one object per grid cell. Thus, at most, one element of \((p_{i}^{R},p_{i}^{I})\) is non-zero for grid cell \(i\). However, that does not deter us from multi-class classification (detecting both radar and interference on the same spectrogram) as long as all the radar and interference objects do not share the same grid cells. To avoid such undesired situations, we must choose the value of \(K_{C}\) carefully. The choice of \(K_{C}\) is also impacted by the fact that we want different radar pulses/objects to fall in different grid cells so that we can detect them individually. Based on the above factors, we choose \(K_{C}\) to be 32. This choice of \(K_{C}\) implies each of the grid cells corresponds to a duration of \(T/K_{C}=16/32=0.5\) msec, which is comparable to the lowest inter-pulse interval of the different radar types (refer to the Inter-pulse interval column in Table II). Hence, different radar objects will fall in different grid cells with a high probability. Additionally, since the interference signals' ON time is at least 1 msec, the choice of \(K_{C}=32\) implies that different interference objects will be in different grid cells, and, with high probability, there will be some grid cells that contain only radar objects. This will improve our chances of detecting radar signals even in interference. Finally, since the number of interference objects on a spectrogram is, in general, smaller than the number of radar objects (see Fig. 4(b)), we associate a grid cell with interference if both radar and interference objects share the grid cell. This will reduce the chances of missing the interference signal altogether.
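For concreteness, a sketch of how the per-spectrogram ground-truth target could be laid out under this formulation (the exact normalization of \(x,y,w,h\) is our assumption; the paper only fixes the \((p^{R},p^{I},x,y,w,h,c)\) ordering and the rule that interference wins when both object types share a cell):

```python
import numpy as np

K_C = 32  # grid cells along the time axis of the (312 x 16) spectrogram

def encode_targets(objects):
    """objects: list of dicts with keys
         'cls'      : 'radar' or 'interference'
         'x', 'w'   : box center / width along frequency, normalized to [0, 1]
         'y', 'h'   : box center / height along time, normalized to [0, 1]
    Returns the (K_C, 7) target array laid out as (pR, pI, x, y, w, h, c)."""
    target = np.zeros((K_C, 7), dtype=np.float32)
    for obj in objects:
        i = min(int(obj['y'] * K_C), K_C - 1)   # cell that contains the object's center
        if target[i, 6] == 1.0 and target[i, 1] == 1.0:
            continue                             # cell already holds interference: keep it
        is_radar = (obj['cls'] == 'radar')
        target[i] = (1.0 if is_radar else 0.0,   # pR
                     0.0 if is_radar else 1.0,   # pI
                     obj['x'], obj['y'], obj['w'], obj['h'],
                     1.0)                        # an object is present in this cell
    return target
```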
_Training procedure_: Based on the above description, we must train the CNN in RadYOLO so that it can predict \((p_{i}^{R},p_{i}^{I},x_{i},y_{i},w_{i},h_{i},c_{i})\) for each of the \(K_{C}\) grid cells. Thus, for each input spectrogram, the output of the CNN is of size \(K_{C}\times 7=32\times 7\). During training, we minimize the following loss function using the Adam optimizer [23].
\[\begin{split}\mathcal{L}_{Y}=&\lambda_{coord}\sum_{b \in\mathcal{B}}\sum_{i=1}^{K_{C}}\mathbf{1}_{b,i}^{obj}\big{[}(x_{b,i}-\hat{x }_{b,i})^{2}+(y_{b,i}-\hat{y}_{b,i})^{2}\big{]}\\ &+\lambda_{coord}\sum_{b\in\mathcal{B}}\sum_{i=1}^{K_{C}} \mathbf{1}_{b,i}^{obj}\bigg{[}\bigg{(}\sqrt{w_{b,i}}-\sqrt{\hat{w}_{b,i}} \bigg{)}^{2}\\ &\hskip 113.811024pt+\bigg{(}\sqrt{h_{b,i}}-\sqrt{\hat{h}_{b,i}} \bigg{)}^{2}\bigg{]}\\ &+\lambda_{obj}\sum_{b\in\mathcal{B}}\sum_{i=1}^{K_{C}}\mathbf{1 }_{b,i}^{obj}(c_{b,i}\times\text{IOU}_{b,i}-\hat{c}_{b,i})^{2}\\ &+\lambda_{nobj}\sum_{b\in\mathcal{B}}\sum_{i=1}^{K_{C}}\mathbf{1 }_{b,i}^{nobj}(c_{b,i}\times\text{IOU}_{b,i}-\hat{c}_{b,i})^{2}\\ &+\lambda_{class}\sum_{b\in\mathcal{B}}\sum_{i=1}^{K_{C}}\sum_{j \in\{R,I\}}\mathbf{1}_{b,i}^{obj}\big{(}p_{b,i}^{j}-\hat{p}_{b,i}^{j}\big{)}^{ 2}\end{split} \tag{1}\]
where \(b\) denotes an example belonging to a batch \(\mathcal{B}\), \(\mathbf{1}\) denotes an indicator function, and \(\lambda_{coord}\), \(\lambda_{obj}\), \(\lambda_{nobj}\), and \(\lambda_{class}\) are hyperparameters. IOU\({}_{b,i}\) is the intersection over union (IOU) of the predicted bounding box and ground truth bounding box for grid cell \(i\) of training example \(b\). IOU is defined as the ratio of intersection and union of the predicted bounding box and true bounding box, respectively. IOU represents the quality of localization of an object on the spectrogram.
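For readers who prefer code, a compact PyTorch rendering of (1) may make the bookkeeping clearer (a sketch: the tensor layout, the clamping of widths and heights, and the \(\lambda\) values are our choices, not the paper's tuned hyperparameters):

```python
import torch

def radyolo_loss(pred, target, iou, lam_coord=5.0, lam_obj=1.0,
                 lam_nobj=2.0, lam_class=1.0):
    """pred, target: (batch, K_C, 7) tensors laid out as (pR, pI, x, y, w, h, c);
    iou: (batch, K_C) tensor holding the (possibly boosted) IOU of the predicted
    and ground-truth boxes. Implements the five sums of Eq. (1)."""
    obj = (target[..., 6] > 0).float()            # cells that contain an object
    noobj = 1.0 - obj

    # Localization terms (x, y and square-rooted w, h), object cells only
    loc = ((target[..., 2] - pred[..., 2]) ** 2
           + (target[..., 3] - pred[..., 3]) ** 2
           + (torch.sqrt(target[..., 4].clamp(min=0.0))
              - torch.sqrt(pred[..., 4].clamp(min=0.0))) ** 2
           + (torch.sqrt(target[..., 5].clamp(min=0.0))
              - torch.sqrt(pred[..., 5].clamp(min=0.0))) ** 2)
    loss = lam_coord * (obj * loc).sum()

    # Confidence terms: the ground-truth confidence is scaled by the IOU
    conf_err = (target[..., 6] * iou - pred[..., 6]) ** 2
    loss = loss + lam_obj * (obj * conf_err).sum() + lam_nobj * (noobj * conf_err).sum()

    # Class-probability terms, object cells only
    cls_err = ((target[..., 0] - pred[..., 0]) ** 2
               + (target[..., 1] - pred[..., 1]) ** 2)
    loss = loss + lam_class * (obj * cls_err).sum()
    return loss
```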
An important thing to note in (1) is that the ground truth confidence score, \(c_{b,i}\), is multiplied by IOU\({}_{b,i}\) before it is compared to the predicted confidence \(\hat{c}_{b,i}\). This way, the CNN is trained to output high confidence in predicting the presence of an object only when the IOU of the predicted bounding box is
Fig. 4: Definition of radar (red), interference (magenta) objects in RadYOLO.
Fig. 3: Example spectrograms of five different radar (20 dB SNR) types after spectrogram preprocessing. No interference signal is present in these examples.
high. However, this creates a challenging problem in our object detection formulation. Since, in our formulation, each individual radar pulse is treated as an object, the radar objects are very small with respect to the spectrogram. Consequently, a small localization error for a radar object can result in a very low IOU. If the IOU of the predicted bounding boxes for the radar objects is always low, the network will not be incentivized to predict high confidence, \(\hat{c}_{b,i}\), for the radar objects (see (1)). In such cases, it would be difficult for the trained CNN to differentiate between radar objects and background.
To deal with this challenge, we use three strategies. The first strategy is the spectrogram compression, described in Section III-A, which increases the radar objects' size compared to the spectrogram's size. Second, we penalize localization errors (first two terms in (1)) higher than the other terms such that the IOU of the predicted bounding boxes improves. This is done via choosing \(\lambda_{coord}\) to be higher than other hyperparameters. At the same time, we also use a higher value of \(\lambda_{nobj}\) such that the predicted confidence is further reduced when no object is present. This way, we aim to have high confidence for radar objects and low confidence for background and, in turn, better distinguishability between radar objects and background. However, using a high value of \(\lambda_{coord}\) causes overfitting. I.e., the localization error is low on the training dataset but not on the validation set. Hence, we carefully choose the values of \(\lambda_{coord}\) and \(\lambda_{nobj}\) via cross-validation such that overfitting does not happen during training. Our third strategy is slightly modifying the loss function in (1). Specifically, we modify the definition of IOU as the following:
\[\text{IOU}_{b,i}=\begin{cases}\text{IOU}_{b,i}+0.5\text{ if }h_{b,i}<2\%\text{ of spectrogram height}\\ \text{IOU}_{b,i}\text{ o.w.}\end{cases} \tag{2}\]
From Table II, we can see that the maximum possible radar pulse width is 100 \(\mu\)sec, which is less than 1% of \(T=16\) msec, the duration (height) of the spectrograms. Hence, whenever the true height of an object is less than 2% of the spectrogram height, we provide a boost of 0.5 to the IOU. The value of the boost parameter is chosen to be 0.5 via cross-validation. It is important to note that we must use the second and third strategies simultaneously. If we only use the second strategy while avoiding overfitting, we will not have sufficient confidence in detecting the radar objects. On the other hand, if we only use the third strategy, the network will not learn to perform accurate localization of radar objects. Finally, the interference objects are not affected by the challenge of small objects as they are much larger than the radar objects. Hence, our IOU modification does not affect interference objects as they do not fulfill the criterion in (2).
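The boost in (2) is a one-line adjustment applied to the IOU before it scales the confidence target; a sketch:

```python
def boosted_iou(iou, true_height, spec_height, boost=0.5, small_frac=0.02):
    """Eq. (2): inflate the IOU used in the confidence target whenever the
    ground-truth box is shorter than 2% of the spectrogram height."""
    return iou + boost if true_height < small_frac * spec_height else iou
```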
After training, we extract the following statistical parameters used in the prediction phase.
\(c_{R,O}^{max}\): We pass all training examples through the trained CNN. For a training example \(b\), we note the predicted radar confidence score \(\hat{c}_{b,i}^{R}=\hat{c}_{b,i}\times\hat{p}_{i}^{R}\) for each of the cells, \(i\), where radar objects are present (based on ground truth). Then, for that training example, we compute \(\hat{c}_{b,max}^{R}=\max_{i}\hat{c}_{b,i}^{R}\). Next, we form a set, \(\mathcal{C}_{R,O}^{max}\), that contains \(\hat{c}_{b,max}^{R}\), for all the training examples where radar was present. Finally, we compute \(c_{R,O}^{max}\) as the \(10^{th}\) percentile of the set of values in \(\mathcal{C}_{R,O}^{max}\). Essentially, \(c_{R,O}^{max}\) indicates the confidence of the trained model in detecting radar objects.
\(c_{R,O}^{min}\): For computing \(c_{R,O}^{min}\), we use the same procedure as \(c_{R,O}^{max}\), but for each training example we compute \(\hat{c}_{b,min}^{R}=\min_{i}\hat{c}_{b,i}^{R}\), instead of \(\hat{c}_{b,max}^{R}=\max_{i}\hat{c}_{b,i}^{R}\).
\(c_{I,O}^{max}\): We compute this using the same procedure as \(c_{R,O}^{max}\), but only for interference objects. \(c_{I,O}^{max}\) indicates the confidence of the trained model in detecting interference objects.
\(c_{I,O}^{min}\): We compute this using the same procedure as \(c_{R,O}^{min}\), but only consider interference objects.
\(c_{B,NO}^{max}\): For a training example \(b\), we note the predicted confidence in background \(\hat{c}_{b,i}^{B}=\hat{c}_{b,i}\times[1-(\hat{p}_{i}^{R}+\hat{p}_{i}^{I})]\) for each of the cells, \(i\), where no object is present. Then, for that training example, we compute \(\hat{c}_{b,max}^{B}=\max_{i}\hat{c}_{b,i}^{B}\). Next, we form a set, \(\mathcal{C}_{B,NO}^{max}\), that contains \(\hat{c}_{b,max}^{B}\), for all the training examples where at least one grid cell is present with no object. Finally, we compute \(c_{B,NO}^{max}\) as the \(95^{th}\) percentile of the set of values in \(\mathcal{C}_{B,NO}^{max}\). \(c_{B,NO}^{max}\) indicates the false object detection confidence of the trained model when no object is present.
_Prediction procedure_: The predictions in RadYOLO are made as shown in Fig. 5, which also shows the architecture of our CNN in RadYOLO. We select this architecture based on experimentation. Importantly, the compression method in Section III-A simplifies the CNN architecture by reducing the size of the input spectrogram. The CNN takes \(\mathbf{X}\) as the input and produces as output \(\mathbf{P}\in\mathcal{R}^{K_{C}\times 7}\) which consists of \((\hat{p}_{i}^{R},\hat{p}_{i}^{I},\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i },\hat{c}_{i})\) for each of the \(K_{C}=32\) cells. First, we multiply \(\hat{c}_{i}\) with \(\hat{p}_{i}^{R}\) and \(\hat{p}_{i}^{I}\) to get the class-specific confidence scores, \(\hat{c}_{i}^{R}=\hat{p}_{i}^{R}\times\hat{c}_{i}\) and \(\hat{c}_{i}^{I}=\hat{p}_{i}^{I}\times\hat{c}_{i}\), for each of the grid cells. As mentioned before, \(\hat{p}_{i}^{R}\) represents \(\text{Pr}[\text{radar}|\text{object is present}]\) for grid cell \(i\), and \(\hat{c}_{i}\) represents the confidence that an object is present in grid cell \(i\). Hence, \(\hat{c}_{i}^{R}\) represents the probability of a radar object's presence in grid cell \(i\). Similarly, \(\hat{c}_{i}^{I}\) represents the probability of an interference object's presence in grid cell \(i\). Next, we find the maximum of \(\hat{c}_{i}^{R}\) and \(\hat{c}_{i}^{I}\) and compare it with a threshold, \(t_{o}\). If \(\max\{\hat{c}_{i}^{R},\hat{c}_{i}^{I}\}\geq t_{o}\), we predict the presence of an object in grid cell \(i\). If the presence of an object is predicted, we associate that object with the radar class if \(\hat{c}_{i}^{R}>\hat{c}_{i}^{I}\); otherwise, we associate the object with the interference class. If an object is detected for grid cell \(i\), we estimate its location using \(\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i}\). For a test spectrogram, \(b\), if a radar object is detected in any grid cell, we decide the presence of radar
Fig. 5: CNN used in RadYOLO, along with the prediction procedure (shown for grid cell 1, but used for all \(K_{C}\) grid cells.)
signal in example \(b\). In such cases, we estimate the radar signal parameters using all the grid cells where radar objects have been detected. Let us denote those grid cells as the set \(\mathcal{K}_{C}^{R}\). We estimate the number of radar pulses as \(|\mathcal{K}_{C}^{R}|\), the center frequency as the mean of \(\hat{x}_{i};i\in\mathcal{K}_{C}^{R}\), the bandwidth of the radar signal as \(\cup_{i}\hat{w}_{i};i\in\mathcal{K}_{C}^{R}\), the pulse width as the minimum of \(\hat{h}_{i};i\in\mathcal{K}_{C}^{R}\), the pulse interval as the minimum difference between any pair of \(\hat{y}_{i};i\in\mathcal{K}_{C}^{R}\). Similarly, if an interference object is detected for any of the cells in test example \(b\), we decide the presence of interference. The interference signal parameters can be estimated using the same procedure as described above for radar signals. For interference signals, we care about center frequency (mean of \(\hat{x}_{i}\)), bandwidth (\(\cup_{i}\hat{w}_{i}\)), and ON times (\(\cup_{i}\hat{y}_{i}\) and the associated \(h_{i}\)), where \(i\) runs over the grid cells where an interference object has been detected.
_Selection of threshold, \(t_{o}\)_: In the prediction procedure, the threshold \(t_{o}\) plays a vital role in deciding whether an object is present. We carefully choose \(t_{o}\) to be \(\max\{c_{B,NO}^{max},\min\{c_{R,O}^{max},c_{I,O}^{max}\}\}\) and our choice is justified below. As defined earlier, \(c_{R,O}^{max}\) and \(c_{I,O}^{max}\) are the confidence of the trained model in detecting radar and interference objects, respectively. Thus by choosing \(t_{o}\) to be \(\min\{c_{R,O}^{max},c_{I,O}^{max}\}\), we declare the presence of an object only when the predicted confidence of the trained model is high enough for our target objects. However, we must also ensure that \(t_{o}\) is higher than \(c_{B,NO}^{max}\) (false object detection confidence of the trained model when no object is present) to minimize the number of false detection of objects. Hence, instead of using \(t_{o}=\min\{c_{R,O}^{max},c_{I,O}^{max}\}\), we use \(t_{o}=\max\{c_{B,NO}^{max},\min\{c_{R,O}^{max},c_{I,O}^{max}\}\}\).
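Putting the prediction rules together, a NumPy sketch of the per-window decision and the radar parameter read-out (helper names are ours; the bandwidth is taken here as the extent of the union of the per-pulse boxes):

```python
import numpy as np

def predict_window(P, t_o):
    """P: (K_C, 7) CNN output laid out as (pR, pI, x, y, w, h, c).
    Returns None if no radar object clears the threshold (the window is then
    handed to the Wavelet flow), otherwise a dict of radar parameter estimates."""
    c_R = P[:, 0] * P[:, 6]                      # per-cell radar confidence
    c_I = P[:, 1] * P[:, 6]                      # per-cell interference confidence
    has_obj = np.maximum(c_R, c_I) >= t_o
    radar_cells = np.where(has_obj & (c_R > c_I))[0]
    if radar_cells.size == 0:
        return None
    x, y, w, h = (P[radar_cells, 2], P[radar_cells, 3],
                  P[radar_cells, 4], P[radar_cells, 5])
    return {
        'num_pulses': int(radar_cells.size),
        'center_freq': float(x.mean()),
        'bandwidth': float((x + w / 2).max() - (x - w / 2).min()),
        'pulse_width': float(h.min()),
        'pulse_interval': float(np.diff(np.sort(y)).min()) if y.size > 1 else None,
    }

# Threshold chosen from the training statistics described in the text:
#   t_o = max(c_B_NO_max, min(c_R_O_max, c_I_O_max))
```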
_Capabilities of RadYOLO_: The description of RadYOLO explains that it has several capabilities that we aimed for. Recall from Section I that one of our primary goals is to detect low SNR radar signals. In Section III-A, we chose the number of frequency bins in the spectrograms carefully to improve the detectability of radar signals. However, our experiments suggest that such a measure may not be sufficient for low SNR radar signals. Fig. 6 shows the radar detection capability of RadYOLO for different radar SNR. The details of our experiments are presented later in Section IV. We see from Fig. 6 that as radar SNR reduces below 20 dB, the detectability of radar types 1, 2, and 4 is significantly degraded. To tackle this limitation of RadYOLO, we develop another strategy (the second flow in Fig. 1) described in the following two sections.
### _Wavelet Preprocessing_
_Intuition_: The idea of this block is to mimic the operation of matched filtering for improving radar SNR and, thus, its detectability. However, since the detector has no a priori knowledge of the radar signal parameters, we design the steps in this block to overcome this problem. The input to this block is the same as that of the spectrogram preprocessing block, specifically the complex vector \(\mathbf{s}\) that comprises the I, Q values. The operations in this block are shown in Fig. 7.
Methods based on matched filtering do not account for interference. Hence, we take three measures to reduce the impact of interference in the second flow of RadYOLOLet. Out of the three, one is explained in this section, and the remaining two are explained in the following section.
_Filtering_: The purpose of this step is the following. From Fig. 6, we see that RadYOLO's main limitation is with radar types 1, 2, and 4. Both radar types 1 and 2 have a fixed bandwidth of 1.6 MHz, and most of RadYOLO's misdetections for radar type 4 occur for lower chirp widths. Hence, we use the filtering step to look at smaller sub-bands where the radar signal may reside and possibly improve the radar SNR in these sub-bands compared to the whole monitored band. We select the sub-bands to be overlapping as we do not know the radar center frequency.
Another objective of the filtering step is to contrast radar signals with interference, which is our first measure to deal with interference in the second flow of RadYOLOLet. Recall that the interference signals in our considered system model occupy the whole 10 MHz monitoring band. In contrast, radar types 1, 2, and 4 (the main focus of RadYOLOLet's second flow, based on Fig. 6) occupy smaller bands. Hence, after the filtering step, radar signals will produce dissimilar patterns across the different sub-bands, whereas interference will not.
Now, we present the details of the filtering procedure. First, we perform an \(N\)-point FFT on \(\mathbf{s}\) to get a complex vector \(\mathbf{x}\). Next, we perform rectangular windowing on \(\mathbf{x}\) to get three different complex vectors \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), and \(\mathbf{x}_{3}\). \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), and \(\mathbf{x}_{3}\) are the frequency-domain representations of \(\mathbf{s}\) over -5 to 0 MHz, -2.5 to 2.5 MHz, and 0 to 5 MHz, respectively, assuming that the sensor's monitoring band is -5 to 5 MHz. Then, we perform an IFFT on \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), and \(\mathbf{x}_{3}\) to obtain \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\), and \(\mathbf{s}_{3}\), respectively. Essentially, \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\), and \(\mathbf{s}_{3}\) represent bandpass-filtered time-domain versions of the original signal \(\mathbf{s}\). From the above description, we can see that the complexity of our filtering step increases with the monitoring bandwidth, \(B\). For this reason, using \(B=10\) MHz is convenient for RadYOLOLet, as pointed out in Section II.
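A NumPy sketch of this step is given below (for simplicity we zero out the out-of-band bins and keep the original length; the paper instead retains only the in-band bins, which halves the rate and yields the 80000-sample \(\mathbf{s}_{i}\) used later):

```python
import numpy as np

def subband_filter(s, fs=10e6,
                   bands=((-5e6, 0.0), (-2.5e6, 2.5e6), (0.0, 5e6))):
    """Rectangular frequency-domain windowing of the complex baseband window s
    into three overlapping 5 MHz sub-bands, returned as time-domain signals."""
    x = np.fft.fft(s)
    freqs = np.fft.fftfreq(s.size, d=1.0 / fs)   # bin frequencies in [-fs/2, fs/2)
    filtered = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        filtered.append(np.fft.ifft(x * mask))   # s_1, s_2, s_3
    return filtered
```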
Fig. 6: Radar detection accuracy of RadYOLO for different SNR. No interference is present, and false positive rate is \(\approx\) 0%.
Fig. 7: Operations inside the wavelet preprocessing block.
_Wavelet transform_: Next, for each bandpass-filtered signal, we perform a CWT, which can be thought of as an equivalent of time-lagged correlation (convolution) of the signal and the filter impulse response. Since we do not know the ideal filter impulse response, we need to find a suitable mother Wavelet function that can approximate the filter impulse response. Additionally, since we do not know the radar signal frequency, the transform must be computed for different frequencies, i.e., for different scale parameters in the Wavelet transform. Now, we explain our Wavelet transform procedure for \(\mathbf{s}_{i}\), which is repeated for \(i=1,2,3\).
In CWT, first, we select a mother wavelet function, \(\Psi(t)\). Then, we correlate \(\mathbf{s}_{i}(t)\) with \(\Psi(\frac{t-\delta}{s})\) for different values of \(\delta\) and \(s\), represented by the following equation [24]:
\[\mathbf{W}_{i}^{u}(\delta,s)=\frac{1}{|\sqrt{s}|}\int_{-\infty}^{\infty} \mathbf{s}_{i}(t)\Psi^{*}(\frac{t-\delta}{s})dt \tag{3}\]
where \(\delta\) is the time lag parameter and \(s\) is the scale factor. Lower values of \(s\) correspond to compressed versions of the mother wavelet and extract high-frequency information. Higher values of \(s\) correspond to expanded versions of the mother wavelet and extract low-frequency information. Since \(\mathbf{W}_{i}^{u}\) has two parameters, the output of the CWT can be viewed as a matrix of size \(L\times S\), where the rows correspond to different lag parameters and the columns correspond to different scale parameters. We leverage this structure of \(\mathbf{W}_{i}^{u}\) in the Wavelet-CNN block of RadYOLOLet. We investigate the suitability of various mother Wavelet functions and choose the complex Morlet function, given below [6], based on its similarity with radar pulse shapes.
\[\Psi(t)=\frac{1}{\sigma\sqrt{2\pi}}\exp\Big{[}-\frac{1}{2}\Big{(}\frac{t}{ \sigma}\Big{)}^{2}\Big{]}\exp(j2\pi f_{0}t) \tag{4}\]
Here \(\sigma\) controls the bandwidth of \(\Psi(t)\) and \(f_{0}\) controls its frequency. (4) represents a complex exponential with a Gaussian envelope. In RadYOLOLet we use \(f_{0}=10\) MHz and \(\sigma\) such that the bandwidth of \(\Psi(t)\) is 1.5 MHz. Our choices of \(f_{0}\) and \(\sigma\) are based on the sensor's monitoring bandwidth, \(B\), and radar bandwidth (primarily type 1 and 2).
_Dimensions of \(\mathbf{W}_{i}^{u}\)_: The size of \(\mathbf{W}_{i}^{u}\) is \(L\times S\). Since the size of \(\mathbf{s}_{i}\) is 80000 we use 80000 different lag parameters in CWT. Hence, we have \(L=80000\). For the scale parameter, we use 64 different values that are uniformly spaced in logarithmic scale in the range \([\log_{10}0.5,\log_{10}64]\), which covers the frequencies relevant to our sensor. Thus, we have \(S=64\).
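A NumPy sketch of this transform for one sub-band signal is given below; the finite wavelet support and the mapping from the 1.5 MHz bandwidth to the Gaussian width \(\sigma\) are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def cmor(t, f0=10e6, bw=1.5e6):
    """Complex Morlet mother wavelet of Eq. (4); sigma is derived from the desired
    bandwidth via the Gaussian time-frequency relation (an assumption)."""
    sigma = 1.0 / (2 * np.pi * bw)
    g = np.exp(-0.5 * (t / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return g * np.exp(1j * 2 * np.pi * f0 * t)

def cwt(s_i, fs=10e6, scales=np.logspace(np.log10(0.5), np.log10(64), 64)):
    """W_i^u of Eq. (3): one CWT coefficient per (lag, scale), i.e., an 80000 x 64 matrix."""
    t = np.arange(-4096, 4096) / fs                  # finite support for the wavelet (assumption)
    W = np.empty((len(s_i), len(scales)), dtype=complex)
    for k, s in enumerate(scales):
        psi = cmor(t / s) / np.sqrt(abs(s))          # scaled, normalized wavelet
        # correlation with the wavelet == convolution with its time-reversed conjugate
        W[:, k] = fftconvolve(s_i, np.conj(psi[::-1]), mode="same")
    return W
```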
_Compression of \(\mathbf{W}_{i}^{u}\)_: As mentioned before, the matrices \(\mathbf{W}_{i}^{u}\); \(i=1,2,3\) are fed to a CNN. However, \(\mathbf{W}_{i}^{u}\) having a dimension of \(80000\times 64\) complicates the design of the CNN architecture. Hence, we apply a compression technique on \(\mathbf{W}_{i}^{u}\); \(i=1,2,3\). This compression strategy, denoted as 'Imagewise compression' in Fig. 7, is similar to the compression technique described in Section III-A. Specifically, we reshape \(\mathbf{W}_{i}^{u}\) of size \(80000\times 64\) to \(400\times 200\times 64\). Then, for each of the \(200\times 64\) matrices, we retain the column-wise maximum value, resulting in a compressed version of \(\mathbf{W}_{i}^{u}\). Let us denote the compressed version of \(\mathbf{W}_{i}^{u}\) as \(\mathbf{W}_{i}\in\mathcal{R}^{400\times 64}\). For a particular scale, collapsing 200 consecutive values along the time lag dimension in \(\mathbf{W}_{i}^{u}\) can be justified in a similar manner as done in the context of \(\mathbf{X}_{u}\). 200 consecutive values along the time lag dimension correspond to \(200\times 0.1\mu\)sec (inter-sample duration) \(=20\mu\)sec, which is much smaller than the radar inter-pulse intervals (refer to Table II).
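The compression step then reduces each \(\mathbf{W}_{i}^{u}\) as in the short sketch below; taking the coefficient magnitudes before pooling is an assumption.

```python
import numpy as np

def compress(W_u):
    """Image-wise compression: (80000, 64) -> (400, 64) by keeping, per scale, the maximum
    over blocks of 200 consecutive lags (20 us each)."""
    mag = np.abs(W_u)                        # pool over coefficient magnitudes (assumption)
    return mag.reshape(400, 200, 64).max(axis=1)
```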
### _Wavelet-CNN_
As discussed in the previous section, the Wavelet preprocessing block tries to mimic the operation of matched filtering. Since the filter impulse response is unknown, we perform the computation in (3) for different values of \(s\). However, we still need to make the detection decision after the approximated matched filtering step. Based on the computed Wavelet transforms, we decide whether a radar signal is present or not. For that, we use a CNN as described next.
The input to Wavelet-CNN is the tensor \(\mathbf{W}=[\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{W}_{3}]\), whose dimension is \(3\times 400\times 64\), as shown in Fig. 8. The matrices in \(\mathbf{W}\) are similar to the spectrograms discussed in Section III-A. However, the primary difference is that STFTs can represent high resolution in either time or frequency, whereas the matrices in \(\mathbf{W}\) (the CWTs) represent high-resolution information in both time and frequency domains. Hence we use \(\mathbf{W}\) as the input features to Wavelet-CNN. The neural network acts as a function that performs the following mapping: \(\mathcal{F}:\mathcal{R}^{3\times 400\times 64}\rightarrow\mathcal{R}^{2}\), where the input to \(\mathcal{F}\) is \(\mathbf{W}\) and output is the tuple \((\hat{p}_{W}^{R},1-\hat{p}_{W}^{R})\). Here \(\hat{p}_{W}^{R}\) is the predicted probability of the presence of radar signal. Clearly, \(\mathcal{F}\) acts as a binary classifier.
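An illustrative Keras definition of such a classifier is given below; the layer configuration is an assumption (the architecture we actually use is the one in Fig. 8), and the channel ordering follows Keras' channels-last convention.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_wavelet_cnn():
    """Binary classifier F: R^{400 x 64 x 3} -> R^2; outputs (p_W^R, 1 - p_W^R)."""
    inp = layers.Input(shape=(400, 64, 3))            # W_1, W_2, W_3 stacked as channels
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(2, activation="softmax")(x)
    model = models.Model(inp, out)
    # with one-hot binary labels, categorical cross-entropy matches the binary
    # cross-entropy L_W used for training (defined next)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```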
_Training procedure_: During training, we learn the function \(\mathcal{F}\) by minimizing the binary cross-entropy loss function, \(\mathcal{L}_{W}=-\sum_{b\in\mathcal{B}}\big[p_{W,b}^{R}\log_{2}(\hat{p}_{W,b}^{R})+(1-p_{W,b}^{R})\log_{2}(1-\hat{p}_{W,b}^{R})\big]\), using the Adam optimizer. \(p_{W,b}^{R}\) and \(\hat{p}_{W,b}^{R}\) are the true and predicted probabilities of the presence of radar in training example \(b\). After training, we extract the following parameter for the prediction phase.
\(p_{W}^{R,true}\): We pass all the training examples through the trained CNN. We note the predicted radar probability \(\hat{p}_{W,b}^{R}\) for all the examples where radar is present (based on ground truth) and form the set \(\mathcal{P}_{W}^{R,true}\). Finally, we find the \(g^{th}\) percentile of \(\mathcal{P}_{W}^{R,true}\) and denote it as \(p_{W}^{R,true}\).
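A short sketch of this threshold extraction and its later use at prediction time is given below; it assumes the first softmax output is the radar probability, and it leaves the percentile \(g\) as a parameter since its value is chosen empirically.

```python
import numpy as np

def radar_threshold(model, X_train, y_train, g):
    """p_W^{R,true}: the g-th percentile of predicted radar probabilities over the
    training examples whose ground truth is 'radar present'."""
    p_hat = model.predict(X_train)[:, 0]             # predicted p_W^R (assumed to be output 0)
    return np.percentile(p_hat[y_train == 1], g)

def radar_present(model, W, t_w):
    """Prediction-phase decision of the second flow: declare radar only if p_hat >= t_w."""
    return model.predict(W[None, ...])[0, 0] >= t_w
```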
_Prediction procedure:_ During prediction, we pass the input tensor \(\mathbf{W}_{b}\) through the trained CNN, \(\mathcal{F}\), and note the predicted radar probability, \(\hat{p}_{W,b}^{R}\). Then, we compare \(\hat{p}_{W,b}^{R}\) to a threshold, \(t_{w}\), and declare the presence of radar only if \(\hat{p}_{W,b}^{R}\geq t_{w}\). We select the threshold as \(p_{W}^{R,true}\). Instead of just declaring the presence of radar if \(\hat{p}_{W,b}^{R}\geq 0.5\), we use the thresholding operation for the following reason. Since we do not train Wavelet-CNN to differentiate between radar and interference, the network may predict \(\hat{p}_{W,b}^{R}\) to be greater than 0.5, but not necessarily very high, when only the interference is present. In such cases, we will have false alarms.
Fig. 8: Neural network architecture of Wavelet-CNN.
Hence, we must choose a non-zero but small value of \(g\) in the definition of \(p_{W}^{R,true}\). This is our second measure for tackling interference in RadYOLOLet's second flow. The third measure is the following.
We assume that the sensor is aware of the interference signal's center frequency and introduce a slight frequency offset \(\Delta_{CF}\) at the sensor with respect to the interference signal. The center frequency offset will introduce distortions in the digitally modulated interference signals and appear as noise to the subsequent signal processing steps.
_Parameter estimation:_ Wavelet-CNN cannot estimate signal parameters. We develop a strategy to partially address this limitation. Our strategy is to reuse the neural network output of RadYOLO, \(\mathbf{P}\), but apply a different post-processing technique. Recall from Section III-B that during prediction, RadYOLO first decides whether an object is present or not. Then, it performs object localization (\(\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i}\)) only if an object has been detected. Once we are into the second flow of RadYOLOLet, it is evident that no radar object was detected by RadYOLO. However, if Wavelet-CNN predicts the presence of a radar signal, we can override the object detection decision of RadYOLO. Overriding the object detection decision of RadYOLO implies using an object detection threshold, say \(t_{o}^{w}\), that is different from \(t_{o}\) (refer to Fig. 5). \(t_{o}^{w}\) must be lower than \(t_{o}\); otherwise, no radar object would be detected, as was the case with RadYOLO in the first place. Note that by using a lower object detection threshold, we are not affecting RadYOLOLet's radar false alarm rate as the decision regarding the presence of radar has already been made by Wavelet-CNN. However, a very low value for \(t_{o}^{w}\) may cause many false object detections and adversely affect the radar parameter estimation quality. Based on these factors, we choose \(t_{o}^{w}\) to be \(c_{R,O}^{min}\). Recall from Section III-B that \(c_{R,O}^{min}\) considers the minimum confidence of all the radar objects on a spectrogram, whereas \(c_{R,O}^{max}\) (used in \(t_{o}\)) considers the maximum. Using \(t_{o}^{w}\), we perform the object detection and localization on \(\mathbf{P}\) as shown in Fig. 5, with the only difference that the procedure is applied only to the radar class as Wavelet-CNN only impacts radar detection.
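The interaction between the two flows and the two object-detection thresholds can be summarized by the self-contained sketch below; the inputs (per-object radar confidences from \(\mathbf{P}\), the Wavelet-CNN probability, and the thresholds) are assumed to be computed as described above, and flow 1's detection rule is simplified here to a max-confidence test.

```python
def radyololet_decision(radar_obj_confidences, wavelet_prob, t_o, t_o_w, t_w):
    """Combine RadYOLO (flow 1) and Wavelet-CNN (flow 2); returns the decision and the
    object-detection threshold to use for localization and parameter estimation."""
    if radar_obj_confidences and max(radar_obj_confidences) >= t_o:
        return "radar", t_o        # flow 1: RadYOLO detects a radar object directly
    if wavelet_prob >= t_w:
        return "radar", t_o_w      # flow 2: override object detection with t_o^w = c_{R,O}^min
    return "no radar", None
```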
## IV Evaluations
In this section, first, we describe the datasets and experiments. Next, we present the evaluation metrics and the baseline methods. Finally, we present the evaluation results.
### _Datasets_
**Radar**: For radar signals, we rely on a dataset generated synthetically by NIST [25]. This dataset provides several captures, each of 80 msec, in the form of I, Q values. The captures correspond to a 10 MHz band. Almost half of the captures have no radar signal (receiver noise only), and the remaining ones contain radar. Each capture containing radar has at most one radar signal, chosen randomly from the five radar types listed in Table II. The radar parameters are randomly chosen from the ranges specified in Table II. The SNR of the radar signals is chosen randomly from \([10,12,14,16,18,20]\) dB, and no interference signal is present in the captures. For our evaluations, we use 9000 captures with a 50:50 split between radar and noise.
**Interference**: For evaluating RadYOLOLet in interference, we generate several interference datasets. As discussed in Section II, the interference signals are assumed to be downlink signals from a BS. For the following datasets, INR is defined as the 'average interference-plus-noise to average noise ratio' over a band of 1 MHz around the interference signal's center frequency. This is done so that the SINR values can be easily computed and to make the SINR values meaningful. (Recall from Section I that radar SNR values are also defined similarly).
_QPSK ON dataset_: Using MATLAB, we generate 2000 captures of QPSK signals with a bandwidth of 9.1 MHz and a center frequency offset of \(\Delta_{CF}\) = 0.35 MHz with respect to the sensor's center frequency. Each capture is 80 msec, and the QPSK signal is always ON within one capture. The QPSK signal changes at the symbol rate. The INR is randomly chosen from [2, 4, 6, 8, 10] dB across captures but is kept constant within a capture.
_QPSK ON-OFF dataset_: Similar to QPSK ON dataset but the QPSK signal turns ON for 3 msec and then off for 2 msec. One such ON-OFF pattern is shown in Fig. 4(b).
_LTE FDD dataset_: Using the MATLAB LTE toolbox [26], we generate 2000 LTE downlink captures that occupy 50 resource blocks (9 MHz). This dataset's frequency offset, capture duration, and INR values are similar to those of the other datasets. For this dataset, we use the LTE frequency division duplexing (FDD) mode [22], where BSs and UEs use different frequencies.
_LTE TDD dataset_: This dataset is similar to the above dataset, but we use LTE time division duplexing (TDD) [22], where BSs and UEs use the same band for their transmissions but take turns in multiples of LTE slot duration (1 msec) defined via the uplink/downlink (UL/DL) configurations. For each capture, we randomly choose one of seven possible UL/DL configurations [22].
### _Experiments_
Using the above datasets, we conduct the following experiments.
**Experiment 1**: We create a training dataset of 2500 radar, 2500 AWGN (receiver noise), 2500 interference, and 2500 radar plus interference captures. We randomly select between QPSK ON and QPSK ON-OFF for the interference captures. For the radar plus interference captures, we simply add the I,Q values of radar and interference (randomly chosen between QPSK ON and QPSK ON-OFF) while ensuring that receiver noise is not added twice. For each of the captures, we select a 10 msec long capture from the 80 msec captures in the datasets. The radar and interference start time within the 10 msec capture are randomized. We train different methods (described later in this section), including RadYOLOLet, using these 10,000 examples. For all the deep learning methods, we use the Keras [27] framework. We evaluate the trained models using various metrics (explained later) on the test set. The test set has 4000 examples, almost half radar and the remaining AWGN. Note that the test set has no interference.
**Experiments 2A and 2B**: We use the models trained in experiment 1, but the test sets are different. For both experiments 2A and 2B, the test set contains 4000 examples, with almost half of them radar plus interference and the remaining interference only. The interference signals are QPSK ON and QPSK ON-OFF for experiments 2A and 2B, respectively.
**Experiment 3**: This experiment is similar to experiment 1, but in this case, we use the LTE FDD and TDD interference signals for the training instead of QPSK interference.
**Experiments 4A and 4B**: These experiments are similar to experiments 2A and 2B, but we use the models trained as part of experiment 3. The interference signals are LTE FDD and LTE TDD for experiments 4A and 4B, respectively.
### _Metrics_
We evaluate RadYOLOLet and compare it with other methods using the metrics in Table III.
### _Methods for comparison_
In this section, we describe the methods that we use to compare RadYOLOLet's performance.
**Peak analysis classifier (PAC)**[14]: This method computes the following features and uses them for training an SVM-based binary classifier. The features are mean, variance, maximum of the time intervals between the peaks of the amplitude, and the mean amplitude of the peaks of the captured signal. This method can only distinguish between radar and non-radar signals. Hence, it cannot detect interference signals, and also cannot estimate the signal parameters.
**DeepRadar**[3]: This method treats all the radar pulses on a spectrogram as a single object and applies YOLO for detecting and localizing those objects. As a result, this method can estimate radar center frequency and bandwidth but cannot estimate the temporal parameters. For a fair and meaningful comparison with RadYOLOLet, we make some modifications in DeepRadar. First, DeepRadar considers 100 MHz monitoring bandwidth and uses multiple grid cells in YOLO along the frequency axis of the spectrograms. However, in this paper, we consider the monitoring bandwidth to be 10 MHz. Accordingly, we use only one grid cell in DeepRadar's YOLO along the frequency axis of the spectrograms. Second, we apply the preprocessing proposed in this paper also to the spectrograms fed to DeepRadar.
**RadYOLO**: This is simply our proposed scheme, but without using the second flow of Fig. 1.
### _Results_
#### IV-E1 Radar vs. non-radar classification in AWGN
Using experiments 1 and 3, we compare the classification (radar versus AWGN) accuracy of different methods in Fig. 9. The results are combined for SNR range \([10-20]\) dB and all radar types. We make the following observations.
First, differences in training data lead to different models for the same method. Thus, the performances of PAC and RadYOLO are different for experiments 1 and 3, which have similar test data but different training data. The trained models for experiment 3 perform better than those for experiment 1 because the interference patterns in experiment 3's training data are less obscuring. Specifically, the LTE TDD interference has a lower average ON time (6 msec) due to the different possible UL/DL configurations than the average ON time of QPSK ON-OFF signals (9 msec). RadYOLO is sensitive to interference as it treats interference as a different class, unlike Wavelet-CNN and DeepRadar. The reason for PAC's sensitivity to interference is different. PAC relies on the amplitude peak statistics, which are affected by interference. This figure also shows the robustness of our overall scheme RadYOLOLet as it is unaffected by the differences in the training data. It appears that DeepRadar is also unaffected by the differences in the training data, but subsequent results will show that this is not the case in all experiments.
Fig. 9: Binary classification between radar and non-radar signals, which for these plots is AWGN, for different methods.
Second, for all the metrics considered in Fig. 9, PAC does not perform as well as the other methods, justifying the need for deep learning-based classification in our problem. The handcrafted feature extraction also limits PAC's pattern recognition capability.
Third, DeepRadar performs better than RadYOLO in Fig. 9(a). This is because our object detection formulation, described in Section III-B, is more complex than that of DeepRadar. We use a complex framework to estimate more signal parameters than DeepRadar. However, the added complexity hurts RadYOLO's radar detection performance to some extent.
Fourth, although DeepRadar performs better than RadYOLO in experiment 1, our overall method, RadYOLOLet, outperforms all other methods due to the combined use of RadYOLO and Wavelet-CNN. Thus, by combining the benefits of the two flows, RadYOLOLet can achieve both superior classification accuracy and diverse parameter estimation capability.
Fifth, RadYOLO and RadYOLOLet have radar false positive rate below 1% due to careful selection of thresholds, \(t_{o}\) and \(p_{W}^{R,true}\), described in Section III-B and III-D. DeepRadar has low \(p_{f}^{R}\) as it deals with larger objects, unlike RadYOLO, and is less prone to false object detections.
#### IV-E2 Radar detection accuracy versus SNR
Fig. 9 showed the radar true positive rate, \(p_{d}^{R}\), combined across all SNR values and all radar types. To gain insights about performance across different SNRs, in Fig. 10, we show different methods' \(p_{d}^{R}\) for individual radar types and different values of SNR using the results of experiment 1. Fig. 10 does not show the results for RadYOLO as it was presented before in Section III-B. Hence in Fig. 10(d), we show RadYOLOLet's improvement in \(p_{d}^{R}\) over that of RadYOLO. Our primary observations are the following.
First, for all the methods detecting radar type 1 for low SNR becomes difficult. This can be explained using Table II, which shows that the pulse width of radar type 1 is much smaller than others. A lower value of radar pulse width implies that the sensor must detect the radar using less signal energy. We also see that the performance trends corresponding to different radar types are similar for all the methods. This implies some radar types are inherently more challenging to detect than others. For example, different radar types have different bandwidth. Hence, even for the same per MHz SNR (the way we defined SNR as per CBRS rules), the total SNR can be different for different radar types.
Second, although the \(p_{d}^{R}\) improvement in RadYOLOLet over DeepRadar is 6% in Fig. 9, we see a more important difference from Fig 10(b), (c). RadYOLOLet can achieve 100% \(p_{d}^{R}\) for all radar types up to 16 dB SNR. DeepRadar can achieve high \(p_{d}^{R}\) for all radar types up to 14 dB, but it cannot guarantee 100% \(p_{d}^{R}\) while ensuring a false positive rate below 1%. RadYOLOLet's superior performance in low SNR is due to the robustness of Wavelet-CNN and its preprocessing.
Third, Fig. 10(d) shows that the main \(p_{d}^{R}\) improvement of RadYOLOLet over RadYOLO is in the low SNR regime, especially for radar types 1, 2, and 4, which is precisely what we aimed for in the second flow of RadYOLOLet.
#### IV-E3 Radar vs. non-radar classification in interference
In Fig. 11, we compare the radar versus non-radar classification accuracy for different methods in the presence of different types of interference. The results are combined for SNR range [10-20] dB and INR range [2-10] dB, i.e., SINR range [0-18] dB. Results in Fig. 11(a), 11(b) are obtained using the model trained in experiment 1. Results in Fig. 11(c), 11(d) are obtained using the model trained in experiment 3.
First, we see that for all four experiments, RadYOLOLet has the best performance in terms of \(p_{c}^{R}\) and \(p_{d}^{R}\). This shows that our proposed method can detect radar accurately, even in interference. Importantly, Fig. 11 also shows that RadYOLOLet achieves this high radar detection accuracy while ensuring that the radar false positive rate, \(p_{f}^{R}\), is less than or equal to 1%. This demonstrates that the measures we took for dealing with interference in Sections III-C and III-D are effective.
Second, comparing Fig. 11(a), 11(b), we see that, for all the methods, the classification accuracy, \(p_{c}^{R}\), is higher with QPSK ON-OFF interference than QPSK ON interference. The reason is that QPSK ON interference is always ON, making it more difficult to detect the radar signals. On the other hand, QPSK ON-OFF interference turns ON intermittently. A similar observation can be made by comparing Fig. 11(c), 11(d). Fig. 11(c) corresponds to LTE FDD interference, which is always ON, and Fig. 11(d) corresponds to LTE TDD, which is intermittently ON. However, the performance gap between Fig. 11(c), 11(d) is less than that of Fig. 11(a), 11(b). The explanation for the following observation also answers this.
Third, by comparing Fig. 11(a), 11(b) with Fig. 11(c), 11(d) we observe that RadYOLO and DeepRadar perform better with LTE interference than QPSK interference. The reason for that is the following. When RadYOLO and DeepRadar are trained with QPSK interference (refer to experiment 1), their object detection models have some overfitting because of interference. I.e., the models produce high confidence for the detected radar objects based on training data (the confidences influence the object detection thresholds) but the confidence for test radar objects is lower. This primarily affects the test data in experiment 2A, where the test signals have QPSK ON interference. Also, this problem is more prominent for
Fig. 10: Comparison of different methods in terms of radar detection accuracy, \(p_{d}^{R}\), for fixed AWGN power but varying radar signal power.
RadYOLO because of the small radar pulse objects. On the other hand, for the RadYOLO and DeepRadar models trained with LTE interference, this issue is less severe because the interference patterns in the training data are less obscuring, as discussed in Section IV-E1. However, Fig. 11 also reveals that our overall scheme RadYOLOLet is less affected by the above issue because of the use of Wavelet-CNN to aid RadYOLO. Wavelet-CNN does not use an object detection and is less interference-sensitive. Hence, RadYOLOLet has consistent performance across all four cases in Fig. 11.
#### IV-E4 Interference vs. non-interference classification
In Fig. 11, we argued that the undetected radar signals in RadYOLO are missed detections, not misclassifications. This is demonstrated in Fig. 12(a) and 12(b), showing that the interference false positive rate is always 0%. Note that the results in Fig. 12 are only for RadYOLO as the other methods cannot classify interference signals. In experiments 1 and 3, we do not have any interference signal in the test set. Hence, for these two experiments, we use 'NA' for the interference true positive rate, \(p_{d}^{I}\). Fig. 12(a) and 12(b) also show that the \(p_{d}^{I}\) with QPSK ON and LTE FDD interference is very high. However, that is not the case for QPSK ON-OFF and LTE TDD. The reason is that the interference objects corresponding to QPSK ON and LTE FDD signals are much bigger than those of QPSK ON-OFF and LTE TDD signals. Hence, they are easier to detect with high confidence. Due to similar reasoning, \(p_{d}^{I}\) for QPSK ON-OFF is higher than that of LTE TDD.
Fig. 12(c) shows the interference detection rate, \(p_{d}^{I}\), for different values of INR and different experiments. This figure shows that the interference signals are missed more often in the low INR region. This is expected because, at low INR, the interference objects are less detectable. Importantly, the missed interference signals are not misclassified as radar, as demonstrated by the \(\leq 1\%\) radar false positive rate in Fig. 11.
#### IV-E5 Radar detection performance at different SINR
Next, we analyze \(p_{d}^{R}\) for experiments 2A, 2B, 4A, and 4B in Fig. 13, 14, 15, and 16, respectively, for different values of SINR and the
Fig. 14: Radar detection probability, \(p_{d}^{R}\), for DeepRadar (top), RadYOLO (middle), and RadYOLOLet (bottom) for experiment 2B.
Fig. 12: Classification between interference and non-interference using RadYOLO. The radar SNR and interference INR ranges are the same as in Fig. 11. Fig. 12(c) shows the interference detection rate of RadYOLO for different INR in different experiments.
Fig. 13: Radar detection probability, \(p_{d}^{R}\), for DeepRadar (top), RadYOLO (middle), and RadYOLOLet (bottom) for experiment 2A.
Fig. 11: Classification between radar and non-radar for different methods. Both radar and non-radar signals have interference on top of the noise floor. Results are combined for SNR range [10-20] dB and INR range [2-10] dB, i.e., SINR range [0-18] dB.
different radar types. Since we vary both the radar SNR and the interference INR, the results are shown as images where the pixel values are \(p_{d}^{R}\), the INR increases along the x-axis, and the SNR increases along the y-axis. The primary observation is that irrespective of the interference type, RadYOLOLet can tolerate up to 4 dB INR and still achieve 100% radar detection accuracy when the radar SNR is fixed at 20 dB. On the other hand, if we fix the INR to be 2 dB, then RadYOLOLet can achieve 100% \(p_{d}^{R}\) for SNR up to 18 dB. The above two observations suggest that RadYOLOLet can accurately function up to 16 dB radar SINR. This is not achievable by the other two methods. These figures also show that the second flow of RadYOLOLet assists RadYOLO not only in AWGN but also in different types of interference.
#### IV-E6 Signal parameter estimation
In Fig. 17, we show the parameter estimation of different methods. Fig. 17(a) is an example estimation of RadYOLO that will help us explain some observations. In Fig. 17, along with RadYOLO and RadYOLOLet, we evaluate DeepRadar for bandwidth estimation but not for the other parameters as DeepRadar cannot estimate them.
First, we see from Fig. 17(b), 17(c) that both in terms of missed and excess bandwidth, DeepRadar performs better than RadYOLO and RadYOLOLet. The difference is marginal in terms of \(b_{M}^{R}\), and all the methods have a low missed bandwidth error. However, \(b_{E}^{R}\) for RadYOLO and RadYOLOLet is at least 1 MHz higher than that of DeepRadar. This happens because RadYOLO attempts to detect each radar pulse individually and decides the signal bandwidth to be the union of the bandwidth of the individual pulses. As we can see from Fig. 17(a), the bandwidth estimation of individual pulses can be different, leading to a wider estimated bandwidth than the actual. Since RadYOLOLet relies on RadYOLO's bandwidth estimation, it also has the same problem. DeepRadar does not have this problem because it treats all the pulses as a single object. However, this also is a limitation of DeepRadar as it cannot estimate the temporal parameters.
Second, Fig. 17(d) shows that both RadYOLO and RadYOLOLet detect 40-75% of the radar pulses. This shows the difficulty of detecting small radar pulse objects. Even after prioritizing the small objects in the loss function in (1), RadYOLO cannot detect all pulses. However, this is not a major problem as RadYOLOLet does not require detecting all the radar pulses for estimating pulse width and interval as explained in Section III-B. Fig. 17(d) also shows that RadYOLO and RadYOLOLet detect more pulses for the model trained in experiment 3. The reason is the difference in training data in experiments 1 and 3, as explained in the context of Fig. 11. However, a higher \(n_{p}^{R}\) also increases the possibility of excess bandwidth estimation. Thus, \(b_{E}^{R}\) for RadYOLO and RadYOLOLet is higher in experiments 4A, 4B as shown in Fig. 17(c).
Third, Fig. 17(e) shows that the pulse width estimation error of RadYOLO is high with respect to the true average radar pulse width. However, Fig. 17(a) shows that the estimated width of the pulses (height of objects on spectrogram) is well aligned with the actual pulses. This incongruence arises from the fact that the radar pulse width is very low, on average 30 \(\mu\)sec, and the errors are also shown in \(\mu\)sec, which is difficult to interpret visually from Fig. 17(a). The error in pulse width can be attributed to the difficulty in estimating the extremely small radar pulse width.
Fourth, we see from Fig. 17(f) that the pulse interval errors are lower when the test set contains no interference signal. This happens because in the presence of interference, RadYOLO misses a higher number of pulses, which is reflected by Fig. 17(d). This, in turn, affects the pulse interval estimation, which is the minimum gap between any pair of detected pulses.
Fifth, for almost all the plots in Fig. 17, RadYOLOLet's performance is comparable to that of RadYOLO. While this is expected as RadYOLOLet relies on RadYOLO's estimations, it is important to note that RadYOLOLet's results are based on more test examples than RadYOLO. This is demonstrated via Fig. 17(g), which shows the percentages of test data on which the methods perform successful radar detection and parameter estimation. RadYOLO's detection and estimation percentages are always the same as they are done jointly. RadYOLOLet's detection percentage is strictly better than that of RadYOLO, as discussed in the context of Fig. 9 and 11. RadYOLOLet's estimation percentage is better than that of RadYOLO due to Wavelet-CNN's parameter estimation approach presented in Section III-D. However, as the detections and estimations are decoupled in Wavelet-CNN, its estimation percentage is always lower than its detection percentage.
Fig. 16: Radar detection probability, \(p_{d}^{R}\), for DeepRadar (top), RadYOLO (middle), and RadYOLOLet (bottom) for experiment 4B.
Fig. 15: Radar detection probability, \(p_{d}^{R}\), for DeepRadar (top), RadYOLO (middle), and RadYOLOLet (bottom) for experiment 4A.
Fig. 17(h) shows the parameter estimation for interference signals. Recall that Wavelet-CNN cannot improve interference detection and estimation performance compared to RadYOLO. Hence, the results in Fig. 17(h) are for RadYOLO. We see that the ON time estimation errors are very small for experiments 2A, 4A. This is because the interference objects in these experiments have fixed sizes, i.e., lesser uncertainty. On the other hand, the estimation errors are higher for experiments 2B and 4B because the interference objects in these experiments can have unknown locations and sizes. The missed ON time is higher in experiment 2A because the interference objects for QPSK ON-OFF interference are larger than the interference objects for LTE FDD.
### _Radar signal parameter estimation for different types_
Fig. 17 shows the results for all radar types combined. To provide more insights, we show RadYOLO's radar parameter estimation for different radar types in Fig. 18. The results in this figure are for experiment 1. The observations from this figure are the following. First, the missed bandwidth is higher for radar type 4 as its bandwidth variability is higher than that of other radar types. Second, the excess bandwidth is higher for radar types 1 and 2 because they are narrower than the other radar types. Third, the pulse width estimation error is lower for radar type 5 as its pulse width is higher than the remaining ones. Fourth, the pulse interval estimation error is relatively higher for radar types 2 and 3. This happens because the number of detected pulses affects the pulse interval estimation. We can see from Fig. 18(c) that the fraction of detected pulses is lower for these two radar types.
## V Conclusions and Future Work
We presented RadYOLOLet, a novel deep-learning-based versatile spectrum sensing method for detecting radar signals and estimating their parameters. We developed two different CNNs, RadYOLO and Wavelet-CNN, that are the workhorses of RadYOLOLet. Both the CNNs and their inputs and outputs were carefully designed. We thoroughly evaluate RadYOLOLet using a diverse set of experiments. Our evaluations demonstrate the efficacy of RadYOLOLet both in low SNR and low SINR. Specifically, RadYOLOLet can achieve 100% radar detection accuracy down to 16 dB SNR, as well as down to 16 dB SINR, which cannot be guaranteed by the other comparable methods.
|
2309.07499 | Efficiently Robustify Pre-trained Models | A recent trend in deep learning algorithms has been towards training large
scale models, having high parameter count and trained on big dataset. However,
robustness of such large scale models towards real-world settings is still a
less-explored topic. In this work, we first benchmark the performance of these
models under different perturbations and datasets thereby representing
real-world shifts, and highlight their degrading performance under these
shifts. We then discuss on how complete model fine-tuning based existing
robustification schemes might not be a scalable option given very large scale
networks and can also lead them to forget some of the desired characterstics.
Finally, we propose a simple and cost-effective method to solve this problem,
inspired by knowledge transfer literature. It involves robustifying smaller
models, at a lower computation cost, and then use them as teachers to tune a
fraction of these large scale networks, reducing the overall computational
overhead. We evaluate our proposed method under various vision perturbations
including ImageNet-C,R,S,A datasets and also for transfer learning, zero-shot
evaluation setups on different datasets. Benchmark results show that our method
is able to induce robustness to these large scale models efficiently, requiring
significantly lower time and also preserves the transfer learning, zero-shot
properties of the original model which none of the existing methods are able to
achieve. | Nishant Jain, Harkirat Behl, Yogesh Singh Rawat, Vibhav Vineet | 2023-09-14T08:07:49Z | http://arxiv.org/abs/2309.07499v1 | # Efficiently Robustify Pre-Trained Models
###### Abstract
A recent trend in deep learning algorithms has been towards training large scale models, having high parameter count and trained on big datasets. However, robustness of such large scale models towards real-world settings is still a less-explored topic. In this work, we first benchmark the performance of these models under different perturbations and datasets thereby representing real-world shifts, and highlight their degrading performance under these shifts. We then discuss how complete model fine-tuning based existing robustification schemes might not be a scalable option given very large scale networks and can also lead them to forget some of the desired characteristics. Finally, we propose a simple and cost-effective method to solve this problem, inspired by knowledge transfer literature. It involves robustifying smaller models, at a lower computation cost, and then using them as teachers to tune a fraction of these large scale networks, reducing the overall computational overhead. We evaluate our proposed method under various vision perturbations including ImageNet-C,R,S,A datasets and also for transfer learning, zero-shot evaluation setups on different datasets. Benchmark results show that our method is able to induce robustness to these large scale models efficiently, requiring significantly less time, and also preserves the transfer learning, zero-shot properties of the original model, which none of the existing methods are able to achieve.
## 1 Introduction
Large scale deep neural networks trained on large scale data have revolutionized the modern AI era. They are significantly effective in solving practical problems of high importance. These include object detection, zero-shot classification, image segmentation, image generation, and many other applications [18, 26, 28, 40, 5, 30, 36, 9].
Though the large models have shown impressive results on many vision problems [28, 30], their reliability under distribution shift, e.g., under illumination changes, geographical variations, camera properties etc., is still under-explored. In this paper, we first investigate the behavior of large models under distribution shifts. We analyse popular models under synthetic perturbations to images [11], natural distribution shifts [10, 13], differently styled images [32] and dataset shift [2]. Our analysis of models of various sizes, architecture families (transformers or CNNs) and training modalities (uni or multi-modal) establishes their brittleness under distribution shifts.
This analysis begs the question: can we induce robustness to large vision models without sacrificing their original properties? It is critical to simultaneously maintain _clean_ accuracy on the original datasets, improve _robust_ accuracy on the shifted data and preserve the _transfer learning_ capabilities of the large models. Further, computation efficiency during both training and inference is beneficial.
While several prior works can be used to make large-scale models robust, they do not possess the desired properties discussed above.
Figure 1: ImageNet-C accuracy v/s training time comparison. Our method is on the pareto-front (achieves better robust accuracy in much lesser time) compared to the state-of-the-art methods Augmix based Complete fine-tuning and WISE-complete fine-tuning. The data points labelled with suffix “C” correspond to CLIP models.
One direction involves fine-tuning the model [34, 16]. This generally either suffers from poor performance under synthetic perturbations or requires significant training time. Another line of work could be to use advanced augmentation techniques (e.g., aug-mix, pix-mix) [14, 12, 10, 4]. They are effective under synthetic perturbations and natural shifts in the data. However, they require significantly larger compute time and lead to the large models forgetting their original and transfer learning properties. Figure 1 shows this analysis in a pareto-front plot for two of the recently proposed robustness methods.
To this end, we propose a knowledge transfer method to induce robustness to large models that possesses all the desired properties discussed above. It makes large models robust efficiently (refer Fig.1). We take a _plug-and-play_ approach: insert an additional small robust module and update only a very small portion of the existing large models. To achieve robustness, we explore a new direction: a relatively much smaller but robust model inducing robust knowledge to a large model. Though this provides a novel look at the knowledge distillation approach, a straight-forward application leads to the large models forgetting their original properties. For this challenging task of ensuring that clean accuracy is preserved in the clean module, robustness induced into the robust module and correct module selected at test time, we propose a novel uncertainty-aware knowledge distillation technique. This allows us to fulfil all our required objectives. Since our method involves updating only a small chunk of the large network, it achieves low training latency (refer section 5). To the best of our knowledge, this is the first time such a setup has been used involving knowledge transfer from a smaller to a large model. Further, it should be noted that smaller models can be made robust by using prior works like advance augmentation methods [12, 10, 14].
We evaluate our method under various distribution shifts on ImageNet data [29] in section 5. These include ImageNet-C [11], ImageNet-R [10], ImageNet-A [13], ImageNet-sketch [32], and ImageNet-V2. We also evaluate on ObjectNet [2] and its perturbed variation ObjectNet-C. We show results for both multi-modal (various CLIP models) and unimodal (various architectures including both ResNets and Vision Transformers) settings. Alongside this, we also test our method on other datasets in the transfer learning setup, to analyze further if the desired properties of the model are preserved. In all these cases, our method outperforms prior approaches on robust accuracy while still performing at par on clean accuracy. At the same time, it possesses desired characteristics like transfer learning capabilities (refer section 5) and is efficient during training and inference.
## 2 Related Work
**Large scale models.** In recent years, studies [39, 37, 8] have shown that training large models such as vision transformers [5] on large datasets can improve accuracies significantly. Several works [27, 3] have evaluated these models for robustness lately. Furthermore, these large models can be trained either in a unimodal setup [27] or a multimodal setup [28, 40, 36]. Though they achieve good performance on several downstream tasks, any modification of these large models can lead to forgetting of the knowledge contained in them. In contrast, we propose a method that allows adapting the model parameters without sacrificing their properties.
**Finetuning and Augmentation based methods to achieve robustness.** Several advanced augmentation techniques have been proposed to improve robustness of deep learning based models. Examples include cut-out, cutmix, mixup, augmix, pixmix and others [4, 38, 12, 41]. Further, a recently proposed method, WISE, interpolates the fine-tuned and original model parameters [34] for robustness [41]. Generally, these techniques are computation heavy and also modify the main network parameters, which could lead to these large models forgetting their original properties. In our approach, we use some of these advanced augmentation techniques to make our teacher network robust. We ensure that our robust approach does not sacrifice the large models' properties and is also computationally efficient.
**Knowledge distillation.** It involves transferring knowledge from a large network to a smaller network by minimizing the distance between the predicted logit distributions of the student and teacher networks [15]. It proved to be highly effective on standard downstream datasets [1, 7]. In all these KD applications, knowledge is transferred from a larger network to a smaller network. In contrast, we propose a method to induce robustness to a larger network by transferring knowledge from a smaller (teacher) network.
## 3 Robustness Analysis
In this section, we analyze the image classification performance of models of different shapes and sizes, with different training settings (unimodal or multimodal). We stress test the models under both synthetic and natural perturbations. Especially, contrasting the behaviour of multimodal models (vision-language) _vs._ unimodal (image only).
_Models._ In the unimodal setting, we analyse ResNet-50, ResNet-101 and ResNet-150 [9], and ViT-small, ViT-base and ViT-large models [5] trained on ImageNet [29]. In the multimodal setting, we analyse the CLIP [28] model with backbones including ResNets: CLIP-RN50 and CLIP-RN101, and transformers: CLIP-ViT B/16, CLIP-ViT B/32. We also analyze self-supervised unimodalities trained on large
datasets as against only ImageNet pretrained ones. For this, we use the masked autoencoders [8] and DINO V2 [25] models proposed recently, shown to be highly effective in representation learning. We analyze two architectures, ViT-B/16 and ViT-B/32, for MAE, and ViT-B/14 for DINO V2.
_Datasets._ We evaluate the models on various shifted versions of ImageNet [29]: the ImageNet-Corrupted (ImageNet-C) [11], ImageNet-Rendition (ImageNet-R) [10] and ImageNet-Sketch (ImageNet-S) [32] datasets. For natural shifts, we use ImageNet-Adversarial (ImageNet-A) comprising natural adversarial examples [13] (results in supplementary). Further, we also evaluate the models on ObjectNet [2] and its perturbed version ObjectNet-C, which we generate by applying all the 15 corruptions of ImageNet-C to ObjectNet data.
_Experimental Setup._ In the unimodal case, we evaluate ImageNet trained models from timm [33] on the ImageNet-C, ImageNet-R, ImageNet-S (supplementary), ObjectNet and ObjectNet-C datasets. Further, we evaluate the multimodal models in the linear-probe [28] and zero-shot settings. For the linear probe setup, please refer to the CLIP paper; linear probing is done on the ImageNet dataset, and the resulting models are then evaluated on all of the above datasets.
_Results._ Fig. 3 (left) shows the analysis of all the architectures on ImageNet-C data with varying severity levels, for ImageNet pretrained unimodalities (solid lines) and Zero-Shot CLIP models (dashed lines). They all suffer similarly as severity is increased and, even though they start from different accuracies, converge to similar values at high severity. This implies that under high perturbations they break down equally. However, the robustness of CLIP based models is slightly higher. One possible reason is that they start from lower values of clean accuracy attributed to their zero-shot nature. Fig. 2 shows the analysis of various CLIP model architectures on the ImageNet-R, ObjectNet and ObjectNet-C datasets under both Linear Probe and zero-shot settings along with the unimodal (ImageNet pretrained) counterparts. For the linear probe setting, the models maintain accuracy on the ImageNet-R and ObjectNet datasets whereas they suffer significantly on ObjectNet-C. On the other hand, zero shot models show better accuracy on ImageNet-R (compared to ImageNet), slightly lower on the ObjectNet dataset, and suffer on the ObjectNet-C dataset similar to the linear probe setting. From the results, it can be observed that zero-shot CLIP is more robust on ObjectNet-C than linear probe CLIP based on the relative drop in accuracy. Also, the zero-shot CLIP model outperforms the linear probe one on the ImageNet-R dataset, even though the linear probe was trained on ImageNet. For the unimodal case, all models observe a significant drop in accuracy and perform poorly (much worse than CLIP Linear Probe and Zero-Shot) under all the shifts.
_Self Supervised Unimodalities_. We further analyse another case where the unimodal models are trained in a self supervised fashion on large datasets as against the ImageNet pretrained models. For this, we use the masked autoencoders [8] and DINO V2 [25] models proposed recently, shown to be highly effective in representation learning. Fig. 3 (mid and right) provides their robustness analysis on the ImageNet-C,R datasets alongside the CLIP models. For ImageNet-C, similar to Fig. 3 (left), these models also converge similar to the multi-modal models at the highest severity levels and their relative drop from severity level 1 to 5 is higher. On ImageNet-R, again, multi-modal models perform significantly better than the uni-modal models. These observations are similar to the case seen in Fig.2.
_Empirical Conclusion._ From all the plots, it can be observed that multi-modal networks are much better than their unimodal counterparts on the ImageNet-R, ObjectNet and ObjectNet-C datasets and can be seen as more robust for these settings. However, they also see a significant drop in accuracies under the perturbations in ImageNet-C, especially at higher severity levels. Again, all of the architectures used show similar drops in accuracy. Also, zero-shot multi-modal networks seem to be more robust than their linear probe counterparts in the presence of distribution shifts (comparing the drop in accuracy from ImageNet to other datasets). On the architectural side, transformer models appear to be more robust than the ResNets for both single and multi-modalities, given their higher accuracy.
Figure 2: Analysis of multi-modal linear-probe, multi-modal zero-shot and unimodal networks under various distribution shifts including ImageNet-R, ObjectNet, ObjectNet-C. The x-axis denote the model architecture and y-axis denotes the accuracy.
## 4 Methodology
**Problem Description.** The goal of our work is to make large pre-trained computer vision models robust without sacrificing their original properties. This is critical because we want models to work well both in in-domain and out-of-domain settings. Computational efficiency is also important because pre-training already requires massive amounts of compute, so efficient techniques are more versatile.
A popular technique to robustify a model is to use advanced augmentation techniques like aug-mix [12] or pix-mix [14]. This involves fine-tuning model parameters using the augmented dataset and is very effective at improving robustness. However, such fine-tuning is sub-optimal, as models could forget their original representation properties and at the same time require large computation resources.
**Method Overview:** To this end, we propose a novel Plug-and-Play method. First, alongside the original clean classification head, we plug a robust and a combined head into the model. Second, we induce robustness from a _small robust teacher_ (here the small is relative to the large pretrained model) into the robust head. However, this leaves the challenging task of ensuring that clean accuracy is preserved in the clean head and robustness induced into the robust head. More importantly, we need to ensure that these heads can be correctly selected at test time. Third, we propose a novel uncertainty-aware knowledge distillation technique, which allows us to fulfil all our required objectives. The proposed method is discussed below and also shown in the Fig. 4.
### Augmented Architecture
Let us denote the original model as \(\mathcal{M}\). The network can be seen as made of three components: \(\mathcal{M}_{b}\) the _backbone_ network spanning from initial layer to the \(K^{th}\) layer, \(\mathcal{M}_{s}\) the _shared tunable section_ spanning \((K+1)^{th}\) layer to \((N-1)^{th}\) layer, and \(\mathcal{M}_{h}\) the _prediction head section_ from \(N^{th}\) layer till the end (refer figure 4). Thus the overall network can written as:
\[\mathcal{M}=\mathcal{M}_{h}\circ\mathcal{M}_{s}\circ\mathcal{M}_{b}, \tag{1}\]
where \(\theta\), \(\theta_{b}\), \(\theta_{s}\) and \(\theta_{h}\) denote the respective component parameters.
To address the issue of robustness transfer with preservation of desired characteristics like clean accuracy, transfer learning, etc., we plug-in two more prediction head sections on top of the shared tunable section. This results in a total of three classification heads as shown in figure 4.
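A minimal PyTorch sketch of this augmented architecture is given below; the split points, the feature dimension, the dropout placement, and the use of linear layers for the added heads are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch.nn as nn

class MultiHeadStudent(nn.Module):
    """Augmented student: frozen backbone M_b, tunable shared section M_s, and three heads
    (clean / combined / robust). Layer choices here are placeholders."""
    def __init__(self, backbone, shared, clean_head, feat_dim=768, num_classes=1000, p_drop=0.1):
        super().__init__()
        self.backbone, self.shared = backbone, shared
        self.dropout = nn.Dropout(p_drop)                 # enables MC-dropout at test time
        self.clean_head = clean_head                      # original prediction head M_h
        self.combined_head = nn.Linear(feat_dim, num_classes)
        self.robust_head = nn.Linear(feat_dim, num_classes)
        for p in self.backbone.parameters():              # only M_s and the heads are tuned
            p.requires_grad = False

    def forward(self, x):
        z = self.dropout(self.shared(self.backbone(x)))
        return self.clean_head(z), self.combined_head(z), self.robust_head(z)
```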
### Robustness Distillation from a Small Teacher
At the core of our approach lies use of a knowledge distillation (KD) framework to induce robustness to a large network. In standard KD, knowledge is usually transferred from a large network to a small network. _Au contraire_, we provide a novel view of KD. We show that robustness can be transferred from a small robust model to large models. For the small robust teacher (denoted as \(\mathcal{M}_{t}\)), we take small image-classification models and robustify them using standard techniques (a combination of augmentation techniques AugMix [12] and DeepAugment [10]). A _small_ teacher is essential for an efficient method.
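The small teacher can be robustified with standard augmentation-based fine-tuning, for instance as sketched below with torchvision's AugMix transform (available in torchvision >= 0.13); the teacher backbone, the optimizer settings, and the omission of DeepAugment (whose augmented images would be generated offline and mixed into the training set) are assumptions of this illustration.

```python
import torch
import torchvision
from torch import nn
from torchvision import transforms

# Augmented data D^a: clean images plus AugMix views (DeepAugment images omitted here).
augmix = transforms.Compose([transforms.AugMix(), transforms.ToTensor()])

teacher = torchvision.models.resnet18(num_classes=1000)   # small teacher; choice is illustrative
optimizer = torch.optim.SGD(teacher.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def teacher_step(x_aug, y):
    """One fine-tuning step of the small teacher on an augmented batch."""
    optimizer.zero_grad()
    loss = criterion(teacher(x_aug), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```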
### Uncertainty Aware Knowledge Distillation
While we have introduced a robust head and plan to induce robustness from the small model. It is a challenging task to ensure that clean head preserves clean accuracy, robust head learns robustness and the heads are appropriately selected at test time. We next discuss a novel strategy to achieve these goals.
We update the parameters of the shared-tunable section \(\theta_{s}\) and the prediction sections \(\theta_{h}\), keeping the backbone network frozen as shown in figure 4. We use the same augmented training data as used for robustifying the small (teacher) model.
Figure 3: **Left:** Comparison of accuracy score (y-axis) on ImageNet-C dataset against various severity levels (x-axis) of perturbations, including both Unimodal (solid line) and Multi-Modal CLIP (dashed line) architectures. Unimodal architectures are ImageNet-pretrained and Multi Modal architectures correspond to Zero-shot CLIP. **Mid:** Comparison of Self-Supervised unimodalities and CLIP Zero-Shot models on the ImageNet-C benchmark. **Right:** Comparison of self-supervised unimodalities and CLIP Linear Probe and Zero Shot models on the ImageNet-R dataset.
It is denoted as \(\mathcal{D}^{a}\) and contains both clean \((x^{c},y)\) and augmented \((x^{a},y)\) samples. Given the augmented training data and the robust network \(\mathcal{M}_{t}\), the parameter estimation for the model can be written as:
\[\{\theta_{s},\theta_{h}\}\sim\mathcal{P}(\{\theta_{s},\theta_{h}\}|\theta, \mathcal{M}_{t},\mathcal{D}^{a}). \tag{2}\]
#### 4.3.1 Generalized distillation
We next discuss our strategy to optimize for both knowledge (clean accuracy) and robustness distillation (robust accuracy). Note that the teacher for the clean head is a copy of the initial large model to preserve the original properties. This estimation is done using a weighted combination of classification loss \(\mathcal{L}_{c}(x,y,\theta)\) and distillation loss [15]\(\mathcal{L}_{d}(\theta_{T},\theta_{S},x)\), where \((x,y)\in D^{a}\), \(\theta\) denotes parameters of the prediction network, \(\theta_{T}\) denotes teacher model parameters, \(\theta_{S}\) denotes student model parameters.
The head section (with parameters \(\theta_{h}^{c}\)) corresponding to the _clean_ head is updated only using the clean examples. The _combined_ head (\(\theta_{h}^{m}\)) uses both clean and unclean examples, and the _unclean_ head (\(\theta_{h}^{u}\)) uses only the augmented examples. Thus, for clean examples in a randomly sampled batch of data, the set of updated parameters due to the clean head section prediction is \(\theta_{c}^{c}=\{\theta_{s},\theta_{h}^{c}\}\) and due to the combined head section it is \(\theta_{m}^{c}=\{\theta_{s},\theta_{h}^{m}\}\). Similarly, for augmented examples, the updated parameter set due to the unclean head section prediction is \(\theta_{u}^{u}=\{\theta_{s},\theta_{h}^{u}\}\) and due to the combined head section it is \(\theta_{m}^{u}=\{\theta_{s},\theta_{h}^{m}\}\).
Thus, the final loss function to update w.r.t. clean examples is (denoted as \(\mathcal{L}_{clean}\)):
\[\mathcal{L}_{c}(x,y,\theta_{c}^{c})+\mathcal{L}_{d}(x,\theta_{c}^{c},\theta_ {t})+\mathcal{L}_{c}(x,y,\theta_{m}^{c})+\mathcal{L}_{d}(x,\theta_{m}^{c}, \theta_{t}), \tag{3}\]
and similarly for unclean examples (denoted as \(\mathcal{L}_{aug}\)):
\[\mathcal{L}_{c}(x,y,\theta_{u}^{u})+\mathcal{L}_{d}(x,\theta_{u}^{u},\theta_ {t})+\mathcal{L}_{c}(x,y,\theta_{m}^{u})+\mathcal{L}_{d}(x,\theta_{m}^{u}, \theta_{t}). \tag{4}\]
Finally, the cost function \(\mathcal{L}\) for a given batch of data can be written as:
\[\mathcal{L}=\beta\mathcal{L}_{clean}+(1-\beta)\mathcal{L}_{aug}, \tag{5}\]
where \(\beta=1\) for clean and \(\beta=0\) for unclean examples.
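A sketch of the resulting per-batch objective is shown below in PyTorch; the temperature-based KL form of the distillation loss and the equal weighting of the classification and distillation terms are assumptions, and the appropriate teacher (a frozen copy of the original model for clean batches, the robust small teacher for augmented batches) is passed in by the caller.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Distillation loss L_d as a softened KL divergence; the temperature T is an assumption."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def batch_loss(student, teacher, x, y, is_clean):
    """Eq. (3)-(5): clean batches update the clean and combined heads; augmented batches
    update the robust and combined heads."""
    clean_out, comb_out, robust_out = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    head_out = clean_out if is_clean else robust_out
    return (F.cross_entropy(head_out, y) + kd_loss(head_out, t_logits) +
            F.cross_entropy(comb_out, y) + kd_loss(comb_out, t_logits))
```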
#### 4.3.2 Uncertainty aware head selection
We need a reliable head selection, such that clean head is selected for clean examples (to preserve clean accuracy) and unclean head for shifted examples (robustness).
**Uncertainty.** Modelling uncertainty in the predictions corresponding to each of the heads can be a way to select the final head as the most certain head. The clean head should be the most certain on clean examples and, similarly, the unclean head on augmented examples. For this, we use Dropout [6] as it allows a _Bayesian_ approximation. At training time, we update the tunable portion of \(\mathcal{M}\), starting from encoder \(L^{K+1}\) in Fig. 4, using _dropout_ regularization. This can be done by just setting the dropout rate to some non-zero fraction in the existing implementation. This dropout is a part of all the three heads.
Figure 4: The end-to-end workflow of our proposed method. It involves firstly robustifying a small teacher model using advanced augmentation methods (lower stream). Then using this model along with augmented data, it tunes a small chunk of the large-scale (student) model. As described in sec. 4, we add two more heads to the student model resulting in total three heads. Yellow colored encoder denotes the (only few) tunable layers and blue colored encoders correspond to frozen layers. Finally, at the inference time, the head used for prediction is selected via estimating uncertainty in predictions and analyzing KL divergence between the distributions predicted by each head. For more details, please refer section 4.
At the inference time, we activate the dropout, for each of the heads, to estimate a predictive distribution from the model as against a point estimate [6, 17]. We take \(K\) forward passes through each of these heads and then for each head we calculate mean and std of the outputs and use them as the mean and std of the predicted distribution from the model. This is referred as Monte Carlo Dropout. Finally, we use std directly as the measure of uncertainty (\(\mathcal{U}_{mc}\)).
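The Monte Carlo Dropout estimate can be sketched as follows; the number of passes \(K\) and the use of softmax probabilities are assumptions, and the dropout modules must be kept in train mode during these passes.

```python
import torch

@torch.no_grad()
def mc_dropout(head_forward, x, K=20):
    """K stochastic forward passes with dropout active; returns the predictive mean and
    the per-class standard deviation used as U_mc."""
    probs = torch.stack([torch.softmax(head_forward(x), dim=1) for _ in range(K)])
    return probs.mean(dim=0), probs.std(dim=0)
```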
KL Divergence.Now, there can be a case where the clean model completely breaks down for a noisy input and predicts a random output with very low \(\mathcal{U}_{mc}\) (i.e. it is highly certain). Since the test distribution is unknown, this can happen with significant probability. To handle this, we also calculate, only at inference time, the distance between the predicted distributions of the clean and unclean heads and that of the combined head using the KL divergence. This results in the following objective for selecting the final prediction head \(h_{f}\):
\[h_{f}=\arg\min_{k\in\{c,u\}}\mathcal{U}_{mc}\cdot\text{KL}(\phi_{l}^{m}(x)|| \phi_{l}^{k}(x)), \tag{6}\]
where \(h_{c}\), \(h_{m}\), \(h_{u}\) correspond to the clean, combined and unclean heads respectively, and \(\phi_{l}^{c}\), \(\phi_{l}^{m}\), \(\phi_{l}^{u}\) are the corresponding prediction functions. Thus, the desired head out of clean/unclean is selected using eq. 6. Note that the third (combined) head is here only to select the correct head for the input, out of the clean and unclean heads, via the KL-divergence term in the head-selection metric in eq. 6. In the supplementary material, we provide a detailed ablation on the utility of each of these components as well as a comparison against a naive confidence-based head-selection baseline.
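For concreteness, the following sketch implements the selection rule of eq. 6 on top of the Monte Carlo Dropout outputs above; it is an illustration of the rule rather than the actual implementation.

```python
# Illustrative sketch of the head-selection rule in eq. (6): choose the clean or
# unclean head whose MC-Dropout uncertainty, weighted by the KL divergence from the
# combined head's predictive distribution, is smallest.
import torch

def select_head(mean_c, u_c, mean_u, u_u, mean_m, eps=1e-12):
    # mean_* are (B, C) probability tensors, u_* are (B,) uncertainties.
    kl = lambda p, q: (p * (p.add(eps).log() - q.add(eps).log())).sum(dim=-1)
    score_clean = u_c * kl(mean_m, mean_c)    # U_mc * KL(phi_m || phi_c)
    score_unclean = u_u * kl(mean_m, mean_u)  # U_mc * KL(phi_m || phi_u)
    use_clean = score_clean <= score_unclean  # per-example choice of final head
    return torch.where(use_clean.unsqueeze(-1), mean_c, mean_u)
```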
### Zero-Shot Multi-Modal scenario
The method described above is directly applicable to uni-modal models and to the vision encoders of multi-modal models by attaching a classification head, similar to the Linear Probe setup [28]. We further adapt our scheme to the zero-shot setup for multi-modal networks which comprise both vision and text encoders. For this, we apply our scheme in the presence of the text encoder and use the dot products between the vision-encoder embeddings (\(\phi_{v}(x)\)) and the prompt embeddings obtained from the text encoder (\(\phi_{text}(Y)\), where \(Y\) denotes the set of prompts corresponding to all classes present in the data), \(\phi_{l}(x)=\phi_{v}(x)\cdot\phi_{text}(Y)\), as the model prediction for both the classification and distillation losses.
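A minimal sketch of this zero-shot prediction, assuming hypothetical `vision_encoder`, `text_encoder` and `tokenizer` components, is:

```python
# Illustrative sketch of the zero-shot adaptation: class logits are dot products between
# (L2-normalised) image embeddings and the prompt embeddings of all classes,
# phi_l(x) = phi_v(x) . phi_text(Y). All component names are placeholders.
import torch

@torch.no_grad()
def zero_shot_logits(images, class_prompts, vision_encoder, text_encoder, tokenizer):
    img = torch.nn.functional.normalize(vision_encoder(images), dim=-1)                 # (B, D)
    txt = torch.nn.functional.normalize(text_encoder(tokenizer(class_prompts)), dim=-1)  # (C, D)
    return img @ txt.T                                                                   # (B, C)
```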
## 5 Experiments
We evaluate the presented approach to making large models robust on the benchmark datasets and perturbations described in Section 3 for the image classification task. We report clean accuracy, robust accuracy, and transfer-learning performance on downstream datasets.
Experimental Setup.We demonstrate results of our approach under two settings. The first corresponds to using the visual encoders of uni- or multi-modal models (as in the linear-probe approach of Sec. 3) and attaching a classification head to them. We term this the Visual Evaluation or _VE_ setup. Next, we also provide results (Sec. 5.1) in the multi-modal setting where both text and vision encoders are used in the zero-shot (or _ZS_) setting under dataset shift. Along with clean and robust accuracy, we also compare different methods on whether they preserve the transfer-learning capabilities of the large models. The robust teacher for both settings is a single-modal network trained by fine-tuning the complete model with augmentation-based techniques, using a combination of AugMix and DeepAugment (as described in Sec. 4).
Datasets.We evaluate the presented approach on the ImageNet validation set and its perturbed variations, ImageNet-C [11], ImageNet-R [10], ImageNet-S [32] and ImageNet-A [13], for robustness analysis in the VE setup. Further, we use the ObjectNet dataset [2] and its perturbed variation ObjectNet-C (Corrupted) for the zero-shot evaluation tasks, or ZS setup. For the transfer-learning experiments, we show results on five datasets (Tiny-ImageNet [21], Flowers [24], PLACES205 [22], iNaturalist2021 [31] and SUN397 [35]) in the VE setup. For the ZS setup, we instead show results on dataset shift for five datasets (Cars [19], Flowers [24], CIFAR100 [20], SUN397 [35], ObjectNet [2]), where the zero-shot model is evaluated directly on these datasets. More information about these datasets is provided in the supplementary material.
Baselines and metrics.We now describe the baselines used for the VE and ZS setups. For the VE experiments involving only the visual encoders, we compare against five prior approaches. The first approach involves adapting the same number of parameters as in the presented linear-probe approach (Sec. 3). The second baseline involves naively fine-tuning the full network on the current dataset. The third baseline is the visual prompt tuning approach [16], which adapts input-prompt parameters while fixing the feature-network parameters. We also compare against the recently proposed WISE [34] framework for fine-tuning large networks. Finally, we define a new baseline, Augmentation-based Partial Fine-Tuning or _APT_, to show the effectiveness of the multi-headed scheme. It involves directly updating the small part of the large network (the same number of tunable layers as ours) using the augmentation-based technique.
We further define two more baselines, as variations of our proposed scheme, to further highlight its importance. The first baseline, _Only K.D._, involves doing knowledge distillation directly from the Small Teacher Network (STN) to the Large Learner Network (LLN) student, without using the proposed multi-headed architecture and the copy of the initial LLN as teacher for clean examples. The second baseline, _combined head_, involves using the copy of the initial LLN as teacher for clean examples and the STN for augmented ones, but without the multi-headed architecture, _i.e._ using only the combined head.
For the ZS setting, the set of baselines involves the existing zero-shot multi-modal network, along with the APT baseline (ZS-APT) and a complete fine-tuning baseline (ZS-Full tuning). Both of these baselines are tuned similarly to our method, in the presence of the text encoder, as described in Section 4 (Zero-Shot Multi-Modal scenario).
We use the accuracy metric on the clean and perturbed datasets to evaluate each of the methods. Furthermore, in the supplementary material we also report the robustness metrics proposed for the ImageNet-C dataset.
### Evaluating Multi Modality Trained Methods
Visual Evaluation.Table 1 shows the comparison of our method with the baselines under various distribution shifts for the ImageNet dataset, along with the transfer-learning capabilities and training time per image. It uses the CLIP ViT-L/14@336px model for all the methods. Our method uses ViT-B/16 as the teacher model. It can be observed that even though WISE-E2E performs best for two shift scenarios, it suffers from high training time and poor performance on the transfer-learning tasks (accuracy drops of 1.8 and 1.4), which is a bottleneck of E2E fine-tuning methods. Methods like VPT, which require low training time, instead perform poorly under distribution shift (average accuracy drop greater than 5% when compared with the Zero-Shot model). Our method, on the other hand, performs best on four distribution-shift tasks, boosting accuracy by up to 2% (ImageNet-C), matches the zero-shot model on the transfer-learning experiments, and does so in significantly less time than WISE-E2E (approximately 5 times faster).
We further evaluate our method against the APT baseline we defined, for various model architectures as Teachers and Learners. Figure 5 (top left and bottom left) shows this analysis, with the visual encoders of four CLIP models (RN-50, RN-101, ViT-B/16 and ViT-B/32) as students and single-modal networks as teachers, evaluated on the ImageNet-R and ImageNet-C datasets. The rows with teacher _None_ correspond to the APT baseline. For ImageNet-C, accuracy is improved by more than 3% in most cases, and by at least 2.5% in all cases. This knowledge transfer does not rely much on the architectural family of the student or teacher, as there is only a marginal difference between the improvements offered by ViT and ResNet teachers on CLIP ViT/ResNet students (less than 0.3% difference observed when updating CLIP RN-50 with ResNet-101 or ViT-Small).
For ImageNet-R, our method provides gains of 2.0% for most cases, with a maximum of 2.9%, compared to the APT baseline. For the accuracy of the teacher models, please refer to the supplementary material.
Zero Shot.We now apply our scheme to multi-modal CLIP networks using both the text and visual encoders in a complete zero-shot setup, as discussed in Section 4. Table 2 shows the results for our method and the baselines under this setup, for various distribution shifts of the ImageNet data and also for zero-shot evaluation on new datasets (dataset shift). Here again, ViT-B/16 is used as the teacher for our method. It can be observed that our method shows the best performance under all the distribution shifts, with a minimum gain of 1% (IN-R) and a maximum of 4.5% (IN-C). Also, it is the best-performing zero-shot method for four out of the five dataset shifts, with the maximum improvement being on ObjectNet-C (3.9%). This shows that it improves the zero-shot properties of the model, whereas complete fine-tuning instead degrades performance under dataset shift, as observed in Table 2. Figure 5 (top mid and bottom mid) shows further accuracy analysis under this setting for various student (CLIP) and teacher
\begin{table}
\begin{tabular}{l|c c c|c c c c|c c c c c|c|c} \hline
 & \multirow{2}{*}{IN} & \multirow{2}{*}{IN-V2} & \multirow{2}{*}{IN-R} & \multicolumn{4}{c|}{Distribution Shifts} & \multicolumn{5}{c|}{Transfer Learning} & Avg. & Latency \\
 & & & & IN-Sketch & ObjectNet & IN-A & IN-C & Tiny-IN & Flowers & Places & iNat & SUN & shifts & (ms/img) \\ \hline
CLIP ViT-L/14@336px & & & & & & & & & & & & & & \\
Zero-Shot [28] & 76.2 & 70.1 & 88.9 & 60.2 & 70.0 \(\pm\) 0.2 & 77.2 & 60.6 & **85.2** & **98.8** & **74.87** & **68.20** & **82.20** & 71.7 & - \\
Fine-Tuning (LP) [28] & 85.4 & 75.8 & 84.1 & 57.4 & 66.3 & 75.3 & 57.9 & **85.2** & **98.8** & - & - & - & 69.5 & 320 \\
Fine-Tuning & 86.1 & 76.6 & 79.7 & 57.7 & 63.4 & 65.4 & 52.1 & 83.5 & 97.6 & 72.12 & 65.70 & 78.89 & 65.8 & 2500 \\
VPT [16] & 85.8 & 74.2 & 80.1 & 56.9 & 63.9 & 66.1 & 52.6 & 85.2 & 98.6 & - & - & - & 65.5 & 500 \\
WISE-FT (LC) [34] & 85.1 & 76.6 & 85.2 & 63.0 & 71.0 & 79.5 & 62.1 & 85.1 & 98.4 & 73.98 & 67.45 & 81.12 & 72.9 & 570 \\
WISE-FT (E2E) [34] & **86.9** & **79.5** & **90.1** & 65.0 & 72.1 & 80.6 & 62.9 & 83.4 & 97.4 & 72.94 & 66.28 & 80.23 & 75.0 & 3950 \\ \hline
\end{tabular}
\end{table}
Table 1: **Visual Evaluation results.** Comparison with the baselines under various distribution shifts and transfer-learning benchmarks for the CLIP ViT-L/14@336px model in the VE setup. The second-to-last column shows the average accuracy over the distribution shifts and the last one the training latency per image.
(single-modal) pairs. The evaluation is done on the ObjectNet dataset and its perturbed version ObjectNet-C. Here, the baseline (rows with teacher _None_) is just the zero-shot large-scale network (using the text and visual encoders) and does not use the tuning set. Again, our scheme improves accuracy by a significant amount, at least 2.6% under the zero-shot setting for the dataset shift. Again, it works irrespective of the architecture or modality of the teacher network (e.g. CLIP RN-50 student and RN-101, ViT-S teachers in the figure).
We further analyze the effect of our method and the APT baseline on the transfer learning setup on tiny-ImageNet and Flowers dataset for various teacher-student pairs in the supplementary.
**Inference-time Latency**. Since our method attaches extra heads, although quite small, on top of the existing model, we also analysed the inference-time overhead it adds on top of the naive model. We observed that this overhead is quite small: for instance, for the CLIP ViT-L/14 model used in Table 1, the GFLOPs are 81.17 for the baselines and 83.7 for ours (a 3% overhead).
### Evaluating Unimodally Trained Methods
We now analyze our method in the scenario of a unimodal _Learner_ network, comparing it with all the baselines on different distribution-shift and transfer-learning tasks, as done for the multi-modal VE setup. Table 3 shows this comparison. Here again, our method emerges as the winner for four out of the six distribution shifts, with a minimum gain of 0.8% and a maximum gain of 3.2%. Furthermore, it also preserves the transfer-learning capabilities of the original model, as shown in Table 3 for the Tiny-IN, Flowers, PLACES205, iNaturalist2021 and SUN397 datasets, whereas other baselines (VPT and WISE) suffer.
Figure 5 (last column) shows this comparison for various Learner-Teacher pairs comprising both ViT and ResNet architectures. Again, the rows with Teacher _None_ correspond to the APT baseline. For the majority of cases on ImageNet-C, our method improves accuracy by more than 3% compared to the baseline. Similarly, on the ImageNet-R dataset it shows gains for most cases, with the maximum being 3.2% for the RN-50 teacher and RN-101 student. Furthermore, increasing the size of the teacher model (from RN-34 to RN-50) results in improved performance. Finally, both our method and the baseline we compare to make a significant improvement in performance on the perturbed datasets (especially ImageNet-C) compared to the ImageNet-pretrained models (refer to Section 3 for the performance of these ImageNet-pretrained models).
\begin{table}
\begin{tabular}{l c|c c c c c c|c c c c c|c} \hline & & \multicolumn{6}{c|}{Distribution Shifts} & \multicolumn{6}{c|}{Dataset Shift} & Avg. \\ & IN & IN-V2 & IN-R & IN-S & ON & IN-A & IN-C & Cars & Flowers & CIFAR100 & SUN397 & ON-C & shifts \\ \hline CLIP ViT-L/14\(\oplus\)336px & & & & & & & & & & & & & \\ Zero-Shot & 76.2 & 70.1 & 88.9 & 60.2 & 70.0 & 77.2 & 60.6 & 78.8 & **78.3** & 77.5 & 68.4 & 52.2 & 71.1 \\ ZS-Full Tuning & **86.5** & **77.1** & 88.2 & 58.9 & 65.5 & 78.2 & 53.3 & 76.3 & 76.8 & 77.1 & 67.2 & 51.5 & 70.0 \\ \hline ZS-APT & 84.8 & 76.3 & 89.2 & 60.7 & 68.8 & 77.9 & 62.3 & 77.2 & 77.9 & 77.3 & 67.9 & 54.3 & 71.8 \\ Ours (ViT-B/16 teacher) & 86.3 & 76.8 & **90.2** & **62.9** & **71.6** & **74.6** & **78.9** & **78.9** & 78.1 & **77.6** & **69.3** & **58.2** & **73.6** \\ \hline \end{tabular}
\end{table}
Table 2: **Zero Shot results.** Comparison with existing robustification methods and complete fine-tuning on various distribution and dataset shifts under the zero shot setup using CLIP model. Last column shows the average accuracy including all the dataset and distribution shifts. All these results are a result of zero-shot evaluation of the robustly tuned classifier model using only the ImageNet-1K dataset. No further tuning on any of the other datasets. The numbers reported are average over five runs.
\begin{table}
\begin{tabular}{l c|c c c c c c c|c|c|c} \hline & & \multicolumn{6}{c|}{Distribution Shifts} & \multicolumn{1}{c|}{Transfer Learning} & Avg. & Latency \\ & IN & IN-V2 & IN-R & IN-Sketch & ObjectNet & IN-A & IN-C & Tiny-IN & Flowers & shifts & (ms/img) \\ \hline ViT-L/14 & & & & & & & & & & & & & \\ Standard Training & 82.8 & 75.3 & 49.4 & 40.4 & 45.2 & 51.9 & 51.5 & 84.5 & 98.1 & 52.3 & 2200 \\ VPT [16] & 82.1 & 74.4 & 47.2 & 38.1 & 45.3 & 51.2 & 50.3 & 84.1 & 97.9 & 51 & 420 \\ WISE-FT (LC, optimal \(\alpha\)) [34] & 82.6 & 76.2 & 54.2 & 48.3 & 49.2 & 55.3 & 54.2 & 84.0 & 98.1 & 56.2 & 500 \\ WISE-FT (E2E, optimal \(\alpha\)) [34] & **83.6** & **78.4** & 58.3 & 52.1 & **52.3** & 57.7 & 55.1 & 83.8 & 97.5 & 58.9 & 3400 \\ \hline K.D. from ViT-Small teacher & & & & & & & & & & & & \\ Single-Teacher & 81.9 & 75.9 & 52.7 & 46.3 & 47.4 & 53.2 & 53.3 & 83.8 & 97.2 & 54.8 & 470 \\ Only K.D. & 82.2 & 76.7 & 53.4 & 46.9 & 48.1 & 54.2 & 53.2 & 84.3 & 97.9 & 55.4 & 550 \\ Ours & 82.7 & 78.2 & **59.1** & **52.7** & 51.6 & **58.5** & **58.3** & **84.5** & **98.1** & **59.7** & 700 \\ \hline \end{tabular}
\end{table}
Table 3: **Unimodal results.** Comparison with existing robustification methods and complete fine-tuning on various distribution shift and transfer learning benchmarks for single-modal ViT-L/14 model, pretrained on JFT dataset. Second last column shows average accuracy over the distribution shifts and the last one shows training latency per image. The numbers reported are average over five runs.
### Ablations
Robustification schemes.We now compare the effect of using different robustification schemes for the teacher model used in our method. We limit ourselves to augmentation-based robustification schemes such as AugMix, PixMix and DeepAugment. Table 4 shows the results of this comparison for the single-modal setup, where a ResNet-101 student model uses a ResNet-50 teacher. Using PixMix, or combining it with AugMix, improves the accuracy by around 1-1.5% over our current technique (AugMix+DeepAugment). Our scheme thus shows that gains in the teacher model can be efficiently transferred to the student.
Amount of parameters tuned.We further analyze the effect of tuning different proportions of the LLN using our scheme. It can be observed that tuning more parameters increases the accuracy on the ImageNet-C and ImageNet-R datasets at the cost of clean accuracy on ImageNet.
Please refer to the supplementary material for a more detailed description of the ablations on the model design choices (KL divergence, uncertainty, multiple heads) and other details.
## 6 Conclusion
We first benchmark and discuss existing large pre-trained models under various shifts of the data. Following this, we proposed an effective method to distill robustness into these networks via small robust models, while at the same time preserving the characteristics of the large models. Results in various distribution-shift settings showed that our method is effective and efficient in making large pretrained models robust to several distribution shifts, while also retaining their transfer-learning properties.
**Limitations.** Though we have provided extensive empirical evidence to demonstrate the benefit of our approach, a theoretical underpinning is missing. We leave theoretical analysis as an interesting future work.
|
2308.16865 | Bethe ansatz inside Calogero-Sutherland models | We study the trigonometric quantum spin-Calogero-Sutherland model, and the
Haldane-Shastry spin chain as a special case, using a Bethe-ansatz analysis. We
harness the model's Yangian symmetry to import the standard tools of
integrability for Heisenberg spin chains into the world of integrable
long-range models with spins. From the transfer matrix with a diagonal twist we
construct Heisenberg-style symmetries (Bethe algebra) that refine the usual
hierarchy of commuting Hamiltonians (quantum determinant) of the
spin-Calogero-Sutherland model. We compute the first few of these new conserved
charges explicitly, and diagonalise them by Bethe ansatz inside each
irreducible Yangian representation. This yields a new eigenbasis for the
spin-Calogero-Sutherland model that generalises the Yangian Gelfand-Tsetlin
basis of Takemura and Uglov. The Bethe-ansatz analysis involves non-generic
values of the inhomogeneities. Our review of the inhomogeneous Heisenberg XXX
chain, with special attention to how the Bethe ansatz works in the presence of
fusion, may be of independent interest. | Gwenaël Ferrando, Jules Lamers, Fedor Levkovich-Maslyuk, Didina Serban | 2023-08-31T17:06:26Z | http://arxiv.org/abs/2308.16865v2 | **Bethe ansatz inside Calogero-Sutherland models**
## Abstract
We study the trigonometric quantum spin-Calogero-Sutherland model, and the Haldane-Shastry spin chain as a special case, using a Bethe-ansatz analysis. We harness the model's Yangian symmetry to import the standard tools of integrability for Heisenberg spin chains into the world of integrable long-range models with spins. From the transfer matrix with a diagonal twist we construct Heisenberg-style symmetries (Bethe algebra) that refine the usual hierarchy of commuting Hamiltonians (quantum determinant) of the spin-Calogero-Sutherland model. We compute the first few of these new conserved charges explicitly, and diagonalise them by Bethe ansatz inside each irreducible Yangian representation. This yields a new eigenbasis for the spin-Calogero-Sutherland model that generalises the Yangian Gelfand-Tsetlin basis of Takemura and Uglov. The Bethe-ansatz analysis involves non-generic values of the inhomogeneities. Our review of the inhomogeneous Heisenberg xxx chain, with special attention to how the Bethe ansatz works in the presence of fusion, may be of independent interest.
###### Contents
* 1 Introduction
* 2 How algebraic Bethe ansatz works for inhomogeneous models
* 2.1 Inhomogeneous Heisenberg xx spin chain
* 2.2 Algebraic Bethe ansatz and \(QQ\)-relation
* 2.2.1 Inhomogeneous analogues of translation operator
* 2.2.2 Another family of conserved charges with inhomogeneities
* 2.2.3 Periodic case
* 2.2.4 Extreme twist and Gelfand-Tsetlin basis
* 2.3 Fusion
* 2.3.1 General description
* 2.3.2 Bethe ansatz for fusion into singlet
* 2.3.3 Bethe ansatz for fusion into triplet
* 2.3.4 Repeated fusion
* 3 Fermionic Spin-Calogero-Sutherland model
* 3.1 Dunkl operators and nonsymmetric Jack polynomials
* 3.2 Hamiltonian and monodromy matrix
* 3.3 Effective spin chains
* 4 Bethe-ansatz analysis of the spin-Calogero-Sutherland model
* 4.1 Heisenberg-style symmetries
* 4.2 Internal Bethe ansatz
* 4.3 Limits
* 4.3.1 Free-fermion limit \(\beta\to 0\)
* 4.3.2 Strong-coupling limit \(\beta\to\infty\) and the Haldane-Shastry spin chain
* 4.4 Example: \(N=4\)
* 5 Conclusion
* A Fused \(R\)-matrix
* B On the derivation of the Bethe equations with fusion
* B.1 Derivation from the \(QQ\)-relation
* B.2 Derivation from the algebraic Bethe ansatz
* C Action of the \(B\)-operator at the fixed root
* D Examples of fusion for low length
* D.1 Generic case and fusion for \(L=2\)
* D.2 Fusion into singlet for \(L=4\)
## 1 Introduction
Long-range interacting spin systems naturally appear in a broad range of physical contexts, from experiments with cold atoms to high-energy theory [1, 2]. Yet on the theoretical side they have received much less attention than their nearest-neighbour counterparts.
In this paper we will consider three integrable long-range models which, as we will see, are all interrelated. The first one has actually been studied for nearly half a century, although it is usually not explicitly thought of as a long-range model: the inhomogeneous Heisenberg spin chain. It naturally arises from the viewpoint of the six-vertex model (cf. Baxter's \(Z\)-invariant model [3]) and makes an appearance in the Bethe/gauge correspondence [4], where it corresponds to a certain \(\mathcal{N}=2\) supersymmetric gauge theory (with 'twisted masses' that relate to the inhomogeneities). The inhomogeneous Heisenberg chain is an important example of a quantum-integrable system, with an underlying Yangian structure that provides its commuting charges (from the transfer matrix, or Bethe algebra) as well as a way of diagonalising them (by algebraic Bethe ansatz). Yet, except for some special cases, one usually does not think of it as a bona fide spin chain, and the inhomogeneity parameters are rather seen as a technical tool. For instance, they are crucial for the Izergin-Korepin approach to computing the domain-wall partition functions, Gaudin norms, and Slavnov scalar products of Bethe vectors [5, 6]. Other applications of inhomogeneities are the proofs of completeness of the Bethe ansatz [7, 8, 9] and of some of the Razumov-Stroganov conjectures [10]. In addition, in special semiclassical limits, inhomogenous spin chains give rise to the Gaudin Hamiltonians [11]. For
most other physical applications one eventually takes the homogeneous limit to restore periodicity. One can also consider special repeating values of the inhomogeneities, e.g. staggered (alternating) values [12, 13, 14, 15] or other periodic values [16, 17], which give access to a broader range of conformal field theories in a suitable scaling limit.
The second long-range model will be the main object of our study: the quantum trigonometric Calogero-Sutherland model with particles that have spins, with Hamiltonian [18, 19, 20]
\[\widetilde{H}=-\frac{1}{2}\sum_{i=1}^{N}\partial_{x_{i}}^{2}+\beta\,\sum_{i<j }^{N}\frac{\beta\mp P_{ij}}{4\,\sin^{2}[(x_{i}-x_{j})/2]}\,, \tag{1.1}\]
where the upper (lower) sign corresponds to bosonic (respectively fermionic) statistics, and \(P_{ij}\) is the spin permutation operator. This quantum many-body system is also integrable, has eigenvectors containing Jack polynomials with prescribed (anti)symmetry, and a deep representation-theoretic structure [21, 22] which in particular provides a representation of the Yangian that is very different from the usual one for Heisenberg spin chains. The quantum determinant, i.e. the centre of the Yangian, generates the commuting Hamiltonians of the spin-Calogero-Sutherland model, which means that the spin symmetry is enhanced to Yangian _symmetry_. (In contrast, for the Heisenberg spin chain the Yangian can be used to move between eigenvectors with different energies, as in the algebraic Bethe ansatz.) The Yangian structure was studied in detail by Takemura and Uglov [23, 24, 25]. The spin-Calogero-Sutherland model is superintegrable, see [26], which means that it has (even) more commuting charges than a normal integrable system such as the Heisenberg spin chain or scalar Calogero-Sutherland model. One additional Hamiltonians is Haldane's 'rapidity' operator, see [27]. Further symmetries were constructed by Fowler and Minnahan [27] in a special (strong coupling) case using Polychrankos' exchange-operator formalism [28].
One of our main motivations for studying the spin-Calogero-Sutherland model are its connections to the third long-range model: the Haldane-Shastry spin chain [29, 30]
\[H^{\text{HS}}=\sum_{i<j}^{N}\frac{1+P_{ij}}{4\,\sin^{2}\!\left[\frac{\pi}{N}(i -j)\right]}\,. \tag{1.2}\]
It exhibits fractional (exclusion) statistics [31, 32, 33] and can be viewed as the \(SU(2)_{k=1}\) Wess-Zumino-Witten model on the lattice [34, 35, 36, 37]. The Haldane-Shastry chain arises from (1.1) in a special 'freezing' limit [38, 39, 40] and inherits various special properties along the way. Amongst others, (1.2) has very simple energy eigenvalues and very high degeneracies [32], which are to a large extent due to its Yangian symmetry [34, 40]. Its (Yangian highest-weight) wave functions are given by certain symmetric Jack polynomials, which are indirectly derived from freezing [40]. Higher Hamiltonians follow from freezing too [27, 41]. Although the freezing procedure often starts from the bosonic spin-Calogero-Sutherland model [38, 39, 40], two of us showed [42] that the _fermionic_ case naturally accounts for the form of the Haldane-Shastry wave functions with Yangian highest weight, and used it to prove a claim from [40] about the spin-chain eigenvectors for higher rank. We refer to the introduction of [42] for a more in-depth survey of different connections between the spin-Calogero-Sutherland model and Haldane-Shastry spin chain.
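For concreteness, the Hamiltonian (1.2) can be built numerically for a short chain; the following sketch (illustrative only) constructs it as a dense \(2^{N}\times 2^{N}\) matrix using \(P_{ij}=(1+\vec{\sigma}_{i}\cdot\vec{\sigma}_{j})/2\) and exposes its highly degenerate spectrum.

```python
# Illustrative sketch: dense construction of the Haldane-Shastry Hamiltonian (1.2).
import numpy as np

def site_op(op, i, N):                         # embed a single-site operator at site i
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

def haldane_shastry(N):
    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N):
        for j in range(i + 1, N):
            # spin permutation operator P_ij = (1 + sigma_i . sigma_j) / 2
            P = 0.5 * (np.eye(dim) + sum(site_op(p, i, N) @ site_op(p, j, N)
                                         for p in (sx, sy, sz)))
            H += (np.eye(dim) + P) / (4 * np.sin(np.pi * (i - j) / N) ** 2)
    return H

print(np.round(np.linalg.eigvalsh(haldane_shastry(4)), 6))   # highly degenerate spectrum
```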
**In this paper** we import the standard toolkit of Heisenberg integrability into the world of the spin-Calogero-Sutherland model and Haldane-Shastry chain. We exploit the Yangian symmetry to construct additional commuting charges of the spin-Calogero-Sutherland model. Namely, we will use a transfer matrix to construct symmetries that form (a representation of) a maximal abelian algebra of the Yangian, called the Bethe algebra. Going beyond the
usual spin-Calogero-Sutherland hierarchy coming from the quantum determinant, these extra Heisenberg-type charges are no longer Yangian invariant. This will enable us to simultaneously diagonalise them by algebraic Bethe ansatz and obtain a new eigenbasis for (1.1). By including a (diagonal) twist, our construction generalises the Yangian Gelfand-Tsetlin eigenbasis constructed by Takemura and Uglov [23]. Via the freezing procedure our results extend to the Haldane-Shastry chain; special cases have been found by Fowler and Minahan [27].
In more detail, the eigenspaces of the spin-Calogero-Sutherland model, labelled by partitions with bounded multiplicities, are irreducible representations of the Yangian studied in detail by Takemura and Uglov [23]. We reinterpret each such eigenspace as an 'effective' inhomogeneous Heisenberg spin chain with special values of the inhomogeneities. The Yangian highest-weight vector, which can be described explicitly in the coordinate basis [42], serves as the pseudovacuum of the effective spin chain. We then use the algebraic Bethe ansatz to construct the full spin-Calogero-Sutherland eigenspace. Remarkably, the values of the inhomogeneities that occur force us to consider cases with fusion. Thus we will first review how fusion works and point out various subtleties for the algebraic Bethe ansatz in this situation.
Our new Heisenberg-type symmetries provide a setting for developing separation of variables (SoV) for long-range spin chains that are richer than inhomogeneous Heisenberg spin chains. In addition, while we focus on the simplest case of \(\mathfrak{sl}_{2}\) spins, our approach will extend to higher rank as well.
**Outline.** This paper is organised as follows. In Section 2 we review the algebraic Bethe ansatz framework for Heisenberg xxx spin chains. We discuss the fusion procedure in detail, and pay special attention to nontrivial aspects of the Bethe ansatz in this case. In Section 3 we recall the basics of the spin-Calogero-Sutherland model, its Yangian symmetry and its eigenspaces, reinterpreted as effective Heisenberg spin chains. Section 4 contains our main results: the construction of a refined family of conserved charges for the spin-Calogero-Sutherland model and their diagonalisation by algebraic Bethe ansatz. We analyse our construction in limiting cases, in particular including the Haldane-Shastry spin chain obtained by freezing, and illustrate it in an explicit example. Section 5 contains our conclusions. There are four appendices containing technical details and examples related to fusion for small systems.
## 2 How algebraic Bethe ansatz works for inhomogeneous models
In this section we review the Bethe-ansatz solution for the inhomogeneous xxx spin chain with a spin-\(1/2\) representation at each site. We pay special attention to the subtleties of fusion, which is relevant for the long-range models that we focus on in the rest of the paper.
### Inhomogeneous Heisenberg xxx spin chain
The Hamiltonian of the Heisenberg xxx spin chain for \(L\) spin-\(1/2\) sites is
\[H^{\text{\tiny H}}=\sum_{i=1}^{L}(1-P_{i,i+1})\qquad\text{on}\qquad\mathcal{H} =(\mathbb{C}^{2})^{\otimes L}\,, \tag{2.1}\]
where \(P_{ij}=(1+\vec{\sigma}_{i}\cdot\vec{\sigma}_{j})/2\) is the permutation operator for the spins at sites \(i\) and \(j\). It commutes with all of the global \(\mathfrak{sl}_{2}\), which acts on the spin chain by
\[\begin{split} S^{\pm}=\sum_{i=1}^{L}\sigma_{i}^{\pm}\,,&\qquad S^{z}=\frac{1}{2}\sum_{i=1}^{L}\sigma_{i}^{z}\,,\\ [S^{z},S^{\pm}]=\pm S^{\pm}\,,&\qquad[S^{+},S^{-}]=2S^{z}\,.\end{split} \tag{2.2}\]
We denote the coordinate basis vectors of \(\mathcal{H}\) by
\[|i_{1},\dots,i_{M}\rangle\rangle\equiv\sigma^{-}_{i_{1}}\cdots\sigma^{-}_{i_{M}} \,|\uparrow\cdots\uparrow\rangle\,. \tag{2.3}\]
Bethe's exact characterisation of the spectrum of (2.1) [43] is one of the cornerstones of integrability. It admits an algebraic reformulation in the framework of the quantum inverse-scattering method developed by Faddeev _et al_[44]. One of the many benefits of this framework is that it allows for the construction of _inhomogeneous_ generalisations of (2.1) depending on inhomogeneity parameters \(\theta_{1},\dots,\theta_{L}\) that break translational invariance (homogeneity) without spoiling integrability. Let us briefly review how this goes. We start from the rational \(R\)-matrix [45, 46]
\[\overline{R}(u)=u+\mathrm{i}\,P\,. \tag{2.4}\]
### Algebraic Bethe ansatz and \(QQ\)-relation
The aim of the algebraic Bethe ansatz is to construct the eigenvectors of the transfer matrix using the Yangian generators (2.6). Let us first review the case when the inhomogeneities \(\theta_{i}\) and the twist \(\kappa\) are in generic position. As a starting point we take a reference or '(pseudo)vacuum' state \(\ket{0}\), annihilated by the \(C\)-operator from (2.6), \(C(u)\ket{0}=0\). Later we will encounter more complicated (pseudo)vacua, but for now we have
\[\ket{0}=\ket{\uparrow\uparrow\cdots\uparrow}, \tag{2.10}\]
which is an eigenstate of \(\overline{t}(u;\kappa)\) since the latter preserves the number of \(\downarrow\)s. To build the other states, we act on it with the \(B\)-operator, which serves as a creation operator,
\[\overline{B}(u_{1})\cdots\overline{B}(u_{M})\ket{0}. \tag{2.11}\]
This is an eigenstate of the transfer matrix with eigenvalue
\[\overline{\tau}(u;\kappa)=\overline{Q}_{\theta}^{+}\,\frac{\overline{Q}^{--}}{ \overline{Q}}+\overline{Q}_{\theta}^{-}\,\frac{\overline{Q}^{++}}{\overline{ Q}}=\kappa\,\overline{Q}_{\theta}^{+}(u)\prod_{m=1}^{M}\frac{u-u_{m}-\mathrm{i}}{u-u_{ m}}+\kappa^{-1}\,\overline{Q}_{\theta}^{-}(u)\prod_{m=1}^{M}\frac{u-u_{m}+ \mathrm{i}}{u-u_{m}}, \tag{2.12}\]
provided the parameters \(u_{m}\), known as Bethe roots, satisfy the Bethe-ansatz equations
\[\kappa^{2}\,\prod_{i=1}^{L}\frac{u_{m}-\theta_{i}+\mathrm{i}/2}{u_{m}-\theta_{ i}-\mathrm{i}/2}=\prod_{n(\neq m)}^{M}\frac{u_{m}-u_{n}+\mathrm{i}}{u_{m}-u_{n}- \mathrm{i}}\,,\qquad 1\leqslant m\leqslant M\,. \tag{2.13}\]
Note that these equations ensure that \(\overline{\tau}\) is a polynomial of degree \(L\) in \(u\), in accordance with the definition (2.5) of the monodromy matrix, whose coefficients depend on the Bethe roots \(u_{m}\) as well as the inhomogeneities \(\theta_{i}\) and twist \(\kappa\). Note also that the Bethe vectors only depend on the twist through the Bethe roots. Further observe that the Bethe equations are _symmetric_ in the inhomogeneities. For generic values of the parameters, i.e. \(\theta_{i}\neq\theta_{j}+\mathrm{i}\) for all \((i,j)\) and \(\kappa\notin\{0,\pm 1,\infty\}\), one should take all the solutions \(\{u_{1},\dots,u_{M}\}\) of (2.13) for \(M\in\{0,1,\dots,L\}\) that do not contain any coincident Bethe roots, i.e. \(u_{m}\neq u_{n}\) for all \(m\neq n\). It is known that there are precisely \(\binom{L}{M}\) distinct solutions [7, 8, 9], accounting for all \(M\)-magnon eigenstates of the transfer matrix via the algebraic Bethe ansatz (2.11).
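As an illustration, the Bethe equations (2.13) can be solved numerically for small \(L\) and \(M\); the following sketch (with example values for the inhomogeneities, twist and initial guess) does this with a standard root finder.

```python
# Illustrative numerical sketch: solving the Bethe equations (2.13) for L = 4, M = 2.
import numpy as np
from scipy.optimize import fsolve

def bethe_residual(x, thetas, kappa):
    u = x[: len(x) // 2] + 1j * x[len(x) // 2:]     # complex Bethe roots
    res = []
    for m, um in enumerate(u):
        lhs = kappa**2 * np.prod((um - thetas + 0.5j) / (um - thetas - 0.5j))
        rhs = np.prod([(um - un + 1j) / (um - un - 1j) for n, un in enumerate(u) if n != m])
        res.append(lhs - rhs)
    res = np.array(res)
    return np.concatenate([res.real, res.imag])

thetas = np.array([0.3, -0.1, 0.7, 0.2])            # example inhomogeneities (L = 4)
kappa = 1.3                                         # example diagonal twist
guess = np.array([0.1, 0.5, 0.05, -0.05])           # real and imaginary parts of M = 2 roots
sol = fsolve(bethe_residual, guess, args=(thetas, kappa))
print("Bethe roots:", sol[:2] + 1j * sol[2:])
```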
The Bethe equations admit several reformulations. We will use the standard shorthand
\[f^{\pm}(u)\equiv f(u\pm\mathrm{i}/2)\,,\qquad f^{\pm\pm}\equiv(f^{\pm})^{\pm}\,, \tag{2.14}\]
and define the polynomial
\[\overline{Q}_{\theta}(u)\equiv\prod_{i=1}^{L}(u-\theta_{i})\,, \tag{2.15}\]
so that \(\overline{Q}_{\theta}^{\pm}\) are the eigenvalues of \(\overline{A}\) and \(\overline{D}\) on \(\ket{0}\), respectively. Further introduce Baxter's \(Q\)-_function_ as the 'twisted polynomial' whose zeroes are the Bethe roots
\[\overline{Q}(u)\equiv\kappa^{\mathrm{i}u}\prod_{m=1}^{M}(u-u_{m})\,. \tag{2.16}\]
The Bethe equations (2.13) now take the concise form
\[\frac{\overline{Q}_{\theta}^{+}}{\overline{Q}_{\theta}^{-}}=-\frac{\overline{ Q}^{++}}{\overline{Q}^{--}}\quad\text{at}\quad u=u_{m}\,,\qquad 1\leqslant m \leqslant M\,. \tag{2.17}\]
A convenient alternative to (2.17) is their _Wronskian form_. For spin \(1/2\) this is a functional equation called the _QQ-relation_:
\[\big(\kappa-\kappa^{-1}\big)\,\overline{Q}_{\theta}=\overline{Q}^{-}\,\overline{\widetilde{Q}}^{+}-\overline{Q}^{+}\,\overline{\widetilde{Q}}^{-}\,. \tag{2.18}\]
The left-hand side is the (known) function (2.15), \(\overline{Q}\) is given by (2.16), and \(\overline{\widetilde{Q}}\) is the counterpart of \(\overline{Q}\) 'beyond the equator', with opposite twist and of degree \(L-M\),
\[\overline{\widetilde{Q}}(u)\equiv\kappa^{-\mathrm{i}u}\prod_{n=1}^{L-M}(u-v_{n})\,. \tag{2.19}\]
Demanding that \(\overline{Q}\) and \(\overline{\widetilde{Q}}\) be of the form (2.16) and (2.19) and solving the \(QQ\)-relation (2.18) yields a discrete set of solutions for the Bethe roots. One can show from (2.18) that the roots \(u_{m}\) of \(\overline{Q}\) satisfy the Bethe equations, see Appendix B.1.2 Note that the right-hand side of (2.18) depends on the twist as \(\overline{Q}^{\mp}\,\overline{\widetilde{Q}}^{\pm}\propto\kappa^{\pm 1}\) due to the exponential prefactors of the \(Q\)-functions. (For the periodic case with \(\kappa=1\) see Section 2.2.3.)
Footnote 2: Likewise, the ‘dual Bethe roots’ \(v_{n}\) of \(\overline{\widetilde{Q}}\) satisfy the Bethe equations ‘beyond the equator’, with \(M\rightsquigarrow L-M\), \(\kappa\rightsquigarrow\kappa^{-1}\).
While for generic values of \(\theta_{i},\kappa\) the Bethe equations and _QQ_-relation are equivalent, this is not always the case, cf. [47, 48, 49, 50]. The _QQ_-relation is often more useful as its solutions are in bijection with the transfer-matrix eigenstates. This _completeness_ of the Bethe ansatz was proven for almost all values of the inhomogeneities (including all \(\theta_{i}=0\)) [7, 8, 9]; see Section 2.3 where we study precisely the cases where the proofs fail.
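As a small illustration, for \(L=2\) and \(M=1\) the \(QQ\)-relation (2.18), after cancelling the exponential prefactors \(\kappa^{\pm\mathrm{i}u}\), reduces to a polynomial identity in \(u\) that can be solved symbolically for the Bethe root \(u_{1}\) and the dual root \(v_{1}\) (example inhomogeneities and twist chosen here for concreteness):

```python
# Illustrative symbolic sketch of the QQ-relation (2.18) for L = 2, M = 1.
import sympy as sp

u, u1, v1 = sp.symbols("u u1 v1")
kappa = sp.Rational(3, 2)                              # example twist
th1, th2 = sp.Rational(1, 3), sp.Rational(-1, 4)       # example inhomogeneities

lhs = (kappa - 1 / kappa) * (u - th1) * (u - th2)
rhs = kappa * (u - u1 - sp.I / 2) * (u - v1 + sp.I / 2) \
    - (1 / kappa) * (u - u1 + sp.I / 2) * (u - v1 - sp.I / 2)

# Match coefficients of powers of u and solve for the Bethe root u1 and dual root v1.
eqs = sp.Poly(sp.expand(lhs - rhs), u).all_coeffs()
print(sp.solve(eqs, [u1, v1]))
```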
#### 2.2.1 Inhomogeneous analogues of translation operator
One way to calculate elements of the Bethe algebra is the inhomogeneous analogue of the standard approach: evaluating the transfer matrix and its logarithmic derivative(s) at a special point \(u_{*}\) where at least one of the \(R\)-matrices in the product (2.5) simplifies. Since \(\mskip 1.5mu \overline{\mskip-1.5mu R\mskip-1.5mu }\mskip 1.5mu (0)=\mathrm{i}P\) any \(u_{*}=\theta_{j}+\mathrm{i}/2\) will do the job. We compute (as in Section 2.1, 'H' is for 'Heisenberg')
\[G_{j}^{\mathrm{H}}(\kappa)\equiv-\mathrm{i}\,\overline{t}(\theta_{j}+\mathrm{i}/2;\kappa)=-\mathrm{i}\,\mathrm{tr}_{0}\big[\kappa^{\sigma_{0}^{z}}\,\overline{R}_{01}(\theta_{j}-\theta_{1})\cdots\overline{R}_{0L}(\theta_{j}-\theta_{L})\big]\,.\]
The resulting family of commuting operators has long-range interactions involving multiple spins at a time, with terms that resemble the interactions of the \(q\)-deformed (xxz-type) Haldane-Shastry spin chain [53, 54]. For our purposes, however, they are not suitable.3
Footnote 3: The values \(u_{*}=\theta_{j}+\mathrm{i}/2\) break the symmetry between the inhomogeneities, and are not compatible with the fermionic condition \(s_{ij}\,P_{ij}=-1\,\,(i\neq j)\) from Section 3.2.
#### 2.2.2 Another family of conserved charges with inhomogeneities
Let us instead expand the transfer matrix at \(u=0\) or \(u\to\infty\). We pick the latter option and expand it as a (formal) power series in \((-\mathrm{i}u)^{-1}\). In order to remove some trivial contributions to the charges, we find it convenient to expand (cf. the shifts in (2.5))
\[\frac{\overline{t}(u+\mathrm{i}/2;\kappa)}{\overline{Q}_{\theta}(u)}=\kappa+ \kappa^{-1}+\sum_{n=1}^{\infty}\overline{t}_{n}(\kappa)(-\mathrm{i}u)^{-n}\,. \tag{2.23}\]
Here \(\kappa+\kappa^{-1}\) is the quantum dimension of the auxiliary space. The next few coefficients are
\[\begin{split}\overline{t}_{1}(\kappa)&=\sum_{i=1}^{L}\kappa^{\sigma_{i}^{z}}=\frac{\kappa+\kappa^{-1}}{2}\,L+(\kappa-\kappa^{-1})\,S^{z}\,,\\ \overline{t}_{2}(\kappa)&=\sum_{i<j}\kappa^{\sigma_{j}^{z}}\,P_{ij}-\mathrm{i}\sum_{i=1}^{L}\theta_{i}\,\kappa^{\sigma_{i}^{z}}\,,\\ \overline{t}_{3}(\kappa)&=\sum_{i<j<k}\kappa^{\sigma_{k}^{z}}\,P_{jk}\,P_{ij}-\mathrm{i}\sum_{i<j}\,(\theta_{i}+\theta_{j})\,\kappa^{\sigma_{j}^{z}}\,P_{ij}-\sum_{i=1}^{L}\theta_{i}^{2}\,\kappa^{\sigma_{i}^{z}}\,.\end{split} \tag{2.24}\]
Here and below, by \(\sum_{i<j}\) we mean the sum over all \(1\leqslant i<j\leqslant L\), and similarly for \(\sum_{i<j<k}\).
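As a consistency check, the charges (2.24) can be built as explicit matrices for a short chain; the following sketch (with example values of \(L\), \(\kappa\) and the \(\theta_{i}\)) verifies numerically that they commute, as they must since they come from the same transfer matrix.

```python
# Illustrative sketch: explicit matrices for t_1, t_2, t_3 of eq. (2.24) and a commutativity check.
import numpy as np
from itertools import combinations

L, kappa = 4, 1.7                                   # example chain length and twist
thetas = np.array([0.3, -0.2, 0.5, 0.1])            # example inhomogeneities
dim = 2 ** L

def site_op(op, i):                                 # embed a single-site operator at site i
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])
K = [site_op(np.diag([kappa, 1 / kappa]), i) for i in range(L)]   # kappa^{sigma_i^z}

def P(i, j):                                        # spin permutation operator P_{ij}
    return 0.5 * (np.eye(dim) + sum(site_op(p, i) @ site_op(p, j) for p in (sx, sy, sz)))

t1 = sum(K[i] for i in range(L))
t2 = sum(K[j] @ P(i, j) for i, j in combinations(range(L), 2)) \
   - 1j * sum(thetas[i] * K[i] for i in range(L))
t3 = sum(K[k] @ P(j, k) @ P(i, j) for i, j, k in combinations(range(L), 3)) \
   - 1j * sum((thetas[i] + thetas[j]) * K[j] @ P(i, j) for i, j in combinations(range(L), 2)) \
   - sum(thetas[i] ** 2 * K[i] for i in range(L))

# Coefficients of the same (twisted) transfer matrix must commute.
print(np.abs(t1 @ t2 - t2 @ t1).max(), np.abs(t2 @ t3 - t3 @ t2).max())
```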
The eigenvalues of (2.24) are obtained by analogously expanding (2.12):
\[\frac{\overline{\tau}(u+\mathrm{i}/2;\kappa)}{\overline{Q}_{\theta}(u)}= \kappa+\kappa^{-1}+\sum_{n=1}^{\infty}\overline{\tau}_{n}(\kappa)(-\mathrm{i }u)^{-n}\,. \tag{2.25}\]
The first two coefficients read
\[\overline{\tau}_{1}(\kappa) =\frac{\kappa+\kappa^{-1}}{2}\,L+(\kappa-\kappa^{-1})\bigg{(} \frac{L}{2}-M\bigg{)}\,, \tag{2.26}\] \[\overline{\tau}_{2}(\kappa) =\frac{\kappa+\kappa^{-1}}{2}\,\overline{\tau}_{2}(1)+\frac{ \kappa-\kappa^{-1}}{2}\bigg{[}-\mathrm{i}\sum_{i=1}^{L}\theta_{i}+2\, \mathrm{i}\sum_{m=1}^{M}u_{m}+(L-1)\bigg{(}\frac{L}{2}-M\bigg{)}\bigg{]}\,, \tag{2.27}\]
where in the untwisted case
\[\overline{\tau}_{2}(1)=-\mathrm{i}\sum_{i=1}^{L}\theta_{i}+\bigg{(}\frac{L}{2 }-M\bigg{)}\bigg{(}\frac{L}{2}-M+1\bigg{)}+\frac{1}{4}\,L(L-4)\,. \tag{2.28}\]
#### 2.2.3 Periodic case
Removing the twist by setting \(\kappa=1\) enhances the \(S^{z}\)-symmetry of the transfer matrix \(\overline{t}(u;\kappa)\) to global \(\mathfrak{sl}_{2}\) symmetry under (2.2). Its eigenspaces become degenerate, forming spin multiplets (irreducible \(\mathfrak{sl}_{2}\)-representations) with the same eigenvalue of \(\overline{t}(u)\). As a matter of fact, \(\overline{t}_{2}(1)\) is essentially the quadratic Casimir
\[\vec{S}\cdot\vec{S}=\frac{1}{2}(S^{+}S^{-}+S^{-}S^{+})+S^{z}\,S^{z}=\sum_{i<j}P _{ij}\,-\frac{1}{4}\,L(L-4)\,, \tag{2.29}\]
where the constant \(-\frac{1}{4}\,L(L-4)=\frac{L}{2}\big(\frac{L}{2}+1\big)-\frac{1}{2}\,L(L-1)\) accounts for the difference in eigenvalues on \(|\uparrow\cdots\uparrow\rangle\). The first non-trivial charge is then \(\overline{t}_{3}(1)\), whose eigenvalue is
\[\bar{\tau}_{3}(1)=-\sum_{i=1}^{L}\theta_{i}^{2}-\mathrm{i}\,(L-M- 1)\sum_{i=1}^{L}\theta_{i}+2\mathrm{i}\,\bigg{(}\frac{L}{2}-M+1\bigg{)}\sum_{m= 1}^{M}u_{m}\\ +\frac{1}{3}\bigg{(}\frac{L}{2}-M-1\bigg{)}\left[L\,(L-M-1)+M\,(M -1)\right]\,. \tag{2.30}\]
As long as the inhomogeneities are generic (including the homogeneous case) the Bethe-ansatz construction (2.11) of the eigenstates provides the highest-weight vector in each multiplet, by solving the Wronskian Bethe equations up to the equator \(M\leqslant\lfloor L/2\rfloor\).4 The descendants in the multiplet can be obtained from it by acting with the global lowering operator \(S^{-}\). This fits in the framework of the algebraic Bethe ansatz: the leading term of the expansion of \(\bar{B}(u)\) in \(u\) occurs at order \(L-1\) with coefficient \(S^{-}\) up to a constant, so acting with \(S^{-}\) can be viewed as adding a magnon with an infinite Bethe root. Indeed, when \(\kappa=1\) then from any solution to the Bethe equations with given \(M\) one formally obtains another solution to the Bethe equations by adding \(u_{M+1}=\infty\) to the solution. For the \(Q\)-functions the absence of a twist means that \(\overline{Q}\) and \(\overline{\widetilde{Q}}\) become (monic) polynomials, the factor \(\kappa-\kappa^{-1}\) in the \(QQ\)-relation (2.18) is replaced by \(L-2M+1\), and \(\overline{\widetilde{Q}}\) now has \(L-M+1\) roots instead of \(L-M\).
Footnote 4: At \(\kappa=1\) subtleties occur that require care. For singular solutions, containing exact strings (i.e. \(u_{m}-u_{n}=\mathrm{i}\)), the eigenvector and eigenvalues need to be regularised. This already happens at \(M=2\) when \(L\) is even.
#### 2.2.4 Extreme twist and Gelfand-Tsetlin basis
For extreme twist \(\kappa\to\infty\) the twisted transfer matrix simplifies to \(\overline{t}(u;\kappa)\sim\kappa\,\overline{A}(u)\). Of the global \(\mathfrak{sl}_{2}\) only \(S^{z}\) remains as a symmetry. Together with the quantum determinant (which is independent of the twist), the \(A\)-operator generates a subalgebra of the Yangian called the _Gelfand-Tsetlin subalgebra_, and the Bethe vectors reduce to the _Gelfand-Tsetlin basis_ for the spin chain [55, 56].5 This limit provides a useful combinatorial model for the Yangian representation. The \(QQ\)-relation (2.18) in this case takes the factorised form
Footnote 5: If \(\kappa\to 0\) then \(\overline{t}(u;\kappa)\sim\kappa^{-1}\,\overline{D}(u)\) yields another Gelfand–Tsetlin subalgebra, and Bethe roots \(u_{m}=\theta_{\alpha_{m}}+\mathrm{i}/2\). Physically, the twist is \(\kappa=\mathrm{e}^{\mathrm{i}\,\varphi/2}\), so these limits correspond to extreme _imaginary_ twists.
\[\overline{Q}_{\theta}(u)=\prod_{m=1}^{M}\left(u-u_{m}-\frac{\mathrm{i}}{2} \right)\prod_{n=1}^{L-M}\left(u-v_{n}+\frac{\mathrm{i}}{2}\right)\,, \tag{2.31}\]
with the left-hand side given by (2.15). Comparing zeroes gives explicit values for the Bethe roots, pinned to the inhomogeneities as \(u_{m}=\theta_{i_{m}}-\mathrm{i}/2\) for \(I=\{i_{1},\ldots,i_{M}\}\subset\{1,\ldots,L\}\) in the \(M\)-magnon sector, and the \(2^{L}\) spin-chain states are given by all possible choices of such subsets \(I\). The corresponding eigenvalues of \(\overline{A}(u)\) factorise as well,
\[\alpha_{I}(u)=\prod_{i\in I}\left(u-\theta_{i}-\frac{\mathrm{i}}{2}\right)\prod_{j\notin I}\left(u-\theta_{j}+\frac{\mathrm{i}}{2}\right)\,. \tag{2.32}\]
Note that the Bethe roots do not 'interact': from a solution \(\{\theta_{i}-\mathrm{i}/2\}_{i\in I}\) we get another solution by just adding any \(\theta_{j}-\mathrm{i}/2\) with \(j\notin I\). Moreover, the corresponding eigenvectors are simply related by acting with \(\bar{B}(\theta_{j}-\mathrm{i}/2)\) (the eigenvalue of the \(A\)-operator then changes by exchanging the factor \(u-\theta_{j}+\mathrm{i}/2\to u-\theta_{j}-\mathrm{i}/2\) in (2.32)). This should be contrasted with the usual situation at finite twist, where adding a root to a solution affects all other roots (except for infinite roots, corresponding to descendants, at \(\kappa=\pm 1\)), and one _first_ has to construct
the 'off-shell' Bethe vector \(\overline{B}(u_{1})\cdots\overline{B}(u_{M})\,|0\rangle\) and _then_ plug in a solution to get a transfer-matrix eigenvector 'on-shell'. Let us also point out that the Bethe states in this case coincide, up to normalisation, with the vectors of Sklyanin's separation-of-variables (SoV) basis for the Heisenberg spin chain with anti-periodic boundary conditions, see equation (3.10) in [57]. The Yangian Gelfand-Tsetlin basis also plays a central role in the SoV approach with more general twist and higher rank [58].
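As a quick sanity check of (2.32), here is a minimal numerical sketch (in Python, with our own helper names) for \(L=3\), assuming the \(R\)-matrix normalisation \(\overline{R}(u)=u+\mathrm{i}\,P\) implied by (3.1): it builds the monodromy matrix, extracts \(\overline{A}(u)=\langle\uparrow_{0}|\overline{T}_{0}(u)|\uparrow_{0}\rangle\), and compares its spectrum with the \(2^{L}\) factorised values \(\alpha_{I}(u)\).

```python
import numpy as np
from itertools import combinations

L = 3
rng = np.random.default_rng(0)
theta = rng.uniform(-1.0, 1.0, L)      # generic real inhomogeneities
u = 0.37                               # arbitrary spectral parameter
n, dim = L + 1, 2 ** (L + 1)           # auxiliary factor 0 plus L physical sites

def perm(i, j):
    """Permutation of tensor factors i and j of (C^2)^{(x) n}; |up> is encoded as bit 0."""
    M = np.zeros((dim, dim))
    for idx in range(dim):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        M[sum(b << (n - 1 - k) for k, b in enumerate(bits)), idx] = 1.0
    return M

def R_bar(a, i, j):
    """Rbar(a) = a*1 + i*P acting on factors i and j."""
    return a * np.eye(dim) + 1j * perm(i, j)

# Monodromy matrix Tbar_0(u) = Rbar_01(u - th_1 - i/2) ... Rbar_0L(u - th_L - i/2)
T = np.eye(dim, dtype=complex)
for k in range(1, L + 1):
    T = T @ R_bar(u - theta[k - 1] - 0.5j, 0, k)

# A(u) = <up_0| T |up_0>: the upper-left block in the auxiliary space
A = T[: dim // 2, : dim // 2]

# Predicted eigenvalues alpha_I(u) from (2.32), one for each subset I of {1, ..., L}
predicted = [np.prod([u - theta[i] - 0.5j for i in I])
             * np.prod([u - theta[j] + 0.5j for j in range(L) if j not in I])
             for M_ in range(L + 1) for I in combinations(range(L), M_)]

assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.array(predicted)))
print("spectrum of A(u) matches the factorised eigenvalues (2.32)")
```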
### Fusion
Now we turn to the role of inhomogeneities and fusion of Yangian representations, developed originally by Kulish, Reshetikhin and Sklyanin [59].
#### 2.3.1 General description
While the monodromy matrix and its four operator entries depend on the ordering of the inhomogeneities \(\theta_{i}\), as long as the inhomogeneities are generic they can be reordered between the sites of the spin chain via a similarity transformation on the Hilbert space. To see this, note that the Yang-Baxter equation
\[\overline{R}_{0i}(u-v)\,\overline{R}_{0j}(u-w)\,\overline{R}_{ij}(v-w)= \overline{R}_{ij}(v-w)\,\overline{R}_{0j}(u-w)\,\overline{R}_{0i}(u-v)\,, \tag{2.33}\]
implies that the operator
\[\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})\,,\qquad\check{R}_{ij}(u)\equiv P_{ij}\,\overline{R}_{ij}(u)=\mathrm{i}+u\,P_{ij}\,, \tag{2.34}\]
acting on sites \(j\) and \(j+1\), intertwines two monodromy matrices whose inhomogeneities \(\theta_{j}\) and \(\theta_{j+1}\) have been exchanged,
\[\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})\,\overline{T}_{0}(u;\ldots,\theta_{j},\theta_{j+1},\ldots)=\overline{T}_{0}(u;\ldots,\theta_{j+1},\theta_{j},\ldots)\,\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})\,. \tag{2.36}\]
As long as \(\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})\) is invertible, i.e. \(\theta_{j}-\theta_{j+1}\neq\pm\mathrm{i}\), (2.36) gives
\[\overline{T}_{0}(u;\ldots,\theta_{j+1},\theta_{j},\ldots)=\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})\,\overline{T}_{0}(u;\ldots,\theta_{j},\theta_{j+1},\ldots)\,\check{R}_{j+1,j}(\theta_{j+1}-\theta_{j})^{-1}\,. \tag{2.38}\]
Thus, any two inhomogeneities can be exchanged by a similarity transformation on \(\mathcal{H}\) consisting of a sequence of conjugations by \(R\)-matrices that are all invertible provided \(\theta_{i}-\theta_{j}\neq\pm\mathrm{i}\) for all \(i,j\). This property is inherited by the transfer matrix \(\overline{t}(u)\), whose spectrum is independent of the order of the inhomogeneities. This is reflected in the symmetry in the \(\theta_{i}\) of the Bethe equations (2.13) and Baxter equation (2.12). Here the algebraic Bethe ansatz can be used to construct the whole spectrum from \(|0\rangle=|\uparrow\cdots\uparrow\rangle\) (Figure 1). This includes the homogeneous limit where all \(\theta_{i}=0\) are equal, yielding the ordinary Heisenberg xxx spin chain (2.1).
In terms of representation theory, an inhomogeneity \(\theta_{i}\) is the parameter of an evaluation representation of the Yangian, and the spin-chain Hilbert space is a tensor product of such evaluation representations. For generic values of the inhomogeneities (\(\theta_{i}-\theta_{j}\neq\pm\mathrm{i}\) for all \(i,j\)) the Hilbert space is _irreducible_, and the Yangian representation is called _tame_. In this case, (2.38) says that \(\check{R}_{j,j+1}(\theta_{j}-\theta_{j+1})\) intertwines the Yangian irreps on \(\mathcal{H}\) with inhomogeneities (\(\ldots,\theta_{j},\theta_{j+1},\ldots\)) and (\(\ldots,\theta_{j+1},\theta_{j},\ldots\)). The completeness of the Bethe ansatz for generic inhomogeneities was proven in [7, 8], see also [9]. We remark that the exchange relation (2.38) also appears as the 'local condition' of the Knizhnik-Zamolodchikov (KZ) system [51, 60].
If the \(R\)-matrix in (2.36) is not invertible we cannot write (2.38), but instead there is an _invariant_ subspace. This can be seen as follows. When the two inhomogeneities differ by \(\pm\mathrm{i}\), the \(R\)-matrix (2.34) becomes proportional to an (anti)symmetriser,
\[\check{R}(\pm\mathrm{i})=2\,\mathrm{i}\,\Pi^{\pm}\,,\qquad\Pi^{\pm}=(1\pm P)/2\,. \tag{2.39}\]
Let us write \(V_{d}\) for the spin-\(s\) irrep of \(\mathfrak{sl}_{2}\), which has dimension \(d=2s+1\). The operators (2.39) are orthogonal projectors decomposing two spin-\(1/2\) sites into a triplet and singlet,
\[V_{2}\otimes V_{2}\supset\Pi^{+}(V_{2}\otimes V_{2})\cong V_{3}\,,\qquad V_{ 2}\otimes V_{2}\supset\Pi^{-}(V_{2}\otimes V_{2})\cong V_{1}\,, \tag{2.40}\]
Figure 1: Structure of \(\mathcal{H}\) for generic inhomogeneities (\(\theta_{i}-\theta_{j}\neq\pm\mathrm{i}\) for all \(i,j\)). Each dot represents an eigenstate, organised into \(\mathfrak{sl}_{2}\)-irreps \(V_{d}\) as shown by the black vertical lines, with vertical axis recording \(M=L/2-S^{z}\). The dotted lines indicate (the algebraic-Bethe-ansatz part of) the Yangian action. The ‘off shell’ Bethe vectors \(\overline{B}(u_{1})\cdots\overline{B}(u_{M})\,|0\rangle\) span the entire \(M\)-particle sector, as sketched by the gray horizontal lines (lighter for the \(\mathfrak{sl}_{2}\)-descendants, obtained by acting with \(S^{-}\sim\overline{B}(\infty)\) in the periodic case). The Bethe equations for \(u_{1},\ldots,u_{M}\) single out the points in these subspaces that are eigenvectors of the transfer matrix.
corresponding to the (Clebsch-Gordan) decomposition
\[V_{2}\otimes V_{2}\cong V_{3}\,\oplus\,V_{1}\qquad\text{for $\mathfrak{sl}_{2}$}\,. \tag{2.41}\]
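For concreteness, here is a minimal numerical illustration (Python) of (2.39) and (2.40), assuming the explicit form \(\overline{R}(u)=u+\mathrm{i}\,P\) that follows from the normalisation (3.1), and writing \(\check{R}(u)=P\,\overline{R}(u)\) for the braided \(R\)-matrix of (2.34):

```python
import numpy as np

# Two-site operators on C^2 (x) C^2, assuming Rbar(u) = u*1 + i*P and Rcheck(u) = P @ Rbar(u)
I4 = np.eye(4)
P = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        P[2 * b + a, 2 * a + b] = 1.0   # P |a,b> = |b,a>

def R_bar(u):
    return u * I4 + 1j * P

def R_check(u):
    return P @ R_bar(u)

Pi_plus, Pi_minus = (I4 + P) / 2, (I4 - P) / 2

# (2.39): Rcheck(+i) = 2i Pi^+ and Rcheck(-i) = 2i Pi^-
assert np.allclose(R_check(+1j), 2j * Pi_plus)
assert np.allclose(R_check(-1j), 2j * Pi_minus)

# (2.40): the projectors cut out the triplet (dimension 3) and the singlet (dimension 1)
assert np.linalg.matrix_rank(Pi_plus) == 3 and np.linalg.matrix_rank(Pi_minus) == 1
print("projector and fusion checks passed")
```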
Focussing on sites \(j\) and \(j+1\) of the Hilbert space, this decomposition gives two orthogonal subspaces of \(\mathcal{H}\) of dimension \(3\times 2^{L-2}\) and \(2^{L-2}\), respectively:
\[\begin{split}\mathcal{H}=&\,V_{2}^{\otimes L}=\Pi_{j,j+1}^{+}(\mathcal{H})\,\oplus\,\Pi_{j,j+1}^{-}(\mathcal{H})\\ &\cong\left(V_{2}^{\otimes(j-1)}\otimes V_{3}\otimes V_{2}^{ \otimes(L-j-1)}\right)\,\oplus\,\left(V_{2}^{\otimes(j-1)}\otimes V_{1} \otimes V_{2}^{\otimes(L-j-1)}\right)\qquad\text{for $\mathfrak{sl}_{2}$}\,.\end{split} \tag{2.42}\]
While these two subspaces are generically mixed by the monodromy matrix, in special cases one of them is preserved. To see this we return to the relation (2.36). When \(\theta_{j+1}=\theta_{j}\mp\mathrm{i}\), the right-hand side of (2.36) annihilates any vector in \(\ker\Pi_{j,j+1}^{\mp}\). Yet on the left-hand side the projector acts after the monodromy matrix, so (2.36) implies that \(\overline{T}_{0}(u;\theta_{1},\dots,\theta_{j},\theta_{j}\mp\mathrm{i},\dots,\theta_{L})\) must preserve \(\ker\Pi_{j,j+1}^{\mp}=\Pi_{j,j+1}^{\pm}(\mathcal{H})\). Given a choice of inhomogeneities with an adjacent pair differing by \(\mp\mathrm{i}\) we can thus restrict the monodromy matrix to the subspace \(\Pi_{j,j+1}^{\pm}(\mathcal{H})\) to get a copy of an inhomogeneous spin chain of length \(L-1\), containing \(L-2\) sites of spin \(1/2\) plus a spin \(1\) (triplet) or \(0\) (singlet) at site \(j\), cf. (2.42). In Appendix A we show that the factors \(\overline{R}_{0j}(u-\theta_{j}-\mathrm{i}/2)\overline{R}_{0,j+1}(u-\theta_{j+1}-\mathrm{i}/2)\) from the monodromy matrix yield a single \(R\)-matrix acting on (the spin-\(1/2\) auxiliary space and) site \(j\) with spin \(1\) or \(0\). This construction is called _fusion_[59].
There is no reason for the monodromy matrix \(\overline{T}_{0}(u;\theta_{1},\dots,\theta_{j},\theta_{j}\mp\mathfrak{i}, \dots,\theta_{L})\) to preserve the complementary space \(\Pi_{j,j+1}^{\mp}(\mathcal{H})\) as well -- and indeed it does not, as we will illustrate shortly. In terms of the transfer matrix, general complex values of inhomogeneities spoil hermiticity, so its eigenspaces are not orthogonal.
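The preceding mechanism is easy to check numerically. The following sketch (Python; all helper names are ours, and we again assume \(\overline{R}(u)=u+\mathrm{i}\,P\)) takes \(L=3\) with \(\theta_{2}=\theta_{1}+\mathrm{i}\) and verifies that every entry \(\overline{A},\overline{B},\overline{C},\overline{D}\) of the monodromy matrix maps the singlet subspace \(\Pi^{-}_{12}(\mathcal{H})\) into itself, while the complementary subspace \(\Pi^{+}_{12}(\mathcal{H})\) is not preserved:

```python
import numpy as np

L = 3                                  # physical sites; the pair (1,2) will be fused
n, dim = L + 1, 2 ** (L + 1)           # auxiliary factor 0 plus L sites

def perm(i, j):
    """Permutation of tensor factors i and j of (C^2)^{(x) n}; |up> is encoded as bit 0."""
    M = np.zeros((dim, dim))
    for idx in range(dim):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        M[sum(b << (n - 1 - k) for k, b in enumerate(bits)), idx] = 1.0
    return M

def monodromy(u, theta):
    T = np.eye(dim, dtype=complex)
    for k in range(1, L + 1):
        T = T @ ((u - theta[k - 1] - 0.5j) * np.eye(dim) + 1j * perm(0, k))
    return T

theta1, theta3 = 0.23, -0.81
theta = [theta1, theta1 + 1j, theta3]  # theta_2 = theta_1 + i: singlet fusion at sites 1, 2
u = 0.4 - 0.7j

T = monodromy(u, theta)
A, B = T[:dim // 2, :dim // 2], T[:dim // 2, dim // 2:]
C, D = T[dim // 2:, :dim // 2], T[dim // 2:, dim // 2:]

# Projectors acting on sites 1 and 2 of the physical space (C^2)^{(x) 3}
P12 = perm(1, 2)[:dim // 2, :dim // 2]   # factor 0 untouched, so this block is P_12 on the sites
Pi_minus = (np.eye(dim // 2) - P12) / 2
Pi_plus = (np.eye(dim // 2) + P12) / 2

# Each entry of the monodromy maps Pi^-(H) back into Pi^-(H) ...
for O in (A, B, C, D):
    assert np.allclose(Pi_plus @ O @ Pi_minus, 0)
# ... but the complementary subspace Pi^+(H) is not preserved:
assert not np.allclose(Pi_minus @ B @ Pi_plus, 0)
print("Pi^-_{12}(H) is an invariant subspace; its complement is not")
```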
In terms of representation theory, the preceding says that if \(\theta_{j+1}-\theta_{j}=\mp\mathfrak{i}\) then the Yangian representation on \(\mathcal{H}\) with inhomogeneities \(\theta_{1},\dots,\theta_{j},\theta_{j+1},\dots,\theta_{L}\) is _reducible_. However, since the orthogonal complement is not preserved by the Yangian, this reducible representation is _indecomposable_.7
Footnote 7: Another situation where reducible but indecomposable representations appear is for \(U_{q}(\mathfrak{sl}_{2})\) with \(q\) a root of unity, see e.g. [61]. In the mathematical literature this situation is often described via non-split short exact sequences. In brief, the coimage \(\operatorname{coim}(\Pi_{j,j+1}^{\mp})\equiv\mathcal{H}/\ker(\Pi_{j,j+1}^{\mp})\) by definition fits in the short exact sequence
\[0\longrightarrow\ker(\Pi_{j,j+1}^{\mp})\longrightarrow\mathcal{H} \longrightarrow\operatorname{coim}(\Pi_{j,j+1}^{\mp})\longrightarrow 0\,, \tag{2.43}\]
where _exactness_ means that the image of each map is the kernel of the next one. As \(\mathfrak{sl}_{2}\)-modules, this sequence _splits_ by (2.42). This is closely related to the fact that \(\operatorname{coim}(\Pi_{j,j+1}^{\mp})\cong\ker(\Pi_{j,j+1}^{\pm})=\operatorname{im}(\Pi_{j,j+1}^{\mp})\) as \(\mathfrak{sl}_{2}\)-modules. For the Yangian, acting by \(\overline{T}_{0}(u;\theta_{1},\dots,\theta_{j},\theta_{j}\mp\mathrm{i},\dots,\theta_{L})\), the sequence (2.43) remains exact: \(V_{\text{inv}}=\ker(\Pi_{j,j+1}^{\mp})\subset\mathcal{H}\) is a Yangian submodule, and the quotient \(\operatorname{coim}(\Pi_{j,j+1}^{\mp})\) is also a Yangian module. However, this time (2.43) is not split: \(\ker(\Pi_{j,j+1}^{\mp})\oplus\operatorname{coim}(\Pi_{j,j+1}^{\mp})\) is not isomorphic to \(\mathcal{H}\) as a \(Y(\mathfrak{gl}_{2})\)-module. The above equivalence between \(\operatorname{coim}(\Pi_{j,j+1}^{\mp})\) and \(\operatorname{im}(\Pi_{j,j+1}^{\mp})\) does not respect the Yangian, and \(\operatorname{im}(\Pi_{j,j+1}^{\mp})\) is not even a \(Y(\mathfrak{gl}_{2})\)-module.
#### 2.3.2 Bethe ansatz for fusion into singlet

We first consider the case of fusion into a spin-0 (singlet) representation, with

\[\theta_{j+1}=\theta_{j}+\mathrm{i}\,. \tag{2.44}\]

Considering the periodic case \(\kappa=1\) for simplicity, we have the following features, as is illustrated in Appendix D for examples with low \(L\).
* As we have just discussed, fixing the singlet state \(|\!\uparrow\!\downarrow\rangle-|\!\downarrow\!\uparrow\rangle\) at sites \(j\) and \(j+1\) and allowing any spin configuration at the \(L-2\) remaining sites together form a Yangian-invariant subspace of \(\mathcal{H}\) of dimension \(2^{L-2}\), \[V_{\rm inv}=\Pi_{j,j+1}^{-}(\mathcal{H})\,\cong\,V_{2}^{\otimes(j-1)}\otimes V_{1}\otimes V_{2}^{\otimes(L-j-1)}\,.\] (2.45)
* The eigenstates that do not lie inside this invariant subspace are obtained from \(|0\rangle=|\uparrow\cdots\uparrow\rangle\) by the algebraic Bethe ansatz (2.11) as usual. Setting \(\theta_{j+1}=\theta_{j}+\mathrm{i}\) in (2.13) and cancelling the common factor \(u_{m}-\theta_{j}-\mathrm{i}/2\) on the left-hand side yields the Bethe equations \[\frac{u_{m}-\theta_{j}+\mathrm{i}/2}{u_{m}-\theta_{j}-3\mathrm{i}/2}\,\prod_{i(\neq j,j+1)}^{L}\frac{u_{m}-\theta_{i}+\mathrm{i}/2}{u_{m}-\theta_{i}-\mathrm{i}/2}=\prod_{n(\neq m)}^{M}\frac{u_{m}-u_{n}+\mathrm{i}}{u_{m}-u_{n}-\mathrm{i}}\,.\] (2.46)
* The remaining eigenstates, i.e. those inside the invariant subspace (2.45), can be constructed using the special root \[u_{0}\equiv\theta_{j}+\mathrm{i}/2\,,\] (2.48) to get from \(\ket{0}\) into the invariant subspace. Indeed, we have (see Appendix C) \[\ket{0^{\prime}}\equiv\overline{B}(u_{0})\ket{0}\ \propto\ \ket{j}-\ket{j+1}=(\sigma_{j}^{-}-\sigma_{j+1}^{-})\ket{\uparrow\cdots\uparrow}\ \in\ V_{\mathrm{inv}}\,.\] (2.49) The result is a suitable vacuum for the algebraic Bethe ansatz inside \(V_{\mathrm{inv}}\): by invariance, \(\ket{0^{\prime}}\) has Yangian highest weight, i.e. is an eigenvector of \(\overline{A}\) and \(\overline{D}\) and is killed by \(\overline{C}\). Now take any solution of the form \((u_{0},u_{1},\ldots,u_{M})\), where the \(u_{1},\ldots,u_{M}\) solve the 'reduced' Bethe equations with \(M\leqslant(L-2)/2\) \[\prod_{i(\neq j,j+1)}^{L}\frac{u_{m}-\theta_{i}+\mathrm{i}/2}{u_{m}-\theta_{i}-\mathrm{i}/2}=\prod_{n(\neq m)}^{M}\frac{u_{m}-u_{n}+\mathrm{i}}{u_{m}-u_{n}-\mathrm{i}}\] (2.50) for a spin chain with \(L-2\) sites to construct all vectors in the invariant subspace as \[\overline{B}(u_{1})\cdots\overline{B}(u_{M})\ket{0^{\prime}}=\overline{B}(u_{0})\,\overline{B}(u_{1})\cdots\overline{B}(u_{M})\ket{0}\qquad\text{in}\quad V_{\mathrm{inv}}=\Pi^{-}_{j,j+1}(\mathcal{H})\,.\] (2.51) The observation 8 that all states are of the usual Bethe-ansatz form is interesting because the proofs of completeness [7, 8, 9] do not apply to this (non-generic) case. Footnote 8: We stress that our observation is based on numerics for small lengths, supplemented with the partial proofs from Appendices B and C.
Let us elaborate on the special Bethe root (2.48). One way to understand its appearance is from the algebraic Bethe ansatz, see Appendix B.2. For our purposes another proof is more convenient, using the \(QQ\)-relation (2.18). Due to the special values of inhomogeneities, it admits a class of solutions where \(\overline{Q}\) and \(\overline{\widetilde{Q}}\) both have \(u_{0}\) as a root:
\[\overline{Q}(u)=(u-\theta_{j}-\mathrm{i}/2)\,\overline{Q}_{\mathrm{red}}(u)\, \qquad\overline{\widetilde{Q}}(u)=(u-\theta_{j}-\mathrm{i}/2)\,\overline{ \widetilde{Q}}_{\mathrm{red}}(u)\,, \tag{2.52}\]
where \(\overline{Q}_{\mathrm{red}}\) and \(\overline{\widetilde{Q}}_{\mathrm{red}}\) are of the form (2.16) and (2.19) with \(\kappa=1\). All three terms of the \(QQ\)-relation now have a factor \((u-\theta_{j})(u-\theta_{j}-\mathrm{i})\). After removing it, we are left with
\[\overline{Q}_{\mathrm{red}}^{+}\,\overline{\widetilde{Q}}_{\mathrm{red}}^{-}- \overline{Q}_{\mathrm{red}}^{-}\,\overline{\widetilde{Q}}_{\mathrm{red}}^{+}= \overline{Q}_{\theta,\mathrm{red}}\,, \tag{2.53}\]
where
\[\overline{Q}_{\theta,\mathrm{red}}(u)=\prod_{i(\neq j,j+1)}^{L}(u-\theta_{i})\,. \tag{2.54}\]
But this is just the \(QQ\)-relation of a spin chain of length \(L-2\). Thus solutions consist of the fixed root \(u_{0}=\theta_{j}+\mathrm{i}/2\) together with \(u_{1},\ldots,u_{M}\) solving Bethe equations for an effective spin chain of effective length \(L-2\) and inhomogeneities \(\{\theta_{1},\ldots,\theta_{L}\}\setminus\{\theta_{j},\theta_{j+1}\}\).
Notice that the transfer-matrix eigenvalue factorises for states with the special Bethe root (2.48): plugging (2.52) into (2.12) we find
\[\overline{\tau}(u)=(u-\theta_{j}+\mathrm{i}/2)\,(u-\theta_{j}-3\mathrm{i}/2) \,\frac{\overline{Q}_{\mathrm{red}}^{++}\,\overline{Q}_{\theta,\mathrm{red}}^{ -}+\overline{Q}_{\mathrm{red}}^{--}\,\overline{Q}_{\theta,\mathrm{red}}^{+}}{ \overline{Q}_{\mathrm{red}}}\,. \tag{2.55}\]
The fraction is a polynomial on shell, i.e. on solutions of the Bethe equations. Thus for states inside \(V_{\mathrm{inv}}\) the eigenvalue of the transfer matrix consists of a simple factor, corresponding to
the singlet at two sites, times a nontrivial part coming from an 'effective' spin chain with \(L-2\) sites.
The reason why the fixed root is not visible in the Bethe equations (2.46) is that it corresponds to the vanishing of the factor \(u_{m}-\theta_{j}-\mathrm{i}/2\) that we have cancelled on the left-hand side in going from (2.13) to (2.46). In the \(QQ\)-relation, however, this root is not missed and can be treated on equal footing with the other Bethe roots. This discrepancy between the \(QQ\)-relation and solutions of the Bethe equations is explained by the fact that the usual derivation of the Bethe equations from the \(QQ\)-relation fails in this case, because \(\overline{Q}\) and \(\widetilde{\overline{Q}}\) have a common root. For more details see Appendix B, where we also illustrate a subtlety in the proof of the construction (2.11) of the eigenstates for the case with the fixed root \(u_{0}\) -- namely, the 'unwanted' terms in the standard proof of the algebraic Bethe ansatz do not cancel but rather vanish individually, providing another explanation why the explicit root is absent in the Bethe equations (see also section 6 in [64] for related discussions).
Finally, for later use we record that the special root (2.48) admits the symmetric expression
\[u_{0}=\frac{\theta_{j}+\theta_{j+1}}{2}\,. \tag{2.56}\]
In Appendix D we illustrate various features of fusion into a singlet in the examples of spin chains of length \(L=2,4\).
#### 2.3.3 Bethe ansatz for fusion into triplet
Now we consider the case of fusion into a spin-1 (triplet) representation, with
\[\theta_{j+1}=\theta_{j}-\mathrm{i}\,. \tag{2.57}\]
Here the situation is trickier. The main features are as follows.
* The triplet in combination with the other \(L-2\) sites form a Yangian-invariant subspace of \(\mathcal{H}\) of dimension \(3\times 2^{L-2}\), \[V_{\mathrm{inv}}=\Pi_{j,j+1}^{+}(\mathcal{H})\;\cong\;V_{2}^{\otimes(j-1)} \otimes V_{3}\otimes V_{2}^{\otimes(L-j-1)}\,.\] (2.58)
* Since the spectrum of the transfer matrix is symmetric in the inhomogeneities (see Section 2.3.1), the eigenvalues are the same as for the case with fusion into a singlet.9 Footnote 9: To be precise, the sets of all eigenvalues of \(\overline{t}(u;\kappa;\dots,\theta_{j},\theta_{j}-\mathrm{i},\dots)\) and \(\overline{t}(u;\kappa;\dots,\theta_{j}-\mathrm{i},\theta_{j},\dots)\) coincide. Of course this is not true for their restrictions to the corresponding invariant subspaces.
* The eigenstates inside \(V_{\mathrm{inv}}\), which contains the reference state \(|0\rangle=|\uparrow\cdots\uparrow\rangle\), are given by Bethe ansatz as usual. The Bethe equations (2.13) become \[\frac{u_{m}-\theta_{j}+3\mathrm{i}/2}{u_{m}-\theta_{j}-\mathrm{i}/2}\;\prod_{ i(i\neq j,j+1)}^{L}\frac{u_{m}-\theta_{i}+\mathrm{i}/2}{u_{m}-\theta_{i}- \mathrm{i}/2}=\prod_{n(\neq m)}^{M}\frac{u_{m}-u_{n}+\mathrm{i}}{u_{m}-u_{n}- \mathrm{i}}\,,\] (2.59) and we consider all \(M\leqslant L/2\), yielding states via the algebraic Bethe ansatz (2.11). The prefactor \[\frac{u_{m}-\theta_{j}+3\mathrm{i}/2}{u_{m}-\theta_{j}-\mathrm{i}/2}=\frac{u_ {m}-(\theta_{j}-\mathrm{i}/2)+\mathrm{i}}{u_{m}-(\theta_{j}-\mathrm{i}/2)- \mathrm{i}}\] (2.60) corresponds to a spin \(s=1\) site like before, but with 'effective inhomogeneity' shifted in the other way. For both types of fusion, the effective inhomogeneity is the average \((\theta_{j}+\theta_{j+1})/2\) of the original inhomogeneities (cf. Appendix A).
* Crucially, the eigenstates of the transfer matrix outside the invariant space cannot be generated from \(|0\rangle\in V_{\text{inv}}\) by applying \(B\)-operators. Thus the algebraic Bethe ansatz (2.11) misses \(2^{L-2}\) eigenstates: unlike for fusion into a singlet, it is _not_ complete. Although the special Bethe root (2.56) still appears among the solutions of the \(QQ\)-relations, it is not useful this time: the \(B\)-operator at this value now kills the vacuum (Appendix C), \[\overline{B}\left(\frac{\theta_{j}+\theta_{j+1}}{2}\right)|0\rangle=0\,.\] (2.61) There does not seem to be a simple way to build the eigenstates outside \(V_{\text{inv}}\).10 Luckily, the applications of fusion that we will need in Section 3 only involve vectors in the invariant subspace, so we will never need to worry about the quotient. Footnote 10: One way to describe them is to form the quotient space \(\mathcal{H}/V_{\text{inv}}\). As a vector space it is isomorphic to \(\Pi^{-}_{j,j+1}(\mathcal{H})\). Unlike the latter, the quotient is a well-defined Yangian representation by invariance of \(V_{\text{inv}}=\Pi^{+}_{j,j+1}(\mathcal{H})\). This quotient corresponds to a spin chain with a spin-0 site and \(L-2\) spin-\(1/2\) sites, just like the invariant space was for fusion into a singlet. Inside the quotient one can build all eigenstates via the algebraic Bethe ansatz as usual. These states are in one-to-one correspondence with the remaining eigenvectors of our original spin chain in \(\mathcal{H}\), yet actually reconstructing them inside \(\mathcal{H}\) is tricky in practice. For each state from the quotient one then needs to find an appropriate correction by a vector in \(V_{\text{inv}}\). It seems rather nontrivial to do it in a systematic way.
In Appendix D.1 we illustrate some features of the fusion into a triplet on the simple example of a length \(L=2\) spin chain.
#### 2.3.4 Repeated fusion
So far we have only looked at fusion of two sites, but fusion can happen at multiple sites. This in particular provides a way to construct an integrable Heisenberg spin chain with (possibly varying) higher-spin sites using many spin-\(1/2\) representations and fusion [55, 59, 65]. More generally, fusion may lead to rather intricate combinations of invariant subspaces. Since we will only be interested later in some simple cases, and a general discussion would lead us too far from our goal, we merely illustrate the various possibilities for fusion when \(\theta_{i}=\theta_{j}+\text{i}\) for only two pairs \((i,j)\). Up to a similarity transformation using (2.38), we may assume that these pairs are \((1,2)\) and either \((2,3)\) or \((3,4)\) -- provided the chain has length \(L\geqslant 4\).
In all scenarios, by the discussion in Section 2.3.1 there are at least two Yangian-invariant subspaces: \(V_{\text{inv}}^{(1)}=\Pi_{12}^{\pm}(\mathcal{H})\) and either \(V_{\text{inv}}^{(2)}=\Pi_{23}^{\pm^{\prime}}(\mathcal{H})\) or \(V_{\text{inv}}^{(2)}=\Pi_{34}^{\pm^{\prime}}(\mathcal{H})\) (independent signs \(\pm\)
Figure 3: Structure of \(\mathcal{H}\) (cf. Figure 1) with \(\theta_{j+1}=\theta_{j}-\text{i}\) and other \(\theta_{i}\) generic. The Yangian action preserves \(V_{\text{inv}}=\Pi_{j,j+1}^{+}(\mathcal{H})\), in which the algebraic Bethe ansatz works as usual, but may send vectors in \(\Pi_{j,j+1}^{-}(\mathcal{H})\) anywhere in \(\mathcal{H}\).
and \(\pm^{\prime}\)), corresponding to fusion for either pair of sites. This time, however, these invariant spaces are not necessarily irreducible, since one could fuse both pairs at the same time. The intersection \(V^{(1)}_{\text{inv}}\cap V^{(2)}_{\text{inv}}\) is either trivial or a Yangian-irreducible subspace. Here are the various possibilities:
* **Independent fusion.** When \(\theta_{2}=\theta_{1}\mp\mathrm{i}\) and \(\theta_{4}=\theta_{3}\mp^{\prime}\mathrm{i}\) (independent signs), we get fusion separately at sites \(1,2\) and at sites \(3,4\). Here \(V^{(i)}_{\text{inv}}\) are reducible but indecomposable, since both contain the nontrivial subspace \[\Pi^{\pm}_{12}\,\Pi^{\pm^{\prime}}_{34}(\mathcal{H})=\Pi^{\pm}_{12}(\mathcal{H })\cap\Pi^{\pm^{\prime}}_{34}(\mathcal{H})\,.\] (2.62) This Yangian-invariant subspace is irreducible and can be viewed as a spin chain of length \(L-2\) containing two sites with spin \(1\) or \(0\), depending on the signs. In particular, \(\theta_{2}=\theta_{1}+\mathrm{i}\), \(\theta_{4}=\theta_{3}+\mathrm{i}\) leaves us with (two spin-\(0\) sites and) \(L-4\) spin-\(1/2\) sites. This is essentially what will happen for the fermionic spin-Calogero-Sutherland model in the following sections. (The case \(\theta_{2}=\theta_{1}-\mathrm{i}\), \(\theta_{4}=\theta_{3}-\mathrm{i}\) would instead appear if one were to consider the bosonic spin-Calogero-Sutherland model, cf. [23].)
* **Three-site antisymmetric fusion.** The case \(\theta_{3}=\theta_{2}+\mathrm{i}=\theta_{1}+2\mathrm{i}\) corresponds to singlet fusion at sites \(1,2\) as well as at sites \(2,3\). Both \(V^{(i)}_{\text{inv}}\) are irreducible as their intersection \[\Pi^{-}_{12}(\mathcal{H})\cap\Pi^{-}_{23}(\mathcal{H})=\{0\}\] (2.63) is trivial because there is no completely antisymmetric tensor with three indices taking only two values.11 Such a situation will occur in Section 4.3.2. Footnote 11: This is a peculiarity of the low-rank chain we are considering. If we were studying an \(\mathfrak{sl}_{r}\) spin chain for \(r>2\), then the intersection (2.63) would be a non-trivial Yangian-irreducible subspace. It would be the space of a spin chain of length \(L-2\) in which the first site carries the (third fundamental) \(\mathfrak{sl}_{r}\)-irrep whose highest-weight vector is a completely antisymmetric \(3\)-tensor. For \(r=3\) it is the trivial representation of \(\mathfrak{sl}_{3}\).
* **Three-site symmetric fusion.** The case \(\theta_{3}=\theta_{2}-\mathrm{i}=\theta_{1}-2\mathrm{i}\) corresponds to triplet fusion at sites \(1,2\) as well as \(2,3\). Both \(V^{(i)}_{\text{inv}}\) are reducible and indecomposable, since the intersection \[\Pi^{+}_{12}(\mathcal{H})\cap\Pi^{+}_{23}(\mathcal{H})\cong V_{4}\otimes V_{ 2}^{\otimes(L-3)}\] (2.64) is a non-trivial irreducible Yangian submodule. It is the space of a spin chain of length \(L-2\) with one spin-\(3/2\) site. (This scenario would also show up for the bosonic spin-Calogero-Sutherland model.)
* **Three-site mixed fusion.** For \(\theta_{3}=\theta_{2}\mp\mathrm{i}=\theta_{1}\) both \(V^{(i)}_{\text{inv}}\) are irreducible, as \[\Pi^{\pm}_{12}(\mathcal{H})\cap\Pi^{\mp}_{23}(\mathcal{H})=\{0\}\] (2.65) is again trivial, because there is no \(3\)-tensor that is symmetric in its first (last) two indices and antisymmetric in the last (first) two. Since \(\dim\bigl{(}V^{(1)}_{\text{inv}}\bigr{)}+\dim\bigl{(}V^{(2)}_{\text{inv}}\bigr{)} =\dim\bigl{(}\mathcal{H}\bigr{)}\), now \(\mathcal{H}\) is actually completely reducible: \(\mathcal{H}=\Pi^{\pm}_{12}(\mathcal{H})\oplus\Pi^{\mp}_{23}(\mathcal{H})\) as Yangian modules.
One can continue in this way to get more and more complicated constellations of invariant subspaces. As the scenarios with three-site fusion illustrate, one can use this to construct a spin chain with sites of varying spins, see [55, 59, 65] for more details and examples. For us only the case of independent fusion will be relevant in what follows. As the preceding illustrates, for any number \(n\) of nonoverlapping pairs of neighbouring sites fused into singlets, one can essentially omit the singlets to get a spin chain of length \(L-2n\). As we will see, the fermionic spin-Calogero-Sutherland model contains _infinitely_ many of these spin chains.
Equipped with these preliminaries we are ready to move on to our main subject.
## 3 Fermionic Spin-Calogero-Sutherland model
The (trigonometric, quantum) spin-Calogero-Sutherland model is a quantum many-body system describing particles that carry a spin and move around on a circle while interacting in pairs. It is a (quantum) integrable model with extraordinary properties, including extremely simple eigenvalues that are highly degenerate because of a Yangian _symmetry_. This should be contrasted with the Heisenberg spin chain, whose Yangian does not commute with the spin-chain Hamiltonian and instead allows one to move between different eigenspaces, as in the algebraic Bethe ansatz. The algebraic origin of the spin-Calogero-Sutherland model and its properties lies in a family of commuting differential-difference operators known as the Dunkl operators.
**Conventions.** We will clean up our notation a little from now on. Let us summarise the changes for easy reference. To remove the factors of \(\mathrm{i}\) that were floating around in Section 2 we reparametrise the spectral parameter as \(u=\mathrm{i}\,x\) and henceforth use the (slightly differently normalised) \(R\)-matrix
\[R(x)\equiv 1+x^{-1}\,P=1+\mathrm{i}\,u^{-1}\,P=u^{-1}\,\overline{R}(u)\,. \tag{3.1}\]
We reparametrise the inhomogeneities as \(\theta_{i}=-\mathrm{i}\,\delta_{i}\); in practice, \(\delta_{i}\) will either be a Dunkl operator or its (real) eigenvalue. The monodromy matrix thus takes the form of a product of \(R_{0i}(x+\delta_{i}-1/2)=\overline{R}_{0i}(u-\theta_{i}-\mathrm{i}/2)/(u-\theta_{i}-\mathrm{i}/2)\), cf. (3.20) below. Note that the fusion condition \(\theta_{j+1}-\theta_{j}=\mp\mathrm{i}\) from Section 2.3 now reads \(\delta_{j+1}-\delta_{j}=\pm 1\) (triplet/singlet). The Bethe roots become \(u_{m}=\mathrm{i}\,x_{m}\). We reserve \(L\) for the lengths of the 'effective spin chains' to appear in Section 3.3, and start with \(N\) particles for our quantum many-body system.
### Dunkl operators and nonsymmetric Jack polynomials
Let \(\mathbb{C}[z_{1}^{\pm},\dots,z_{N}^{\pm}]\) be the space of complex Laurent polynomials in \(N\) variables, which will be the coordinates of the particles on the circle in multiplicative notation, \(z_{j}=\mathrm{e}^{\mathrm{i}x_{j}}\). We denote the operators of coordinate permutation \(z_{i}\leftrightarrow z_{j}\) by \(s_{ij}\) for \(i\neq j\) in \(\{1,\dots,N\}\). For \(\beta\in\mathbb{C}\setminus\{0\}\), the Dunkl operators are
\[d_{i}=\frac{1}{\beta}\,z_{i}\,\partial_{z_{i}}-\sum_{j=1}^{i-1}\frac{z_{i}}{z_{ji}}\,(1-s_{ij})+\sum_{j=i+1}^{N}\frac{z_{j}}{z_{ij}}\,(1-s_{ij})+\frac{N+1-2i}{2}\,, \tag{3.2}\]
where we use the abbreviation \(z_{ij}\equiv z_{i}-z_{j}\). Their key properties are the (degenerate affine Hecke algebra) relations
\[d_{i}\,d_{j}=d_{j}\,d_{i}\,,\qquad d_{i}\,s_{i,i+1}=s_{i,i+1}\,d_{i+1}+1\,, \qquad d_{i}\,s_{jk}=s_{jk}\,d_{i}\quad\text{for}\quad i\neq j,k\,. \tag{3.3}\]
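These relations are straightforward to test symbolically. Below is a small sympy sketch (our own implementation of (3.2), for \(N=3\)) that applies both sides of (3.3) to an arbitrary test Laurent polynomial:

```python
import sympy as sp

N = 3
beta = sp.Symbol('beta')
z = sp.symbols(f'z1:{N + 1}')

def s(i, j, p):
    """Coordinate permutation z_i <-> z_j (0-based indices)."""
    return p.subs({z[i]: z[j], z[j]: z[i]}, simultaneous=True)

def d(i, p):
    """Dunkl operator (3.2) acting on a Laurent polynomial p (0-based site index i)."""
    out = z[i] * sp.diff(p, z[i]) / beta + sp.Rational(N + 1 - 2 * (i + 1), 2) * p
    for j in range(i):
        out -= z[i] / (z[j] - z[i]) * (p - s(i, j, p))
    for j in range(i + 1, N):
        out += z[j] / (z[i] - z[j]) * (p - s(i, j, p))
    return sp.cancel(out)

p = z[0]**3 * z[1] + 2 / z[2]          # an arbitrary test Laurent polynomial

# d_i d_j = d_j d_i
assert sp.cancel(d(0, d(1, p)) - d(1, d(0, p))) == 0
# d_i s_{i,i+1} = s_{i,i+1} d_{i+1} + 1
assert sp.cancel(d(0, s(0, 1, p)) - (s(0, 1, d(1, p)) + p)) == 0
# d_i s_{jk} = s_{jk} d_i for i distinct from j, k
assert sp.cancel(d(0, s(1, 2, p)) - s(1, 2, d(0, p))) == 0
print("degenerate affine Hecke relations (3.3) hold on the test polynomial")
```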
Dunkl's operators have a simple joint spectrum, with simultaneous eigenfunctions that are called nonsymmetric Jack polynomials \(E_{\mathbf{\mu}}(\mathbf{z})=E_{\mathbf{\mu}}^{(\alpha)}(\mathbf{z})\), with Jack parameter \(\alpha=1/\beta\) that we suppress. These polynomials are indexed by 'compositions' \(\mathbf{\mu}=(\mu_{1},\dots,\mu_{N})\in\mathbb{Z}^{N}\), and defined by the conditions
\[\begin{split} d_{i}\,E_{\mathbf{\mu}}(\mathbf{z})&=\delta_{i }(\mathbf{\mu})\,E_{\mathbf{\mu}}(\mathbf{z})\,,\\ E_{\mathbf{\mu}}(\mathbf{z})&=z_{1}^{\mu_{1}}\cdots z_{N}^{ \mu_{N}}+\text{lower}\,,\\ \delta_{i}(\mathbf{\mu})&\equiv\frac{1}{\beta}\,\mu_{i}+ \frac{1}{2}\big{(}N+1-2\,\sigma^{\mu}(i)\big{)}\,.\end{split} \tag{3.4}\]
Here 'lower' indicates monomials that are lower in the dominance order on compositions (see e.g. Section 2.1 of [42]). The eigenvalues \(\delta_{j}(\mathbf{\mu})\) of the Dunkl operators contain the integers
\[\sigma^{\mathbf{\mu}}(i)\equiv\,\#\big{\{}1\leqslant j\leqslant N\,\big{|}\,\, \mu_{j}>\mu_{i}\big{\}}\,+\,\#\big{\{}1\leqslant j\leqslant i\,\big{|}\,\, \mu_{j}=\mu_{i}\big{\}}\,, \tag{3.5}\]
Note that \(\sigma^{\boldsymbol{\mu}}(i)=i\) if \(\mu_{1}\geqslant\dots\geqslant\mu_{N}\). We will further need the property that, for all \(\boldsymbol{\mu}\in\mathbb{Z}^{N}\),
\[\delta_{i}(\boldsymbol{\mu})=\delta_{i+1}(\boldsymbol{\mu})+1\quad\text{if} \quad\mu_{i}=\mu_{i+1}\,. \tag{3.6}\]
The Dunkl operators give rise to the spin-Calogero-Sutherland model through intermediate operators defined as symmetric combinations of the \(d_{j}\). These in particular include the 'gauge-transformed (total) momentum operator'
\[P^{\prime}\equiv\beta\sum_{i=1}^{N}d_{i}=\sum_{i=1}^{N}z_{i}\,\partial_{z_{i}} \tag{3.7}\]
and the 'gauge-transformed Hamiltonian'
\[\begin{split} H^{\prime}&\equiv\frac{\beta^{2}}{2 }\biggl{(}\sum_{i=1}^{N}d_{i}^{2}\ -E^{0}\biggr{)}\\ &=\frac{1}{2}\sum_{i=1}^{N}\bigl{(}z_{i}\,\partial_{z_{i}}\bigr{)} ^{2}+\frac{\beta}{2}\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}\bigl{(}z_{i}\, \partial_{z_{i}}-z_{j}\,\partial_{z_{j}}\bigr{)}+\beta\sum_{i<j}\frac{z_{i}\, z_{j}}{z_{ij}\,z_{ji}}\,(1-s_{ij})\,,\end{split} \tag{3.8}\]
where we defined the constant
\[E^{0}\equiv\frac{1}{4}\sum_{i=1}^{N}\bigl{(}N-2i+1\bigr{)}^{2}=\frac{1}{12} \,N\,\bigl{(}N^{2}-1\bigr{)}\,. \tag{3.9}\]
The reason for the adjective 'gauge-transformed' is that they are related to the true (continuum) momentum operator and Hamiltonian by conjugation: 12
Footnote 12: We avoid the adjective ‘effective’ that is often used instead of ‘gauge transformed’ to prevent any confusion with our (unrelated) term ‘effective spin chain’ to appear in Section 3.3.
\[\begin{split} P^{\prime}&=\Phi_{0}^{-1}\biggl{(} \sum_{i=1}^{N}z_{i}\,\partial_{z_{i}}\biggr{)}\Phi_{0}\,,\qquad\Phi_{0}( \boldsymbol{z})\equiv\prod_{i\neq j}^{N}(1-z_{i}/z_{j})^{\beta/2}\,,\\ H^{\prime}+\frac{\beta^{2}}{2}\,E^{0}&=\Phi_{0}^{- 1}\biggl{(}\frac{1}{2}\sum_{i=1}^{N}\bigl{(}z_{i}\,\partial_{z_{i}}\bigr{)} ^{2}+\sum_{i<j}\frac{z_{i}\,z_{j}}{z_{ij}\,z_{ji}}\,\beta\,(\beta-s_{ij}) \biggr{)}\Phi_{0}\,.\end{split} \tag{3.10}\]
The eigenvalues of these operators only depend on the partition \(\boldsymbol{\lambda}\) obtained by sorting the parts of \(\boldsymbol{\mu}\) into (weakly) decreasing order. From the definition (3.8) of the gauge-transformed operators, it is clear that these eigenvalues can only be of the form
\[\begin{split} P^{\prime}(\boldsymbol{\mu})&=\beta \sum_{i=1}^{N}\delta_{i}(\boldsymbol{\mu})=\sum_{i=1}^{N}\lambda_{i}\,,\\ E^{\prime}(\boldsymbol{\mu})&=\frac{\beta^{2}}{2} \biggl{(}\sum_{i=1}^{N}\delta_{i}(\boldsymbol{\mu})^{2}\ -E^{0}\biggr{)}=\frac{1}{2}\sum_{i=1}^{N}\lambda_{i}^{2}+\frac{\beta}{2}\sum_{ i=1}^{N}\bigl{(}N-2i+1\bigr{)}\,\lambda_{i}\,.\end{split} \tag{3.11}\]
The integers \(\mu_{i}\) can be interpreted as 'quantum numbers' for the (quasi)momenta of the quasiparticles.
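As a concrete illustration of this bookkeeping, the following Python snippet (exact rational arithmetic; our own notation) computes the Dunkl eigenvalues (3.4) for a composition and for the partition obtained by sorting it, and checks that both give the momentum and energy (3.11):

```python
from fractions import Fraction

def sigma(mu, i):
    """sigma^mu(i) from (3.5); i is 1-based."""
    return sum(1 for m in mu if m > mu[i - 1]) + sum(1 for m in mu[:i] if m == mu[i - 1])

def delta(mu, i, beta):
    """Dunkl eigenvalue delta_i(mu) from (3.4)."""
    N = len(mu)
    return Fraction(mu[i - 1]) / beta + Fraction(N + 1 - 2 * sigma(mu, i), 2)

beta = Fraction(3, 2)                      # any nonzero coupling
mu = (0, 4, 2, 4, -1)                      # a composition ...
lam = tuple(sorted(mu, reverse=True))      # ... and the partition obtained by sorting it
N = len(mu)
E0 = Fraction(N * (N * N - 1), 12)         # (3.9)

for nu in (mu, lam):
    d = [delta(nu, i, beta) for i in range(1, N + 1)]
    P = beta * sum(d)                                       # momentum, (3.11)
    E = beta**2 / 2 * (sum(x * x for x in d) - E0)          # energy, (3.11)
    assert P == sum(lam)
    assert E == Fraction(sum(l * l for l in lam), 2) + beta / 2 * sum(
        (N - 2 * i + 1) * l for i, l in enumerate(lam, start=1))
print("eigenvalues (3.11) depend only on the sorted partition; both formulas agree")
```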
So far we have worked at the nonsymmetric level, corresponding to _dis_tinguishable particles. The spectrum of this model is highly degenerate: the eigenvalues (3.11) only depend on the partition \(\boldsymbol{\lambda}\). By prescribing the symmetry of the eigenvectors one obtains _in_distinguishable particles. For example, for spinless bosons (or fermions) the wave functions are completely
(anti)symmetric, and on the subspaces of totally (anti)symmetric Laurent polynomials one recovers the scalar bosonic (fermionic) trigonometric Calogero-Sutherland model,
\[\begin{split} P=\Phi_{0}\,P^{\prime}\,\Phi_{0}^{-1}&=-\mathrm{i}\sum_{i=1}^{N}\partial_{x_{i}}\,,\qquad\qquad\qquad\qquad\qquad\qquad z_{i}=\mathrm{e}^{\mathrm{i}x_{i}}\,,\\ H=\Phi_{0}\,H^{\prime}\,\Phi_{0}^{-1}&=-\frac{1}{2}\sum_{i=1}^{N}\partial_{x_{i}}^{2}+\beta\,(\beta\mp 1)\sum_{i<j}\frac{1}{4\,\sin^{2}[(x_{i}-x_{j})/2]}\,,\qquad s_{ij}=\pm 1\,,\end{split} \tag{3.12}\]
where we passed to additive coordinates. The eigenfunctions can be obtained from the non-symmetric theory too. We return to the gauge-transformed setting, where we can work with wave functions that are Laurent polynomials. Up to normalisation, the total (anti)symmetrisation of \(E_{\mathbf{\mu}}(\mathbf{z})\) only depends on the partition \(\mathbf{\lambda}\) corresponding to \(\mathbf{\mu}\), yielding a single wave function with momentum and energy (3.11). For bosons the symmetrisation gives the (symmetric) Jack polynomial \(P_{\mathbf{\lambda}}(\mathbf{z})\) with parameter \(\alpha=1/\beta\). For fermions the result is only nonzero if all parts of \(\mathbf{\lambda}\) are different, which is because the non-symmetric Jack polynomials obey
\[s_{i,i+1}\,E_{\mathbf{\lambda}}=E_{\mathbf{\lambda}}\quad\text{if}\quad\lambda_{i}= \lambda_{i+1}\,. \tag{3.13}\]
If \(\mathbf{\lambda}\) is a _strict_ partition, i.e. if \(\lambda_{1}>\cdots>\lambda_{N}\), then it is of the form \(\mathbf{\lambda}=\mathbf{\nu}+\mathbf{\delta}_{N}\) for some (not necessarily strict) partition \(\mathbf{\nu}\), where \(\mathbf{\delta}_{N}\equiv(N-1,\dots,1,0)\) is the staircase partition. For strict partitions the result of antisymmetrisation is a Vandermonde polynomial times a Jack polynomial with shifted parameter,
\[\text{Vand}(z_{1},\dots,z_{N})\,P^{\prime}_{\mathbf{\nu}}(\mathbf{z})\,,\qquad\text{Vand}(z_{1},\dots,z_{N})\equiv\prod_{1\leqslant i<j\leqslant N}(z_{i}-z_{j})\,,\quad P^{\prime}_{\mathbf{\nu}}\equiv P_{\mathbf{\nu}}\big{|}_{\beta\mapsto\beta+1}\,. \tag{3.14}\]
In the spinless case, then, each energy in (3.11) occurs for bosons, while for fermions only strict partitions are allowed. See e.g. §2.2 in [42] and references therein for more. We will instead be interested in a generalisation with fermions that each carry a spin as well as a coordinate.
### Hamiltonian and monodromy matrix
The Hilbert space for \(N\) spin-\(1/2\) fermions moving on a circle is
\[\mathcal{F}=\left\{\ket{\Psi}\in\left(\mathbb{C}^{2}\right)^{\otimes N}\otimes\mathbb{C}\!\left[z_{1}^{\pm},\dots,z_{N}^{\pm}\right]\;\middle|\;P_{ij}\,s_{ij}\ket{\Psi}=-\ket{\Psi}\right\}. \tag{3.15}\]
It consists of the vectors that are completely antisymmetric under the simultaneous exchange of spins and coordinates, and coincides with the image of the projector
\[\Pi_{-}^{\text{tot}}=\frac{1}{N!}\sum_{\sigma\in S_{N}}\text{sgn}(\sigma)\,P_ {\sigma}\,s_{\sigma}\,,\qquad\left(\Pi_{-}^{\text{tot}}\right)^{2}=\Pi_{-}^{ \text{tot}}\,. \tag{3.16}\]
On the fermionic space, the gauge-transformed Hamiltonian (3.8) takes the form
\[H^{\prime}=\frac{1}{2}\sum_{i=1}^{N}\left(z_{i}\,\partial_{z_{i}}\right)^{2}+\frac{\beta}{2}\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}\left(z_{i}\,\partial_{z_{i}}-z_{j}\,\partial_{z_{j}}\right)+\beta\sum_{i<j}\frac{z_{i}\,z_{j}}{z_{ij}\,z_{ji}}\left(1+P_{ij}\right). \tag{3.17}\]
It is related by conjugation as in (3.12) to the fermionic spin-Calogero-Sutherland Hamiltonian (1.1). The momentum operator (3.7), on the other hand, is the same as in the spinless case. Let us emphasise that this operator only acts on the coordinates \(z_{j}\) of the particles, not on their spins. The cyclic translation \(P_{12}\cdots P_{N-1,N}\) of the spins does not even act on the fermionic
space, so the spin-chain notion of (crystal) momentum is irrelevant for the spin-Calogero-Sutherland model.
The spectrum of (1.1) is given by (3.11) with the restriction that \(\mathbf{\lambda}\) be a partition with multiplicities \(\leqslant 2\). We denote the set of these allowed partitions by 13
Footnote 13: Such partitions are called ‘\(3\)-regular’ in representation theory – not to be confused with the different but related meaning of that term in combinatorics, cf. [https://mathoverflow.net/q/438228](https://mathoverflow.net/q/438228).
\[\mathcal{P}=\{\mathbf{\lambda}\in\mathbb{Z}^{N}\mid\lambda_{1}\geqslant\cdots \geqslant\lambda_{N}\,,\;\lambda_{i}>\lambda_{i+2}\}\,. \tag{3.18}\]
Indeed, by the property (3.13) of nonsymmetric Jack polynomials, repetitions in \(\mathbf{\lambda}\) require antisymmetry in the corresponding spins because of the fermionic condition (3.15). For our case of spin \(1/2\) (i.e. \(\mathfrak{sl}_{2}\)), this means that the multiplicities are at most \(2\). (For spinless fermions the multiplicities are at most \(1\).)
The fermionic space comes equipped with (an action of the Yangian of \(\mathfrak{gl}_{2}\) given by) the monodromy matrix [40]
\[\begin{split} T_{0}(x)=&\,R_{01}\big{(}x+d_{1}- \tfrac{1}{2}\big{)}\cdots R_{0N}\big{(}x+d_{N}-\tfrac{1}{2}\big{)}\\ =&\left(1+\frac{P_{01}}{x+d_{1}-\tfrac{1}{2}}\right) \cdots\left(1+\frac{P_{0N}}{x+d_{N}-\tfrac{1}{2}}\right)\,.\end{split} \tag{3.19}\]
Here we use the \(R\)-matrix (3.1), and Dunkl operators play the role of inhomogeneities: in terms of the conventions of Section 2 one has
\[T_{0}(x)=\frac{\overline{T}_{0}(\mathrm{i}\,x)}{\overline{Q}_{\theta}\left( \mathrm{i}\,x-\tfrac{\mathrm{i}}{2}\right)}\Bigg{|}_{\theta=-\mathrm{i}\, \mathbf{d}}\,. \tag{3.20}\]
In [40] the term 'quantised inhomogeneities' was used to emphasise that the inhomogeneities are now nontrivial operators (on polynomials). The relations (3.3) guarantee that it preserves the fermionic space (see e.g. §C of [42]) and obeys the \(RTT\) relations. The proper representation-theoretic meaning of (3.19) stems from affine Schur-Weyl duality [21].
There are several ways to make sense of the Dunkl operators in the denominators in (3.19): (i) expanding as a formal power series in \(x^{-1}\), (ii) using the nonsymmetric Jack basis for the polynomial factor of the fermionic space to replace the Dunkl operators by their eigenvalues, (iii) removing the denominator \(\prod_{j}(x+d_{j}-1/2)\), which is central and acts in a simple way. The third point is related to the following important property of the monodromy matrix.
The spin-Calogero-Sutherland model commutes with the Yangian action given by (3.19). Indeed, the hierarchy of spin-Calogero-Sutherland Hamiltonians [41, 66, 34] is generated by the quantum determinant [40, 41]
\[\begin{split}\Delta(x)&=\mathrm{qdet}_{0}T_{0}(x)=\prod_{i=1}^{N}\frac{x+d_{i}+\tfrac{1}{2}}{x+d_{i}-\tfrac{1}{2}}\\ &=1+N\,x^{-1}+\left(\frac{N^{2}}{2}-\frac{P^{\prime}}{\beta}\right)x^{-2}+\left(\frac{N^{3}}{4}-\frac{N\,P^{\prime}}{\beta}+\frac{2H^{\prime}}{\beta^{2}}\right)x^{-3}+O\big{(}x^{-4}\big{)}\,,\end{split} \tag{3.21}\]
and the quantum determinant of the Yangian generates its centre.
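Since the Dunkl operators commute, the expansion (3.21) can be verified symbolically by replacing them with scalar eigenvalues \(\delta_{i}\). A short sympy sketch (our own naming) for \(N=4\):

```python
import sympy as sp

N = 4
x, y, beta = sp.symbols('x y beta')
d = sp.symbols(f'delta1:{N + 1}')      # commuting stand-ins for the Dunkl operators

Delta = sp.prod((x + di + sp.Rational(1, 2)) / (x + di - sp.Rational(1, 2)) for di in d)

E0 = sp.Rational(N * (N**2 - 1), 12)                   # (3.9)
Pp = beta * sum(d)                                     # gauge-transformed momentum (3.7)
Hp = beta**2 / 2 * (sum(di**2 for di in d) - E0)       # gauge-transformed Hamiltonian (3.8)

claimed = (1 + N / x + (sp.Rational(N**2, 2) - Pp / beta) / x**2
           + (sp.Rational(N**3, 4) - N * Pp / beta + 2 * Hp / beta**2) / x**3)

# Expand the quantum determinant around x = infinity up to order x^-3
expansion = sp.series(Delta.subs(x, 1 / y), y, 0, 4).removeO().subs(y, 1 / x)
assert sp.cancel(expansion - claimed) == 0
print("quantum-determinant expansion (3.21) verified through order x^-3")
```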
Let us finally mention that for \(\beta>0\) one can define a scalar product on \(\mathbb{C}[z_{1}^{\pm},\dots,z_{N}^{\pm}]\) for which the Dunkl operators are Hermitian, see Proposition 3.8 in [67] and §2 of [68]. The natural extension of this scalar product to the fermionic space \(\mathcal{F}\) defined in (3.15) is such that the Yangian algebra is stable under Hermitian conjugation.14 This implies in particular that the Yangian representation on the fermionic space is completely reducible. The decomposition of \(\mathcal{F}\) into irreducible components is our next topic.
Footnote 14: More precisely, if we expand the \(A\)-, …, \(D\)-operators in (3.19) as \(A(x)=1+\sum_{n=1}^{+\infty}A_{n}\,x^{-n}\), \(B(x)=\sum_{n=1}^{+\infty}B_{n}\,x^{-n}\), etc., then the coefficients obey \(A_{n}^{\dagger}=A_{n}\), \(D_{n}^{\dagger}=D_{n}\) and \(B_{n}^{\dagger}=C_{n}\) [23].
### Effective spin chains
In [23] it was shown that the Hilbert space of the fermionic spin-Calogero-Sutherland model decomposes into a sum of irreducible representations of the Yangian:
\[\mathcal{F}=\bigoplus_{\boldsymbol{\lambda}\in\mathcal{P}}\mathcal{F}_{ \boldsymbol{\lambda}}\,. \tag{3.22}\]
The summands are also eigenspaces for the spin-Calogero-Sutherland model. The momentum operator (3.7) and gauge-transformed Hamiltonian (3.17) still have eigenvalues (3.11).
The eigenspace \(\mathcal{F}_{\boldsymbol{\lambda}}\) is the image by the projector (3.16) of the subspace which, in the polynomial factor, is spanned by all nonsymmetric Jack polynomials \(E_{\boldsymbol{\mu}}(\boldsymbol{z})\) with composition \(\boldsymbol{\mu}\in\mathbb{Z}^{N}\) differing from the partition \(\boldsymbol{\lambda}\) by reordering:
\[\mathcal{F}_{\boldsymbol{\lambda}}=\Pi_{-}^{\text{tot}}\bigg{(}\bigoplus_{\boldsymbol{\mu}\in S_{N}\cdot\boldsymbol{\lambda}}E_{\boldsymbol{\mu}}(\boldsymbol{z})\otimes(\mathbb{C}^{2})^{\otimes N}\bigg{)}\subset\mathcal{F}\,. \tag{3.23}\]
Following [23], this subspace can be equivalently viewed as an 'effective spin chain' of some length \(L_{\boldsymbol{\lambda}}\leqslant N\) with particular (scalar) inhomogeneities. This goes as follows.
Let us set up some notation. For each allowed partition \(\boldsymbol{\lambda}\in\mathcal{P}\), we define sets \(I_{\boldsymbol{\lambda}}\) and \(J_{\boldsymbol{\lambda}}\) that enumerate its unique and repeated parts, respectively:
\[I_{\boldsymbol{\lambda}}\equiv\left\{1\leqslant i\leqslant N\,\left|\,\lambda _{i-1}>\lambda_{i}>\lambda_{i+1}\right.\right\},\qquad J_{\boldsymbol{ \lambda}}\equiv\left\{1\leqslant j<N\,\left|\,\lambda_{j}=\lambda_{j+1} \right.\right\}, \tag{3.24}\]
with the convention that \(\lambda_{0}\equiv+\infty\) and \(\lambda_{N+1}\equiv-\infty\). If \(\boldsymbol{\lambda}=(7,6,6,2,2,-5,-6,-6,-8)\), for instance, then \(I_{\boldsymbol{\lambda}}=\{1,6,9\}\) and \(J_{\boldsymbol{\lambda}}=\{2,4,7\}\). The set \(J_{\boldsymbol{\lambda}}\) is called a _motif_. \(I_{\boldsymbol{\lambda}}\) will label the sites of the effective chain, while \(J_{\boldsymbol{\lambda}}\) will record pairs of sites of the original chain that are fused into singlets. In particular, the effective length will be
\[L_{\boldsymbol{\lambda}}\equiv\#I_{\boldsymbol{\lambda}}=N-2\,\#J_{ \boldsymbol{\lambda}}\,. \tag{3.25}\]
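In code, the sets (3.24) and the effective length (3.25) take a couple of lines; the following Python snippet (our own helper, named `motifs`) reproduces the example above:

```python
def motifs(lam):
    """I_lam and J_lam from (3.24), with lambda_0 = +inf and lambda_{N+1} = -inf."""
    N = len(lam)
    ext = [float('inf')] + list(lam) + [float('-inf')]
    I = [i for i in range(1, N + 1) if ext[i - 1] > ext[i] > ext[i + 1]]
    J = [j for j in range(1, N) if ext[j] == ext[j + 1]]
    return I, J

lam = (7, 6, 6, 2, 2, -5, -6, -6, -8)
I, J = motifs(lam)
assert (I, J) == ([1, 6, 9], [2, 4, 7])
assert len(I) == len(lam) - 2 * len(J)     # effective length (3.25)
print(I, J, len(I))
```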
We start with the Yangian highest-weight vector in \(\mathcal{F}_{\boldsymbol{\lambda}}\). It contains \(M_{\boldsymbol{\lambda}}\equiv\#J_{\boldsymbol{\lambda}}\) magnons, cf. (3.13), and can be written as
\[\left|0_{\boldsymbol{\lambda}}\right\rangle\propto\Pi_{-}^{\text{tot}}\left(E_{\tilde{\boldsymbol{\lambda}}}(\boldsymbol{z})\left|1,\ldots,M_{\boldsymbol{\lambda}}\right\rangle\right), \tag{3.26}\]
where we allow for a normalising constant, and \(\boldsymbol{\tilde{\lambda}}\) is any rearrangement of \(\boldsymbol{\lambda}\) such that the result of antisymmetrising is nonzero.15 Like for any \(M_{\boldsymbol{\lambda}}\)-magnon fermionic vector, (3.26) can be recast in the form (see §2.3.1 of [42])
Footnote 15: In particular, this requires \(\{\tilde{\lambda}_{1},\ldots,\tilde{\lambda}_{M_{\boldsymbol{\lambda}}}\}=J_ {\boldsymbol{\lambda}}\). One choice is to order \(\tilde{\boldsymbol{\lambda}}\) such that \(\tilde{\lambda}_{1}>\cdots>\tilde{\lambda}_{M_{\boldsymbol{\lambda}}}\) and \(\tilde{\lambda}_{M_{\boldsymbol{\lambda}}+1}>\cdots>\tilde{\lambda}_{N}\), as in [42]. Another one instead has \(\tilde{\lambda}_{1}<\cdots<\tilde{\lambda}_{M_{\boldsymbol{\lambda}}}\) and \(\tilde{\lambda}_{M_{\boldsymbol{\lambda}}+1}<\cdots<\tilde{\lambda}_{N}\), which is a little more efficient (cf. Section 4.4). In any case, different choices of \(\tilde{\boldsymbol{\lambda}}\) only affect the normalisation of (3.26).
\[\left|0_{\boldsymbol{\lambda}}\right\rangle=\sum_{j_{1}<\cdots<j_{M_{\boldsymbol{\lambda}}}}f_{\boldsymbol{\lambda}}\big(z_{j_{1}},\ldots,z_{j_{M_{\boldsymbol{\lambda}}}};\boldsymbol{z}\big)\;\sigma_{j_{1}}^{-}\cdots\,\sigma_{j_{M_{\boldsymbol{\lambda}}}}^{-}\left|\uparrow\cdots\uparrow\right\rangle \tag{3.27}\]
and should be contrasted with equation (5.31) in [23]: our expression is given in the coordinate basis of \(V_{2}^{\otimes N}\) and has nontrivial polynomial coefficients, whilst Takemura-Uglov use the nonsymmetric Jack basis of \(\mathbb{C}[z_{1}^{\pm},\dots,z_{N}^{\pm}]\), with a nontrivial spin coefficient instead.
Here are some examples of the highest-weight vectors in the fermionic space. If \(\mathbf{\lambda}\) is a strict partition, i.e. \(\lambda_{i}>\lambda_{i+1}\) for all \(i\), so that \(J_{\mathbf{\lambda}}=\emptyset\) and \(M_{\mathbf{\lambda}}=0\), then the highest-weight vector acquires the simple form \(|0_{\mathbf{\lambda}}\rangle=\text{Vand}(z_{1},\dots,z_{N})\,P^{\prime}_{\mathbf{\nu}}(\mathbf{z})\,|\!\uparrow\cdots\uparrow\!\rangle\), with \(\mathbf{\nu}=\mathbf{\lambda}-\mathbf{\delta}_{N}\), because of (3.14). For this example the effective spin chain will have length \(L_{\mathbf{\lambda}}=N\) and can be viewed as \(2^{N}\) copies of the spinless fermionic Calogero-Sutherland model, with degeneracies due to the Yangian symmetry. Another class of easy examples occurs when \(I_{\mathbf{\lambda}}=\emptyset\), i.e. when \(N\) is even and \(\lambda_{i}=\lambda_{i+1}\) for all odd \(i\). The corresponding effective spin chain has length \(L_{\mathbf{\lambda}}=0\) (after fusion), i.e. a one-dimensional Hilbert space. For instance, \(\mathbf{\lambda}=(N/2-1,N/2-1,\dots,1,1,0,0)\) gives a vector of the form (3.27) at the equator \(M_{\mathbf{\lambda}}=N/2\) with \(f_{\mathbf{\lambda}}=\text{Vand}(z_{1},\dots,z_{N/2})\text{Vand}(z_{N/2+1},\dots,z_{N})\) as can be seen by counting the degree.
From the highest-weight vector \(|0_{\mathbf{\lambda}}\rangle\) one obtains the rest of the fermionic eigenspace \(\mathcal{F}_{\mathbf{\lambda}}\) by acting with the monodromy matrix (3.19). Takemura and Uglov [23] gave an explicit description of the Yangian structure of \(\mathcal{F}_{\mathbf{\lambda}}\).16 Namely, first consider a chain with \(N\) spin-\(1/2\) sites, with ('ambient') Hilbert space \(V_{2}^{\otimes N}\) and inhomogeneities \(\delta_{1}(\mathbf{\lambda}),\dots,\delta_{N}(\mathbf{\lambda})\) equal to the eigenvalues (3.4) of the Dunkl operators, determined by \(\mathbf{\lambda}\) and depending on \(\beta\). Thus the monodromy matrix reads
Footnote 16: In representation-theoretic terms, any finite-dimensional Yangian irrep is isomorphic to a tensor product of evaluation modules (see e.g. §12.1.E in [61]). Here we interpret this in physical terms as an inhomogeneous Heisenberg chain as in Section 2.
\[R_{01}\big{(}x+\delta_{1}(\mathbf{\lambda})-1/2\big{)}\cdots R_{0N}\big{(}x+ \delta_{N}(\mathbf{\lambda})-1/2\big{)}\,. \tag{3.28}\]
By Sections 2.3.2 and 2.3.4, singlet fusion happens whenever \(\mathbf{\lambda}\) has repeats. The invariant subspace thus has \(M_{\mathbf{\lambda}}=\#J_{\mathbf{\lambda}}\) sites with spin \(0\), and \(L_{\mathbf{\lambda}}=N-2\,M_{\mathbf{\lambda}}\) spin-\(1/2\) sites. The highest-weight vector
\[\prod_{j\in J_{\mathbf{\lambda}}}(\sigma_{j}^{-}-\sigma_{j+1}^{-})\,|0\rangle\in V _{2}^{\otimes N} \tag{3.29}\]
has singlets at sites \(j,j+1\) for \(j\in J_{\mathbf{\lambda}}\), and \(\uparrow\) at all remaining sites \(i\in I_{\mathbf{\lambda}}\). By Section 2.3.2, see (2.49), the vector (3.29) can be written in algebraic Bethe-ansatz form by acting on \(|0\rangle\in V_{2}^{\otimes N}\) with \(B\)-operators from (3.28) at the fixed Bethe roots \(x_{0}^{(j)}\equiv\big{(}\delta_{j}(\mathbf{\lambda})+\delta_{j+1}(\mathbf{\lambda})\big{)}/2\) for \(j\in J_{\mathbf{\lambda}}\). Takemura-Uglov [23] constructed an isomorphism of Yangian modules between \(\mathcal{F}_{\mathbf{\lambda}}\) and this invariant subspace. The highest-weight vector \(|0_{\mathbf{\lambda}}\rangle\in\mathcal{F}_{\mathbf{\lambda}}\) from (3.27) corresponds to (3.29) under this isomorphism. Note that the remaining inhomogeneities \(\delta_{i}(\mathbf{\lambda})\) (\(i\in I_{\mathbf{\lambda}}\)) are generic.
We can simplify the setting a little further by omitting the singlets, which brings us to our _effective spin chain_. Its Hilbert space is
\[\mathcal{H}_{\mathbf{\lambda}}\equiv V_{2}^{\otimes I_{\mathbf{\lambda}}}\,, \tag{3.30}\]
which serves as a 'model space' for \(\mathcal{F}_{\mathbf{\lambda}}\subset\mathcal{F}\). The highest-weight vector \(|0_{\mathbf{\lambda}}\rangle\in\mathcal{F}_{\mathbf{\lambda}}\) from (3.27) now simply corresponds to \(|\!\uparrow\rangle^{\otimes I_{\mathbf{\lambda}}}\in\mathcal{H}_{\mathbf{\lambda}}\). The space (3.30) is isomorphic to the invariant subspace of \(V_{2}^{\otimes N}\) as an (irreducible) representation of the Yangian. If we denote the elements of the set \(I_{\mathbf{\lambda}}\) by \(i_{1}<\dots<i_{L_{\mathbf{\lambda}}}\), then the Yangian acts on \(\mathcal{H}_{\mathbf{\lambda}}\) via the monodromy matrix
\[(T_{\mathbf{\lambda}})_{0}(x)=\prod_{j\in J_{\mathbf{\lambda}}}\frac{x+\delta_{j}(\mathbf{\lambda})+\tfrac{1}{2}}{x+\delta_{j}(\mathbf{\lambda})-\tfrac{1}{2}}\,\times R_{01}\big{(}x+\delta_{i_{1}}(\mathbf{\lambda})-\tfrac{1}{2}\big{)}\cdots R_{0\,L_{\mathbf{\lambda}}}\big{(}x+\delta_{i_{L_{\mathbf{\lambda}}}}(\mathbf{\lambda})-\tfrac{1}{2}\big{)} \tag{3.31}\]
with prefactor coming from the \(R\)-matrices in (3.28) that have been fused into singlets (cf. Appendix A). Observe that \(I_{\mathbf{\lambda}}\) (and \(J_{\mathbf{\lambda}}\)) was defined in (3.24) from the (quasi)_momentum_ quantum
numbers \(\lambda\), but labels the _sites_ (positions) of the effective chain on \(\mathcal{H}_{\lambda}\) (and its ambient space). We stress that the spin-Calogero-Sutherland model contains infinitely many different effective spin chains, one for each allowed \(\lambda\in\mathcal{P}\).
## 4 Bethe-ansatz analysis of the spin-Calogero-Sutherland model
We can import the standard toolkit of Heisenberg integrability from Section 2 into the world of spin-Calogero-Sutherland models from Section 3 thanks to the Takemura-Uglov isomorphism from Section 3.3. As we have seen in Section 3.2, the spin-Calogero-Sutherland Hamiltonian is invariant under the whole Yangian (3.19). In particular it commutes with the (twisted) transfer matrix
\[t(x;\kappa)=\mathrm{Tr}_{0}\big{[}\kappa^{\sigma_{0}^{z}}\,T_{0}(x)\big{]}. \tag{4.1}\]
This provides a refinement of the spin-Calogero-Sutherland hierarchy: since the transfer matrix does not commute with the Yangian (just like for the Heisenberg chain in Section 2), the Heisenberg-style Hamiltonians generated by the transfer matrix are nontrivial on \(\mathcal{F}_{\lambda}\), lifting the degeneracies of the spin-Calogero-Sutherland model. In representation-theoretic language we pass from the quantum determinant (centre) to a Bethe subalgebra (maximal abelian subalgebra) of the Yangian that depends on the twist \(\kappa\). The only spin symmetry that remains from the Yangian is \(\mathfrak{sl}_{2}\), which is further broken down to the (Cartan sub)algebra \(\mathfrak{u}_{1}\) generated by \(S^{z}\) when \(\kappa\neq\pm 1\).
Since the usual hierarchy (3.21) is proportional to the identity on each Yangian irrep in the fermionic space, any basis of \(\mathcal{F}_{\lambda}\) provides eigenvectors of the spin-Calogero-Sutherland model. One distinguished basis is the (Yangian) Gelfand-Tsetlin basis [55, 56], which was constructed for the spin-Calogero-Sutherland model by Takemura-Uglov [23]. By diagonalising the Heisenberg-style Hamiltonians we will construct a new Bethe-ansatz eigenbasis of the spin-Calogero-Sutherland model, which reduces to the Gelfand-Tsetlin basis in the limit of extreme twist.
### Heisenberg-style symmetries
Let us first extract some of the refined Hamiltonians from the transfer matrix (4.1). The operators constructed in Section 2.2.1 are not compatible with the fermionic condition (3.15). Thus we proceed as in Section 2.2.2 and expand the transfer matrix as \(x\to\infty\). Replacing \(\theta_{i}\to-\mathrm{i}\,d_{i}\) and \(u\to\mathrm{i}\,x\) in the results of Section 2.2.2, we obtain
\[t\bigg{(}x+\frac{1}{2};\kappa\bigg{)}=\kappa+\kappa^{-1}+\bigg{(}\big{(}\kappa+\kappa^{-1}\big{)}\,\frac{N}{2}+\big{(}\kappa-\kappa^{-1}\big{)}\,S^{z}\bigg{)}\,x^{-1}+\bigg{(}\sum_{i<j}\kappa^{\sigma_{j}^{z}}\,P_{ij}-\sum_{i=1}^{N}\kappa^{\sigma_{i}^{z}}\,d_{i}\bigg{)}\,x^{-2}\\ +\bigg{(}\sum_{i<j<k}\kappa^{\sigma_{k}^{z}}\,P_{jk}\,P_{ij}-\sum_{i<j}\kappa^{\sigma_{j}^{z}}\,P_{ij}\,(d_{i}+d_{j})+\sum_{i=1}^{N}\kappa^{\sigma_{i}^{z}}\,d_{i}^{2}\bigg{)}\,x^{-3}+O\big{(}x^{-4}\big{)}. \tag{4.2}\]
As mentioned at the end of Section 3.2, there exists a scalar product such that all the coefficients in this expansion are Hermitian provided \(\kappa\) is real. Hence, their eigenvalues must be real. When \(\kappa\neq 1\), the coefficient in front of \(x^{-2}\) is already a non-trivial operator acting on both the spins and the coordinates of the particles. It can be rewritten as
\[t_{2}(\kappa)=\sum_{i<j}\kappa^{\sigma_{j}^{z}}P_{ij}-\sum_{i=1}^{N} \kappa^{\sigma_{i}^{z}}d_{i}=\frac{\kappa+\kappa^{-1}}{2}\biggl{(}\sum_{i<j}P_{ ij}-P^{\prime}\biggr{)}\\ +\frac{\kappa-\kappa^{-1}}{2}\biggl{(}-\frac{1}{\beta}\sum_{i=1} ^{N}\sigma_{i}^{z}\,z_{i}\,\partial_{z_{i}}+\sum_{i<j}\frac{z_{i}\,\sigma_{j}^ {z}-z_{j}\,\sigma_{i}^{z}}{z_{i}-z_{j}}\,P_{ij}+\frac{1}{2}\sum_{i\neq j}\frac {z_{i}+z_{j}}{z_{i}-z_{j}}\,\sigma_{j}^{z}\biggr{)} \tag{4.3}\]
since we are interested in the fermionic sector, i.e. the space of vectors on which the actions of \(s_{ij}\) and \(-P_{ij}\) coincide for all \(i\neq j\). We recall that \(P^{\prime}\) is the total momentum operator (3.7).
In the untwisted case, the transfer matrix simplifies to
\[t\biggl{(}x+\frac{1}{2};1\biggr{)}=2+N\,x^{-1}+\left(t_{2}-\beta^{-1}\,P^{ \prime}\right)x^{-2}+\left(t_{3}+2\,\beta^{-2}H^{\prime}+E^{0}\right)\,x^{-3} +O\bigl{(}x^{-4}\bigr{)}, \tag{4.4}\]
where we recognised the spin-Calogero-Sutherland momentum and Hamiltonian (which are central, i.e. lie in the center of the Yangian, and therefore commute with the transfer matrix and all operators obtained from it), the quadratic Casimir \(t_{2}(1)=\sum_{i<j}P_{ij}\), and
\[t_{3}(1)= \sum_{i<j<k}P_{jk}\,P_{ij}-\sum_{i<j}(d_{i}+d_{j})\,P_{ij}\] \[= -\frac{1}{\beta}\sum_{i<j}(z_{i}\,\partial_{i}+z_{j}\,\partial_{ j})\,P_{ij}+\sum_{i<j}\sum_{k\neq i,j}\biggl{(}1-\frac{z_{i}}{z_{ik}}-\frac{z_{j} }{z_{jk}}\biggr{)}P_{ij} \tag{4.5}\] \[+\sum_{i<j<k}\biggl{[}\biggl{(}2-\frac{z_{i}}{z_{ij}}-\frac{z_{j }}{z_{jk}}-\frac{z_{k}}{z_{ki}}\biggr{)}P_{ij}\,P_{jk}+\biggl{(}2-\frac{z_{j}} {z_{ji}}-\frac{z_{k}}{z_{kj}}-\frac{z_{i}}{z_{ik}}\biggr{)}P_{jk}\,P_{ij}\, \biggr{]}\,,\]
where we once again replaced \(s_{ij}\) with \(-P_{ij}\). These are the Heisenberg-style symmetries, which we will diagonalise next.
### Internal Bethe ansatz
It remains to diagonalise our Heisenberg-style symmetries by algebraic Bethe ansatz. Using the decomposition (3.22), we can restrict ourselves to a spin-Calogero-Sutherland eigenspace \(\mathcal{F}_{\lambda}\) labelled by an allowed partition \(\lambda\in\mathcal{P}\). In this subspace, the spectrum of the transfer matrix \(t(x;\kappa)\) from (4.1) coincides with the spectrum of the transfer matrix
\[t_{\lambda}(x;\kappa)=\mathrm{Tr}_{0}\bigl{[}\kappa^{\sigma_{0}^{z}}(T_{ \lambda})_{0}(x)\bigr{]} \tag{4.6}\]
of the effective spin chain, which is just an inhomogeneous Heisenberg spin chain. Therefore we can use the results of Section 2.
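To make this reduction concrete, here is a minimal numerical sketch (ours, not part of the original text) of the statement that twisted transfer matrices of an inhomogeneous chain commute; the rational \(R\)-matrix \(R(u)=u+\mathrm{i}\,P\), its normalisation and the sample inhomogeneities below are illustrative assumptions only.

```python
import numpy as np

# Minimal illustrative sketch (assumptions: rational R-matrix R(u) = u*Id + i*P,
# a diagonal twist kappa^{sigma_0^z}, and sample inhomogeneities); any overall
# normalisation of R drops out of the commutation check [t(x;k), t(y;k)] = 0.
L, kappa = 3, 1.7
delta = np.array([0.3, -1.1, 2.4])            # sample inhomogeneities delta_1..delta_L
n, dim = L + 1, 2**(L + 1)                    # qubit 0 plays the role of the auxiliary space

def swap(i, j):
    """Permutation operator P_ij on the n qubits (qubit 0 = auxiliary space)."""
    M = np.zeros((dim, dim))
    for s in range(dim):
        bits = [(s >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        M[sum(b << (n - 1 - k) for k, b in enumerate(bits)), s] = 1
    return M

def transfer(x):
    """t(x;kappa) = Tr_0[kappa^{sigma_0^z} R_{01}(x+delta_1-1/2) ... R_{0L}(x+delta_L-1/2)]."""
    T = np.eye(dim, dtype=complex)
    for j in range(1, L + 1):
        T = T @ ((x + delta[j - 1] - 0.5)*np.eye(dim) + 1j*swap(0, j))
    T = T.reshape(2, 2**L, 2, 2**L)           # split off the auxiliary qubit
    return kappa*T[0, :, 0, :] + (1/kappa)*T[1, :, 1, :]

t1, t2 = transfer(0.37), transfer(-1.42)
print(np.linalg.norm(t1 @ t2 - t2 @ t1))      # ~1e-12: the transfer matrices commute
```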
We can view the algebraic Bethe ansatz in three ways. First, inside the effective spin chain \(\mathcal{H}_{\lambda}\) with monodromy (3.31), the algebraic Bethe ansatz has the standard form from Section 2,
\[B_{\lambda}(x_{1})\cdots B_{\lambda}(x_{M})\ket{\uparrow\cdots\uparrow}\in \mathcal{H}_{\lambda}\,. \tag{4.7}\]
Second, thinking of the effective spin chain as the Yangian-invariant subspace inside \(V_{2}^{\otimes N}\) with inhomogeneities \(\delta_{1}(\lambda),\ldots,\delta_{N}(\lambda)\), the algebraic Bethe ansatz uses the \(B\)-operator contained in the monodromy matrix (3.28) and pseudovacuum (3.29). Third, inside the fermionic eigenspace \(\mathcal{F}_{\lambda}\) we start from \(\ket{0_{\lambda}}\) given by (3.27), and use the monodromy matrix (3.19) with Dunkl operators to perform the algebraic Bethe ansatz,
\[B(x_{1})\cdots B(x_{M})\ket{0_{\lambda}}\in\mathcal{F}_{\lambda}\,. \tag{4.8}\]
Since \(\mathcal{H}_{\lambda}\), its image as invariant subspace of \(V_{2}^{\otimes N}\) and \(\mathcal{F}_{\lambda}\) are isomorphic as Yangian modules, the three perspectives are equivalent. We emphasise that the \(M\)-magnon sector of the effective spin chain \(\mathcal{H}_{\lambda}\) (of length \(L_{\lambda}\)) corresponds to \(M_{\lambda}+M\) magnons inside \(V_{2}^{\otimes N}\) and \(\mathcal{F}_{\lambda}\).
According to (2.12) and (3.20) the eigenvalue of the transfer matrix \(t(x;\kappa)\) on the Bethe vector in \(\mathcal{F}_{\lambda}\) with Bethe roots \((x_{1},\dots,x_{M})\) reads
\[\tau(x;\kappa)=\prod_{j\in J_{\lambda}}\frac{x+\delta_{j}(\lambda)+\frac{1}{2}}{ x+\delta_{j}(\lambda)-\frac{1}{2}}\left(\kappa\,\frac{Q(x-1)}{Q(x)}\prod_{i\in I _{\lambda}}\frac{x+\delta_{i}(\lambda)+\frac{1}{2}}{x+\delta_{i}(\lambda)- \frac{1}{2}}+\kappa^{-1}\,\frac{Q(x+1)}{Q(x)}\right), \tag{4.9}\]
where the Bethe roots \(x_{1},\dots,x_{M}\) solve the Bethe equations (2.13), which here read
\[\kappa^{2}\prod_{i\in I_{\lambda}}\frac{x_{m}+\delta_{i}(\lambda)+\frac{1}{2} }{x_{m}+\delta_{i}(\lambda)-\frac{1}{2}}=-\frac{Q(x_{m}+1)}{Q(x_{m}-1)}, \tag{4.10}\]
with \(Q(x)\equiv\prod_{m=1}^{M}(x-x_{m})\). For \(\kappa^{2}\neq 1\) their Wronskian form is the \(QQ\)-relation (2.18), i.e.
\[\kappa\,Q\!\left(x-\frac{1}{2}\right)\widetilde{Q}\!\left(x+\frac{1}{2} \right)-\kappa^{-1}\,Q\!\left(x+\frac{1}{2}\right)\widetilde{Q}\!\left(x- \frac{1}{2}\right)=\left(\kappa-\kappa^{-1}\right)\prod_{i\in I_{\lambda}} \!\left(x+\delta_{i}(\lambda)\right), \tag{4.11}\]
for some degree \(L_{\lambda}-M\) polynomial \(\widetilde{Q}\). By Section 2.2.3, in the periodic case it instead reads
\[Q\!\left(x-\frac{1}{2}\right)\widetilde{Q}\!\left(x+\frac{1}{2}\right)-Q\! \left(x+\frac{1}{2}\right)\widetilde{Q}\!\left(x-\frac{1}{2}\right)=\left(L_{ \lambda}+1-2\,M\right)\prod_{i\in I_{\lambda}}\!\left(x+\delta_{i}(\lambda) \right), \tag{4.12}\]
and \(\widetilde{Q}\) has degree \(L_{\lambda}+1-M\). The transfer matrix being Hermitian provided \(\beta>0\) and \(\kappa\in\mathbb{R}\), its spectrum must be real. Hence, \(Q\) is a real polynomial and its roots can either be real or contain complex conjugate pairs. In the conventions of Section 2, the inhomogeneities are imaginary, and the solutions of the Bethe equations here have a very different structure than for the usual (homogeneous) Heisenberg spin chain. In particular, \(\{u_{1},\dots,u_{M}\}=\{i\,x_{1},\dots,i\,x_{M}\}\) is not necessarily stable under complex conjugation (although \(\{x_{1},\dots,x_{M}\}\) is for real \(\kappa\)). In Sections 4.3.2-4.4 we will give some simple examples of Bethe roots.
Expanding the transfer-matrix eigenvalue (4.9) around \(x\to+\infty\) and comparing with (4.2) and (4.4) we obtain the eigenvalues of the conserved charges. In the untwisted case, the eigenvalue of \(t_{2}\) is
\[\tau_{2}(1)=\left(\frac{L_{\lambda}}{2}-M\right)\!\left(\frac{L_{\lambda}}{2}- M+1\right)+\frac{N(N-4)}{4}\,. \tag{4.13}\]
This simply means that the eigenstate is in an irreducible \(\mathfrak{sl}_{2}\)-module of spin \(\frac{L_{\lambda}}{2}-M\). The eigenvalue of \(t_{3}\) is
\[\tau_{3}(1)=-\!\left(\frac{L_{\lambda}}{2}-M+1\right)\!\left(2\sum _{m=1}^{M}x_{m}+\sum_{i\in I_{\lambda}}\delta_{i}(\lambda)\right)+\left(2- \frac{N}{2}\right)\!\sum_{i=1}^{N}\!\delta_{i}(\lambda)\\ +\frac{N-2}{2}\!\left(\tau_{2}-\frac{N(N-1)}{6}\right). \tag{4.14}\]
In the twisted case, the transfer matrix eigenvalue behaves as
\[\tau(x;\kappa)=\kappa+\kappa^{-1}+\left[\frac{\kappa+\kappa^{-1} }{2}N+(\kappa-\kappa^{-1})\!\left(\frac{L_{\lambda}}{2}-M\right)\right]x^{-1} +\!\left[\frac{\kappa+\kappa^{-1}}{2}\!\left(\tau_{2}(1)-\!\sum_{i=1}^{N}\! \delta_{i}(\lambda)\right)\right.\\ +\frac{\kappa-\kappa^{-1}}{2}\!\left((N-1)\left(\frac{L_{\lambda} }{2}-M\right)-2\sum_{m=1}^{M}x_{m}-\sum_{i\in I_{\lambda}}\delta_{i}(\lambda) \right)\right]x^{-2}+O(x^{-3}). \tag{4.15}\]
These are the energies of our Heisenberg-style symmetries.
### Limits
To illustrate our construction we consider some limits. As we saw in Subsection 2.2.4, for extreme twist \(\kappa\to\infty\) (\(\kappa\to 0\)), for each irreducible submodule -- or, equivalently, effective spin chain -- the Bethe states approach the Yangian Gelfand-Tsetlin basis diagonalising the \(A\)- (respectively \(D\)-)operator contained in the twisted transfer matrix (4.1). Let us here study the behaviour of the Bethe roots and the spectrum in two other interesting limits \(\beta\to 0\) (\(\beta\to+\infty\)) of the coupling constant, in which the kinetic energy dominates (is dominated by) the potential energy.
#### 4.3.1 Free-fermion limit \(\beta\to 0\)
When the coupling constant \(\beta\) vanishes, the spin-Calogero-Sutherland model becomes a free-fermion model. The rescaled Dunkl operators reduce to the particle momentum operators \(\beta\,d_{i}\to z_{i}\,\partial_{z_{i}}\) as \(\beta\to 0\), and nonsymmetric Jack polynomials boil down to monomials \(E_{\mathbf{\mu}}(\mathbf{z})\to\mathbf{z}^{\mathbf{\mu}}\equiv z_{1}^{\mu_{1}}\cdots z_{N}^{\mu_{N}}\) (i.e. plane waves), with rescaled eigenvalues \(\beta\,\delta_{i}(\mathbf{\mu})\to\mu_{i}\) equal to their degrees (wave numbers) in the \(z_{i}\) (\(\mathbf{\mu}\in\mathbb{Z}^{N}\)). The spin-Calogero-Sutherland eigenvectors can be described elegantly in terms of the wedge basis [24], see also §2.3.2 in [42]. The solutions to the Bethe equations in \(\mathcal{F}_{\mathbf{\lambda}}\) with allowed partition \(\mathbf{\lambda}\in\mathcal{P}\) are also particularly simple in this limit: the rescaled Bethe roots \(\{\beta x_{1},\ldots,\beta x_{M}\}\) form a subset of the distinct parts \(\{-\lambda_{i}\}_{i\in\{1,\ldots,N\}}\) of \(-\mathbf{\lambda}\). The monodromy matrix (2.5) can be expanded in \(\beta\) in the following way:
\[\begin{split} T_{0}\left(\frac{x}{\beta}+\frac{1}{2}\right)=1& +\beta\,\sum_{i=1}^{N}\frac{P_{0i}}{x+z_{i}\,\partial_{i}}\\ &+\beta^{2}\left(\sum_{i<j}\frac{P_{0i}\,P_{0j}}{(x+z_{i}\, \partial_{i})(x+z_{j}\,\partial_{j})}-\sum_{i=1}^{N}\frac{P_{0i}\,d_{i}^{ \circ}}{(x+z_{i}\,\partial_{i})^{2}}\right)+O\left(\beta^{3}\right).\end{split} \tag{4.16}\]
Here \((x+z_{i}\partial_{i})^{-1}\) acts on monomials as \((x+z_{i}\partial_{i})^{-1}\,\mathbf{z}^{\mathbf{\lambda}}=(x+\lambda_{i})^{-1}\,\mathbf{z }^{\mathbf{\lambda}}\), and at order \(\beta^{2}\) we picked up a contribution from the subleading part of the Dunkl operator,
\[d_{i}^{\circ}\equiv-\sum_{j=1}^{i-1}\frac{z_{i}}{z_{ji}}\,(1+P_{ij})+\sum_{j= i+1}^{N}\frac{z_{j}}{z_{ij}}\,(1+P_{ij})+\frac{N+1-2\,i}{2}\,, \tag{4.17}\]
where we used the fermionic condition to replace \(s_{ij}\) by \(-P_{ij}\). Hence the transfer matrix is
\[\begin{split} t\left(\frac{x}{\beta}+\frac{1}{2};\kappa\right)= \kappa+\kappa^{-1}&+\beta\sum_{i=1}^{N}\frac{\kappa^{\sigma_{i} ^{2}}}{x+z_{i}\,\partial_{i}}\\ &+\beta^{2}\left(\sum_{i<j}\frac{\kappa^{\sigma_{j}^{2}}\,P_{ij}} {(x+z_{i}\,\partial_{i})(x+z_{j}\,\partial_{j})}-\sum_{i=1}^{N}\frac{\kappa^{ \sigma_{i}^{2}}\,d_{i}^{\circ}}{(x+z_{i}\,\partial_{i})^{2}}\right)+O\left( \beta^{3}\right).\end{split} \tag{4.18}\]
To linear order in \(\beta\) the eigenvalues of the transfer matrix are of the form
\[\tau\left(\frac{x}{\beta}+\frac{1}{2};\kappa\right)=\kappa+\kappa^{-1}+\beta \left(\kappa\sum_{m=1}^{M}\frac{1}{x+\lambda_{i_{m}}}+\kappa^{-1}\sum_{i\notin I }\frac{1}{x+\lambda_{i}}\right)+O\left(\beta^{2}\right). \tag{4.19}\]
Had we not imposed any (anti)symmetry on the eigenvectors, these values would occur in the spectrum for all \(\mathbf{\lambda}\in\mathbb{Z}^{N}\) and \(I=\{i_{1},\ldots,i_{M}\}\) any subset of \(\{1,\ldots,N\}\). However, for fermionic eigenvectors only some of these eigenvalues are valid. To see this, it is convenient to start from the exact spectrum at small, but finite \(\beta\). Examining the Bethe equations (4.10) in this limit,
one realises that the inhomogeneities become large and the Bethe roots have to stick to them (up to a finite \(\kappa\)-dependent shift). For \(\mathbf{\lambda}\in\mathcal{P}\), the solutions to the Bethe equations can be indexed by \(I=\{i_{1},\ldots,i_{M}\}\subset I_{\mathbf{\lambda}}\). Solving the Bethe equations perturbatively, one finds that the rescaled Bethe roots are
\[\begin{split}\beta x_{m}=&-\lambda_{i_{m}}-\frac{ \beta}{2}\left(N+1-2\,i_{m}+\frac{\kappa+\kappa^{-1}}{\kappa-\kappa^{-1}} \right)\\ &+\frac{\beta^{2}}{(\kappa-\kappa^{-1})^{2}}\bigg{(}\sum_{j\in I_ {\mathbf{\lambda}}\backslash I}\frac{1}{\lambda_{j}-\lambda_{i_{m}}}-\sum_{j\in I \backslash\{i_{m}\}}\frac{1}{\lambda_{j}-\lambda_{i_{m}}}\bigg{)}+O\big{(} \beta^{3}\big{)}\,.\end{split} \tag{4.20}\]
This relies on the fact that the inhomogeneities are far away from one another: as we noted, for \(i\neq j\) in \(I_{\mathbf{\lambda}}\) we have
\[\beta\big{(}\delta_{i}(\mathbf{\lambda})-\delta_{j}(\mathbf{\lambda})\big{)}=\lambda _{i}-\lambda_{j}+\beta(j-i)\longrightarrow\lambda_{i}-\lambda_{j}\neq 0\,, \qquad\beta\to 0\,. \tag{4.21}\]
Plugging the values of the Bethe roots into the expression (4.9) for the transfer matrix eigenvalue, one finds that it simplifies to
\[\tau\bigg{(}\frac{x}{\beta};\kappa\bigg{)}=\kappa\,\alpha_{\mathbf{ \lambda},I}\bigg{(}\frac{x}{\beta}\bigg{)}+\kappa^{-1}\alpha_{\mathbf{\lambda},I_{ \mathbf{\lambda}}\backslash I}\bigg{(}\frac{x}{\beta}\bigg{)}\\ +\frac{\beta^{3}}{\kappa-\kappa^{-1}}\sum_{m=1}^{M}\sum_{j\in I_{ \mathbf{\lambda}}\backslash I}\frac{1}{(x+\lambda_{i_{m}})(x+\lambda_{j})(\lambda _{i_{m}}-\lambda_{j})}+O\big{(}\beta^{4}\big{)}\,, \tag{4.22}\]
where
\[\alpha_{\mathbf{\lambda},I}(x)=\prod_{i\in(I_{\mathbf{\lambda}}\backslash I)\cup J_{ \mathbf{\lambda}}}\frac{x+\delta_{i}(\mathbf{\lambda})+\frac{1}{2}}{x+\delta_{i}(\bm {\lambda})-\frac{1}{2}} \tag{4.23}\]
is an eigenvalue of the element \(A(x)\) of the monodromy matrix. The first line can be expanded further in \(\beta\). Notice, however, that the transfer-matrix eigenvalues start differing from the sum of the eigenvalues of \(\kappa A\) and \(\kappa^{-1}D\) only at order \(\beta^{3}\).
Finally observe that in the infinite twist limit when \(\kappa\to+\infty\), the Bethe roots become equal to \(-\delta_{i_{m}}(\mathbf{\lambda})-1/2+O(\beta^{2})\) while the transfer matrix eigenvalue becomes \(\kappa\,\alpha_{\mathbf{\lambda},I}\big{(}\beta^{-1}x\big{)}+O(\beta^{4})\). As discussed in Section 2.2.4, these should actually be the exact values at all orders in \(\beta\), when \(\kappa\to+\infty\). A similar observation can be made for the limit \(\kappa\to 0\).
#### 4.3.2 Strong-coupling limit \(\beta\to\infty\) and the Haldane-Shastry spin chain
Now consider the opposite limit, \(\beta\to\infty\), which is dominated by the potential energy. In this limit some of the spaces \(\mathcal{F}_{\mathbf{\lambda}}\cong\mathcal{H}_{\mathbf{\lambda}}\) turn into reducible, indecomposable representations of the Yangian. This is because the differences between eigenvalues (3.4) of the Dunkl operators become integer-valued: when \(\mathbf{\lambda}\) is a partition, one has
\[\delta(\mathbf{\lambda})\longrightarrow\frac{1}{2}(N-1,N-3,\ldots,1-N)\,,\qquad \beta\to\infty\,. \tag{4.24}\]
From Section 2.3.4 we know that here _all_ pairs \(j,j+1\) of neighbouring sites are fused into singlets, leading to many invariant subspaces, and that at the same time the algebraic Bethe ansatz allows us to generate all eigenstates in \(\mathcal{F}_{\mathbf{\lambda}}\). By taking the limit \(\beta\to\infty\) of the equations (4.9)-(4.11) one finds the corresponding transfer-matrix eigenvalues and the Bethe roots. This is the strong-coupling limit of the spin-Calogero-Sutherland model. We are most interested in going one step further and reducing the infinite-dimensional space of states to a finite-dimensional Hilbert space.
In the freezing procedure we supplement the strong-coupling limit \(\beta\to\infty\) by applying
\[\text{ev}\colon(z_{1},\ldots,z_{N})\longmapsto\left(1,e^{\frac{2i\pi}{N}},\ldots,e^{\frac{2(N-1)i\pi}{N}}\right) \tag{4.25}\]
to evaluate the polynomials at consecutive \(N\)th roots of unity. Then the (fermionic) Calogero-Sutherland Hamiltonian reduces to that of the (antiferromagnetic) Haldane-Shastry spin chain [38, 40, 42],
\[\beta^{-1}\,\bar{H}^{\prime}\to H^{\text{\tiny HS}}=\sum_{i<j}\frac{1+P_{ij}}{4 \sin^{2}\left[\frac{\pi}{N}(i-j)\right]}\,. \tag{4.26}\]
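As a quick illustration (ours, not part of the original text), the following sketch builds \(H^{\text{\tiny HS}}\) for a small chain directly from exchange operators and lists its spectrum; the large degeneracies reflect the Yangian symmetry. The brute-force matrix construction and the chosen chain length are our own choices.

```python
import numpy as np
from functools import reduce

# Illustrative sketch (ours): build the Haldane-Shastry Hamiltonian (4.26) for a
# small chain from explicit exchange operators P_ij and inspect its spectrum.
N = 6
dim = 2**N
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.diag([1., -1.])

def site(op, i):
    """Embed a one-site operator at position i (0-based) in (C^2)^{tensor N}."""
    mats = [np.eye(2)]*N
    mats[i] = op
    return reduce(np.kron, mats)

def P(i, j):
    """Spin exchange P_ij = (1 + sigma_i . sigma_j)/2."""
    return 0.5*(np.eye(dim) + site(sx, i) @ site(sx, j)
                + site(sy, i) @ site(sy, j) + site(sz, i) @ site(sz, j))

H = sum((np.eye(dim) + P(i, j))/(4*np.sin(np.pi*(i - j)/N)**2)
        for j in range(N) for i in range(j))

evals = np.linalg.eigvalsh(H)
vals, counts = np.unique(np.round(evals, 8), return_counts=True)
print(list(zip(vals, counts)))   # large degeneracies signal the Yangian symmetry
```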
In the freezing limit most of the eigenvectors vanish. We describe the result without proofs, to which we will return in a separate publication. If \(\lambda_{1}-\lambda_{N}\geqslant N\) then the evaluation projects \(\mathcal{F}_{\lambda}\) to \(\{0\}\). Otherwise, the evaluation of \(\mathcal{F}_{\lambda}\) is non-trivial and completely determined by the motif \(J_{\lambda}\). It is described as follows: in the limit \(\beta\to\infty\), the inhomogeneities of the effective spin chain are (4.24) where for each \(j\in J_{\lambda}\) the \(j\)th and \((j+1)\)st elements are dropped. This means that the inhomogeneities can be separated into (maximal) groups of consecutive half-integers decreasing in steps of \(1\) within each group. Each such group of \(p\) successive inhomogeneities (or '\(p\)-string') corresponds to a copy of the spin-\(p/2\) representation \(V_{p+1}\) of \(\mathfrak{sl}_{2}\) (see §12.1.E in [61]). After evaluation, if nonzero, the space \(\mathcal{F}_{\lambda}\) is isomorphic to the product of all these \(V_{p+1}\) [40, 54]. The freezing procedure actually amounts to quotienting out all the invariant subspaces. We emphasise that the resulting space only depends on the motif \(J_{\lambda}\) recording the repeats in \(\lambda\), and is insensitive to the precise values \(\lambda_{i}\) that occur. For instance, if \(N=6\) and we start from the motif \(J_{\lambda}=\{4\}\), then the evaluation of \(\mathcal{F}_{\lambda}\) will be isomorphic to \(V_{4}\otimes V_{2}\). Similarly, if \(N=11\), the motif \(\{2,5\}\) will correspond to a subspace isomorphic to \(V_{2}\otimes V_{3}\otimes V_{5}\). The Calogero-Sutherland eigenvectors that survive evaluation become eigenvectors of the Haldane-Shastry spin chain. For highest-weight vectors the result can be described in terms of a symmetric Jack polynomial in \(M\) variables at the (zonal spherical) point \(\alpha=1/2\) [40], see also [42].
In the freezing limit the derivatives in \(z_{i}\) in our Heisenberg-type symmetries disappear. The twisted charge (4.3) becomes
\[t_{2}(\kappa)\ \to\ t_{2}^{\text{\tiny HS}}(\kappa)=\frac{\kappa+\kappa^{-1}}{2}\sum_{i<j}P_{ij}+\frac{\kappa-\kappa^{-1}}{4\,\mathrm{i}}\sum_{i<j}\frac{\mathrm{e}^{\mathrm{i}\pi(i-j)/N}\sigma_{j}^{z}-\mathrm{e}^{\mathrm{i}\pi(j-i)/N}\sigma_{i}^{z}}{\sin[\frac{\pi}{N}(i-j)]}\,P_{ij} \tag{4.27}\]
while the periodic charge (4.5) yields
\[t_{3}^{\text{\tiny HS}}(1)=\frac{1}{2}\sum_{i<j<k}\Big{[}P_{ij}\, P_{jk}+P_{jk}\,P_{ij} \tag{4.28}\] \[\qquad\qquad\qquad\qquad+\mathrm{i}\left(\cot\!\left[\tfrac{\pi}{ N}(i-j)\right]+\cot\!\left[\tfrac{\pi}{N}(j-k)\right]+\cot\!\left[\tfrac{\pi}{N}(k-i) \right]\right)\!\left(P_{ij}\,P_{jk}-P_{jk}\,P_{ij}\right)\Big{]}.\]
These operators can be viewed as refinements of the standard hierarchy of the Haldane-Shastry spin chain [40, 41], as they commute with each other and, even in the twisted case, with the (periodic) spin-chain translation operator. Moreover, (4.27) commutes with \(S^{z}\), and (4.28) is \(\mathfrak{sl}_{2}\) invariant; but, unlike the Hamiltonian (4.26), neither commutes with the Yangian. For \(\kappa=1\), the Heisenberg-style charges were obtained by Fowler and Minahan [27].
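The following exploratory sketch (ours, not from the text) constructs \(t_{2}^{\text{\tiny HS}}(\kappa)\) from (4.27) for a small chain and prints the norms of its commutators with \(H^{\text{\tiny HS}}\) and with \(S^{z}\); both should vanish up to round-off if (4.27) has been transcribed correctly, since by the construction above the frozen charges share a Bethe eigenbasis with the Haldane-Shastry Hamiltonian.

```python
import numpy as np
from functools import reduce

# Exploratory sketch (ours): build t_2^HS(kappa) of (4.27) with 1-based site
# labels and check its commutators with the Haldane-Shastry Hamiltonian (4.26)
# and with S^z; both norms are expected to be ~0 per the freezing construction.
N, kappa = 5, 1.6
dim = 2**N
sx = np.array([[0., 1.], [1., 0.]]); sy = np.array([[0., -1j], [1j, 0.]]); sz = np.diag([1., -1.])

def site(op, i):                               # i = 1..N
    mats = [np.eye(2)]*N; mats[i - 1] = op
    return reduce(np.kron, mats)

def P(i, j):                                   # spin exchange P_ij
    return 0.5*(np.eye(dim) + site(sx, i) @ site(sx, j)
                + site(sy, i) @ site(sy, j) + site(sz, i) @ site(sz, j))

H = sum((np.eye(dim) + P(i, j))/(4*np.sin(np.pi*(i - j)/N)**2)
        for j in range(1, N + 1) for i in range(1, j))

t2 = np.zeros((dim, dim), dtype=complex)
for j in range(1, N + 1):
    for i in range(1, j):
        w = (np.exp(1j*np.pi*(i - j)/N)*site(sz, j)
             - np.exp(1j*np.pi*(j - i)/N)*site(sz, i)) / np.sin(np.pi*(i - j)/N)
        t2 += (kappa + 1/kappa)/2*P(i, j) + (kappa - 1/kappa)/(4*1j)*(w @ P(i, j))

Sz = sum(site(sz, i) for i in range(1, N + 1))/2
print(np.linalg.norm(H @ t2 - t2 @ H), np.linalg.norm(Sz @ t2 - t2 @ Sz))
```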
The spectrum of our Heisenberg-style symmetries, e.g. (4.14), is determined by the transfer-matrix eigenvalue (4.9) once one solves the Bethe equations (4.11) or (4.12). Only those solutions for which \(Q\) and \(\widetilde{Q}\) have no common root will correspond to eigenvectors that do not belong to an invariant subspace (cf. Appendix B), and hence survive freezing. Examples of explicit sets of Bethe roots are:
\[N=7\,,\quad J_{\lambda}=\{4\}\,,\quad M=2\,:\qquad\{x_{1},x_{2}\}=\left\{1- \mathrm{i}\frac{\sqrt{3}}{2},1+\mathrm{i}\frac{\sqrt{3}}{2}\right\} \tag{4.29}\]
and
\[N=8\,,\ \ \ J_{\boldsymbol{\lambda}}=\{4\}\,,\ \ \ \ M=3\,:\qquad\{x_{1},x_{2},x_{3} \}=\{-{\rm i}\sqrt{5},\,0,\,{\rm i}\sqrt{5}\}. \tag{4.30}\]
The resulting Bethe vectors provide a new eigenbasis for the Haldane-Shastry spin chain that reduces to the Gelfand-Tsetlin basis in the limit of extreme twist.
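These explicit roots can be checked directly. The sketch below (ours, not part of the original text) evaluates the residuals of the Bethe equations (4.10) with \(\kappa^{2}=1\) for the data of (4.29) and (4.30), using the frozen inhomogeneities (4.24) with the \(j\)th and \((j+1)\)st entries dropped for each \(j\) in the motif, as described above; the helper names are of course ours.

```python
import numpy as np

# Numerical check (ours) of the explicit Bethe roots (4.29) and (4.30) against
# the periodic (kappa^2 = 1) Bethe equations (4.10), with the frozen
# inhomogeneities (4.24) where the j-th and (j+1)-st entries are dropped for
# each j in the motif.
def residuals(N, motif, roots):
    delta_full = [(N - 1)/2 - k for k in range(N)]              # eq. (4.24)
    drop = {j for m in motif for j in (m, m + 1)}               # 1-based positions
    delta = [d for pos, d in enumerate(delta_full, start=1) if pos not in drop]
    Q = lambda x: np.prod([x - r for r in roots])
    out = []
    for x in roots:
        lhs = np.prod([(x + d + 0.5)/(x + d - 0.5) for d in delta])
        out.append(abs(lhs + Q(x + 1)/Q(x - 1)))                # lhs = -Q(x+1)/Q(x-1)
    return out

print(residuals(7, [4], [1 - 1j*np.sqrt(3)/2, 1 + 1j*np.sqrt(3)/2]))   # eq. (4.29)
print(residuals(8, [4], [-1j*np.sqrt(5), 0, 1j*np.sqrt(5)]))           # eq. (4.30)
```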
### Example: \(N=4\)
Let us illustrate our constructions in an example where we can explicitly build the Bethe-ansatz eigenvectors of our Heisenberg-type symmetries such as (4.3) or (4.5). We first consider the spin-Calogero-Sutherland model, and then the Haldane-Shastry spin chain by freezing.
We consider the case of \(N=4\) particles, and focus on the partition \(\boldsymbol{\lambda}=(2,1,1,0)\), with motif \(J_{\boldsymbol{\lambda}}=\{2\}\). By (3.11) the momentum and energy are \(P^{\prime}(\boldsymbol{\lambda})=4\) and \(E^{\prime}(\boldsymbol{\lambda})=3\,(1+\beta)\). Let us construct our Bethe-ansatz basis for \(\mathcal{F}_{\boldsymbol{\lambda}}\). The corresponding effective spin chain has length \(L_{\boldsymbol{\lambda}}=2\). In the periodic case \(\kappa=\pm 1\) the Bethe states are determined by the highest-weight state in \(\mathcal{F}_{\boldsymbol{\lambda}}\) together with \(\mathfrak{sl}_{2}\) symmetry, but for general twist the two states at the equator (with one magnon in the language of the effective spin chain) are nontrivial, and it is this case we will focus on.
The highest-weight vector inside \(\mathcal{F}_{(2,1,1,0)}\) occurs at \(M=1\) and is of the form (3.27), i.e.
\[|0_{(2,1,1,0)}\rangle= f(z_{1};z_{2},z_{3},z_{4})\,|\downarrow\uparrow\uparrow\uparrow\rangle-f(z_{2};z_{1},z_{3},z_{4})\,|\uparrow\downarrow\uparrow\uparrow\rangle \tag{4.31}\] \[+f(z_{3};z_{1},z_{2},z_{4})\,|\uparrow\uparrow\downarrow\uparrow\rangle-f(z_{4};z_{1},z_{2},z_{3})\,|\uparrow\uparrow\uparrow\downarrow\rangle\,,\]
because it should be totally antisymmetric. For the same reason, the polynomial \(f\) must be antisymmetric in the last three variables, \(f=-s_{23}\,f=-s_{34}\,f\). This does not allow equal exponents for \(z_{2},z_{3},z_{4}\), so for \(\boldsymbol{\lambda}=(2,1,1,0)\) we have \(f=z_{1}\,z_{2}^{2}\,z_{3}+\) lower, where the remaining terms are lower in the dominance order. The partial antisymmetry then requires a partial Vandermonde factor \((z_{2}-z_{3})(z_{2}-z_{4})(z_{3}-z_{4})=z_{2}^{2}\,z_{3}+\) lower, which fixes the remaining symmetric part as 17
Footnote 17: As \(\boldsymbol{\hat{\lambda}}=(1,0,1,2)\) is the lowest amongst all reorderings of \(\boldsymbol{\lambda}\) with \(\tilde{\lambda}_{1}=1\), \(E_{\boldsymbol{\lambda}}\) is the simplest amongst the corresponding nonsymmetric Jack polynomials. Explicitly, \(E_{\boldsymbol{\lambda}}=z_{1}\,z_{3}\,z_{4}^{2}+\frac{\beta}{2\beta+1}(z_{2} \,z_{3}\,z_{4}^{2}+z_{1}\,z_{2}\,z_{3}\,z_{4})\).
\[f(z_{1};z_{2},z_{3},z_{4}) =z_{1}(z_{2}-z_{3})(z_{2}-z_{4})(z_{3}-z_{4}) \tag{4.32}\] \[=-(1-s_{23}-s_{34}+s_{23}\,s_{34}+s_{34}\,s_{23}-s_{24})\,E_{(1, 0,1,2)}\,,\]
in accordance with (3.27).
Having constructed the vacuum state, we now need to solve the twisted Bethe equations. The eigenvalues of the Dunkl operators are read off from (3.4) as
\[\delta_{1}=\frac{2}{\beta}+\frac{3}{2}\,,\ \ \delta_{2}=\frac{1}{\beta}+\frac{1}{2 }\,,\ \ \delta_{3}=\frac{1}{\beta}-\frac{1}{2}\,,\ \ \delta_{4}=-\frac{3}{2}. \tag{4.33}\]
Out of these, only \(\delta_{1}\) and \(\delta_{4}\) enter the Bethe equations (4.10) since \(I_{\boldsymbol{\lambda}}=\{1,4\}\). As explained above we are interested in the \(1\)-magnon states, and we find the following two values of the Bethe root:
\[x_{1,\pm}=-\frac{(\beta+2)\,\kappa+(\beta-2)\,\kappa^{-1}\pm\kappa^{-1}\sqrt{ (3\,\beta+2)^{2}\,\kappa^{4}-2(7\,\beta^{2}+12\,\beta+4)\,\kappa^{2}+(3\beta +2)^{2}}}{2\,\beta\big{(}\kappa-\kappa^{-1}\big{)}}\,. \tag{4.34}\]
Note that the expansion of (4.34) for \(\beta\to 0\) matches (4.20) that we obtained in the free fermion limit.
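As a small sanity check (ours, not part of the original text), one can verify numerically that both roots (4.34) satisfy the \(M=1\) Bethe equation (4.10) with the Dunkl eigenvalues (4.33); the sample values of \(\beta\) and \(\kappa\) below are arbitrary.

```python
import numpy as np

# Numerical sanity check (ours): the two roots (4.34) should satisfy the M = 1
# Bethe equation (4.10) for lambda = (2,1,1,0), where I_lambda = {1,4} and the
# Dunkl eigenvalues are given in (4.33).
beta, kappa = 0.7, 1.3                       # sample values of coupling and twist
delta = [2/beta + 1.5, -1.5]                 # delta_1 and delta_4 from (4.33)

disc = np.sqrt(complex((3*beta + 2)**2*kappa**4
                       - 2*(7*beta**2 + 12*beta + 4)*kappa**2 + (3*beta + 2)**2))
for sign in (+1, -1):
    x1 = -((beta + 2)*kappa + (beta - 2)/kappa + sign*disc/kappa) \
         / (2*beta*(kappa - 1/kappa))
    lhs = kappa**2*np.prod([(x1 + d + 0.5)/(x1 + d - 0.5) for d in delta])
    # for a single Bethe root Q(x) = x - x_1, so -Q(x_1+1)/Q(x_1-1) = 1
    print(sign, abs(lhs - 1))                # both residuals should be ~0
```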
Now we consider the freezing limit as described in the previous subsection. If we evaluate \(|0_{(2,1,1,0)}\rangle\) using \(\mathrm{ev}:(z_{1},z_{2},z_{3},z_{4})\mapsto(1,\mathrm{i},-1,-\mathrm{i})\), we obtain a Yangian-highest-weight eigenvector of the Haldane-Shastry spin chain:
\[\begin{split}\mathrm{ev}\big{[}|0_{(2,1,1,0)}\rangle\big{]}&=-4\mathrm{i}\,[|\downarrow\uparrow\uparrow\uparrow\rangle-|\uparrow\downarrow\uparrow\uparrow\rangle+|\uparrow\uparrow\downarrow\uparrow\rangle-|\uparrow\uparrow\uparrow\downarrow\rangle]\\ &=-4\mathrm{i}\sum_{i=1}^{4}\mathrm{ev}\big{[}P_{(2)}^{*}(z_{i})\big{]}\,|i\rangle\rangle\,,\qquad P_{(2)}^{*}(z)=z^{2}\,,\end{split} \tag{4.35}\]
where the second line contains the case \(M=1\) of the standard Haldane-Shastry (Yangian highest-weight) wave function \(\mathrm{Vand}(z_{1},\ldots,z_{M})^{2}\,P_{(\pi)}^{*}(z_{1},\ldots,z_{M})\) with \(P_{\pi}^{*}(\mathbf{z})\) a Jack polynomial at \(\alpha^{*}=1/2\). The vector (4.35) is just a magnon with (lattice) momentum \(p=\pi\). Note that it is _not_ the same as the highest-weight vector (3.29) of the effective spin chain embedded in the 'ambient' space \(V_{2}^{\mathbf{\otimes}N}\) with special inhomogeneities, even though the latter has the same dimension as the Hilbert space of the Haldane-Shastry spin chain.
Next, the Bethe roots in the freezing limit are found from their original values (4.34) by taking \(\beta\to\infty\), which gives
\[x_{1,\pm}^{\circ}=-\frac{\kappa+\kappa^{-1}\pm\kappa^{-1}\sqrt{9\,\kappa^{4}- 14\,\kappa^{2}+9}}{2\left(\kappa-\kappa^{-1}\right)}\,. \tag{4.36}\]
Writing \(B^{\circ}(x)\equiv\lim_{\beta\to\infty}B(x)\), we obtain the Bethe states by acting on the vacuum with the B-operator. We find
\[\mathrm{ev}\big{[}B^{\circ}(x_{1,\pm}^{\circ})\,|0_{(2,1,1,0)}\rangle\big{]}= c_{\pm}\,\nu_{\pm}\,,\quad c_{\pm}=\pm 2\sqrt{2}\,\mathrm{i}\,\big{(}1\pm \kappa^{-1}\big{)}\left(\frac{1-\kappa^{-1}}{\sqrt{2}}\,\frac{x_{1,-}^{ \circ}-2}{x_{1,+}^{\circ}}\right)^{\pm 1}\,, \tag{4.37}\]
where the two linearly independent one-magnon eigenstates read
\[\nu_{\pm}=\mathrm{i}\,(|\uparrow\uparrow\downarrow\downarrow\rangle-|\uparrow\downarrow\downarrow\uparrow\rangle-|\downarrow\uparrow\uparrow\downarrow\rangle+|\downarrow\downarrow\uparrow\uparrow\rangle)-x_{1,\pm}^{\circ}(|\downarrow\uparrow\downarrow\uparrow\rangle-|\uparrow\downarrow\uparrow\downarrow\rangle)\,. \tag{4.38}\]
The respective eigenvalues of the operator \(t_{2}^{\mathrm{HS}}\) from (4.27) are \(-(\kappa-\kappa^{-1})\,x_{1,\pm}^{\circ}\) in accordance with the coefficient of \(x^{-2}\) in equation (4.15). These are nontrivial new eigenvectors for the Haldane-Shastry chain with motif \(\{2\}\) that moreover are eigenvectors of our Heisenberg-style symmetries for any twist \(\kappa\).
## 5 Conclusion
In this paper we showed how the commuting family of spin-Calogero-Sutherland Hamiltonians can be refined using a transfer matrix. This gives new Heisenberg-type symmetries as well as a new Bethe-ansatz eigenbasis for the spin-Calogero-Sutherland model. Along the way we reviewed and explored nontrivial features of the spin chains arising in this construction, which involve fusion. One salient feature is the description of the Yangian highest-weight vector in the invariant subspace for singlet fusion in algebraic Bethe-ansatz form, using \(B\)-operators at special fixed Bethe roots. Via freezing, our results also provide a new Bethe-ansatz eigenbasis for the Haldane-Shastry chain. We illustrate our framework in several special cases, including its reduction to the Yangian Gelfand-Tsetlin basis in the limit of extreme twist, and a number of nontrivial examples for small system size.
There are several interesting directions left for the future.
* Following [42, 69] we considered the fermionic spin-Calogero-Sutherland model. One can analogously use a Bethe-ansatz analysis in the bosonic case, which was also studied by Takemura-Uglov [23].
* Our results should naturally generalise to higher-rank \(\mathfrak{sl}_{r}\) spin-Calogero-Sutherland models beyond the case \(r=2\) considered here. We also expect that our construction can be extended to xxz-type models, i.e. the spin-Ruijsenaars-Macdonald model as well as the \(q\)-deformed Haldane-Shastry spin chain [40, 53, 54, 70].
* Another interesting direction is extending our results to other Yangian-invariant spin chains, like the (rational) Polychronakos-Frahm model [71, 38] or the (hyperbolic) Frahm-Inozemtsev system [72].
* We plan to expand on and prove our claims from Section 4.3.2 about freezing at the level of the eigenvectors and representation theory (in particular, the differences between bosonic and fermionic cases reflected in different kinds of fusion).
* Finally, our Heisenberg-type symmetries provide a promising arena to develop Sklyanin's separation of variables (SoV) [73] for long-range models with spins. A key motivation for this comes from integrability in gauge/string (AdS/CFT) duality, where long-range spin chains feature prominently [1, 2], and SoV methods are starting to bring about powerful new results [74, 75, 76, 77]. The advantages of our Hamiltonians include the presence of true long-range interactions (unlike in the standard Heisenberg chains), absence of Yangian symmetry (unlike in the spin-Calogero-Sutherland model or Haldane-Shastry chain), and availability of all standard algebraic tools (unlike in models such as the Inozemtsev chain). In combination with recent progress in SoV for higher rank models, see e.g. [58, 78, 79, 80, 81], SoV methods for long-range systems should help to develop new ways for computing correlators in AdS/CFT and might shed further light on the mathematical structures behind SoV in general.
## Acknowledgements
We thank V. Pasquier for discussions. JL thanks A. Ben Moussa for discussing fusion and short exact sequences, and K. Takemura for interest and discussions.
Funding information. GF is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. The work of JL was funded by Labex Mathématique Hadamard (LMH). Part of this work was carried out during the stay of three of us (JL, FLM and DS) at the NCCR SwissMAP workshop _Integrability in Condensed Matter Physics and QFT_ (3-12 February 2023) at the SwissMAP Research Station in Les Diablerets. These authors would like to thank the Swiss National Science Foundation, which funds SwissMAP (grant number 205607) and, in addition, supported the event via the grant IZSEZ0_215085.
## Appendix A Fused \(R\)-matrix
To see explicitly what is happening with the monodromy matrix when \(\theta_{j+1}=\theta_{j}\mp\mathrm{i}\) we focus on the factors \(\overline{R}_{0j}(u-\theta_{j}-\mathrm{i}/2)\,\overline{R}_{0,j+1}(u-\theta_{j+1}-\mathrm{i}/2)\) in \(\overline{T}_{0}(u)\). It suffices to consider the factors \(V_{2}^{\otimes 3}\) of \(V_{2}\otimes\mathcal{H}\) corresponding to the auxiliary space and sites \(j,j+1\). We are interested in the operator (2.35) at \(\theta_{j+1}=\theta_{j}\mp\mathrm{i}\). Let us remove the factor \(2\mathrm{i}\) coming from (2.39) and renormalise the \(R\)-matrix to \(\underline{R}(u)\equiv\overline{R}(u)/(u+\mathrm{i})\), which obeys the unitarity condition \(\underline{R}_{12}(u)\underline{R}_{21}(-u)=1\) and
initial condition \(\underline{R}(0)=P\). Thus we consider the operator
\[\begin{split}\Pi^{\mp}_{j+1,j}&\,\underline{R}_{0j}(u- \theta_{j}-\mathrm{i}/2)\,\underline{R}_{0,j+1}(u-\theta_{j+1}-\mathrm{i}/2)\\ &=\underline{R}_{0j}(u-\theta_{j+1}-\mathrm{i}/2)\,\underline{R} _{0,j+1}(u-\theta_{j}-\mathrm{i}/2)\,\Pi^{\mp}_{j+1,j}\,,\end{split}\] (A.1)
Consider the basis \(|1,1\rangle\equiv|\uparrow\uparrow\rangle\), \(|1,0\rangle\equiv(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)/ \sqrt{2}\), \(|1,-1\rangle\equiv|\downarrow\downarrow\rangle\) for the copy of \(V_{3}\) and \(|0,0\rangle\equiv(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle)/ \sqrt{2}\) for the copy of \(V_{1}\) in \(V_{2}\otimes V_{2}\) from \(\mathcal{H}\).
When \(\theta_{j+1}=\theta_{j}+\mathrm{i}\) (fusion into triplet) its matrix has zeroes in the rows and columns corresponding to \(|\uparrow\rangle\otimes|0,0\rangle,|\downarrow\rangle\otimes|0,0\rangle\). It equals \(R_{0j}^{(1/2,1)}(u-\theta_{j}^{\prime}-\mathrm{i}/2)\), where the fused site has inhomogeneity \(\theta_{j}^{\prime}=(\theta_{j}+\theta_{j+1})/2=\theta_{j}+\mathrm{i}/2\) and
\[\underline{R}^{(1/2,1)}(u)=\begin{pmatrix}1&0&0&0&0&0\\ 0&\frac{u+\mathrm{i}/2}{u+3\mathrm{i}/2}&0&\frac{\sqrt{2}\,\mathrm{i}}{u+3\mathrm{i}/2}&0&0\\ 0&0&\frac{u-\mathrm{i}/2}{u+3\mathrm{i}/2}&0&\frac{\sqrt{2}\,\mathrm{i}}{u+3\mathrm{i}/2}&0\\ 0&\frac{\sqrt{2}\,\mathrm{i}}{u+3\mathrm{i}/2}&0&\frac{u-\mathrm{i}/2}{u+3\mathrm{i}/2}&0&0\\ 0&0&\frac{\sqrt{2}\,\mathrm{i}}{u+3\mathrm{i}/2}&0&\frac{u+\mathrm{i}/2}{u+3\mathrm{i}/2}&0\\ 0&0&0&0&0&1\end{pmatrix}\quad\text{on}\quad V_{2}\otimes V_{3}\]   (A.2)
is the \(R\)-matrix with spin-\(1/2\) in the auxiliary space and spin \(1\) at site \(j\) [82] with respect to the basis \((|\uparrow\rangle\otimes|1,1\rangle,|\uparrow\rangle\otimes|1,0\rangle,|\uparrow\rangle\otimes|1,-1\rangle,|\downarrow\rangle\otimes|1,1\rangle,|\downarrow\rangle\otimes|1,0\rangle,|\downarrow\rangle\otimes|1,-1\rangle)\) of \(V_{2}\otimes V_{3}\subset V_{2}^{\otimes 3}\).
When instead \(\theta_{j+1}=\theta_{j}-\mathrm{i}\) (fusion into singlet) the matrix of (A.1) has zeroes everywhere except for the \(2\times 2\) block spanned by \(|\uparrow\rangle\otimes|0,0\rangle,|\downarrow\rangle\otimes|0,0\rangle\): it equals \(\underline{R}_{0j}^{(1/2,0)}(u-\theta_{j}^{\prime}-\mathrm{i}/2)\) where now \(\theta_{j}^{\prime}=(\theta_{j}+\theta_{j+1})/2=\theta_{j}-\mathrm{i}/2\) and 18
Footnote 18: The _quantum determinant_ of the Yangian is the central element obtained by singlet fusion in auxiliary space:
\[\begin{split}\mathrm{qdet}_{\overline{0}}\,\overline{T}_{0}(u)&\equiv\Pi^{-}_{00^{\prime}}\,\overline{T}_{0}(u+\mathrm{i})\,\overline{T}_{0^{\prime}}(u)=\overline{A}(u+\mathrm{i})\,\overline{D}(u)-\overline{B}(u+\mathrm{i})\,\overline{C}(u)=\overline{D}(u+\mathrm{i})\,\overline{A}(u)-\overline{C}(u+\mathrm{i})\,\overline{B}(u)\\ &=\overline{T}_{0}(u)\,\overline{T}_{0^{\prime}}(u+\mathrm{i})\,\Pi^{-}_{00^{\prime}}=\overline{A}(u)\,\overline{D}(u+\mathrm{i})-\overline{C}(u)\,\overline{B}(u+\mathrm{i})=\overline{D}(u)\,\overline{A}(u+\mathrm{i})-\overline{B}(u)\,\overline{C}(u+\mathrm{i})\,.\end{split}\]
For \(L=1\) this yields \((u-\theta_{1}^{\prime}-\mathrm{i})(u-\theta_{1}^{\prime}+\mathrm{i})\) (times the identity), or (A.3) for the normalised \(R\)-matrix \(\underline{R}(u)\).
\[\underline{R}^{(1/2,0)}(u)=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\cdot\mathrm{qdet}\,\underline{R}(u)\quad\text{on}\quad V_{2} \otimes V_{1}\,,\qquad\mathrm{qdet}\,\underline{R}(u)=\frac{u+\mathrm{i}/2}{u -\mathrm{i}/2}\,.\] (A.3)
## Appendix B On the derivation of the Bethe equations with fusion
### Derivation from the \(QQ\)-relation
Let us discuss a subtlety in the presence of fusion in the derivation of Bethe equations in the form (2.13) from the \(QQ\)-relation (2.18), i.e. \(\big{(}\kappa-\kappa^{-1}\big{)}\overline{Q}_{\theta}=\overline{Q}^{-}\,\widetilde{\overline{Q}}^{+}-\overline{Q}^{+}\,\widetilde{\overline{Q}}^{-}\). Shifting the argument to \(u\to u+\mathrm{i}/2\) or \(u\to u-\mathrm{i}/2\) gives
\[\big{(}\kappa-\kappa^{-1}\big{)}\overline{Q}^{+}_{\theta}=\overline{Q}\,\widetilde{\overline{Q}}^{++}-\overline{Q}^{++}\,\widetilde{\overline{Q}}\,,\qquad\big{(}\kappa-\kappa^{-1}\big{)}\overline{Q}^{-}_{\theta}=\overline{Q}^{--}\,\widetilde{\overline{Q}}-\overline{Q}\,\widetilde{\overline{Q}}^{--}\,.\] (B.1)
Evaluating both equations at \(u=u_{m}\), a root of \(\overline{Q}\), on the right-hand sides the terms with \(\widetilde{\overline{Q}}^{\pm\pm}\) vanish. For generic inhomogeneities, eliminating the remaining \(\widetilde{\overline{Q}}\) using the second equation yields the usual Bethe equations (2.17).
Now consider the fusion of two sites, with \(\theta_{j}=\theta_{j+1}\pm\mathrm{i}\) as in Section 2.3.2. Then there is a class of solutions for which \(\overline{Q}\) and \(\widetilde{\overline{Q}}\) have a common root at
\[u_{0}=\frac{\theta_{j}+\theta_{j+1}}{2}\,.\] (B.2)
Thus all terms in (B.1) vanish separately at \(u=u_{0}\), and we cannot cancel \(\widetilde{\overline{Q}}(u_{0})\) like before. Instead removing the common factor \(u-u_{0}\) from \(\overline{Q}\) and \(\widetilde{\overline{Q}}\) gives proper non-singular Bethe equations for an 'effective' spin chain of length \(L-2\) as discussed in Section 2.3.2.
### Derivation from the algebraic Bethe ansatz
Here we show how to prove the construction of eigenstates in the form (2.11) for the case when we fuse two sites by taking, say, \(\theta_{j+1}=\theta_{j}\pm\mathrm{i}\) as in Section 2.3.2. The subtlety is that for states in the invariant subspace \(V_{\mathrm{inv}}=\Pi^{\pi}_{j,j+1}(\mathcal{H})\) discussed there, some sets of Bethe roots involve the 'frozen' root \(u_{0}=\theta_{j}+\mathrm{i}/2\). For fusion into singlet, solutions including \(u_{0}\) describe the states in the invariant subspace. These solutions are easily missed when simplifying the Bethe equations (2.13) to (2.46). The reason for the existence of such solutions is different for fusion into triplet and singlet.
For \(\theta_{j+1}=\theta_{j}-\mathrm{i}\) (fusion into triplet) \(\overline{B}(u_{0})\,|0\rangle\) vanishes as we show in Appendix C just below, so it cannot be an eigenvector. If instead \(\theta_{j+1}=\theta_{j}+\mathrm{i}\) (fusion into singlet), \(\overline{B}(u_{0})\,|0\rangle\) is nonzero. Let us show that it is an eigenstate of the transfer matrix \(\overline{t}(u;\kappa)=\kappa\,\overline{A}(u)+\kappa^{-1}\,\overline{D}(u)\) for any \(u\). The standard proof of the algebraic Bethe ansatz hinges on the commutation relations
\[\overline{A}(u)\,\overline{B}(u_{0}) =\frac{u-u_{0}-\mathrm{i}}{u-u_{0}}\,\overline{B}(u_{0})\,\overline{A}(u)+\frac{\mathrm{i}}{u-u_{0}}\,\overline{B}(u)\,\overline{A}(u_{0})\,,\] (B.3) \[\overline{D}(u)\,\overline{B}(u_{0}) =\frac{u-u_{0}+\mathrm{i}}{u-u_{0}}\,\overline{B}(u_{0})\,\overline{D}(u)-\frac{\mathrm{i}}{u-u_{0}}\,\overline{B}(u)\,\overline{D}(u_{0})\,.\] (B.4)
On \(|0\rangle\) the \(A\)- and \(D\)-operators can be replaced by their eigenvalues
\[\overline{A}(u)\,|0\rangle=\overline{Q}^{+}_{\theta}\,|0\rangle\,,\quad\overline{D}(u)\,|0\rangle=\overline{Q}^{-}_{\theta}\,|0\rangle\,.\] (B.5)
Usually the terms with \(\overline{B}(u)\) in (B.3) and (B.4) contribute to the 'unwanted' terms, which cancel against each other by virtue of the Bethe equations. However, when \(\theta_{j+1}=\theta_{j}+\mathrm{i}\) then \(\overline{Q}^{\pm}_{\theta}\) both vanish at \(u=u_{0}\), so the 'unwanted' terms cancel _separately_. Hence \(\overline{B}(u_{0})\,|0\rangle\) is an eigenstate even though the root \(u_{0}\) is not visible in the usual Bethe equations.
## Appendix C Action of the \(B\)-operator at the fixed root
Direct computation shows that
\[\overline{B}(u)\,|0\rangle=\mathrm{i}\sum_{i=1}^{L}\,\prod_{j=1}^{i-1}\!\left( u-\theta_{j}+\frac{\mathrm{i}}{2}\right)\prod_{j=i+1}^{L}\!\left(u-\theta_{j}- \frac{\mathrm{i}}{2}\right)\,|i\rangle\rangle\,.\] (C.1)
For generic values of the inhomogeneities this vector spans the sector with \(M=1\) magnon.
When \(\theta_{j+1}=\theta_{j}+\mathrm{i}\) (fusion into singlet) all coefficients with \(i\neq j,j+1\) in (C.1) contain a factor \(u-(\theta_{j}+\mathrm{i}/2)\), so they vanish at \(u=u_{0}=(\theta_{j}+\theta_{j+1})/2=\theta_{j}+\mathrm{i}/2\). The two remaining coefficients, with \(i=j,j+1\), differ by a sign. Thus \(|0^{\prime}\rangle=\overline{B}(u_{0})\,|0\rangle\in\Pi^{-}_{j,j+1}(\mathcal{H})\) in this case. If instead \(\theta_{j+1}=\theta_{j}-\mathrm{i}\) (fusion into triplet) then all coefficients in (C.1) contain a factor \(u-(\theta_{j}-\mathrm{i}/2)\). Thus \(\overline{B}(u_{0})\,|0\rangle\) now vanishes at \(u=u_{0}=(\theta_{j}+\theta_{j+1})/2=\theta_{j}-\mathrm{i}/2\).
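The following exploratory sketch (ours, not part of the original text) illustrates this appendix numerically. It assumes the convention \(\overline{T}_{0}(u)=\overline{R}_{01}(u-\theta_{1}-\mathrm{i}/2)\cdots\overline{R}_{0L}(u-\theta_{L}-\mathrm{i}/2)\) with \(\overline{R}(v)=v+\mathrm{i}P\), chosen because it reproduces the explicit \(L=2\) matrices (D.1)-(D.4) below; it checks (C.1) for \(L=3\) and the behaviour of \(\overline{B}(u_{0})\,|0\rangle\) for the two fusion signs.

```python
import numpy as np

# Exploratory sketch (ours).  Assumed convention, chosen to reproduce (D.1)-(D.4):
#   Rbar(v) = v*Id + i*P,  Tbar_0(u) = Rbar_{01}(u-theta_1-i/2) ... Rbar_{0L}(u-theta_L-i/2).
# We check (C.1) for L = 3 and the behaviour of Bbar(u_0)|0> for both fusion signs.
L = 3
n, dim = L + 1, 2**(L + 1)                     # qubit 0 = auxiliary space

def swap(i, j):
    M = np.zeros((dim, dim))
    for s in range(dim):
        b = [(s >> (n - 1 - k)) & 1 for k in range(n)]
        b[i], b[j] = b[j], b[i]
        M[sum(x << (n - 1 - k) for k, x in enumerate(b)), s] = 1
    return M

def B_on_vacuum(u, theta):
    T = np.eye(dim, dtype=complex)
    for j in range(1, L + 1):
        T = T @ ((u - theta[j - 1] - 0.5j)*np.eye(dim) + 1j*swap(0, j))
    T = T.reshape(2, 2**L, 2, 2**L)
    vac = np.zeros(2**L); vac[0] = 1.0         # |0> = |up ... up>   (bit 0 = up)
    return T[0, :, 1, :] @ vac                 # Bbar(u) = <up|_0 Tbar_0(u) |down>_0

theta = np.array([0.4, -0.9, 1.7], dtype=complex)
u = 0.23
v = B_on_vacuum(u, theta)
for i in range(1, L + 1):                      # compare with (C.1): |i>> = down at site i
    c = 1j*np.prod([u - theta[j] + 0.5j for j in range(i - 1)]) \
          *np.prod([u - theta[j] - 0.5j for j in range(i, L)])
    print(i, abs(v[2**(L - i)] - c))           # residuals ~0

for shift, label in [(1j, "singlet"), (-1j, "triplet")]:
    th = np.array([0.4, 0.4 + shift, 1.7], dtype=complex)
    u0 = (th[0] + th[1])/2                     # fixed root u_0 = (theta_1 + theta_2)/2
    print(label, np.linalg.norm(B_on_vacuum(u0, th)))   # nonzero vs ~0
```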
## Appendix D Examples of fusion for low length
### Generic case and fusion for \(L=2\)
Let us illustrate in detail how fusion works for a spin chain with \(L=2\) sites. As representation of \(\mathfrak{sl}_{2}\), which is part of the Yangian, the Hilbert space \(\mathcal{H}=V_{2}\otimes V_{2}\) decomposes into the triplet and singlet, \(\mathcal{H}=V_{3}\oplus V_{1}\). Pick orthonormal bases \(\ket{1,1}\equiv\ket{\uparrow\uparrow}\), \(\ket{1,0}\equiv(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow})/\sqrt{2}\), \(\ket{1,-1}\equiv\ket{\downarrow\downarrow}\) for the copy of \(V_{3}\) and \(\ket{0,0}\equiv(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow})/\sqrt{2}\) for the copy of \(V_{1}\) in \(V_{2}\otimes V_{2}\). With respect to \((\ket{1,1},\ket{1,0},\ket{0,0},\ket{1,-1})\) we have
\[\overline{A}(u)=\begin{pmatrix}\overline{Q}_{\mathbf{\theta}}^{+}&0&0&0\\ 0&\overline{Q}_{\mathbf{\theta}}-\frac{1}{4}&\frac{\mathrm{i}}{2}\left(\theta_{1}-\theta_{2}+\mathrm{i}\right)&0\\ 0&\frac{\mathrm{i}}{2}(\theta_{1}-\theta_{2}-\mathrm{i})&\overline{Q}_{\mathbf{\theta}}+\frac{3}{4}&0\\ 0&0&0&\overline{Q}_{\mathbf{\theta}}^{-}\end{pmatrix}\,,\] (D.1) \[\overline{B}(u)=\frac{\mathrm{i}}{\sqrt{2}}\begin{pmatrix}0&0&0&0\\ 2(u-u_{0})&0&0&0\\ -(\theta_{1}-\theta_{2}-\mathrm{i})&0&0&0\\ 0&2(u-u_{0})&\theta_{1}-\theta_{2}+\mathrm{i}&0\end{pmatrix}\,,\] (D.2) \[\overline{C}(u)=\frac{\mathrm{i}}{\sqrt{2}}\begin{pmatrix}0&2(u-u_{0})&-(\theta_{1}-\theta_{2}+\mathrm{i})&0\\ 0&0&0&2(u-u_{0})\\ 0&0&0&\theta_{1}-\theta_{2}-\mathrm{i}\\ 0&0&0&0\end{pmatrix}\,,\] (D.3) \[\overline{D}(u)=\begin{pmatrix}\overline{Q}_{\mathbf{\theta}}^{-}&0&0&0\\ 0&\overline{Q}_{\mathbf{\theta}}-\frac{1}{4}&-\frac{\mathrm{i}}{2}\left(\theta_{1}-\theta_{2}+\mathrm{i}\right)&0\\ 0&-\frac{\mathrm{i}}{2}\left(\theta_{1}-\theta_{2}-\mathrm{i}\right)&\overline{Q}_{\mathbf{\theta}}+\frac{3}{4}&0\\ 0&0&0&\overline{Q}_{\mathbf{\theta}}^{+}\end{pmatrix}\,,\] (D.4)
where \(\overline{Q}_{\mathbf{\theta}}=(u-\theta_{1})(u-\theta_{2})\), \(\overline{Q}_{\mathbf{\theta}}^{\pm}=\overline{Q}_{\mathbf{\theta}}(u\pm\mathrm{i}/2)\) and \(u_{0}=(\theta_{1}+\theta_{2})/2\). The twisted transfer matrix \(\overline{t}(u;\kappa)=\kappa\overline{A}(u)+\kappa^{-1}\overline{D}(u)\) is block diagonal. In the periodic case \(\kappa=1\) its \(2\times 2\) block at \(M=1\) becomes diagonal by \(\mathfrak{sl}_{2}\) symmetry, as the irreps \(V_{3}\) and \(V_{1}\) each occur once in \(\mathcal{H}\).
For generic \(\theta_{1},\theta_{2}\) the representation of the Yangian is irreducible, unlike for \(\mathfrak{sl}_{2}\). To see this explicitly, notice that any subspace invariant under Yangian action has to lie inside either \(\mathfrak{sl}_{2}\) irrep \(V_{3}\) or \(V_{1}\) of \(\mathcal{H}\). From the above we read off
\[\begin{split}\overline{B}(u)\ket{1,1}&=\sqrt{2} \,\mathrm{i}\left(u-u_{0}\right)\ket{1,0}-\frac{\mathrm{i}}{\sqrt{2}}\left( \theta_{1}-\theta_{2}-\mathrm{i}\right)\ket{0,0}\,,\\ \overline{B}(u)\ket{0,0}&=\frac{\mathrm{i}}{\sqrt{2}} \left(\theta_{1}-\theta_{2}+\mathrm{i}\right)\ket{1,-1}\,.\end{split}\] (D.5)
For generic \(\theta_{i}\) the \(B\)-operator mixes \(V_{3},V_{1}\), so there are no invariant subspaces for the Yangian.
Note from (D.5) that \(\ket{0,0}\) becomes an eigenvector of the \(B\)-operator iff
\[\theta_{2}=\theta_{1}+\mathrm{i}\] (D.6)
In this case \(\ket{0,0}\) is also an eigenvector of \(\overline{A}\), \(\overline{C}\) and \(\overline{D}\), so \(V_{1}\) becomes Yangian invariant (fusion into singlet). Yet \(V_{3}\) still is not invariant, cf. (D.5). Thus the Yangian representation on \(\mathcal{H}\) is reducible but indecomposable. Also note from (D.5) that, at the special Bethe root \(u_{0}=(\theta_{1}+\theta_{2})/2\) from (2.48), the \(B\)-operator sends the vacuum \(\ket{0}=\ket{1,1}=\ket{\uparrow\uparrow}\) to a multiple of \(\ket{0^{\prime}}=\ket{0,0}\in V_{1}\). This illustrates several parts of the discussion in Sections 2.3.1 and 2.3.2.
Similarly, by (D.5) we have \(\overline{B}(u)\ket{1,1}\in V_{3}\) iff
\[\theta_{2}=\theta_{1}-\mathrm{i}\] (D.7)
and one can check that \(V_{3}\) becomes an invariant subspace for the Yangian (fusion into triplet). This time \(V_{1}\) is not invariant, see (D.5), and we again have a reducible but indecomposable Yangian representation. Observe that this time \(\overline{B}(u_{0})\,|0\rangle=0\) at the special fixed root.
### Fusion into singlet for \(L=4\)
Since we are most interested in the case of fusion into a singlet let us illustrate the discussion from Section 2.3.2 with another example. To see the features related to the Bethe ansatz we take \(L=4\), with Hilbert space
\[\mathcal{H}=V_{2}^{\otimes 4}\,\cong\,V_{5}\oplus 3\,V_{3}\oplus 2\,V_{1}\qquad \text{for}\quad\mathfrak{sl}_{2}\,,\] (D.8)
where the quintet contains the reference state \(|0\rangle\), and there are three triplets and two singlets.
Let us fuse the two middle sites, by taking inhomogeneities
\[\theta_{3}=\theta_{2}+\mathfrak{i}\,,\qquad\text{with}\ \theta_{1},\theta_{2}, \theta_{4}\ \text{in general position}\,.\] (D.9)
The invariant subspace is
\[V_{\text{inv}}=\Pi_{23}^{-}(\mathcal{H})\cong\,V_{2}\otimes V_{1}\otimes V_{2 }\cong\,V_{3}\oplus V_{1}\qquad\text{for}\quad\mathfrak{sl}_{2}\,.\] (D.10)
Let us denote the copies of the triplet and singlet inside \(V_{\text{inv}}\subset\mathcal{H}\) by \(V_{\text{inv},3}\) and \(V_{\text{inv},1}\). The complement \(\Pi_{23}^{+}(\mathcal{H})\cong V_{2}\otimes V_{3}\otimes V_{2}\) is not invariant: the \(B\)-operator sends \(|0\rangle\in\Pi_{23}^{+}(\mathcal{H})\) to (C.1), which spans the \(M=1\) sector and thus has nontrivial overlap19 with \(V_{\text{inv},3}\). As a Yangian representation \(\mathcal{H}\) is therefore reducible but indecomposable, as illustrated in Figure 2.
Footnote 19: in particular if we plug into \(B\) the fixed Bethe root it gives a state lying entirely in \(V_{\text{inv},3}\), see (D.13)
We will describe the construction of the eigenstates of the transfer matrix by algebraic Bethe ansatz \(\overline{B}(u_{1})\cdots\overline{B}(u_{M})\,|0\rangle\).20 For simplicity we consider the periodic case \(\kappa=1\), for which the decomposition (D.8) describes the degeneracies of the transfer matrix eigenvalues, so there are six distinct eigenvalues, corresponding to six eigenvectors with highest weight for \(\mathfrak{sl}_{2}\), occurring at the sectors with \(M\leqslant 2\) spins \(\downarrow\). In the Bethe equations one factor in the numerator cancels against one in the denominator on the left-hand side, yielding
Footnote 20: The following is based on numerics, but we expect our findings to hold for generic \(\theta_{1},\theta_{2},\theta_{4}\).
\[\frac{u_{m}-\theta_{1}+\mathfrak{i}/2}{u_{m}-\theta_{1}-\mathfrak{i}/2}\, \frac{u_{m}-\theta_{4}+\mathfrak{i}/2}{u_{m}-\theta_{4}-\mathfrak{i}/2}\, \frac{u_{m}-\theta_{2}+\mathfrak{i}/2}{u_{m}-\theta_{2}-3\mathfrak{i}/2}= \prod_{n(\neq m)}^{M}\frac{u_{m}-u_{n}+\mathfrak{i}}{u_{m}-u_{n}-\mathfrak{i}}\,.\] (D.11)
For \(M=1\) the algebraic Bethe ansatz reads (C.1). The right-hand side of (D.11) is unity and we obtain a degree-two polynomial equation for the Bethe root, with solutions that we denote by \(u_{1,\pm}\). Thus we obtain two states
\[\overline{B}(u_{1,\pm})\,|0\rangle\] (D.12)
that are the highest-weight vectors in the triplets contained in \(\Pi_{23}^{+}(\mathcal{H})\). The remaining \(\mathfrak{sl}_{2}\) highest-weight state with \(M=1\) is
\[|0^{\prime}\rangle=\overline{B}(u_{0})\,|0\rangle\in V_{\text{inv},3}\,, \qquad u_{0}=\frac{\theta_{2}+\theta_{3}}{2}=\theta_{2}+\frac{\mathfrak{i}}{2}\,,\] (D.13)
as expected from the discussion in Section 2.3.3. It obeys the Yangian highest-weight conditions \(\overline{C}(u)\,|0^{\prime}\rangle=0\) and is an eigenvector of both diagonal elements of the monodromy matrix:
\[\overline{A}(u)\,|0^{\prime}\rangle=\frac{u-u_{0}-\mathfrak{i}}{u-u_{0}}\, \overline{Q}_{\theta}^{+}(u)\,|0^{\prime}\rangle\,,\qquad\overline{D}(u)\,|0 ^{\prime}\rangle=\frac{u-u_{0}+\mathfrak{i}}{u-u_{0}}\,\overline{Q}_{\theta}^ {-}(u)\,|0^{\prime}\rangle\,.\] (D.14)
It remains to discuss the two singlets from (D.8) at \(M=2\). One is of the standard form
\[\overline{B}(u_{1})\,\overline{B}(u_{2})\,|0\rangle\] (D.15)
where \(u_{1},u_{2}\) solve the Bethe equations (D.11) with \(M=2\). Notice there is only one admissible solution of those Bethe equations with \(M=2\), i.e. without repeated or infinite Bethe roots. The last singlet state in (D.8) spans \(V_{\text{inv},1}\subset V_{\text{inv}}\). As expected, we can obtain it using the \(B\)-operator acting on (D.13) with the Bethe root determined by the reduced Bethe equations (2.50), which read
\[\frac{u_{1}^{\prime}-\theta_{1}+\text{i}/2}{u_{1}^{\prime}-\theta_{1}-\text{i }/2}\,\frac{u_{1}^{\prime}-\theta_{4}+\text{i}/2}{u_{1}^{\prime}-\theta_{4}- \text{i}/2}=1\,.\] (D.16)
Thus we obtain this last singlet state as
\[\overline{B}(u_{1}^{\prime})\,|0^{\prime}\rangle=\overline{B}(u_{0})\, \overline{B}(u_{1}^{\prime})\,|0\rangle\,,\qquad u_{1}^{\prime}=\frac{\theta_ {1}+\theta_{4}}{2}\.\] (D.17)
We see that all \(\mathfrak{sl}_{2}\) highest-weight states are given by the algebraic Bethe ansatz. (Here \(u_{1}^{\prime}\) happens to have the same form as \(u_{0}\), but this is a coincidence for low \(L\), unrelated to any cancellations like in Appendix B or any other special features in the presence of fusion.)
|
2305.19572 | The effect of "very fast" strategies on two species competition | We consider the effect of finite time extinction mechanisms (FTEM) such as
(1) semi-linear harvesting terms, and (2) quasi-linear fast diffusion terms on
two species Lokta-Volterra competition models. We show that these mechanisms
can alter classical dynamics of competitive exclusion, and weak and strong
competition by acting only on a \emph{small} portion of the weaker competitors'
population, analogous to small defector populations in game theory \cite{DC23}.
In particular, a stronger competitors population, with a few individuals
dispersing (``defecting") very quickly, could exhibit bi-stability, as well as
competitive exclusion \emph{reversal}. The non-linear harvesting is applied to
aphid-soybean crop systems, wherein novel dynamics are observed. Applications
to bio-control of invasive pests such as the soybean aphid are discussed. | Aniket Banerjee, Vaibhava Srivastava, Rana D. Parshad | 2023-05-31T05:49:02Z | http://arxiv.org/abs/2305.19572v1 | # The effect of "Very fast" strategies on two species competition
###### Abstract.
We consider the effect of finite time extinction mechanisms (FTEM) such as (1) semi-linear harvesting terms, (2) quasi-linear fast diffusion terms on two species Lokta-Volterra competition models. We show that these mechanisms can alter classical dynamics of competitive exclusion, and weak and strong competition by acting only on a _small_ portion of the weaker competitors' population, analogous to small defector populations in game theory [59]. In particular, a stronger competitors population, with a few individuals dispersing ("defecting") very quick, could exhibit bi-stability, as well as competitive exclusion _reversal_. The non-linear harvesting is applied to aphid-soybean crop systems, wherein novel dynamics are observed. Applications to bio-control of invasive pests such as the soybean aphid are discussed.
Key words and phrases: Finite time extinction, Coexistence
## 1. Introduction
The classical two-species Lotka-Volterra competition model has been rigorously investigated in the literature [37, 28]. It models two competing species by accounting for growth as well as inter- and intra-species competition [26, 49]. It predicts states that are well observed in population biology - co-existence, competitive exclusion of one competitor, and bi-stability - and finds numerous applications in population ecology, invasion science, evolutionary biology, epidemics, economics and game theory [49, 28, 29, 31, 59, 37]. Note that the equilibrium states are achieved only asymptotically, as is the case in many differential equation population models. Competition theory predicts that two competing species whose niches overlap perfectly cannot coexist, and one must competitively exclude the other. The classical Lotka-Volterra ODE competition model predicts this dynamic under certain parametric regimes. Other parametric regimes predict initial condition-dependent outcomes (strong competition) or stable coexistence (weak competition). Analogously, the spatially explicit Lotka-Volterra model predicts that the slower diffusing competitor will always exclude the faster one, given that they have the same kinetics.
Invasive species are most successful in environments where they lack close relatives [16, 15]. This has been confirmed in experiments, where invasion success in microbial communities _increases_ as phylogeny or "relatedness" between invader and invadee decreases [19]. Thus an invasion is most likely to be successful if the invasive species faces low intra-species competition, whilst being a superior inter-species competitor [50, 51]. However, in reality a successful invasion results from the interaction of a myriad of factors - biological, environmental, landscape, and temporal - and this is quantified via the concept of a species' _ecological niche_ [18, 17]. To
fix ideas, _niche space_ can be thought of as a continuum, similar to \(\mathbb{R}^{4}_{+}\), where the four axes are physical and environmental factors (say climate), biological factors (say resources), space and time. A species _niche_ is the _response_ it has to each point in the space, and the _effect_ it has at each point [18, 57]. The niche-based hypothesis posits invasive species are dexterous at using unexploited resources - that is filling _vacant_ niches, or broadening their niche breadth if the opportunity presents itself [57, 58].
In [63] we decided to consider the effect of creating a vacant niche very rapidly, via finite time extinction mechanisms (FTEM). There are several motivations for studying FTEM. In classical biological control, pest populations can _rebound_ from levels as close to extinction as one pleases [22]. This is well observed with soybean aphids (_Aphis glycines_), an invasive pest on soybean crops, particularly in the North-central US. Recent work in epidemics has considered a class of susceptible-infected models with non-smooth incidence functions, that can lead to host extinction in finite time - yet are seen to be better fitted to data collected than smooth systems [12, 13, 20, 14, 10]. Non-smooth responses have been considered analytically in the predator-prey literature as well [21, 23, 24, 1, 2], and in fitting various data [4, 8, 9, 5].
Consider a model for the competition dynamics between a strong(er) invader \(u\) and weak(er) invadee \(v\), permitting multiple locally stable equilibria. The state \((u^{*},0)\) would imply invasion success [52], or the competitive exclusion of \(v\). Managers would rather aim to attain \((0,v^{*})\) - eradication - or \((u^{*},v^{*})\), where \(u^{*}\) is at a _manageable_ level. These states would imply invasion failure [18, 52]. We show one can _reverse_ invasion success by strategically _increasing_ the rate of attraction to the weak(er) invadee's extinction state, thereby creating a niche opportunity for the invader, previously occupied by the invadee. This will indirectly _increase_ intraspecific competition among the invaders as they attempt to fill the niche, and facilitate invasion failure.
In the current manuscript, we show the following,
* A two species ODE Lotka-Volterra competition model, with a semi-linear harvesting term exhibiting FTEM can lead to bi-stability, via Lemma 3.8 and Theorem 3.9, also see Fig. 2. FTEM can also enable a competitor to avoid competitive exclusion and persist, for various regimes of initial conditions. This is seen via Theorem 2.1, also see Fig. 0(a). To this end, the harvesting needs to be performed only on a portion of the weaker competitors population. This is seen via Theorem 3.4.
* A new spatially explicit model is introduced, in the form of a quasi-linear PDE. Herein a certain portion of the weaker population moves at a faster rate than the stronger population. Degeneracy theory is applied to understand solutions to this model system, see definition 5.4. Also see Theorem 5.3 and Fig. [7(A),8(A)].
* FTEM in this PDE model can cause the slower diffuser to _lose_, via Theorem 5.7, as long as there are a few "very" fast-moving individuals. Refer to Fig. [7(B),8(B)].
* This framework is applied to a recent soybean aphid-soybean plant model, where novel dynamics are observed. See Fig. 9.
## 2. Prior Results
### General competition Model
Consider the classical two-species Lotka-Volterra ODE competition model,
\[\frac{du}{dt}=a_{1}u-b_{1}u^{2}-c_{1}uv,\ \frac{dv}{dt}=a_{2}v-b_{2}v^{2}-c_{2}uv. \tag{1}\]
where \(u\) and \(v\) are the population densities of two competing species, \(a_{1}\) and \(a_{2}\) are the intrinsic (per capita) growth rates, \(b_{1}\) and \(b_{2}\) are the intraspecific competition rates, \(c_{1}\) and \(c_{2}\) are the interspecific competition rates. All parameters considered are positive. The dynamics of this system are well studied [3]. We recap these briefly,
* \(E_{0}=(0,0)\) is always unstable.
* \(E_{u}=(\frac{a_{1}}{b_{1}},0)\) is globally asymptotically stable if \(\frac{a_{1}}{a_{2}}>\max\left\{\frac{b_{1}}{c_{2}},\frac{c_{1}}{b_{2}}\right\}\). Herein \(u\) is said to competitively exclude \(v\).
* \(E_{v}=(0,\frac{a_{2}}{b_{2}})\) is globally asymptotically stable if \(\frac{a_{1}}{a_{2}}<\min\left\{\frac{b_{1}}{c_{2}},\frac{c_{1}}{b_{2}}\right\}\). Herein \(v\) is said to competitively exclude \(u\).
* \(E^{*}=\left(\frac{a_{1}b_{2}-a_{2}c_{1}}{b_{1}b_{2}-c_{1}c_{2}},\frac{a_{2}b_{1}-a_{1}c_{2}}{b_{1}b_{2}-c_{1}c_{2}}\right)\) exists when \(b_{1}b_{2}-c_{1}c_{2}\neq 0\). The positivity of the equilibrium holds if \(\frac{c_{2}}{b_{1}}<\frac{a_{2}}{a_{1}}<\frac{b_{2}}{c_{1}}\) and it is globally asymptotically stable if \(b_{1}b_{2}-c_{1}c_{2}>0\). This is said to be the case of weak competition.
* If \(b_{1}b_{2}-c_{1}c_{2}<0\), then \(E^{*}=\left(\frac{a_{1}b_{2}-a_{2}c_{1}}{b_{1}b_{2}-c_{1}c_{2}},\frac{a_{2}b_{ 1}-a_{1}c_{2}}{b_{1}b_{2}-c_{1}c_{2}}\right)\) is unstable as a saddle. In this setting, one has an initial condition dependent attraction to either \(E_{u}(\frac{a_{1}}{b_{1}},0)\) or \(E_{v}(0,\frac{a_{2}}{b_{2}})\). This is the case of strong competition.
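These classical regimes are easy to check numerically. The sketch below is an illustration only (the parameter values are our own choices, not taken from the paper): it integrates system (1) with SciPy and compares the long-time state against the coexistence equilibrium \(E^{*}\).

```python
# Illustrative sketch: integrate the classical competition model (1) and
# compare the long-time state with the coexistence equilibrium E*.
# All parameter values below are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def classical_lv(t, y, a1, b1, c1, a2, b2, c2):
    u, v = y
    return [a1*u - b1*u**2 - c1*u*v,
            a2*v - b2*v**2 - c2*u*v]

# weak-competition regime: b1*b2 - c1*c2 > 0 and c2/b1 < a2/a1 < b2/c1
a1, b1, c1, a2, b2, c2 = 1.0, 1.0, 0.5, 1.0, 1.0, 0.5
sol = solve_ivp(classical_lv, (0, 200), [0.2, 0.9],
                args=(a1, b1, c1, a2, b2, c2), rtol=1e-8, atol=1e-10)
E_star = ((a1*b2 - a2*c1) / (b1*b2 - c1*c2),
          (a2*b1 - a1*c2) / (b1*b2 - c1*c2))
print("long-time state:", sol.y[:, -1], " expected E*:", E_star)
```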
### Prior work on FTEM
The following model was introduced in [63] as a means to show that finite time extinction mechanism (FTEM) can alter the above classical dynamics.
\[\left\{\begin{array}{ll}\frac{du}{dt}&=a_{1}u-b_{1}u^{2}-c_{1}u^{p}v,\ 0<p\leq 1,\\ \frac{dv}{dt}&=a_{2}v-b_{2}v^{2}-c_{2}uv^{q},\ 0<q\leq 1.\end{array}\right. \tag{2}\]
We see that the classical model is a special case of the above when \(p=q=1\). Note, \(0<p<1,q=1\), allows for _finite_ time extinction of \(u\), and \(p=1,0<q<1\), allows for _finite_ time extinction of \(v\). There is also the more complex case when \(0<p<1,0<q<1\). If \(p=q=1\), and \(\frac{a_{1}}{a_{2}}>\max\left\{\frac{b_{1}}{c_{2}},\frac{c_{1}}{b_{2}}\right\}\), then \(u\) is said to competitively exclude \(v\), and \((\frac{a_{1}}{b_{1}},0)\) is globally asymptotically stable. One can investigate the effect of \(0<q<1\) on this situation. To this end the following result [63] is recapped,
**Theorem 2.1**.: _Consider (2), with \(p=1\), \(\frac{a_{1}}{c_{1}}>\frac{a_{2}}{b_{2}}\), \((a_{2})^{2}b_{1}+2a_{2}c_{1}c_{2}>4a_{1}b_{2}c_{2}\). Then there exists a \(q^{*}\in(0,1)\), s.t. for any \(q^{*}<q<1\) there is no interior equilibrium, for \(q=q^{*}\) there is a unique non-hyperbolic equilibrium, and for \(0<q<q^{*}\) there exist two interior equilibria, a saddle and a nodal sink._
**Remark 1**.: It is a natural question to investigate the validity of the \(-c_{2}uv^{q},\ 0<q\leq 1\) term, as a control mechanism in a laboratory setting. Since \(-c_{2}uv\) models classical inter-species competition, it is inherent to any system, and not pragmatic to manipulate. This raises the question of other reaction terms, that would be more amenable to laboratory manipulation.
### The modified Model
Consider the population dynamics of a species \(v\) typically governed by an ODE, \(\frac{dv}{dt}=f(v,u)\), where dynamics such as growth, death, competition, and depredation are embedded in \(f(v,u)\). One can define an operator \(\mathcal{L}:\mathbb{R}\mapsto\mathbb{R}\), which is a combination of a linear growth operator, such as \(\mathcal{L}(v)=a_{1}v\), and a non-linear harvesting type operator \(L^{*}(v)\approx-v^{p}\), \(0<p<1\). We take the following approach. Define,
\[\mathcal{L}^{1}(v)\] \[= (q\mathcal{L}+(1-q)L^{*})(v),\ 0<q<1\] \[= q\mathcal{L}(v)+(1-q)L^{*}(v),\ 0<q<1\] \[= qa_{1}v-(1-q)v^{p}, \tag{3}\]
where \(L^{*}(v)\approx-v^{p}\), \(0<p<1\). Thus the operator \(L^{*}(v)\) will play the role of the sub-linear harvesting term, so as to cause finite time extinction. However, this is only effective on a fraction \((1-q)\), \(q\in(0,1)\), of the population.
**Remark 2**.: Thus the growth operator \(\mathcal{L}^{1}\), provides a way to formalize the action via which we have a fraction of the population growing as per the regular growth coefficient, while the other fraction is harvested at a sub-linear rate.
**Remark 3**.: Another interpretation of the non-linear operator \(L^{*}\) could be a faction or group within the \(v\) population that decides to divert from the incentives of the group as a collective whole. Such competitive mechanisms have been under intense recent investigation.
The model is modified to bring in the effect of finite time extinction. The modified model is as follows:
\[\frac{du}{dt} =a_{1}u-b_{1}u^{2}-c_{1}uv,\] \[\frac{dv}{dt} =a_{2}qv-b_{2}v^{2}-c_{2}uv-(1-q)v^{p}. \tag{4}\]
where \(0<p<1\) and \(0<q<1\).
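As a rough numerical illustration (not the authors' code), the sketch below integrates (4) using the kinetic parameters listed in the caption of Figure 4 and the value \(q=0.91\) used later in Section 4; the initial data and solver settings are assumed.

```python
# Illustrative sketch: integrate the modified model (4) and compare with the
# classical case q = 1.  max(v, 0) guards the non-Lipschitz term v**p near 0.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, b1, b2, c1, c2, p = 0.4, 0.6, 1.0, 0.6, 0.3, 0.8, 0.6  # Fig. 4 caption

def modified_lv(t, y, q):
    u, v = y
    v = max(v, 0.0)
    return [a1*u - b1*u**2 - c1*u*v,
            a2*q*v - b2*v**2 - c2*u*v - (1 - q)*v**p]

for q, label in [(1.0, "classical (q = 1)"), (0.91, "with FTEM term (q = 0.91)")]:
    sol = solve_ivp(modified_lv, (0, 300), [0.1, 0.5], args=(q,), max_step=0.1)
    print(f"{label}: final (u, v) ≈ {sol.y[:, -1]}")
```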
## 3. Equilibria and Stability of System (4)
### Existence of equilibria
In this subsection, we perform the qualitative analysis of the system (4). Considering the biological implication of the system for ecological populations, we are interested in studying the dynamics of the system (4) in the closed first quadrant \(\mathbb{R}^{2}_{+}\) of the (u,v) plane. It is obvious that \(E_{0}(0,0)\) is an equilibrium point of the system (4). We can have two types of boundary equilibria: \(E_{u}(\frac{a_{1}}{b_{1}},0)\), which is trivial, and
\(E_{v}(0,\bar{v})\) as described in lemma 3.1. The whole plane \(\mathbb{R}^{2}_{+}\) is not positively invariant under the parameter \(q\) for the system (4). However, the positively invariant subspace from the classical model, \(\Gamma=\{(u,v):0\leq u\leq\frac{a_{1}}{b_{1}},0\leq v\leq\frac{a_{2}}{b_{2}}\}\), contains all the equilibria of the system (4).
In this subsection, we discuss the existence of the interior equilibria of the system (4) in the invariant set. In order to obtain the equilibria of the system (4), we consider the nullclines of the two populations \(u\) and \(v\), which are given by:
\[a_{1}u-b_{1}u^{2}-c_{1}uv =0,\] \[a_{2}qv-b_{2}v^{2}-c_{2}uv-(1-q)v^{p} =0. \tag{5}\]
Simplifying system of equations (5) to get an explicit form in terms of population \(v\) we get,
\[(a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})v-b_{1}(1-q)v^{p-1}=0 \tag{6}\]
We will study the dynamics of the polynomial (6) to find the number of equilibria possible for the system (4).
**Lemma 3.1**.: _Let \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\). Then we have:_
1. _If_ \(q<\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there does not exist any boundary equilibrium of the form_ \(E_{v}(0,\bar{v})\)_._
2. _If_ \(q=\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exists a unique boundary equilibrium_ \(E_{v}(0,\bar{v})\)_._
3. _If_ \(q>\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exist two boundary equilibria_ \(E_{v_{1}}(0,\bar{v_{1}})\) _and_ \(E_{v_{2}}(0,\bar{v_{2}})\)_._
**Lemma 3.2**.: _If \(b_{1}b_{2}-c_{1}c_{2}\leq 0\) there exists a unique interior coexistence equilibrium._
**Lemma 3.3**.: _When \(b_{1}b_{2}-c_{1}c_{2}>0\) and \(v_{max}^{p-2}=\frac{b_{1}b_{2}-c_{1}c_{2}}{(1-q)(1-p)}\) then we have:_
1. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}<0\) _then there does not exist any positive interior equilibrium_
2. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}>0\) _then there exist two positive interior equilibria_
3. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}=0\) _then there exists a unique interior equilibrium_
Proof.: Let \(\phi(v)=(a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})v-b_{1}(1-q)v^{p-1}\) where \((b_{1}b_{2}-c_{1}c_{2})>0\).
We try to study the dynamics of the slope of \(\phi(v)\) to determine the existence of equilibrium points.
We have, \(\phi^{\prime}(v)=(c_{1}c_{2}-b_{1}b_{2})-b_{1}(p-1)(1-q)v^{p-2}\).
Now, as \((b_{1}b_{2}-c_{1}c_{2})>0\), \(\phi^{\prime}(v)\) can be of either sign depending on the magnitude of \(v\).
Let us assume there exists an extremum \(v_{max}\); then \(\phi^{\prime}(v_{max})=0\).
Solving for the extremum we get \(v_{max}^{p-2}=\frac{b_{1}b_{2}-c_{1}c_{2}}{b_{1}(1-q)(1-p)}\). As \(b_{1}b_{2}-c_{1}c_{2}>0\) and \(0<p,q<1\), \(v_{max}\) is positive.
Figure 1. The figures show the existence of interior equilibria under different parameter conditions, as seen in theorem 3.4 part 4. Figures 1(a), 1(b) and 1(c) show the existence of none, one, and two interior equilibria, respectively, in system (4). Figure 1(d) shows the nullclines when the system has three boundary equilibria and two interior equilibria. The straight line is the \(u\) nullcline and the curved line is the \(v\) nullcline.
To study the nature of the extremum we compute the second derivative, which gives \(\phi^{\prime\prime}(v_{max})=-b_{1}(1-q)(1-p)(2-p)v_{max}^{p-3}\). As \(0<p,q<1\), \(\phi^{\prime\prime}(v_{max})\) is strictly negative for any value of \(p\) and \(q\). Thus at \(v_{max}\) we have a maximum. Now, as the parameters are fixed, \(v_{max}\) is the unique positive critical point, so \(v_{max}\) is a global maximum.
Then the maximum value of the function is
\(\phi(v_{max})=(a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})v_{max}-b_{1}(1-q)v_{max}^{p-1}\)
\(=(a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}\)
We see that \(\phi(v)\rightarrow-\infty\) both as \(v\to 0\) and as \(v\rightarrow\infty\). So the curve is downward facing, as seen in figure 1.
So if \(\phi(v_{max})<0\) then it is obvious that there does not exist any positive root of \(\phi(v)\). If \(\phi(v_{max})=0\) then \(v_{max}\) is the only root of \(\phi(v)\). As \(v_{max}\) is the only maximum, if \(\phi(v_{max})>0\) then we have two distinct positive roots of \(\phi(v)\). Hence the number of equilibria is determined for \(\phi(v)\) when \(b_{1}b_{2}-c_{1}c_{2}>0\).
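The root-counting argument can be checked numerically. The following sketch (illustrative only; parameters borrowed from the caption of Figure 4, with a few assumed values of \(q\)) evaluates \(\phi(v)\) on a grid and counts sign changes.

```python
# Illustrative sketch: count positive roots of phi(v) from (6) via sign changes.
import numpy as np

a1, a2, b1, b2, c1, c2, p = 0.4, 0.6, 1.0, 0.6, 0.3, 0.8, 0.6  # Fig. 4 caption

def phi(v, q):
    return (a2*b1*q - c2*a1) + (c1*c2 - b1*b2)*v - b1*(1 - q)*v**(p - 1)

v = np.linspace(1e-4, 1.5, 200000)
for q in (0.80, 0.91, 0.99):
    vals = phi(v, q)
    n_roots = int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))
    print(f"q = {q}: phi has {n_roots} positive root(s) on this grid")
```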
By lemmas 3.1, 3.2 and 3.3 the existence of the equilibria of system (4) is known, and it can be summarized in theorem 3.4:
**Theorem 3.4**.:
1. _System (_4_) has a trivial equilibrium_ \(E_{0}(0,0)\)_._
2. _System (_4_) has two types of boundary equilibrium_ \(E_{u}(a_{1}/b_{1},0)\) _and_ \(E_{v}(0,\bar{v})\)_. The number of equilibria of the form_ \(E_{v}(0,\bar{v})\) _can be determined by:_ 1. _If_ \(q<\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _and_ \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\) _then there does not exist any boundary equilibrium of the form_ \(E_{v}(0,\bar{v})\)_._ 2. _If_ \(q=\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _and_ \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\) _then there exists a unique boundary equilibrium_ \(E_{v}(0,\bar{v})\)_._ 3. _If_ \(q>\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _and_ \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\) _then there exist two boundary equilibria_ \(E_{v_{1}}(0,\bar{v_{1}})\) _and_ \(E_{v_{2}}(0,\bar{v_{2}})\)_._
3. _If_ \((b_{1}b_{2}-c_{1}c_{2})\leq 0\) _then system (_4_) has a unique interior equilibrium._
4. _If_ \((b_{1}b_{2}-c_{1}c_{2})>0\) _and_ \(v_{max}^{p-2}=\frac{b_{1}b_{2}-c_{1}c_{2}}{(1-q)(1-p)}\) _then_ 1. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}<0\) _then system (_4_) has no interior equilibrium._ 2. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}=0\) _then system (_4_) has a unique interior equilibrium._ 3. _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}>0\) _then system (_4_) has two distinct interior equilibria._
### Stability of the equilibria
In this subsection, we study the dynamics of the system (4) in the neighborhood of each equilibrium. We study the stability of each equilibrium using the Jacobian matrix \(J(u,v)\) of the system (4) given by,
\[\begin{bmatrix}a_{1}-2b_{1}u-c_{1}v&-c_{1}u\\ -c_{2}v&a_{2}q-2b_{2}v-c_{2}u-p(1-q)v^{p-1}\end{bmatrix}\]
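To illustrate how this Jacobian is used, the sketch below (not from the paper; parameters from the caption of Figure 4 with an assumed \(q=0.95\)) locates interior equilibria from condition (6) and classifies them through the eigenvalues of the matrix above.

```python
# Illustrative sketch: find interior equilibria of (4) from condition (6) and
# classify them with the eigenvalues of the Jacobian written above.
import numpy as np
from scipy.optimize import brentq

a1, a2, b1, b2, c1, c2, p, q = 0.4, 0.6, 1.0, 0.6, 0.3, 0.8, 0.6, 0.95

def phi(v):   # interior-equilibrium condition (6)
    return (a2*b1*q - c2*a1) + (c1*c2 - b1*b2)*v - b1*(1 - q)*v**(p - 1)

grid = np.linspace(1e-4, 1.0, 20000)
vals = phi(grid)
for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
    v_s = brentq(phi, grid[i], grid[i + 1])
    u_s = (a1 - c1*v_s) / b1                      # from the u-nullcline
    J = np.array([[a1 - 2*b1*u_s - c1*v_s, -c1*u_s],
                  [-c2*v_s, a2*q - 2*b2*v_s - c2*u_s - p*(1 - q)*v_s**(p - 1)]])
    print(f"equilibrium ({u_s:.4f}, {v_s:.4f}): eigenvalues {np.linalg.eigvals(J)}")
```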
The trivial equilibrium \(E_{0}(0,0)\) has two eigenvalues, of which \(\lambda_{1}=a_{1}\) is always positive. So, \(E_{0}(0,0)\) is always unstable. Next, we investigate the stability of the boundary and the interior equilibria. The analysis of the boundary equilibrium \(E_{u}(a_{1}/b_{1},0)\) is not possible via linear stability analysis, due to the lack of differentiability of system (4) at \(v=0\). Nonetheless, the following result can be provided,
**Lemma 3.5**.: _Consider the boundary equilibrium \(E_{u}(a_{1}/b_{1},0)\). This is locally stable; that is, there exists certain initial data that is attracted to this equilibrium in finite time._
Proof.: The proof follows via methods in [2, 22].
**Lemma 3.6**.: _Let \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\) then_
1. _If_ \(q=\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exists a unique boundary equilibrium_ \(E_{v}(0,v_{\phi})\) _and it is a saddle node._
2. _If_ \(q>\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exist two boundary equilibria_ \(E_{v_{1}}(0,v_{1})\) _and_ \(E_{v_{2}}(0,v_{2})\)_, where_ \(v_{1}<v_{2}\)_;_ \(E_{v_{1}}\) _is a saddle point, while_ \(E_{v_{2}}\) _is a stable node if_ \(v_{2}>\frac{a_{1}}{c_{1}}\)__
**Lemma 3.7**.: _If \((b_{1}b_{2}-c_{1}c_{2})\leq 0\) then the unique interior point is always an unstable point._
**Lemma 3.8**.: _If \(b_{1}b_{2}-c_{1}c_{2}>0\) then we have two positive interior equilibria, where \(E(u^{*},v^{*})\) is stable if \(q<\frac{1}{a_{2}b_{1}(1-p)}\min(a_{1}+a_{1}c_{2}(1-p)+((b_{2}-c_{1})+(b_{1}b_{2}-c_{1}c_{2})(1-p))v^{*},a_{1}c_{2}(1-p)+(b_{1}b_{2}-c_{1}c_{2})\frac{2-p}{b_{1}}v^{*})\) and \(q>\frac{1}{a_{2}b_{1}(1-p)}((b_{1}b_{2}-c_{1}c_{2})(2-p)v_{max}+c_{2}a_{1}(1-p))\) where \(v_{max}^{p-2}=\frac{b_{1}b_{2}-c_{1}c_{2}}{(1-q)(1-p)}\); otherwise it is a saddle point._
By lemmas 3.5, 3.6, 3.7 and 3.8 we have an understanding of the stability of the equilibria of the system (4). The results can be summarized in the following theorem:
**Theorem 3.9**.: _The study of stability of the system (4) shows_
1. _The trivial equilibrium_ \(E_{0}(0,0)\) _is always unstable._
2. _The boundary equilibrium_ \(E_{u}(\frac{a_{1}}{b_{1}},0)\) _is locally stable in the sense of Lemma 3.5: there exists certain data that is attracted to it in finite time._
3. _For the boundary equilibria_ \(E_{v}(0,\bar{v})\)_, let_ \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\)_. Then_ * _If_ \(q=\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exists a unique boundary equilibrium_ \(E_{v}(0,v_{\phi})\) _and it is a saddle node._ * _If_ \(q>\frac{b_{2}v_{\phi}(2-p)}{a_{2}(1-p)}\) _then there exist two boundary equilibria_ \(E_{v_{1}}(0,v_{1})\) _and_ \(E_{v_{2}}(0,v_{2})\) _where_ \(v_{1}<v_{2}\)_;_ \(E_{v_{1}}\) _is a saddle point, while_ \(E_{v_{2}}\) _is a stable node if_ \(v_{2}>\frac{a_{1}}{c_{1}}\)__
4. _When_ \(b_{1}b_{2}-c_{1}c_{2}\leq 0\) _then the unique interior point is always an unstable point._
5. _When_ \(b_{1}b_{2}-c_{1}c_{2}>0\) _and_ \(v_{max}^{p-2}=\frac{b_{1}b_{2}-c_{1}c_{2}}{(1-q)(1-p)}\) _then,_ * _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}=0\) _then the equilibrium point undergoes a saddle-node bifurcation_
* _If_ \((a_{2}b_{1}q-c_{2}a_{1})-v_{max}(b_{1}b_{2}-c_{1}c_{2})\frac{(2-p)}{(1-p)}>0\) _then the equilibrium point is stable if_ \(\frac{1}{a_{2}b_{1}(1-p)}((b_{1}b_{2}-c_{1}c_{2})(2-p)v_{max}+c_{2}a_{1}(1-p))<q< \frac{1}{a_{2}b_{1}(1-p)}\min(a_{1}+a_{1}c_{2}(1-p)+((b_{2}-c_{1})+(b_{1}b_{2}- c_{1}c_{2})(1-p))v^{*},a_{1}c_{2}(1-p)+(b_{1}b_{2}-c_{1}c_{2})\frac{2-p}{b_{1}}v^{*})\)__
By lemma 3.8, we see that when \(E_{max}(u_{max},v_{max})\) is the equilibrium and we cross from one side of a parameter surface to the other by changing the parameter \(q\), the system (4) changes from having no interior equilibrium to having two interior equilibria. So, there can exist a saddle-node bifurcation at \(E_{max}(u_{max},v_{max})\), which we study in the next section.
### Saddle-node Bifurcation
When \((b_{1}b_{2}-c_{1}c_{2})>0\), then by lemma 3.3 the system can pass from having no interior equilibrium point to having two interior equilibrium points as the nullclines cross through the point \(E_{max}(u_{max},v_{max})\), due to a change in the parameter \(q\).
The Jacobian matrix for such a equilibrium point \(E_{max}(u_{max},v_{max})\) is J\((E_{max})\)= \(\begin{bmatrix}-b_{1}u_{max}&-c_{1}u_{max}\\ -c_{2}v_{max}&-b_{2}v_{max}+(1-p)(1-q)v_{max}^{(p-1)}\end{bmatrix}\)
So, \(\det J(E_{max})=u_{max}v_{max}\big{(}(b_{1}b_{2}-c_{1}c_{2})-b_{1}(1-p)(1-q)v_{max}^{p-2}\big{)}\)
As \(v_{max}\) and \(u_{max}\) are not zero then,
Figure 2. The figures show the stability of interior equilibria under different parameter conditions as seen in theorem 3.9. The red lines are the nullclines of the system. The green line is the stable manifold passing through the saddle equilibrium. The dots are the equilibria of the system. Figure 2(a) shows the instability of the interior equilibrium as seen in theorem 3.9(4). Figure 2(b) shows the dynamics of one interior equilibrium being stable and one being a saddle under the conditions of theorem 3.9(5).
\(\det J(E_{max})=0\) if \(v_{max}^{(p-2)}=\frac{b_{1}b_{2}-c_{1}c_{2}}{b_{1}(1-p)(1-q)}\)
It is obvious that \(\lambda_{1}=0\) and \(\lambda_{2}=-b_{1}u_{max}-v_{max}(b_{2}-(b_{1}b_{2}-c_{1}c_{2})/b_{1})\) are the eigenvalues of the \(J(E_{max})\). So if \(-a_{1}+v_{max}(c_{1}-b_{2}+(b_{1}b_{2}-c_{1}c_{2})/b_{1})\neq 0\) then \(\lambda_{2}\neq 0\).
So from the conditions, we can obtain that
\(SN_{1}\)=\(\{(a_{1},b_{1},b_{2},c_{1},c_{2},p,q):(b_{1}b_{2}-c_{1}c_{2})>0,-a_{1}+v_{max}(c_{1}-b_ {2}+(b_{1}b_{2}-c_{1}c_{2})/b_{1})\neq 0,a_{1}>0,b_{1}>0,b_{2}>0,c_{1}>0,c_{2}>0,0<p,q<1\}\)
is a saddle-node bifurcation surface. Sotomayor's theorem [60] is used to verify the transversality conditions for the occurrence of saddle-node bifurcation of the parameter on the surface \(SN_{1}\). We know that \(J(E_{max})\) has one simple zero eigenvalue. If V and W represent the eigenvectors for the zero eigenvalue of the matrix \(J(E_{max})\) and \(J(E_{max})^{T}\), respectively, then V and W are,
V= \(\begin{bmatrix}V_{1}\\ V_{2}\end{bmatrix}\)= \(\begin{bmatrix}c_{1}\\ -b_{1}\end{bmatrix}\), W= \(\begin{bmatrix}W_{1}\\ W_{2}\end{bmatrix}\)= \(\begin{bmatrix}c_{2}v_{max}\\ -b_{1}u_{max}\end{bmatrix}\).
Furthermore, we have
\(F_{q}(E_{max};SN_{1})\)= \(\begin{bmatrix}0\\ a_{2}v_{max}+v_{max}^{p}\end{bmatrix}\).
\(D^{2}F(E_{max};SN_{1})(V,V)\)= \(\begin{bmatrix}\frac{\partial^{2}F_{1}}{\partial u^{2}}V_{1}^{2}+2\frac{\partial^{2}F_{1}}{\partial u\partial v}V_{1}V_{2}+\frac{\partial^{2}F_{1}}{\partial v^{2}}V_{2}^{2}\\ \frac{\partial^{2}F_{2}}{\partial u^{2}}V_{1}^{2}+2\frac{\partial^{2}F_{2}}{\partial u\partial v}V_{1}V_{2}+\frac{\partial^{2}F_{2}}{\partial v^{2}}V_{2}^{2}\end{bmatrix}\)
=\(\begin{bmatrix}0\\ \frac{2b_{1}}{c_{1}^{2}}(b_{1}b_{2}-c_{1}c_{2})(p-2)\end{bmatrix}\).
Obviously, V and W satisfy the transversality conditions
\(W^{T}F_{q}(E_{max};SN_{1})=\frac{-b_{1}u_{max}}{c_{1}}(a_{2}+p)\neq 0\),
\(W^{T}D^{2}F(E_{max};SN_{1})(V,V)=\frac{-2b_{1}^{2}u_{max}}{c_{1}^{2}c_{2}v_{ max}}(b_{1}b_{2}-c_{1}c_{2})(p-2)\neq 0\), as \(b_{1}b_{2}-c_{1}c_{2}>0\) and \(0<p<1\).
for the occurrence of the saddle-node bifurcation at the parameters on the surface \(SN_{1}\). Hence, it can be stated that when the parameters cross from one side of the surface to the other, the number of interior equilibria of the model changes from zero to two, and the two interior equilibria are a hyperbolic saddle and a node. The existence of the saddle-node bifurcation can be seen numerically for a particular parameter set in figure 3.
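A simple numerical way to bracket this threshold (an illustration, not the paper's computation) is to sweep \(q\) and record where the number of positive roots of (6) jumps from zero to two; parameter values again follow the caption of Figure 4.

```python
# Illustrative sketch: bracket the saddle-node threshold in q by counting
# roots of the interior-equilibrium condition (6) along a sweep.
import numpy as np

a1, a2, b1, b2, c1, c2, p = 0.4, 0.6, 1.0, 0.6, 0.3, 0.8, 0.6  # Fig. 4 caption
v = np.linspace(1e-4, 1.5, 100000)

def n_roots(q):
    vals = (a2*b1*q - c2*a1) + (c1*c2 - b1*b2)*v - b1*(1 - q)*v**(p - 1)
    return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

qs = np.linspace(0.85, 0.99, 300)
counts = np.array([n_roots(q) for q in qs])
jump = np.where(np.diff(counts) != 0)[0]
if jump.size:
    print(f"root count changes from {counts[jump[0]]} to {counts[jump[0]+1]} "
          f"near q ≈ {qs[jump[0]+1]:.4f}")
```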
### Pitchfork Bifurcation
**Theorem 3.10**.: _Consider system (4), when the boundary equilibria \(E_{v}(0,\bar{v})\) satisfies the nullcline equation \(qa_{2}-b_{2}\bar{v}-(1-q)\bar{v}^{p-1}=0\) then a pitchfork bifurcation occurs as \(q\to q^{*}\), where,_
\[q^{*}=\frac{\bar{v}}{a_{2}}\left(\frac{c_{2}(b_{1}b_{2}-c_{1}c_{2})}{(1-p)}+b_ {2}\right)\]
Proof.: We obtain the bifurcation parameter by studying the gradient of the nullclines.
The nullclines from system (4) are,
Figure 3. The figures show the existence of the saddle-node bifurcation with change in the \(q\) parameter. Figures 3(a) and 3(b) show the bifurcation diagram and the nullcline dynamics at the bifurcation threshold, respectively. Figures 3(c) and 3(d) show the existence of no and two positive equilibria when decreasing and increasing the \(q\) parameter, respectively, from the bifurcation threshold for the same parameter set.
\[u =f(v)=\frac{a_{1}}{b_{1}}-\frac{c_{1}}{b_{1}}v;\] \[u =g(v)=\frac{qa_{2}}{c_{2}}-\frac{b_{2}}{c_{2}}v-\frac{(1-q)}{c_{2}}v^{p-1}\]
The slopes of the nullclines are to be determined at the equilibrium \(E_{v}(0,\bar{v})\). The explicit form of \(\bar{v}\) can be derived from the v-nullcline, as it is a root of the equation \(qa_{2}-b_{2}\bar{v}-(1-q)\bar{v}^{p-1}=0\). The gradient of the nullclines at \(E_{v}(0,\bar{v})\) is given by \(\frac{df}{dv}|_{v=\bar{v}}=-\frac{c_{1}}{b_{1}}\) and \(\frac{dg}{dv}|_{v=\bar{v}}=-\frac{b_{2}}{c_{2}}+\frac{(1-q)(1-p)}{c_{2}}\bar{v}^{p-2}\). When the pitchfork bifurcation takes place, the interior equilibria collide with the boundary equilibrium and the slopes of the nullclines are equal. Thus, \(-\frac{c_{1}}{b_{1}}=-\frac{b_{2}}{c_{2}}+\frac{(1-q)(1-p)}{c_{2}}\bar{v}^{p-2}\). Simplifying this equality using the fact that \(qa_{2}-b_{2}\bar{v}-(1-q)\bar{v}^{p-1}=0\) yields the following result,
\[q^{*}=\frac{\bar{v}}{a_{2}}\left(\frac{c_{2}(b_{1}b_{2}-c_{1}c_{2})}{(1-p)}+b_ {2}\right)\]
As \(q\) is decreased to \(q^{*}\), the \(g(v)\) nullcline moves downward, and the three equilibria come closer together. This follows from the shape of the nullclines, under the parametric restriction enforced. Since the slopes of the two nullclines at \(u=0\) are chosen to be the same, by continuity the three equilibria merge into one as \(q\to q^{*}\). Now as \(q\) is decreased further, the \(g(v)\) nullcline is completely below the \(f(v)\) nullcline, \(g(v)<f(v)\ \forall v\), and there is a single boundary equilibrium. Thus, by definition, a pitchfork bifurcation has occurred.
For a set parameter space, the pitchfork bifurcation can be seen in figure 4(b). It can be seen that, at two different values of \(q\), a saddle-node bifurcation and a pitchfork bifurcation take place.
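The slope-matching condition used in the proof can also be evaluated numerically. The sketch below is illustrative only: the kinetic parameters follow the caption of Figure 4, while the root bracket and the tested values of \(q\) are assumptions.

```python
# Illustrative sketch: evaluate the pitchfork slope-matching condition at the
# larger boundary equilibrium for a few values of q.
import numpy as np
from scipy.optimize import brentq

a1, a2, b1, b2, c1, c2, p = 0.4, 0.6, 1.0, 0.6, 0.3, 0.8, 0.6  # Fig. 4 caption

def slope_gap(q):
    boundary = lambda v: q*a2 - b2*v - (1 - q)*v**(p - 1)
    v_bar = brentq(boundary, 0.3, 2.0)                 # larger boundary root (assumed bracket)
    df = -c1 / b1                                      # slope of the u-nullcline
    dg = (-b2 + (1 - q)*(1 - p)*v_bar**(p - 2)) / c2   # slope of the v-nullcline
    return df - dg

for q in (0.90, 0.95, 0.99):
    print(f"q = {q}: nullcline slope gap at v_bar = {slope_gap(q):+.4f}")
```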
## 4. Comparison with Classical Competition Model
The system (4) can be trivially transformed into the classical competition system (1) if \(q=1\). We study the different cases of how the dynamics of the system change from the classical case as the parameters \(p\) and \(q\) vary in the range \((0,1)\).
### Competitive Exclusion
The classical system shows that when \(\frac{a_{1}}{a_{2}}<\min\left\{\frac{b_{1}}{c_{2}},\frac{c_{1}}{b_{2}}\right\}\) then \(E_{v}=(0,\frac{a_{2}}{b_{2}})\) is globally asymptotically stable. So, for any parameter set we choose with \(q=1\) that satisfies this condition, population \(v\) wins and population \(u\) goes to extinction.
If we choose a \(q\) very close to 1 then the dynamics change. A parameter set satisfying this condition can be seen in figure 5(a), where \(E_{v}=(0,\frac{a_{2}}{b_{2}})\) is globally asymptotically stable. With the choice of \(q=0.91\) and \(p=0.6\), we see that \(E_{v}=(0,\frac{a_{2}}{b_{2}})\) becomes unstable and we get two stable equilibria, i.e. the \(u\) boundary equilibrium and an interior equilibrium, on the two sides of the stable manifold of an interior saddle equilibrium, as seen in figure 5(b). So, for any initial data below the stable manifold the \(u\) boundary equilibrium is approached and the \(v\) population goes to extinction, while above the stable manifold coexistence takes place and the interior
equilibrium becomes stable. Thus, the \(u\) population always persists in the system and does not go extinct, unlike in the classical model.
### Weak Competition
The classical competition model exhibits weak competition when, under parametric restrictions, coexistence of the two species is possible and stable over time. The parametric restriction for a stable coexistence equilibrium is \(\frac{c_{2}}{b_{1}}<\frac{a_{2}}{a_{1}}<\frac{b_{2}}{c_{1}}\) and \(b_{1}b_{2}-c_{1}c_{2}>0\). In the classical competition model (1) we can get one interior equilibrium that is asymptotically stable under these restrictions, which in system (4) is recovered by satisfying these parametric restrictions with \(q=1\), as seen in figure 5c.
In system (4), if we use \(0<q<1\) then we can get two interior equilibria, one a stable point and the other a saddle point. Due to the saddle, we can have bi-stability: the stable manifold divides the invariant subspace into two regions, and depending on the initial condition the populations either go to the interior equilibrium or move to the \(u\) boundary equilibrium, as seen in figure 5d.
Figure 4. The parameter set used for the figures is \(a_{1}=0.4,a_{2}=0.6,b_{1}=1,b_{2}=.6,c_{1}=.3,c_{2}=.8,p=0.6\). The figures show the equilibria of the system corresponding to changes in parameter \(q\). The change in q shows both bifurcations taking place in figure 4a and the zoomed-in figure shows the pitchfork bifurcation in figure 4b.
Figure 5. The horizontal and vertical axes are the \(u\) and \(v\) populations respectively. The red lines are the nullclines and the points are the equilibria. The green lines in figures 5b and 5d are the stable manifolds of the saddle equilibrium. The classical competition cases with \(q=1\) are shown in figures 5a and 5c. The comparison to the classical cases with changed \(p\) and \(q\) is shown in figures 5b and 5d.
## 5. The Spatially Explicit (PDE) Case
The spatially inhomogeneous problem has been intensely investigated in the past 2 decades [28, 30, 32, 36, 38, 39, 40, 41, 42, 47, 48, 46, 45, 44, 54, 53]. The premise here is that \(u,v\) do not have resources that are uniformly distributed in space, rather there is a spatially dependent resource function \(m(x)\). We consider again a normalized generalization of the classical formulation, where there are 2 parameters \(b\) and \(c\) for inter/intra specific kinetics as opposed to 6 kinetic parameters in (2) from earlier. The parameter choice \(0<p<1\), enables a FTEM in \(u\).
\[\left\{\begin{array}{ll}u_{t}&=d_{1}\Delta u+m(x)u-u^{2}-cu^{p}v,\quad 0<p \leq 1,\\ v_{t}&=d_{2}\Delta v+m(x)v-v^{2}-buv,\end{array}\right. \tag{7}\]
\[\nabla u\cdot n=\nabla v\cdot n=0,on\ \partial\Omega\,\ u(x,0)=u_{0}(x)>0,\ v (x,0)=v_{0}(x)>0. \tag{8}\]
Note, \(p=1\), is the classical case. We consider \(m\) to be non-negative on \(\Omega\) and bounded. We recap a seminal classical result [34, 35], which shows that the slower diffuser wins.
**Theorem 5.1**.: _Consider (7)-(8), when \(b=c=p=1\), and \(d_{1}<d_{2}\), solutions initiating from any positive initial data \((u_{0}(x),v_{0}(x))\) converge uniformly to \((u^{*}(x),0)\)._
That is the slower diffuser wins, in the case of equal kinetics. However, a difference in the interspecific kinetics via FTEM can cause the slower diffuser to _lose_, depending on the initial conditions [63].
The following conjecture was made (and some numerical evidence provided for) in [63] for the system:
\[\left\{\begin{array}{ll}u_{t}&=d_{1}\Delta u+m(x)u-u^{2}-cu^{p}v,\quad 0<p \leq 1,\\ v_{t}&=d_{2}\Delta v+m(x)v-v^{2}-buv^{q},\quad 0<q\leq 1.\end{array}\right. \tag{9}\]
**Conjecture 1** (Co-existence when \(p=1,0<q<1\)).: _Consider (9)-(8) where \(\Omega\subset\mathbb{R}\) is a bounded domain, with \(b=c=p=1\) and \(d_{1}<d_{2}\). There exists positive initial data \((u_{0}(x),v_{0}(x))\) for which solutions converge to \((u^{*}(x),0)\) when \(q=1\), but solutions with the same diffusion coefficients, initiating from the same data, will converge to \((u^{*}(x),v^{*}(x))\) in finite time, for a suitably chosen \(q\in(0,1)\)._
### Movement Operator
Consider a species \(v\) dispersing over a spatial domain \(\Omega\). Its dynamics are typically governed by a diffusion equation, \(v_{t}=\Delta v+f(v)\), where \(\Delta v\) represents movement by diffusion, and the other dynamics such as growth, death, competition, and depredation are embedded in \(f(v)\). Thus one can define a movement operator \(\mathcal{L}:H^{2}(\Omega)\mapsto L^{2}(\Omega)\), where \(\mathcal{L}(v)=\Delta v\). We take the following approach to modeling movement. Define,
\[\mathcal{L}^{1}(v) = \mathcal{L}^{1}((1-k)v+kv),\ 0<k<1\] \[= (1-k)\mathcal{L}(v)+kL^{*}(v)\] \[= \Delta((1-k)v)+L^{*}(kv),\]
where \(L^{*}(v)\approx-v^{q}\); thus the operator \(L^{*}(v)\) will play the role of the sub-linear harvesting term, so as to cause finite time extinction. However, this only applies to a (small) fraction \(k\in(0,1)\) of the population.
**Remark 4**.: Thus the movement operator \(\mathcal{L}^{1}\) provides a way to formalize the action via which a fraction of the population moves via regular diffusion, while the other fraction moves via fast diffusion.
This leads to the following quasi-linear PDE, representing the interaction of two competing species, with spatially dependent resource function \(m(x)(\in L^{\infty}(\Omega))\) and population densities \(u(x,t)\) and \(v(x,t)\)
\[\left\{\begin{array}{ll}u_{t}&=d_{1}\nabla\cdot(\nabla u)+u\Big{(}m(x)-u-v \Big{)},\quad x\in\Omega,t>0\\ v_{t}&=\nabla\cdot a(x,v,\nabla v)+v\Big{(}m(x)-u-v\Big{)},\quad x\in\Omega,t> 0,\\ a(x,v,\nabla v)&=\Big{(}d_{2}(1-k)\nabla v+k|\nabla v|^{p-2}\nabla v\Big{)}, \ p\in(1,2],\ 0\leq k\leq 1\\ \nabla u\cdot\eta&=a\cdot\eta=0,\quad x\in\partial\Omega\\ u(x,0)&=u_{0}(x),v(x,0)=v_{0}(x),\quad x\in\Omega\end{array}\right. \tag{10}\]
where all the parameters \(d_{i}(i=1,2)\) and \(k\) are positive and \(\Omega\subset\mathbb{R}^{n}\) is a bounded domain with smooth boundary.
**Lemma 5.2**.: _Consider the system (10). Then \(a(x,v,\nabla v)\cdot\eta=0\iff\nabla v\cdot\eta=0\)._
Consider the boundary conditions for \(v\),
\[a\cdot\eta=\Big{(}d_{2}(1-k)\nabla v+k|\nabla v|^{p-2}\cdot\nabla v\Big{)} \cdot\eta\ =0\iff\nabla v\cdot\eta\Big{(}d_{2}(1-k)+k|\nabla v|^{p-2}\Big{)}=0.\]
By the positivity of the bracketed term, we can consider Neumann boundary conditions for \(v\). Consider the system (10) with the updated boundary conditions:
\[\nabla u\cdot\eta=\nabla v\cdot\eta=0,\quad x\in\partial\Omega. \tag{11}\]
**Theorem 5.3**.: _Consider (10) in a bounded domain with smooth boundary \(\Omega\subset\mathbb{R}\) with \(p=2\) and \(d_{2}(1-k)+k<d_{1}\). Then for any choice of positive initial data \((u_{0}(x),v_{0}(x))\), the solution \((u,v)\) converges uniformly to \((0,v^{*})\) as \(t\to\infty\)._
The proof follows via standard techniques and can be found in [61, 62].
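For readers who wish to experiment with (10)-(11), the following is a minimal explicit finite-difference sketch, not the authors' code. The domain, resource function, initial data and parameters follow the caption of Fig. 8 and Remark 6; the grid, time step and final time are illustrative choices. Setting \(p=2\) recovers the linear flux with effective diffusivity \(d_{2}(1-k)+k\).

```python
# Illustrative explicit finite-difference sketch for (10)-(11) on Omega = [0, 1].
import numpy as np

N, T = 64, 2.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
d1, d2, k, p = 1.0, 1e-4, 0.159, 1.6        # p = 2 gives flux (d2*(1-k)+k)*grad(v)
m = 30.0 + x**2                             # resource function from Fig. 8
u, v = 29.0 - x, x**2                       # initial data from Fig. 8
dt = 0.2 * dx**2 / d1                       # conservative explicit time step

def zero_flux_div(w, flux):
    grad = (w[1:] - w[:-1]) / dx                       # interior interface gradients
    f = np.concatenate(([0.0], flux(grad), [0.0]))     # Neumann: zero boundary flux
    return (f[1:] - f[:-1]) / dx

for _ in range(int(T / dt)):
    lap_u = zero_flux_div(u, lambda g: d1 * g)
    # |grad v|**(p-2) * grad v written as sign(g)*|g|**(p-1) to avoid 0**negative
    div_v = zero_flux_div(v, lambda g: d2*(1 - k)*g + k*np.sign(g)*np.abs(g)**(p - 1))
    u = u + dt * (lap_u + u * (m - u - v))
    v = np.maximum(v + dt * (div_v + v * (m - u - v)), 0.0)

print(f"t = {T}: ||u||_2 ≈ {np.sqrt(dx)*np.linalg.norm(u):.3f}, "
      f"||v||_2 ≈ {np.sqrt(dx)*np.linalg.norm(v):.3f}")
```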
**Definition 5.4** (Weak solution).: A measurable function \(v\) is a local weak sub (super) solution of (10) in \(\Omega_{T}\) if
\[v\in C_{loc}(0,T;L^{2}_{loc}(\Omega))\cap L^{p}_{loc}(0,T;W^{1,p}_{loc}(\Omega)) \tag{12}\]
and for every compact subset \(K\) of \(\Omega\) and for every sub interval \([t_{1},t_{2}]\) of \((0,T]\)
\[\int_{K}v\phi dx|^{t_{2}}_{t_{1}}+\int_{t_{1}}^{t_{2}}\int_{K}\left(-v\phi_{t} +a(x,v,\nabla v)\cdot\nabla\phi\right)dxd\tau\geq(\leq)\int_{t_{1}}^{t_{2}} \int_{K}b(x,\tau,v)\phi dxd\tau \tag{13}\]
For all test functions \(\phi\in W^{1,2}_{loc}(0,T;L^{2}(K))\cap L^{p}_{loc}(0,T;W^{1,p}_{0}(K)),\ \phi\geq 0\).
**Lemma 5.5** (Gagliardo-Nirenberg-Sobolev interpolation inequality (GNS)).: _Consider functions \(\phi\in W^{m,q}(\Omega)\). Then the following interpolation inequality holds,_
\[||\phi||_{W^{k,p^{\prime}}(\Omega)}\leq C||\phi||_{W^{m,q^{\prime}}(\Omega)}^{ \theta}||\phi||_{\mathcal{L}^{q}(\Omega)}^{1-\theta}, \tag{14}\]
_for \(p^{\prime},q^{\prime},q\geq 1,\theta\in[0,1]\) as long as the following is satisfied,_
\[k-\frac{n}{p^{\prime}}\leq\theta\Big{(}m-\frac{n}{q^{\prime}}+\frac{n}{q} \Big{)}-\frac{n}{q}.\]
We next present the main result of this section.
**Theorem 5.6**.: _Consider the system (10). Then there exists initial data \((u_{0},v_{0})\in W^{1,2}(\Omega)\) for which there exist local weak solutions to (10)._
**Theorem 5.7**.: _Consider the spatially explicit competition model (10)-(11) in a bounded domain with smooth boundary \(\Omega\subset\mathbb{R}\). There exists some positive initial data \((u_{0}(x),v_{0}(x))\) such that for \(d_{2}(1-k)+k<d_{1}\) and \(p=2\), the solution \((u,v)\to(0,v^{*}),\) but for some \(p\in(1,2]\) and \(k\in(0,1)\), \((u,v)\to(u^{*},0)\) in \(L^{2}(\Omega)\), starting from the same initial data._
Proof.: Consider the system (10) for \(u\). Since the spatially dependent resource function \(m(x)\in L^{\infty}(\Omega)\) serves as an upper bound (up to some constant \(C\)) for the population density \(v(x,t)\), we get
\[u_{t}\geq d_{1}u_{xx}-u^{2}-C||m||_{\infty}u.\]
Moreover, on comparison with the standard logistic equation and using the comparison argument, we have \(u\geq C_{1}e^{-Ct}\) for some constant \(C_{1}\) (depends on \(u_{0}\)) and \(C\)[28].
Let's test the PDE (10) for \(v\) against \(v\)
\[\int_{\Omega}vv_{t}=\int_{\Omega}\left(d_{2}(1-k)vv_{xx}+mv^{2}-v^{2}u-v^{3}+ kv\frac{\partial}{\partial x}\Big{(}|v_{x}|^{p-2}\cdot v_{x}\Big{)}\right).\]
On integrating it on the full domain \(\Omega\) and using the homogeneous Neumann boundary conditions (11), we get
\[\frac{1}{2}\frac{d}{dt}||v||_{2}^{2}+d_{2}(1-k)||v_{x}||_{2}^{2}+C_{1}e^{-Ct} ||v||_{2}^{2}+\int_{\Omega}v^{3}+k||v_{x}||_{p}^{p}\leq\int_{\Omega}mv^{2}.\]
On using the positivity of \(v\) and the fact \(m\in\mathcal{L}^{\infty}(\Omega),\) we have
\[\frac{1}{2}\frac{d}{dt}||v||_{2}^{2}+d_{2}(1-k)||v_{x}||_{2}^{2}+k||v_{x}||_{p }^{p}\leq M||v||_{2}^{2},\]
where \(M=||m||_{\infty}.\) Recall the Rellich-Kondrachov theorem [64], we know
\[W^{1,p}(\Omega)\hookrightarrow\hookrightarrow L^{2}(\Omega)\quad\text{if}\quad\frac{n}{p}-1<\frac{n}{2}.\]
Hence, we have
\[||v||_{2}^{2}\leq C||v_{x}||_{p}^{p}\]
provided \(p>\frac{2}{3}.\) As \(p\in(1,2]\), this embedding hold trivially true. Let us introduce a new parameter \(\widehat{p}\) such that \(1<\widehat{p}<p\). Hence, we have
\[\frac{1}{2}\frac{d}{dt}||v||_{2}^{2}+\widetilde{C}\Big{\{}||v_{x}||_{\widehat{p}}^{\widehat{p}}+||v||_{p}^{p}\Big{\}}\leq M||v||_{2}^{2},\]
where \(\widetilde{C}=\min\{d_{2}(1-k),\frac{C}{k}\}.\)
We will control the term in the braces by using the GNS inequality. Let's compare and note down the corresponding spaces
\[n=1,\quad k=0,\quad p^{\prime}=2,\quad m=1,\quad q^{\prime}=\widehat{p},\quad q=p. \tag{15}\]
\[||v||_{2}\leq C||v_{x}||_{\widehat{p}}^{\theta}||v||_{p}^{1-\theta} \tag{16}\]
such that
\[-\frac{1}{2}\leq\theta\Big{(}1-\frac{1}{\widehat{p}}+\frac{1}{p}\Big{)}-\frac {1}{p}\]
On further rearrangement, we have
\[\theta\geq\frac{(2-p)\widehat{p}}{2(\widehat{p}p+\widehat{p}-p)}. \tag{17}\]
Let us raise both sides of (16) to the power \(l\), where \(l\in(0,2)\):
\[\Big{(}\int_{\Omega}v^{2}\Big{)}^{\frac{l}{2}}\leq C\Big{(}\int_{\Omega}|v_{x}|^{\widehat{p}}\Big{)}^{\frac{l\theta}{\widehat{p}}}\Big{(}\int_{\Omega}|v|^{p}\Big{)}^{\frac{l(1-\theta)}{p}}.\]
Recall the Young's inequality [65],
\[ab\leq\frac{a^{r}}{r}+\frac{b^{s}}{s},\]
such that \(\frac{1}{r}+\frac{1}{s}=1.\) Let's use the Young's inequality for \(r=\frac{\widehat{p}}{l\theta}\) and \(s=\frac{p}{l(1-\theta)}.\) Moreover, \(p\in(1,2],\) so we can find a \(l\in(0,2)\) such that
\[\frac{l\theta}{\widehat{p}}+\frac{l}{p}-\frac{l\theta}{p} =1\] \[\theta\Big{(}\frac{l}{\widehat{p}}-\frac{l}{p}\Big{)} =\Big{(}1-\frac{l}{p}\Big{)}\] \[\theta =\frac{\frac{1}{l}-\frac{1}{p}}{\frac{1}{\widehat{p}}-\frac{1}{p}}\] \[\theta =\frac{\widehat{p}(p-l)}{l(p-\widehat{p})}.\]
On comparison with (17), we get
\[\frac{(p-l)}{l(p-\widehat{p})} \geq\frac{2-p}{2(\widehat{p}p+\widehat{p}-p)}\] \[\frac{p}{l}-1 \geq\frac{(2-p)(p-\widehat{p})}{2(\widehat{p}p+\widehat{p}-p)}\] \[\frac{p}{l} \geq\Big{\{}1+\frac{(2-p)(p-\widehat{p})}{2(\widehat{p}p+ \widehat{p}-p)}\Big{\}}\] \[\frac{1}{l} \geq\frac{1}{p}\Big{\{}1+\frac{(2-p)(p-\widehat{p})}{2(\widehat{ p}p+\widehat{p}-p)}\Big{\}}\geq\frac{1}{2}.\]
since \(l\in(0,2).\) This inequality's positive solution space is \([0,2]\). Combining this with the fact that \(p>1,\) we have \(p\in(1,2].\)
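A quick numerical sanity check of this exponent bookkeeping (an illustration, not part of the proof; the chosen values of \(p\), \(\widehat{p}\) and \(l\) are arbitrary admissible ones) is given below.

```python
# Illustrative check: for sample p, p_hat, l, the theta recovered from Young's
# inequality satisfies 1/r + 1/s = 1 and the GNS admissibility bound (17).
p, p_hat, l = 1.6, 1.3, 1.5
theta = p_hat * (p - l) / (l * (p - p_hat))     # from the displayed chain
r, s = p_hat / (l * theta), p / (l * (1 - theta))
gns_bound = (2 - p) * p_hat / (2 * (p_hat*p + p_hat - p))
print("1/r + 1/s =", 1/r + 1/s)                 # should be 1
print("theta =", theta, ">= GNS bound", gns_bound, ":", theta >= gns_bound)
```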
With the help of all these estimates, we can reduce the PDE system to the scalar differential inequality
\[Y_{t}\leq MY-\widetilde{C}Y^{\alpha}, \tag{18}\]
where \(Y=||v||_{2}\) and, as \(p\in(1,2]\), we can fix \(\alpha=\frac{p}{2}\in(0,1)\). In order to prove that there exists \(T^{*}<\infty\) such that \(Y\to 0\) as \(t\to T^{*}\), it is enough to prove that \(U=\frac{1}{Y}\rightarrow\infty\) in finite time \(T^{*}\). On using this substitution we get
\[-\frac{1}{U^{2}}U_{t} =\frac{M}{U}-\frac{\widetilde{C}}{U^{\alpha}}\] \[U_{t} =\widetilde{C}U^{2-\alpha}-MU.\]
On using the fact that \(\alpha\in(0,1)\), we have
\[U_{t}=U(\widetilde{C}U^{1-\alpha}-M). \tag{18}\]
The ODE (18) blows up in finite time if \(U(0)>>(\frac{M}{\widetilde{C}})^{\frac{1}{1-\alpha}}\). Hence, we have that if \(Y(0)<<(\frac{\widetilde{C}}{M})^{\frac{1}{1-\alpha}}\), then \(v\) goes extinct in finite time.
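The finite-time extinction mechanism in the reduced inequality can be visualized by integrating it as an equality; the sketch below is illustrative only, with arbitrary values for \(M\), \(\widetilde{C}\) and \(\alpha\).

```python
# Illustrative sketch: forward-Euler integration of Y' = M*Y - C*Y**alpha;
# for Y(0) below (C/M)**(1/(1-alpha)), Y reaches zero in finite time.
M, C, alpha = 1.0, 2.0, 0.8
threshold = (C / M) ** (1.0 / (1.0 - alpha))
dt, Y, t = 1e-4, 0.5 * threshold, 0.0
while Y > 0.0 and t < 50.0:
    Y += dt * (M * Y - C * Y**alpha)
    t += dt
print(f"threshold (C/M)^(1/(1-alpha)) = {threshold:.3f}, extinction time ≈ {t:.2f}")
```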
**Remark 5**.: One direct implication of the result of Theorem 5.3, along with Theorem 5.7, is that we can say: the slower diffusing population wins - but a slower diffusing population with a few (\(k<<1\)) "very fast" diffusing individuals could lose!
**Remark 6**.: Let's consider the numerical validation of Theorem 5.7 by considering two cases based on the diffusion coefficient of species \(u\) and \(v\):
1. Consider the case when \(d_{1}\) is close to \(d_{2}\). Pick \(d_{1}=0.2,d_{2}=0.199\) and \(k=0.999\times 10^{-3}\). For the classical case, \(p=2\), we have \(d_{2}(1-k)+k<d_{1}\), so \((u,v)\rightarrow(0,v^{*})\). But for \(p=1.6\), we have \((u,v)\rightarrow(u^{*},0).\) (For the other parameters, see Fig. 7.)
2. Consider the case when \(d_{1}\) and \(d_{2}\) are far apart. Pick \(d_{1}=1,d_{2}=10^{-4}\) and \(k=0.159\). For the classical case, \(p=2\), we have \(d_{2}(1-k)+k<d_{1}\), so \((u,v)\rightarrow(0,v^{*})\). But for \(p=1.6\), we have \((u,v)\rightarrow(u^{*},0).\) (For the other parameters, see Fig. 8.)
## 6. Applications
### Soybean Aphid Control
The soybean aphid, _Aphis glycines_ (Hemiptera: Aphididae), was first detected in 2000, and has since become one of the most important insect pests of soybean in the major production areas of the Midwest US ([69]). It has a heteroecious holocyclic life cycle that utilizes a primary host plant for over-wintering and a secondary host plant during the summer. In the spring, aphids emerge and produce three asexual generations on common buckthorn, _Rhamnus cathartica_ L., then migrate to soybeans, _Glycine max_ L. Aphids continue to reproduce asexually on soybeans, producing as many as 15 generations during the summer ([11]). In North America, aphids arrive on soybean fields in June, where populations increase by four orders of magnitude, and at the end of the growing season (mid-September) aphids begin the migration back to their overwintering host, reproduce sexually and overwinter in the egg stage ([68]). Within Iowa, in 40% of growing seasons from 2004 to 2019, populations of aphids large enough to reduce soybean yield have been observed, with aphid populations peaking in the middle to the end of August ([70]).
Colonization and feeding by an insect herbivore can alter the plant's physiology, favoring the subsequent colonization of additional conspecifics. There are two mechanisms by which this susceptibility can be induced, feeding facilitation and the obviation of resistance. Feeding facilitation is a more general mechanism by which the general physiology of the host plant is altered by the herbivore, often in a density dependent manner ([71]).
A more specific mechanism that induces susceptibility is the obviation of traits that confer resistance to the herbivore ([72]; [67]). This mechanism requires a subset of the herbivore population that is virulent, capable of surviving on the resistant genotype of the host plant. By obviating the resistance through a physiological change to the plant, avirulent subpopulations can now survive on the resistant plant. Both mechanisms allow sub-populations that vary by genotype (i.e. virulent and avirulent) to co-exist on resistant host plants. Both mechanisms have been observed in populations of soybean aphids colonizing soybean plants of varying genotypes, including those with resistance to soybean aphids ([67]).
Field surveys in North America have demonstrated that soybean aphid biotypes can co-occur in the same fields ([73], [79]). Laboratory studies have demonstrated that virulent and avirulent biotypes can co-exist on a shared plant for at least 2-3 generations ([67]).
### Model Formulation
#### 6.2.1. Background: The Single Bio Type Case
Kindlemann et al. (2010) ([75]) were among the first to propose a model for the population dynamics of aphids using a
set of differential equations,
\[\frac{dh}{dt} = ax;h(0)=0\] \[\frac{dx}{dt} = (r-h)x\ ;x(0)=x_{0}\]
where \(h(t)\) is the cumulative population density of a single aphid biotype at time \(t\); \(x(t)\) is the population density at time \(t\). \(a\) is a scalar constant and \(r\) is the growth rate of the aphids. The aphid population initially rises due to the linear growth term, but as the cumulative density becomes greater than the growth rate \(r\), the population is brought down, due to the effects of competition. This results in a hump-shaped population density over time, typical of a boom-bust type scenario ([75]). This is an apt description of aphid dynamics, particularly when one is interested in exploring soybean aphid dynamics on soybeans during the summer growing season. The type of population growth described by this model has been observed in soybean aphids in North America, with colonisation in June, then a gradual build-up of the population with a high peak in August and then a reduction, with all of the Aphids dispersing by September, for overwintering.
The model in (19), is quite different from the classical logistic growth model, which predicts growth to a certain carrying capacity. It is an example of a non-autonomous model, wherein the right-hand side of the differential equation depends explicitly on time. The rigorous mathematical analysis of such systems is quite involved, and the methods of classical autonomous systems do not apply. Hence the rigorous dynamical analysis of (19) is not found in the literature. However, it provides a starting point to model more intricate aphid dynamics, particularly when a species presents two or more biotypes.
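Despite these analytical difficulties, the model is straightforward to integrate numerically. The sketch below is illustrative only: the growth rate \(r=0.27\) and scaling constant \(a=0.000005\) are the values quoted in the caption of Figure 9, while the initial density is assumed.

```python
# Illustrative sketch: boom-bust dynamics of the single-biotype aphid model.
import numpy as np
from scipy.integrate import solve_ivp

a, r, x0 = 5e-6, 0.27, 10.0        # a, r from Fig. 9's caption; x0 assumed

def aphid(t, y):
    h, x = y                       # cumulative density h, current density x
    return [a * x, (r - h) * x]

sol = solve_ivp(aphid, (0, 120), [0.0, x0], max_step=0.1)
i = int(np.argmax(sol.y[1]))
print(f"peak density ≈ {sol.y[1, i]:.0f} at t ≈ {sol.t[i]:.1f}")
```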
Figure 8. Numerical simulation of (10) with \(\Omega=[0,1]\) and \(m(x)=(30+x^{2})\). The initial data is chosen as \([u_{0}(x),v_{0}(x)]=[29-x,x^{2}]\), whereas the parameters are chosen as \(d_{1}=1,d_{2}=0.0001\) and \(k=0.159\).
#### 6.2.2. Virulent and avirulent aphids: Two Bio types
Sub-populations of an herbivorous species can be organized into biotypes, defined as genotypes capable of surviving and reproducing on host plants containing traits (e.g., antibiosis and or antixenosis) conferring resistance to that herbivore ([76]). Specifically, for the soybean aphid, biotypes are classified based on their ability to colonize soybean varieties expressing _Rag_-genes(Rag is derived from the expression, Resistance to _Aphis glycines_). For example, soybean aphid biotype 1 is susceptible to all _Rag_-genes, therefore it is called avirulent. Biotype 2 is virulent to _Rag_1 ([77]), biotype 3 is virulent to _Rag_2 ([78]), and biotype 4 is virulent to both _Rag_1 and _Rag_2, capable of surviving on plants with these genes either alone and together ([74]). These four soybean aphid biotypes have been found throughout the soybean production areas in the Midwest US ([73], [79]).
If one considers a resistant soybean plant, where virulent and avirulent aphids are both trying to colonize, various dynamics are at play, that _cannot_ be described by earlier models. First, the virulent and avirulent are in direct competition for space, similar to interspecies competition. The virulent aphids are also in competition for space with other virulent aphids, as avirulent aphids are in competition for space with other avirulent aphids, similar to the intraspecies competition. These are direct effects of competition. Note, on a resistant plant both the avirulent and virulent aphids are able to weaken the plant's defenses via feeding facilitation. However, for the avirulent aphid this only occurs if it arrives in sufficiently large numbers ([67]). Thus there is a definite resistant level in the plant that is dependent on initial avirulent aphid density. If the avirulent aphids arrive in sufficient numbers above this level, they could colonise a resistant plant, but below this will die out - this is very similar to a resistance effect in ecology. Note the virulent biotype alters the plant by obviating the resistance, removing its impact on both virulent and avirulent aphids. This removal of the plants' resistance level by the virulent biotype, eases the colonisation process for the avirulent biotype - this then is an indirect form of cooperation at play. Thus, the plants' resistance is a dynamic process, dependent on the presence and densities of these biotypes. The following model is proposed to capture these dynamics,
\[\frac{dh}{dt} = a(x_{A}+x_{V})\] \[\frac{dx_{A}}{dt} = (r-h)(x_{A}-A)\] \[\frac{dx_{V}}{dt} = (r-h)x_{V}\] \[\frac{dA}{dt} = -(k_{r}x_{V}+k_{f}x_{V}+k_{f}sgn(x_{A}-R)x_{A})A\]
Here \(x_{A}(t)\) refers to the avirulent aphid population density and \(x_{V}(t)\) refers to the virulent aphid population density [6]; \(h\) is the combined cumulative population density of the avirulent and virulent aphids at time \(t\); \(r\) is the maximum potential growth rate of the aphids;
\(a\) is a scaling constant relating prey cumulative density to its own dynamics; \(A\) is the dynamic resistance threshold of the plant [66]. This decreases due to both avirulent and virulent aphid density, that is \(x_{A}\) and \(x_{V}\). It is measured in the same
units as aphid density.
\(k_{f}\) is the rate constant of feeding facilitation and \(k_{r}\) is the rate constant of obviation of resistance [67]. \(R\) is the threshold population density above which an avirulent population can have a feeding facilitation effect on the plant; below it, the effect of feeding facilitation due to the avirulent aphids is zero;
_sgn(x)_ is a function returning 0 or 1: it returns 1 if \(x>0\) and 0 otherwise. This function regulates whether the avirulent aphids have enough initial density to have a feeding facilitation effect on the plant. Previous studies have demonstrated that obviation of resistance is much more effective in shutting down the resistance than feeding facilitation [67]. Therefore, \(k_{r}>k_{f}\) whenever both effects take place simultaneously.
#### 6.2.3 Harvesting of avirulent aphids
If a harvesting technique similar to that in system (4) is used and a particular aphid biotype is harvested, then the dynamics of aphid biotype dominance in the system can change. We modify the two-biotype system by applying the sub-linear (FTEM) harvesting term to one aphid biotype and study the resulting system.
The avirulent aphids are harvested in the two-biotype system, which results in the modified model
\[\frac{dh}{dt} = a(x_{A}+x_{V})\] \[\frac{dx_{A}}{dt} = (rq-h)(x_{A}-A)-(1-q)x_{A}^{p}\] \[\frac{dx_{V}}{dt} = (r-h)x_{V} \tag{19}\] \[\frac{dA}{dt} = -(k_{r}x_{V}+k_{f}x_{V}+qk_{f}sgn(x_{A}-R)x_{A})A\]
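The sketch below (illustrative only) integrates the two-biotype model with and without the sub-linear harvesting of the avirulent biotype; \(r\), \(a\), \(k_{f}\), \(k_{r}\) and \(R\) are taken from the caption of Figure 9, while \(p\), \(q\) and the initial data are assumed values. Setting \(q=1\) recovers the unharvested system.

```python
# Illustrative sketch: two-biotype aphid model with (q < 1) and without (q = 1)
# sub-linear harvesting of the avirulent biotype.
import numpy as np
from scipy.integrate import solve_ivp

r, a, k_f, k_r, R = 0.27, 5e-6, 0.001, 0.01, 30.0   # from Fig. 9's caption
p = 0.6                                             # assumed harvesting exponent

def biotypes(t, y, q):
    h, xA, xV, A = y
    xA = max(xA, 0.0)                               # guard the term xA**p near 0
    sgn = 1.0 if xA > R else 0.0
    return [a * (xA + xV),
            (r*q - h) * (xA - A) - (1 - q) * xA**p,
            (r - h) * xV,
            -(k_r*xV + k_f*xV + q*k_f*sgn*xA) * A]

y0 = [0.0, 50.0, 5.0, 40.0]                         # assumed [h, x_A, x_V, A]
for q, label in [(1.0, "no harvesting"), (0.9, "avirulent harvesting")]:
    sol = solve_ivp(biotypes, (0, 120), y0, args=(q,), max_step=0.05)
    print(f"{label}: peak x_A ≈ {sol.y[1].max():.0f}, peak x_V ≈ {sol.y[2].max():.0f}")
```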
Figure 9. The dynamics of the virulent aphid (\(x_{V}\)) and avirulent aphid (\(x_{A}\)) are shown in figures 9a - 9e. The dotted lines represent the dynamics of the unharvested two-biotype model and the solid lines represent the dynamics of the harvested model (19). The red colour represents the virulent aphid (\(x_{V}\)) and the blue colour represents the avirulent aphid population (\(x_{A}\)). Figures 9a-9e have the same parameter set \(r=0.27,a=0.000005,k_{f}=0.001,k_{r}=0.01,R=30,k=1\).
### Effect of avirulent aphid harvesting
In Figure 9 the effect of harvesting avirulent aphids is shown in comparison with the original dynamics.
1. Harvesting of avirulent aphids decreases the peak population of the avirulent aphid, as seen in the figures. When the avirulent aphid population is below the initial threshold, the avirulent aphids go to extinction.
2. The peak populations of the two aphid biotypes occur at different times, as can be seen in figure 9c.
## 7. Appendix
Proof.: of lemma 3.1: When \(\bar{u}=0\), as for the boundary equilibrium \(E_{v}(0,\bar{v})\), the \(v\)-nullcline in (5) simplifies to \(\phi(\bar{v})\) given by,
\(\phi(\bar{v})=a_{2}q-b_{2}\bar{v}-(1-q)\bar{v}^{p-1}\)
To study the roots of the polynomial \(\phi(\bar{v})\), we study the monotonicity of the polynomial which is described by,
\(\phi^{{}^{\prime}}(v)=-b_{2}+(1-q)(1-p)v^{p-2}\).
Thus, an extremum is attained when \(\phi^{{}^{\prime}}(v)=0\). We see that it is attained at \(v=v_{\phi}\), where \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}>0\), as \(b_{2}>0\) and \(0<p,q<1\).
As \(\phi^{{}^{\prime\prime}}(v_{\phi})=(1-q)(1-p)(p-2)v_{\phi}^{p-3}<0\), we have a maximum at \(v=v_{\phi}\). Now, as \(v_{\phi}\) is unique for the fixed parameter set, in the invariant set we have a unique maximum at \(v=v_{\phi}\).
As there exists one maximum, we can deduce that there can be at most two roots of \(\phi(\bar{v})\), and there may be no roots, depending on the functional value \(\phi(v_{\phi})\). Computing, we get \(\phi(v_{\phi})=a_{2}q-b_{2}v_{\phi}(\frac{2-p}{1-p})\).
If \(\phi(v_{\phi})>0\), i.e. \(v_{\phi}<\frac{a_{2}q(1-p)}{b_{2}(2-p)}\), then there exist two boundary equilibria. When \(v_{\phi}>\frac{a_{2}q(1-p)}{b_{2}(2-p)}\) then there is no boundary equilibrium of the form \(E_{v}(0,\bar{v})\).
Proof.: of lemma 3.2: Let there exist a positive \(v^{*}\) which is a root of the polynomial \(\phi(v)\) given by,
\[\phi(v^{*})=(a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})v^{*}-b_{1}(1-q)v^{ *(p-1)}\text{ where }b_{1}b_{2}-c_{1}c_{2}\leq 0.\]
To understand the monotonicity of the curve we study the slope given by,
\(\phi^{\prime}(v^{*})=(c_{1}c_{2}-b_{1}b_{2})-b_{1}(p-1)(1-q)v^{*(p-2)}\). We know \(0<p,q<1\) and \(v^{*}>0\), so \(-b_{1}(p-1)(1-q)v^{*(p-2)}>0\); since \(b_{1}b_{2}-c_{1}c_{2}\leq 0\), the term \((c_{1}c_{2}-b_{1}b_{2})\) is also non-negative. Thus \(\phi^{\prime}(v^{*})>0\), which shows that \(\phi(v^{*})\) is monotonically increasing.
For \(v\to 0\), \(\phi(v)\to-\infty\) and for \(v\to\infty\), \(\phi(v)\to\infty\) so \(\min(\phi(v))<0\).
Thus, there exists a positive \(\zeta_{1}\) such that \(\phi(\zeta_{1})<0\) and a positive \(\zeta_{2}\) such that \(\phi(\zeta_{2})>0\). By the Intermediate Value Theorem we can conclude that there exists a \(v^{*}\in(\zeta_{1},\zeta_{2})\) such that \(\phi(v^{*})=0\), and as \(\phi(v)\) is monotonically increasing, \(v^{*}\) is unique.
Thus if \((b_{1}b_{2}-c_{1}c_{2})\leq 0\) then there can only be one positive \(v^{*}\) such that \(\phi(v^{*})=0\).
Proof.: of lemma 3.6: According to lemma 3.1 we can have two, one or no boundary equilibria of the form \(E_{v}(0,\bar{v})\), depending on the functional value of the nullcline. We study the case when we have two boundary equilibria \(E_{v_{1}}(0,\bar{v_{1}})\) and \(E_{v_{2}}(0,\bar{v_{2}})\). Without loss of generality, let us assume \(v_{1}<v_{2}\). By lemma 3.1 we know \(v_{\phi}\) is the maximum of the nullcline, so \(v_{1}<v_{\phi}<v_{2}\).
To study the stability of the equilibria we simplify the Wronskian matrix and have
\(W(E_{v})=\begin{bmatrix}a_{1}-c_{1}v&-c_{1}u\\ -c_{2}v&-b_{2}v+(1-p)(1-q)v^{p-1}\end{bmatrix}\).
The eigenvalues for the system are \(\lambda_{1}=a_{1}-c_{1}v\) and \(\lambda_{2}=v(-b_{2}+(1-p)(1-q)v^{p-2})\).
We study the case when there exist two boundary equilibria. When \(v_{2}>v_{\phi}\) where \(v_{\phi}^{p-2}=\frac{b_{2}}{(1-q)(1-p)}\), then
\(\lambda_{2}=v_{2}(-b_{2}+(1-p)(1-q)v_{2}^{p-2})\)
\(<v_{2}(-v_{\phi}^{p-2}(1-p)(1-q)+(1-p)(1-q)v_{2}^{p-2})\).
\(=v_{2}(1-p)(1-q)(v_{2}^{p-2}-v_{\phi}^{p-2})\).
As \(v_{\phi}<v_{2}\) and \(p-2<0\) we have \(\lambda_{2}<0\).
Evaluating \(\lambda_{1}=a_{1}-c_{1}v_{2}\) at \(E_{v_{2}}\), we have \(\lambda_{1}<0\) if \(v_{2}>\frac{a_{1}}{c_{1}}\). Thus \(E_{v_{2}}(0,v_{2})\) is a stable node when \(v_{2}>a_{1}/c_{1}\).
The determinant of wronskian is given by at \(E_{v_{1}}\),
\(det(W(E_{v_{1}}))=v_{1}(a_{1}-c_{1}v_{1})(-b_{2}+(1-p)(1-q)v_{1}^{p-2})\).
As \(v_{1}<v_{\phi}\) so, \(det(W(E_{v_{1}}))=v_{1}(a_{1}-c_{1}v_{1})(-b_{2}+(1-p)(1-q)v_{1}^{p-2})\)
\(<v_{1}(a_{1}-c_{1}v_{1})(1-p)(1-q)(v_{\phi}^{p-2}-v_{1}^{p-2})=0\).
Thus \(E_{v_{1}}(0,v_{1})\) is a saddle point.
Proof.: of lemma 3.7: According to lemma 3.2 if \((b_{1}b_{2}-c_{1}c_{2})\leq 0\) there exists an unique equilibrium \(E^{*}(u^{*},v^{*})\). The simplified Jacobian matrix for the unique equilibrium is given by,
\(\begin{bmatrix}-b_{1}u&-c_{1}u\\ -c_{2}v&-b_{2}v+(1-p)(1-q)v^{p-1}\end{bmatrix}\)
The determinant of the Jacobian matrix can be given by,
\((-b_{1}u^{*})(-b_{2}v^{*}+(1-p)(1-q)v^{*(p-1)})-c_{1}c_{2}u^{*}v^{*}\)
By simplifying we get \(det(J(E^{*}))=u^{*}\left((b_{1}b_{2}-c_{1}c_{2})v^{*}-b_{1}(1-p)(1-q)v^{*(p-1)}\right)\). As \((b_{1}b_{2}-c_{1}c_{2})\leq 0\) and \(0<p,q<1\), \(\det(J(E^{*}))\) is always negative.
Thus \(E(u^{*},v^{*})\) is always an unstable point.
Proof.: of lemma 3.8: According to lemma 3.4 if \((c_{1}c_{2}-b_{1}b_{2})<0\) and \((a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})(1+\frac{1}{1-p})v_{max}>0\) where \(v_{max}=-\frac{(1-q)(1-p)}{(c_{1}c_{2}-b_{1}b_{2})}\frac{1}{2-p}\) then there exists two interior equilibrium points. We first study when we can get a stable interior equilibrium.
The conditions to be met for a stable equilibrium are:
* \((a_{2}b_{1}q-c_{2}a_{1})+(c_{1}c_{2}-b_{1}b_{2})(1+\frac{1}{1-p})v_{max}>0\) (Existence condition)
* Trace= \(-b_{1}u^{*}-b_{2}v^{*}+(1-p)(1-q)v^{*(p-1)}<0\)
* Determinant= \((-b_{1}u^{*})(-b_{2}v^{*}+(1-p)(1-q)v^{*(p-1)}-c_{1}c_{2}u^{*}v^{*}>0\)
By simplifying the trace we get
\((1-q)v^{*(p-1)}<\frac{a_{1}+(b_{2}-c_{1})v^{*}}{1-p}\)
\(qa_{2}b_{1}-a_{1}c_{2}+(c_{1}c_{2}-b_{1}b_{2})v^{*}<\frac{a_{1}+(b_{2}-c_{1})v^{* }}{1-p}\)
So, \(q<\frac{1}{a_{2}b_{1}(1-p)}(a_{1}+a_{1}c_{2}(1-p)+((b_{2}-c_{1})+(b_{1}b_{2}-c_{ 1}c_{2})(1-p))v^{*})\)
Simplifying determinant we get,
\((1-q)v^{*(p-1)}<\frac{(b_{1}b_{2}-c_{1}c_{2})v^{*}}{(1-p)b_{1}}\)
\(qa_{2}b_{1}-a_{1}c_{2}+(c_{1}c_{2}-b_{1}b_{2})v^{*}<\frac{(b_{1}b_{2}-c_{1}c_{ 2})v^{*}}{(1-p)b_{1}}\)
\(q<\frac{1}{a_{2}b_{1}(1-p)}(a_{1}c_{2}(1-p)+(b_{1}b_{2}-c_{1}c_{2})\frac{2-p}{ b_{1}}v^{*})\)
Combining the results we can say that the interior equilibrium point is locally stable if
\(q<\frac{1}{a_{2}b_{1}(1-p)}\min(a_{1}+a_{1}c_{2}(1-p)+((b_{2}-c_{1})+(b_{1}b_{ 2}-c_{1}c_{2})(1-p))v^{*},a_{1}c_{2}(1-p)+(b_{1}b_{2}-c_{1}c_{2})\frac{2-p}{b_ {1}}v^{*})\).
From the existence condition we get that \(q>\frac{1}{a_{2}b_{1}(1-p)}((b_{1}b_{2}-c_{1}c_{2})(2-p)v_{max}+c_{2}a_{1}(1-p))\)
|
2309.08972 | Architecture-Aware Synthesis of Stabilizer Circuits from Clifford
Tableaus | Since quantum computing is currently in the NISQ-Era, compilation strategies
to reduce the number of gates executed on specific hardware are required. In
this work, we utilize the concept of synthesis of a data structure called
Clifford tableaus, focusing on applying CNOTs within the respective
connectivity graph of the quantum device. We hence contribute to the field of
compilation or, more precisely, synthesis by reducing the number of CNOTs in
the synthesized quantum circuit. Upon convergence, our method shows to
outperform other state-of-the-art synthesis techniques, when executed with
respect to a specific hardware. Upon executing the resulting circuits on real
hardware, our synthesized circuits tend to increase the final fidelity and
reduce the overall execution times. | David Winderl, Qunsheng Huang, Arianne Meijer-van de Griend, Richie Yeung | 2023-09-16T12:11:56Z | http://arxiv.org/abs/2309.08972v2 | # Architecture-Aware Synthesis of Stabilizer Circuits from Clifford Tableaus
###### Abstract
Since quantum computing is currently in the NISQ-Era, compilation strategies to reduce the number of gates executed on specific hardware are required. In this work, we utilize the concept of synthesis of a data structure called Clifford tableaus, focusing on applying CNOTs within the respective connectivity graph of the quantum device. We hence contribute to the field of compilation or, more precisely, synthesis by reducing the number of CNOTs in the synthesized quantum circuit. Upon convergence, our method shows to outperform other state-of-the-art synthesis techniques, when executed with respect to a specific hardware. Upon executing the resulting circuits on real hardware, our synthesized circuits tend to increase the final fidelity and reduce the overall execution times.
## 1 Introduction
In the current noisy intermediate-scale quantum (NISQ) era of quantum computing, the effects of decoherence and imperfect physical quantum gates mean that deep circuits typically accrue significant errors before computation and measurement can be completed. Hence, optimizing circuits to improve overall fidelity is important to obtain meaningful results from quantum algorithms.
However, current quantum computing architectures typically limit the set of possible multi-qubit interactions, increasing the circuit's size because information needs to take a detour to go from one qubit to another. We typically represent the device's connectivity constraints using a simple graph where physical qubits are represented by nodes in the graph, and edges between nodes indicate that the represented physical qubits can interact directly. We henceforth refer to this graph as the architecture or connectivity of the given quantum system.
The problem of making a quantum circuit fit the connectivity constraints of a particular device is called the qubit routing problem. Traditionally, this problem was solved from the perspective that if the architecture does not allow a gate in the circuit, it is because the qubits are in the wrong place. Hence, the qubits were moved to different registers using SWAP gates. These strategies are still dominant in existing compilers, such as the Qiskit transpiler [24].
Recent advances in architecture-aware synthesis techniques solve this problem through the perspective that gates in the circuit are wrong. Whenever the quantum circuit was created, the wrong gates were chosen. Thus, the circuit is represented using an intermediate representation from which a new circuit can be synthesized in an architecture-aware fashion. This approach to routing a circuit makes it easier to make global optimizations of the circuit, such that the gate overhead of architecture-aware synthesis should be lower than the gate overhead of swapping qubits.
These architecture-aware synthesis techniques typically use parity maps or phase polynomials to represent pieces of the circuit that only contain CNOT gates [21, 15, 29, 25], or CNOT gates with \(R_{Z}(\alpha)\)-gates [21, 20, 26], respectively. Then, these intermediate representations are stitched together to form a universal synthesis method [10, 17, 12, 28, 19].
This paper proposes an architecture-aware synthesis algorithm that uses a different class of simulable quantum circuits, namely those only containing Clifford gates. These circuits are often called stabilizer circuits or Clifford circuits, and they can be efficiently represented using a binary matrix called a Clifford tableau.
Synthesis of stabilizer circuits from Clifford tableaus is a highly-explored field, and extensive work has been invested into tableau synthesis algorithms [1, 7, 18, 5]. We adapted the algorithm from van den Berg [4] to be architecture-aware using strategies from the RowCol algorithm for architecture-aware CNOT synthesis [29].
As a proof of concept, we evaluate our algorithm against the existing state-of-the-art Clifford tableau synthesis algorithm in qiskit [5] as well as the one implemented in stim [4, 11]. When targeting different IBM quantum computers, we demonstrate a significant reduction of CNOT gates for circuits with sufficient depth. Our experimental data shows an asymptotic upper bound on CNOT gates for input circuits with sufficient depth for all synthesis methods (also after transpiling the circuits from the baseline methods). However, that bound does not occur when directly transpiling the input circuits. Instead, the CNOT gate count increases linearly with input circuit size. Additionally, we executed the optimized circuits on real quantum devices available on the IBM quantum platform, demonstrating higher fidelity compared to circuits synthesized with the two methods we used as a baseline.
## 2 Preliminaries
Clifford tableaus, introduced by Aaronson and Gottesman [1], are traditionally used to perform efficient Clifford circuit simulation. They are typically constructed from \(n\) stabilizer rows and their respective \(n\) destabilizer rows. Such a stabilizer or destabilizer row consists of a tensor product of the four Pauli matrices \(X\), \(Y\), \(Z\), and \(I\), typically abbreviated as a paulistring. So, for instance, \(Z\otimes I\otimes I\) can be written as \(ZII\). Such a paulistring can then be encoded into a vector \(r\in GF(2)^{2n+1}\)1 by the following formula:
Footnote 1: \(GF(2)\) is the unique field with two elements with its additive and multiplicative identities
\[f(r)=(-1)^{r_{2n+1}}\bigotimes_{i=1}^{n}(Z^{r_{i}}X^{r_{i+n}}) \tag{1}\]
The underlying principle in Equation 1 is the following relation among paulis: \(Y=-iZX\). Hence for a qubit \(i\), a Pauli \(Y\) can be encoded by setting the bits \(r_{i}\) and \(r_{i+n}\) to one, a Pauli \(X\) by setting \(r_{i+n}\) to one, a Pauli \(Z\) can be encoded by setting \(r_{i}\) to one and finally a \(I\) can be encoded by leaving both \(r_{i}\) and \(r_{i+n}\) at zero. At last, one can undo the possible change of signs occurring due to the prefactor of \(-i\); therefore, each row possesses a sign column \(r_{2n+1}\), which can be flipped to undo the change of sign.
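As a concrete illustration of this encoding, the following minimal Python sketch maps a Pauli string to the binary vector of Equation 1; the Z-bits-first/X-bits-second layout follows the formula above, and the sign bit is left at zero here (tracking the \(-i\) prefactors of \(Y\) terms is deferred to the tableau's sign column, as described above). The helper name is ours, not part of any library.

```python
import numpy as np

def encode_paulistring(pauli: str) -> np.ndarray:
    """Encode an n-qubit Pauli string into the (2n+1)-bit vector of Eq. (1):
    bit i holds the Z component of qubit i, bit i+n the X component,
    and the last bit is the sign column (left at zero in this sketch)."""
    n = len(pauli)
    r = np.zeros(2 * n + 1, dtype=np.uint8)
    for i, p in enumerate(pauli.upper()):
        if p in "ZY":          # Z (or the Z part of Y)
            r[i] = 1
        if p in "XY":          # X (or the X part of Y)
            r[i + n] = 1
    return r

print(encode_paulistring("ZII"))   # [1 0 0 0 0 0 0]
print(encode_paulistring("IYX"))   # Z-part [0 1 0], X-part [0 1 1], sign 0
```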
Thus given \(n\) stabilizer and \(n\) destabilizer states, using the Clifford tableau, any \(n\)-qubit Clifford circuit can be represented by \(2n(2n+1)\) bits and allows updates by adding \(H\), \(S\), and CNOT gates to the end (appending) or the front (prepending) [13, 1].
The specific actions of appending and prepending \(H\), \(S\), and CNOT gates in the Clifford tableau formulation are laid out in [11, 13] and visually specified in Figure 1 sans the effects on the sign-flip vector. Then, recognizing the similarities between the Clifford tableau representation and the ubiquitous parity map representation of CNOT-composed circuits used in circuit optimization by [8, 15, 21], we obtain an easily manipulable representation for Clifford circuit synthesis that allows reuse of existing machinery.
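For readers who want to experiment with these updates, the sketch below implements the standard Aaronson–Gottesman rules for appending \(H\), \(S\), and CNOT to a tableau stored as separate x/z bit blocks plus a sign column. The column ordering (x block first, destabilizer rows in the top half) is a convention of this sketch and may differ from the paper's layout; unlike Figure 1, the sign updates are included.

```python
import numpy as np

class MiniTableau:
    """2n x (2n+1) binary Clifford tableau: destabilizer rows 0..n-1,
    stabilizer rows n..2n-1, with an x block, a z block and a sign column."""

    def __init__(self, n: int):
        self.n = n
        self.x = np.eye(2 * n, n, dtype=np.uint8)        # destabilizers start as X_i
        self.z = np.eye(2 * n, n, k=-n, dtype=np.uint8)  # stabilizers start as Z_i
        self.sign = np.zeros(2 * n, dtype=np.uint8)

    def append_h(self, a: int):
        self.sign ^= self.x[:, a] & self.z[:, a]
        self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

    def append_s(self, a: int):
        self.sign ^= self.x[:, a] & self.z[:, a]
        self.z[:, a] ^= self.x[:, a]

    def append_cnot(self, a: int, b: int):               # control a, target b
        self.sign ^= self.x[:, a] & self.z[:, b] & (self.x[:, b] ^ self.z[:, a] ^ 1)
        self.x[:, b] ^= self.x[:, a]
        self.z[:, a] ^= self.z[:, b]

tab = MiniTableau(2)
tab.append_h(0); tab.append_cnot(0, 1)    # tableau of a Bell-pair Clifford
```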
Most approaches to synthesizing circuits from Clifford tableaus rely on normal forms. An initial synthesis algorithm was proposed by Aaronson and Gottesman [1] using the normal form: \(H-C-P-C-P-C-H-P-C-P-C\), where \(H\) represented a region consisting of Hadamard gates, \(P\) a region of Phase gates and \(C\) a region of CNOT gates. Dehaene and De Moor [7] formulated a courser-grained 5-layer form, which was later improved by Maslov and Roetteler [18] and van den Nest [22], respectively. Duncan et al. [8] have implemented an improved \(H-S-CZ-CX-H-CZ-S-H\) normal form in the pyzx library.
We particularly point the reader to the algorithm by Bravyi and Maslov [5], used in our benchmarks as a state-of-the-art baseline. Their method uses a combination of template matching, designed explicitly for Clifford circuits, to level out CNOT and SWAP gates. They propose a canonical decomposition of any Clifford circuit in the form of a modified Bruhat [18] decomposition of \(-CX-CZ-P-H-P-CZ-CX-\) and perform gate reduction by first shifting gates into the initial \(-CX-CZ-P-\) layers before optimizing the \(CX\) layer via a parity matrix representation
and the \(CZ\) layer using a phase polynomial form. Similar to the current work, the focus of their optimization lies in reducing the number of entangling gates, which are \(CX\) and \(CZ\) in their normal form. Currently, their algorithm is implemented as the standard when synthesizing a Clifford tableau in the qiskit software stack.
The general structure of our method relies on the synthesis method proposed by van den Berg [4]. To the best of our knowledge, this method is the first one that does not use the normal forms of synthesis but relies on the sanitization (clearing of elements in the context of van den Berg [4]) and removal of interactions (sweeping in the context of van den Berg [4]). This algorithm is currently implemented in the stim library [11]--a highly efficient library for simulating Clifford circuits.
## 3 Methods
In the following, we describe the proposed algorithm for the architecture-aware synthesis of Clifford tableaus. As such, it heavily borrows from an existing algorithm for simulating Clifford circuits [4] that is implemented in stim[11]. Similar to this algorithm, our proposed algorithm consists of the following steps:
1. Pick a pivot qubit in the Clifford tableau
2. Sanitize the qubit w.r.t. destabilizer
3. Remove destabilizer interactions with other qubits
4. Sanitize the qubit w.r.t. stabilizer
5. Remove stabilizer interactions with other qubits
6. Remove the qubit from the problem and go to (1) while there are still qubits left
7. Sanitize the tableau signs
During this process, we will only synthesize Clifford gates allowed by a target architecture as specified by a given connectivity graph.
By construction, the synthesis process is the inverse of constructing the Clifford tableau. As such, if we apply gates to a Clifford tableau \(C\) until the tableau becomes an identity matrix, the gates will correspond to the Clifford tableau \(C^{\dagger}\). Hence, we start the procedure by inverting the tableau such that we generate the circuit corresponding to \((C^{\dagger})^{\dagger}=C\).
Figure 1: Action of appending Clifford gates \(H\), \(S\) and CNOT to an \(n\) qubit Clifford tableau.
Figure 2: Action of prepending Clifford gates \(H\), \(S\) and CNOT to an \(n\) qubit Clifford tableau.
### The architecture-aware Clifford synthesis algorithm
The algorithm works by applying Clifford gates to the tableau to transform it into an identity matrix.
\[\left[\begin{array}{c:c}XX&ZX\\ XZ&ZZ\end{array}\right]\leadsto\left[\begin{array}{c:c}I&0\\ 0&I\end{array}\right]\]
We do this by iteratively picking a qubit and turning its corresponding rows and columns into identity rows and columns; this equivalently means that the qubit no longer interacts with the other qubits and can be removed from the Clifford tableau. We continue this process until the tableau is the identity; in other words, it is empty.
First, we pick a qubit to remove from the tableau; we call this the pivot qubit. In principle, any qubit is a suitable pivot as long as it does not disconnect the connectivity graph upon removal (i.e., it is a non-cutting vertex). Thus, we will assume in the following that the pivot qubit is any non-cutting vertex in the graph, and we will later provide a heuristic for picking a specific pivot.
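In code, the admissible pivot candidates can be read off the connectivity graph directly; a small sketch using networkx (our choice of library, not one prescribed by the algorithm):

```python
import networkx as nx

def non_cutting_vertices(G: nx.Graph):
    """Vertices whose removal keeps the remaining connectivity graph connected,
    i.e. the admissible pivot candidates."""
    return set(G.nodes) - set(nx.articulation_points(G))

line = nx.path_graph(5)             # 0-1-2-3-4
print(non_cutting_vertices(line))   # {0, 4}: only the endpoints are non-cutting
```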
Given a tableau representing a \(q\)-qubit Clifford circuit, the pivot qubit (the \(p^{th}\) qubit in this example) can be removed if its corresponding rows, \(r_{p}\) and \(r_{q+p}\), and columns, \(c_{p}\) and \(c_{q+p}\), in the tableau, have a single non-zero entry along the diagonal of the tableau. In other words, if it describes the stabilizer state \(I\ldots IZ_{p}I\ldots I\) and destabilizer state \(I\ldots IX_{p}I\ldots I\).
We remove the undesired non-zero entries from row \(r_{p}\) in two phases. First, we remove the non-zero entries from the second half of row \(r_{p}\), which are the elements in the \(ZX\) block matrix. Suppose the row has a \(1\) in column \(c_{q+i}\); we then apply a Hadamard gate on qubit \(i\) if the row has a \(0\) in column \(c_{i}\) and an S gate otherwise. This will make the entry in \(c_{q+i}\) a \(0\) and the one in \(c_{i}\) a \(1\), thus sanitizing row \(r_{p}\) when applying this for each non-zero entry in the second half of \(r_{p}\). We want to point out to the reader again that this process is equivalent to converting each element of the corresponding destabilizer to a Pauli string consisting only of Xs and Is.
Schematically, \(r_{p}\) changes at \(c_{i}\) and \(c_{q+i}\) when sanitizing as follows
If \(c_{i}=0\):
\[\left[\begin{array}{ccccc}\ldots&0&\ldots&1&\ldots\end{array}\right]\xrightarrow{H_{i}}\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\end{array}\right]\]
If \(c_{i}=1\):
\[\left[\begin{array}{ccccc}\ldots&1&\ldots&1&\ldots\end{array}\right]\xrightarrow{S_{i}}\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\end{array}\right]\]
(the entries shown are those in columns \(c_{i}\) and \(c_{q+i}\))
Then, we want to remove the undesired non-zero entries from the first half of row \(r_{p}\). We do this by applying CNOT gates. Since the CNOT gates only act within the respective halves of the row, the second half or \(r_{p}\) will remain zero. When applying the CNOT gates, we want to turn row \(r_{p}\) into an identity row, but we are restricted to the connectivity constraints of the given connectivity graph. We can do this like architecture-aware CNOT synthesis [21, 15, 29, 25] by building a Steiner tree over the interacting qubits in the graph. Then, we can apply CNOT gates to turn the entries corresponding to Steiner nodes into a \(1\) and use those nodes to make all entries in the Steiner tree a \(0\) except for the root (i.e., the pivot qubit). This results in row \(r_{p}\) being an identity row in the tableau.
For example, suppose we have three qubits connected in a line, and the pivot row \(r_{p}=r_{1}\) looks as follows: \([101|000]\). Then, we cannot directly add \(c_{1}\) to \(c_{3}\) with a CNOT since those qubits are not connected. Instead, we build a Steiner tree over the connectivity graph given qubits \(q_{1}\) and \(q_{3}\). This results in a tree containing all three qubits, with qubit \(q_{2}\) as a Steiner node. First, we traverse the Steiner tree bottom-up to turn the Steiner nodes into a \(1\) in the row:
\[[101|000]\xrightarrow{CNOT_{3,2}}[111|000]\]
Then, we traverse the Steiner tree bottom-up once more to remove all the non-zero entries:
\[[111|000]\xrightarrow{CNOT_{2,3}}[110|000]\xrightarrow{CNOT_{1,2}}[100|000]\]
Hence, the pivot row \(r_{p}=r_{1}\) is successfully turned into an identity row.
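The same three-qubit example can be reproduced with a short Python sketch; it uses networkx's approximate Steiner-tree routine and operates on the X-part of the pivot row only (the qubit labels, the bottom-up ordering helper, and the dictionary-based row are conventions of this sketch):

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def bottom_up_edges(tree: nx.Graph, root):
    """(parent, child) pairs of the tree rooted at `root`, leaves first."""
    bfs = nx.bfs_tree(tree, root)
    parent = {c: p for p, c in bfs.edges()}
    order = list(nx.topological_sort(bfs))          # root first
    return [(parent[v], v) for v in reversed(order) if v in parent]

G = nx.path_graph([1, 2, 3])                        # qubits connected in a line
row_x = {1: 1, 2: 0, 3: 1}                          # X-part of pivot row [101|000]
pivot = 1
terminals = [q for q, bit in row_x.items() if bit]  # qubits 1 and 3
T = steiner_tree(G, terminals)                      # picks up qubit 2 as Steiner node

cnots = []
for p, c in bottom_up_edges(T, pivot):              # pass 1: fill Steiner nodes
    if row_x[p] == 0:
        cnots.append((c, p)); row_x[p] ^= row_x[c]
for p, c in bottom_up_edges(T, pivot):              # pass 2: clear all but the root
    cnots.append((p, c)); row_x[c] ^= row_x[p]

print(cnots)   # [(3, 2), (2, 3), (1, 2)] -> CNOT_{3,2}, CNOT_{2,3}, CNOT_{1,2}
print(row_x)   # {1: 1, 2: 0, 3: 0} -> the pivot row is now an identity row
```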
Moving on to the stabilizer row \(r_{q+p}\), we handle it similarly to the destabilizer. We can look at the first half of \(r_{q+p}\), which is part of the XZ block matrix in the tableau. Here, we want to apply a Hadamard gate on qubit \(i\)
when row \(r_{q+p}\) has a \(1\) in column \(c_{i}\) and a \(0\) in column \(c_{q+i}\) or an S gate followed by a Hadamard gate when there is a \(1\) in both column \(c_{i}\) and \(c_{q+i}\). Applying these gates for each qubit will sanitize row \(r_{q+p}\).
Schematically, \(r_{q+p}\) changes at \(c_{i}\) and \(c_{q+i}\) when sanitizing as follows:
If \(c_{q+i}=0\):
\[\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\end{array}\right]\xrightarrow{H_{i}}\left[\begin{array}{ccccc}\ldots&0&\ldots&1&\ldots\end{array}\right]\]
If \(c_{q+i}=1\):
\[\left[\begin{array}{ccccc}\ldots&1&\ldots&1&\ldots\end{array}\right]\xrightarrow{S_{i}}\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\end{array}\right]\xrightarrow{H_{i}}\left[\begin{array}{ccccc}\ldots&0&\ldots&1&\ldots\end{array}\right]\]
(the entries shown are those in columns \(c_{i}\) and \(c_{q+i}\))
However, we must be careful when this process applies to the pivot qubit \(i=p\). By construction, row \(r_{p}\) corresponds to a destabilizer of the form \(I\ldots IX_{p}I\ldots I\), where only the pivot qubit \(p\) carries an \(X\). Since row \(r_{q+p}\) is a stabilizer, \(r_{p}\) and \(r_{q+p}\) need to be anti-commuting. Consequently, the \(p^{th}\) entry of the stabilizer corresponding to \(r_{q+p}\) is either \(Z\) or \(Y\). Thus, we know that row \(r_{q+p}\) has a \(1\) in column \(c_{p}\) only if there is also a \(1\) in column \(c_{q+p}\) (i.e., if it is a \(Y\)). Then, we could apply an S gate followed by a Hadamard gate to sanitize row \(r_{q+p}\). However, when doing that on the pivot qubit, we would move the \(1\) in row \(r_{p}\) to column \(c_{q+p}\). So we must first apply an H gate, then an S gate, followed by another H gate. That way, row \(r_{p}\) remains an identity row, and row \(r_{q+p}\) is sanitized. Additionally, row \(r_{q+p}\) has a \(1\) in column \(c_{q+p}\), or else the row would commute with row \(r_{p}\).
Schematically,\(r_{p}\) and \(r_{q+p}\) change at \(c_{p}\) and \(c_{q+p}\) when sanitizing as follows:
\[\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\\ \ldots&1&\ldots&1&\ldots\end{array}\right]\xrightarrow{H_{p}}\left[\begin{array}{ccccc}\ldots&0&\ldots&1&\ldots\\ \ldots&1&\ldots&1&\ldots\end{array}\right]\xrightarrow{S_{p}}\left[\begin{array}{ccccc}\ldots&0&\ldots&1&\ldots\\ \ldots&1&\ldots&0&\ldots\end{array}\right]\xrightarrow{H_{p}}\left[\begin{array}{ccccc}\ldots&1&\ldots&0&\ldots\\ \ldots&0&\ldots&1&\ldots\end{array}\right]\]
(the top row is \(r_{p}\), the bottom row \(r_{q+p}\); the entries shown are those in columns \(c_{p}\) and \(c_{q+p}\))
After sanitization, we want to remove the remaining non-zero entries in row \(r_{q+p}\) using CNOT gates. For this, we use the exact same strategy as for row \(r_{p}\), but all CNOTs will be applied in the opposite direction because they are acting on the stabilizer, which is a Pauli string consisting solely of Is and Zs. To avoid introducing new interactions on the corresponding destabilizer state, we need to ensure that we never add a column to the column corresponding to the pivot qubit, \(c_{q+p}\). It is only needed to do this when row \(r_{q+p}\) has a \(0\) in column \(c_{q+p}\). However, we have seen from the sanitization process that this is never the case because the stabilizer corresponding to \(r_{q+p}\) needs to anti-commute with \(r_{p}\). So after sanitization, the \(p^{th}\) entry of the stabilizer corresponding to row \(r_{q+p}\) is always a \(Z\), and thus row \(r_{q+p}\) has a \(1\) in column \(c_{q+p}\).
When rows \(r_{p}\) and \(r_{q+p}\) are identity rows in the Clifford tableau, we need to make the corresponding columns \(c_{p}\) and \(c_{q+p}\) into identity. Luckily, because of the nature of the Clifford tableau, this is already the case. In the previous steps of the algorithm, we have turned row \(r_{p}\) and row \(r_{q+p}\) into two rows that correspond to a destabilizer and a stabilizer that only have \(I\) except for on the pivot qubit \(p\) where it has an \(X\) and \(Z\), respectively. Additionally, we know these rows must commute with all other rows in the Clifford tableau. Thus, all other rows correspond with destabilizers or stabilizers with an \(I\) on the pivot qubit \(p\). Hence, the columns \(c_{p}\) and \(c_{q+p}\) have zeroes everywhere except on the diagonal, and we can safely remove the pivot qubit from the tableau since it no longer interacts with any other qubits.
We can continue this process of sanitization of the destabilizer, removal of interactions in the destabilizer, sanitization of the stabilizer, and removal of interactions in the stabilizer by choosing a new pivot. After \(q-1\) iterations, a single qubit will be left, which can be synthesized trivially.
Lastly, the Clifford tableau keeps track of the sign of the stabilizers because \(Z\left|1\right\rangle=-\left|1\right\rangle\) and \(X\left|-\right\rangle=-\left|-\right\rangle\). If there is a sign on a qubit after synthesis, this must be undone. We do this by applying the corresponding gate once more. For non-zero entries in the first \(q\) rows of the sign column, this means applying \(X_{i}=H_{i}S_{i}S_{i}H_{i}\), and for the non-zero entries in the bottom \(q\) rows of the sign column, this means applying \(Z_{i}=S_{i}S_{i}\).
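A compact sketch of this final sign clean-up, assuming the sign bits sit in the last column of a \(2q\times(2q+1)\) binary tableau and that `emit(gate, qubit)` appends the corresponding gate to the synthesized circuit (both assumptions of this sketch):

```python
def sanitize_signs(tableau, q, emit):
    """Undo residual signs after synthesis (step 7)."""
    for i in range(q):
        if tableau[i, 2 * q]:                # sign on a destabilizer row
            for g in ("H", "S", "S", "H"):   # X_i = H_i S_i S_i H_i
                emit(g, i)
        if tableau[q + i, 2 * q]:            # sign on a stabilizer row
            for g in ("S", "S"):             # Z_i = S_i S_i
                emit(g, i)
```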
### Heuristic for choosing the pivot qubit
As stated, the pivot qubit can be any non-cutting vertex in the connectivity graph. However, the choice of pivot will influence which gates will be synthesized.
Since our primary goal is synthesizing the Clifford tableau with as few CNOTs as possible, we want to choose the row with the smallest Steiner tree. However, calculating the minimum Steiner tree is NP-hard [14]. For our implementation, we opted for an approximate Steiner tree solution, similar to [21; 15; 29; 25]. To improve the computational overhead, we also opted to pre-compute the shortest paths between each qubit using the Floyd-Warshall algorithm [9; 27], which provides a lookup table for distances: \(d\). Thus, we can use these distances to approximate the cost of the Steiner tree for the pivot row \(r\) as follows:
\[s(r)=\sum_{i}d_{r,i}\left[\mathbb{I}(T_{r,i}\neq 0\lor T_{r,i+q}\neq 0)+\mathbb{I}(T_{r+q,i}\neq 0\lor T_{r+q,i+q}\neq 0)\right] \tag{2}\]
where \(T\) is the Clifford tableau and \(\mathbb{I}:\mathbb{B}\rightarrow\{0,1\}\) denotes the indicator function converting a boolean expression to its corresponding pseudo-boolean integer. Intuitively, we are counting the number of non-identity interactions for each row and estimating the distance cost of removing them. Hence, for our heuristic, we want to pick a non-cutting vertex \(r\) of the graph \(G\) such that \(s(r)\) is minimized.
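A sketch of this pivot-selection heuristic, with the all-pairs distances precomputed via networkx's Floyd–Warshall routine; the tableau is assumed to be a \(2q\times 2q\) boolean array with destabilizer rows \(0\ldots q-1\) and qubits labelled \(0\ldots q-1\) (conventions of this sketch):

```python
import networkx as nx

def pivot_costs(tableau, G, candidates):
    """Approximate elimination cost s(r) of Eq. (2) for each candidate pivot."""
    dist = nx.floyd_warshall(G)          # all-pairs shortest-path lookup table d
    q = tableau.shape[0] // 2
    costs = {}
    for r in candidates:
        costs[r] = sum(
            dist[r][i] * (int(tableau[r, i] or tableau[r, i + q])
                          + int(tableau[r + q, i] or tableau[r + q, i + q]))
            for i in range(q))
    return costs

# The heuristic then picks the non-cutting vertex with the smallest cost:
# pivot = min(costs, key=costs.get)
```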
### Qubit placement heuristic
The above algorithm benefits from applying routing throughout the Clifford synthesis, such that all gates are synthesized with the target architecture in mind, thus avoiding the need to add SWAP gates during a later routing step. Although this can greatly improve the CNOT count of the final circuit, it still lacks the ability to place the qubits. We remedy this with the following heuristic.
We start the synthesis without a mapping from logical qubit to vertex on the connectivity graph. When considering a pivot, we pick from the qubits that have not yet been mapped or have been mapped (in a later step) to a vertex that is not cutting the remaining graph. Then, we follow the procedure as before, but when building the Steiner tree, we map the required qubits that have not yet been mapped to the graph as close to the other qubits as possible. Note that during sanitization, we might add single qubit gates to qubits that have not yet been mapped. In this case, we can buffer them and place them as soon as they are mapped.
The pseudocode for the full algorithm can be found in Algorithm 1.
```
Function tableau_synth(G, tableau):
    tableau ← tableau⁻¹                              // Invert the tableau
    qc ← a quantum circuit, to which all operations are appended
    mapping ← empty dictionary
    q ← number of qubits
    while G has nodes do
        // 1. Pick a pivot qubit in the Clifford tableau
        p ← choose an unmapped qubit or a non-cutting vertex from G
            according to heuristic s(r)               // Map qubit if needed
        // 2. Sanitize p w.r.t. destabilizer
        for i ∈ 1..q where tableau[p, q+i] = 1 and tableau[p, i] = 0 do
            qc ← H_i
        for i ∈ 1..q where tableau[p, q+i] = 1 and tableau[p, i] = 1 do
            qc ← S_i
        // 3. Remove interactions w.r.t. destabilizer
        nodes ← non-zero entries of tableau[p]
        T ← Steiner tree over G containing nodes and p // Map qubits if needed
        for (parent, child) ∈ bottom_up(T) do
            qc ← CNOT_{child, parent}
        for (parent, child) ∈ bottom_up(T) do
            qc ← CNOT_{parent, child}
        // 4. Sanitize p w.r.t. the stabilizer
        if tableau[q+p, p] = 1 then
            qc ← H_p S_p H_p
        for i ∈ 1..q where tableau[q+p, q+i] = 0 and tableau[q+p, i] = 1 do
            qc ← H_i
        for i ∈ 1..q where tableau[q+p, q+i] = 1 and tableau[q+p, i] = 1 do
            qc ← S_i
            qc ← H_i
        // 5. Remove interactions w.r.t. stabilizer
        nodes ← non-zero entries of tableau[q+p]
        T ← Steiner tree over G containing nodes and p // Map qubits if needed
        for (parent, child) ∈ bottom_up(T) where parent ∉ nodes do
            qc ← CNOT_{parent, child}
        for (parent, child) ∈ bottom_up(T) do
            qc ← CNOT_{child, parent}
        // 6. Remove the pivot qubit from the problem
        remove p (and its rows and columns) from the tableau and remove p from G
    // 7. Sanitize the tableau signs
    for i ∈ 1..q where the sign bit of destabilizer row i is set do
        qc ← H_i S_i S_i H_i                          // X_i undoes the sign
    for i ∈ 1..q where the sign bit of stabilizer row q+i is set do
        qc ← S_i S_i                                  // Z_i undoes the sign
    return qc
```
## 4 Evaluations
To determine the efficacy of the proposed method, we conducted experiments on different architectures available in the IBM Software stack 2. More specifically, we targeted the backends: _quito_ (5 qubits), _nairobi_ (7 qubits), _guadalupe_ (16 qubits), _mumbai_ (27 qubits), _ithaca_ (65 qubits) and _brisbane_ (127 qubits). An outline of the used connectivity graphs can be found in Appendix B.
Footnote 2: Our implementation can be found on GitHub: [https://github.com/daehiff/pauliopt/tree/clifford_synthesis](https://github.com/daehiff/pauliopt/tree/clifford_synthesis). We used qiskit version 0.39.0, qiskit-aer 0.11.0, qiskit-ibm-runtime 0.8.0 and qiskit-ibm-provider 0.19.2. The architectures ibm_ithaca and ibm_brisbane were obtained from the qiskit API: [https://api-qcon.quantum-computing.ibm.com/api/users/backends](https://api-qcon.quantum-computing.ibm.com/api/users/backends) (version September 2023)
We generated random circuits with the gate set \(\{H,S,CX\}\) for these experiments. For each architecture, we utilized increasing input gate counts until asymptotic behavior was observed for most methods, seen in Figure 4 and Figure 5. When generating each random circuit, for each input gate, we sampled the gate type as well as position in the architecture uniformly from the indicated gate set and available qubits; this implies that single-qubit gates are more likely to be present in the circuit.
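For reproducibility, the generation procedure can be sketched as follows with qiskit; the helper name and the seeding are our additions, and the uniform sampling mirrors the description above:

```python
import random
from qiskit import QuantumCircuit

def random_hscx_circuit(n_qubits: int, n_gates: int, seed=None) -> QuantumCircuit:
    """Random circuit over the gate set {H, S, CX} with uniformly sampled
    gate types and qubit positions."""
    rng = random.Random(seed)
    qc = QuantumCircuit(n_qubits)
    for _ in range(n_gates):
        gate = rng.choice(["h", "s", "cx"])
        if gate == "cx":
            ctrl, tgt = rng.sample(range(n_qubits), 2)
            qc.cx(ctrl, tgt)
        else:
            getattr(qc, gate)(rng.randrange(n_qubits))
    return qc

circ = random_hscx_circuit(5, 25, seed=1)
print(circ.count_ops())
```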
We compare our proposed algorithm against three baselines that were all transpiled using the default transpile method from qiskit: the original generated circuit (qiskit transpile), the method of Bravyi and Maslov [5], and the method of van den Berg [4], present in the stim library. We compare these methods with respect to their gate counts on different architectures and the circuit fidelities when executed on actual quantum hardware.
### Evaluation of the Gate Count
We focus the evaluation of our algorithm on the CNOT counts because CNOTs typically have a lower fidelity than single qubit gates on quantum hardware. For completeness, the single-qubit gate counts obtained in our evaluation can be found in Appendix A.
As can be seen in Figure 4 and Figure 5, our method displays comparable results on a complete architecture with the benchmark algorithms; our method generally outperforms van den Berg [4] and is outperformed by Bravyi and Maslov [5]. We note that the comparative performance also tends to worsen with larger architectures. The horizontal dashed black line in each graph indicates the asymptotic behavior of Bravyi and Maslov [5] on a complete architecture and allows a quick visual comparison of CNOT count before and after routing to a specific architecture.
However, when we compare the CNOT count in the case of a specific target architecture, the extra CNOTs needed to adhere to the connectivity constraints are much fewer with our proposed method when compared to those of Bravyi and Maslov [5] and van den Berg [4].
We further observe that in architectures with higher qubit counts, such as Guadalupe and all architectures in Figure 5, our algorithm is outperformed by the other methods if the input gate count is low. We expect that this is caused by the greedy heuristic used to choose the pivot.
Lastly, we want to point out that the standard _transpile_-method of qiskit on a complete connectivity performs better than the synthesis methods. This shows that none of these methods synthesize an optimal circuit. This motivated us to compare the empirical convergence behaviour of the synthesis methods, with respect to the theoretical bounds for CNOT synthesis. Since the method by Bravyi and Maslov [5] synthesizes layers of CNOTs and the other synthesis methods use a RowCol-like structure, these methods should generate \(\mathcal{O}(n^{2})\)[29] CNOTs at worst and \(\mathcal{O}(n^{2}/\log(n))\)[23] at best.
Table 1 shows the median CNOT count of the circuits in our experiments after convergence3 and how that relates to the two theoretical CNOT synthesis bounds. The table shows that all methods seem to follow the \(\mathcal{O}(n^{2})\) bound, but with a much lower scalar than what would be expected in the worst case (i.e. 1 CNOT per qubit per row = \(2n^{2}\)). However, the increasing scalar with respect to the optimal bound shows that there is room for improvement.
Footnote 3: all circuits with originally \(\geq 75\) gates for Quito, Nairobi \(\geq 110\), Guadalupe \(\geq 250\), Mumbai \(\geq 500\), Ithaca \(\geq 1250\) and Brisbane \(\geq 3000\).
We further evaluate the number of CNOTs introduced when transpiling that is needed to adhere to the connectivity constraints. For this we define a metric called the _routing portion_ that describes the percentage of the CNOT count that is added when targeting a specific device rather than assuming all-to-all connectivity:
\[\text{Routing portion}=\frac{\#CX_{r}-\#CX_{fc}}{\#CX_{r}}, \tag{3}\]
with \(\#CX_{fc}\) the number of CNOTs on the fully connected architecture and \(\#CX_{r}\) the number of CNOTs after transpiling to a device-specific architecture. The routing portion of the compared synthesis methods upon convergence can be found in Table 2. We note that since Bravyi and Maslov [5] and van den Berg [4] do not synthesize to a target architecture, the routing portion for these methods represents the ability of the qiskit transpiler to route the circuits obtained with these methods.
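The metric itself is a one-liner; in qiskit the CNOT counts can be read from `circuit.count_ops().get("cx", 0)`. The numbers in the example below are purely illustrative:

```python
def routing_portion(cx_fully_connected: int, cx_routed: int) -> float:
    """Routing portion of Eq. (3): the fraction of CNOTs added by targeting a
    specific device instead of all-to-all connectivity."""
    return (cx_routed - cx_fully_connected) / cx_routed

# Hypothetical example: 15 CNOTs on all-to-all connectivity vs. 18 after
# targeting a 5-qubit device corresponds to a routing portion of ~17%.
print(routing_portion(15, 18))
```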
We observe that the routing portion of our proposed algorithm is less than half the routing portion when synthesizing with a previous method and then routing it. Additionally, the routing portion of all methods tends to increase as the number of qubits in the architecture increases. We expect that this is related to the distance between the qubits in the architecture which also increases as the number of qubits increases.
### Evaluation on actual hardware
\begin{table}
\begin{tabular}{l c c|c c c|c c c|c c c} \hline \hline
\multirow{2}{*}{Qubits} & \multirow{2}{*}{Bound (\(\frac{n^{2}}{\log_{2}n}\))} & \multirow{2}{*}{Bound (\(n^{2}\))} & \multicolumn{3}{c|}{Ours} & \multicolumn{3}{c|}{Bravyi and Maslov [5]} & \multicolumn{3}{c}{van den Berg [4]} \\
 & & & Empirical & /\(\frac{n^{2}}{\log_{2}n}\) & /\(n^{2}\) & Empirical & /\(\frac{n^{2}}{\log_{2}n}\) & /\(n^{2}\) & Empirical & /\(\frac{n^{2}}{\log_{2}n}\) & /\(n^{2}\) \\ \hline
5 & 11 & 25 & 15 & 1.36 & 0.60 & 17 & 1.55 & 0.68 & 20 & 1.82 & 0.80 \\
7 & 17 & 49 & 31 & 1.82 & 0.63 & 31 & 1.82 & 0.63 & 39 & 2.29 & 0.80 \\
16 & 64 & 256 & 167 & 2.61 & 0.65 & 153 & 2.39 & 0.6 & 197 & 3.08 & 0.77 \\
27 & 153 & 729 & 488 & 3.19 & 0.67 & 413 & 2.70 & 0.57 & 557 & 3.64 & 0.76 \\
65 & 702 & 4225 & 2901 & 4.13 & 0.69 & 2252 & 3.21 & 0.53 & 3194 & 4.55 & 0.76 \\
127 & 2308 & 16129 & 11396 & 4.94 & 0.71 & 8551 & 3.70 & 0.53 & 12173 & 5.27 & 0.75 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The theoretic asymptotic bounds of CNOT counts for each architecture size using \(\mathcal{O}(n^{2}/\log_{2}(n))\)[23] or naive \(\mathcal{O}(n^{2})\)[29], the observed bounds and their fractions with respect to each bound for each synthesis algorithm in the all-to-all case.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline
Algorithm / Architecture & Quito & Nairobi & Guadalupe & Mumbai & Ithaca & Brisbane \\
 & 5 qubits & 7 qubits & 16 qubits & 27 qubits & 65 qubits & 127 qubits \\ \hline
Bravyi and Maslov [5] & 44.96\% & 60.20\% & 72.37\% & 78.50\% & 86.36\% & 91.34\% \\
van den Berg [4] & 46.34\% & 63.45\% & 75.91\% & 80.77\% & 84.37\% & 87.92\% \\
Ours & 16.93\% & 23.70\% & 30.57\% & 31.16\% & 30.44\% & 35.14\% \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Routing portion (Equation 3) for all circuits with originally \(\geq 75\) gates for Quito, Nairobi \(\geq 110\), Guadalupe \(\geq 250\), Mumbai \(\geq 500\), Ithaca \(\geq 1250\) and Brisbane \(\geq 3000\).
Figure 3: Average Counts of our simulation of \(CC^{\dagger}\) for 20 random Clifford circuits.
To verify our conjecture that reducing the CNOT count will improve the final fidelity of our circuit, we executed the random Clifford circuits on real hardware. Because each random Clifford circuit creates a different quantum state, it is difficult to aggregate their fidelity into a single picture. Therefore, we executed each Clifford circuit \(C\) followed by their inverse \(C^{\dagger}\) on the IBM Hardware, which should precisely provide us with the input state: \(|0\ldots 0\rangle\), which we can measure the fidelity on.
We generated 20 random circuits \(C\) and executed \(CC^{\dagger}\) on the quantum device with \(16{,}000\) shots. We used the devices _quito_ and _nairobi_ to verify our results since those were the only devices publicly available. For this experiment, we generated \(25\) random gates per circuit for _quito_ and \(30\) random gates for _nairobi_, based on the maximum circuit size that each device can execute without decohering or accruing too much error.
We re-synthesized each circuit with the method of Bravyi and Maslov [5], van den Berg [4], and our approach. Then, we transpiled the circuit to target architecture for the two baseline methods before finally executing the resulting circuit \(C\) on the quantum device as \(CC^{\dagger}\).
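The verification step can be sketched as follows; here a noiseless `AerSimulator` stands in for the IBM devices (so the returned fidelity is trivially close to one), and `hellinger_fidelity` is taken from `qiskit.quantum_info`:

```python
from qiskit import QuantumCircuit, transpile
from qiskit.quantum_info import hellinger_fidelity
from qiskit_aer import AerSimulator

def echo_fidelity(circ: QuantumCircuit, shots: int = 16000) -> float:
    """Execute C C^dagger and compare the counts with the ideal all-zeros
    distribution, mirroring the hardware experiment described above."""
    echo = circ.compose(circ.inverse())
    echo.measure_all()
    backend = AerSimulator()                     # stand-in for quito / nairobi
    job = backend.run(transpile(echo, backend), shots=shots)
    counts = job.result().get_counts()
    ideal = {"0" * circ.num_qubits: shots}
    return hellinger_fidelity(counts, ideal)
```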
To outline the results, we report the averaged shots in Figure 3. On average, we can observe that our method performs best regarding the final distribution of shots. This is also visible when computing the average Hellinger fidelity of our experiments in Table 3. Particularly, we show that our method obtains the highest Hellinger fidelity \(65-70\%\) of the time (\(13\) circuits for Quito and \(14\) circuits for Nairobi). However, we note that due to the decoherence times and the gate fidelities of the Nairobi device, the observed Hellinger fidelities are close to zero for most circuits in all methods. Specifically, experiment \(12\) on Nairobi is interesting. Here, both van den Berg [4] and Bravyi and Maslov [5] showed a much better fidelity compared to our method. Upon evaluating the resulting circuits, we found that van den Berg [4] provided an overhead of \(8\) CNOTs, Bravyi and Maslov [5] an overhead of \(8\) CNOTs, and our method an overhead of \(30\) CNOTs 4. We expect that this is a special case where our heuristics choose the wrong pivot qubits or qubit mapping, resulting in a lower fidelity caused by the extra CNOTs.
Footnote 4: We counted the CNOTs of the complete circuit describing \(CC^{\dagger}\).
Lastly, we compared the execution times our circuit required on the device. Overall, we observed that our algorithm slightly improves the execution time compared to the methods of Bravyi and Maslov [5] and van den Berg [4]. This is more noticeable for the Nairobi device than the Quito device. We expect that the difference in overall execution time is due to the fact that fewer CNOTs are executed on the device.
Figure 4: CNOT count for smaller architectures up to 16 qubits. Specifically, the backends Guadalupe, Nairobi, and Quito were used. We additionally reported the CNOT count of the complete architectures of the same size. The dashed black line describes the CNOT count upon the convergence of Bravyi and Maslov [5] in both pictures.
Figure 5: CNOT count for larger architectures up to 127 qubits. Specifically, the backends Brisbane, Ithaca, and Mumbai were used. We additionally reported the CNOT count of the complete architectures of the same size. The dashed black line describes the CNOT-Count upon the convergence of Bravyi and Maslov [5] in both pictures.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline
 & \multicolumn{3}{c|}{\(F(Q,P)\)} & \multicolumn{3}{c}{time (seconds)} \\
Quito & Ours & qiskit [5] & stim [4] & Ours & qiskit [5] & stim [4] \\ \hline
\multicolumn{7}{c}{[MISSING_PAGE_POST]} \\
\(\ldots\) & 0376 & 0.0522 & **0.0886** & **6.8788** & 7.0341 & 8.2870 \\ \hline \hline
Average & 0.1028 & 0.0801 & 0.0888 & 6.8922 & 7.1775 & 8.5161 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Fidelities, denoted as \(F(Q,P)=\left(\sum_{i}\sqrt{p_{i}q_{i}}\right)^{2}\), and run-times for the **IBM Quito** and **IBM Nairobi** devices. Each row marks the execution of a single circuit generated from \(25\) or \(30\) Clifford gates for Quito and Nairobi, respectively. The last row in each table shows the average fidelity and run-time. The highest fidelity and shortest run-time in each row are marked in bold.
## 5 Conclusion
In our work, we created a novel, architecture-aware synthesis algorithm for synthesizing Clifford tableaus. We observed that our algorithm reduces the number of CNOT gates by a significant factor when targeting architecturally constrained devices. However, in the case of all-to-all connectivity, our proposed algorithm performed worse than the state-of-the-art method by Bravyi and Maslov [5]. Since we expect quantum devices to remain architecturally constrained in the future, architecture-aware synthesis is needed to address the qubit routing problem. Our evaluations on real hardware have shown a similar picture, specifically that reducing the number of multi-qubit gates executed on a quantum circuit reduces execution time and increases the final fidelity.
Nevertheless, we can see that our heuristic still has room for improvement, specifically for the larger architectures and for Clifford circuits with few gates. Since none of the synthesis algorithms were able to recover a similar CNOT count from the complete architecture, there seems to be some inherent inefficiency within all Clifford synthesis algorithms. This could potentially show that these algorithms are not yet fully leveraging the structure of the stabilizer and destabilizer relationships in the Clifford tableaus. Therefore, we suggest integrating the block-wise strategy of Patel et al. [23] towards the synthesis of Clifford tableaus as far as possible. Combining this with an architecture-aware synthesis method could yield both optimal asymptotic behaviour and further reduction of CNOTs when compiling the circuit towards a specific architecture.
For completeness, we want to emphasize that stabilizer circuits are classically simulable and therefore not universal for quantum computation. As such, our algorithm needs to be combined with other algorithms to be used for the compilation of quantum circuits. One way to do this is to recognize the similarities between Clifford tableaus and parity maps and realize that parity maps can be generalized to phase polynomials [2; 20; 21; 26]. This would generalize Clifford tableaus to Paulistring operators, in a way similar to the method proposed by Martiel and de Brugiere [17]. In fact, it might be possible to use our algorithm directly to optimize their final, complicated Clifford operator for cases where the expected value is not calculated with respect to a specific hamiltonian. Alternatively, we can adapt existing Paulistring compilation methods [6; 16] to be made architecture-aware, or extend existing methods that use the mixed ZX-phase polynomial form [12; 28; 19] to so-called Pauli gadgets [6; 30].
Similarly, it is possible to adapt our proposed algorithm in the compilation of quantum error correction codes because quantum error correction codes consist of stabilizers and the algorithm synthesizes stabilizers. Additionally, our algorithm might be useful for the architecture-aware extraction of quantum circuits from ZX-diagrams in ZX-calculus [8; 3].
In summary, the synthesis of Clifford tableaus is an important primitive for the efficient compilation of quantum programs. Using our proposed algorithm, we are able to synthesize directly to a target architecture without the need for added SWAP gates. We have shown that this holistic approach reduces routing overhead tremendously, a key factor in improving the performance of quantum hardware in the near term. |
2310.00295 | Anomalous emission from Li- and Na-like ions in the Corona heated via
Alfvén wave | The solar ultraviolet intensities of spectral lines originating from Li- and
Na-like ions have been observed to surpass the expectations derived from
plasmas with coronal approximation. The violation of the coronal approximation
can be partially attributed to non-equilibrium ionization (NEI) due to dynamic
processes occurring in the vicinity of the transition region. To investigate
the impact of these dynamics in Alfv\'{e}n-wave-heated coronal loop, a set of
equations governing NEI for multiple ion species was solved numerically in
conjunction with 1.5-dimensional magnetohydrodynamic equations. Following the
injection of Alfv\'{e}n waves from the photosphere, the system undergoes a time
evolution characterized by phases of evaporation, condensation, and
quasi-steady states. During the evaporation phase, the ionization fractions of
Li- and Na-like ions were observed to increase when compared to the fractions
in ionization equilibrium, which lead to the intensity enhancement of up to
1.6. This over-fractionation of Li- and Na-like ions was found to be induced by
the evaporation process. While collisions between shocks and the transition
region temporarily led to deviations from ionization equilibrium, on average
over time, these deviations were negligible. Conversely, under-fractions of the
ionization fraction led to intensity reduction of down to 0.9 during the
condensation phase and the quasi-steady state. Given the dependency of the
over/under-fractionation on mass circulations between the chromosphere and the
corona, these observations will serve as valuable benchmarks to validate not
only Alfv\'{e}n wave models but also other existing mechanisms on coronal
heating. | Takuma Matsumoto | 2023-09-30T08:11:08Z | http://arxiv.org/abs/2310.00295v2 | Anomalous Enhancement of Li- and Na-Like Ions Due to Mass Circulation with Non-Equilibrium Ionization
###### Abstract
The solar ultraviolet intensities of spectral lines originating from Li- and Na-like ions have been observed to surpass the expectations derived from plasmas under the coronal approximation. The violation of the coronal approximation can be partially attributed to non-equilibrium ionization (NEI) due to dynamic processes occurring in the vicinity of the transition region. However, the quantitative analysis of these dynamic effects has not yet been conducted. To investigate the impact of these dynamics, a set of equations governing NEI for multiple ion species was solved numerically in conjunction with 1.5-dimensional magnetohydrodynamic equations describing an Alfven-wave-heated coronal loop. Following the injection of Alfven waves from the photosphere, the system undergoes a time evolution characterized by phases of evaporation, condensation, and quasi-steady states. During the evaporation phase, the ionization fractions of Li- and Na-like ions were observed to increase, with a maximum enhancement of 1.6 when compared to the fractions in ionization equilibrium. This over-fractionation of Li- and Na-like ions was found to be induced by the evaporation process, while collisions between shocks and the transition region did not exhibit deviations from ionization equilibrium. Consequently, the intensities calculated using the coronal approximation underestimated the intensities of Li- and Na-like ions by up to 60%. Conversely, under-fractions of at least 0.9 were observed during the condensation phase and the quasi-steady state. Given that the degree of over/under-fraction exhibits a dependency on mass motions, our study may impose limitations on both the mass circulation in coronal heating and mass-loss processes.
keywords: Sun: corona - Sun: chromosphere - Sun: transition region - Sun: UV radiation
## 1 Introduction
Ultraviolet (UV) emissions have posed a long-standing challenge due to inconsistencies between observations and theoretical predictions. To address these disparities, researchers have explored the concept of non-equilibrium ionization (NEI) (Neupert, 1964; Mariska et al., 1978; Dupree et al., 1979). NEI becomes particularly relevant when the time scales associated with ionization and recombination processes exceed those of dynamic phenomena, such as the transit time of mass motions within the thin transition region, the dynamical heating time scale, and the time scales of acoustic fluctuations.
The anomalous behavior of Li- and Na-like ions, stemming from the violation of the coronal approximations, has been a subject of investigation since the early days of UV observations (Burton et al., 1971; Dupree, 1972). Subsequent in-depth analyses have revealed that the intensities of spectral lines originating from these ions can exceed theoretical expectations based on the coronal approximation by factors of up to approximately 5 (Judge et al., 1995). While advanced atomic models have partially mitigated this discrepancy (Dufresne et al., 2023), challenges remain.
To address these gaps, the consideration of NEI is a crucial avenue. Given the intrinsic time dependence associated with NEI, it necessitates the solution of either time-steady or dynamic equations to assess its impact. While various studies have demonstrated deviations from ionization equilibrium within the context of phenomena such as siphon flows (Noci et al., 1989; Spadaro et al., 1990), solar flares (MacNeice et al., 1984; Imada et al., 2011, 2015), and nanoflares (Mariska et al., 1982; Hansteen, 1993; Bradshaw, 2009), to the best of our knowledge, there have been no quantitative and systematic investigations into the emission anomalies of Li- and Na-like ions within the framework of NEI.
In this study, we adopted a 1.5-dimensional magnetohydrodynamics (MHD) model with Alfven-wave-heated coronal loops (Moriyasu et al., 2004; Moriyasu & Shibata, 2004) to predict the emission from ions in NEI. The hot corona is maintained via shocks produced through nonlinear mode conversions from Alfven waves (Hollweg et al., 1982; Kudoh & Shibata, 1999), and its behaviors in quasi-steady states (Antolin & Shibata, 2010) and during long-term evolution (Washinoue & Suzuki, 2019) have been extensively inves
tigated. By solving a set of equations for NEI, we can quantitatively explore the behaviors of Li- and Na-like ions in the coronal loop with complex dynamics.
Because the transition region, where most Li- and Na-like ions form, is a very thin layer, even small mass flows associated with evaporation, condensation, and shock propagation can occur in time scales shorter than the ionization and recombination time scale. This phenomenon may contribute to deviations in ionization fractions from ionization equilibrium, although detailed numerical simulations are required to quantitatively estimate the degree of departure in ionization fraction and the associated emissions.
The primary objective of this study is to investigate the differences in emissions between plasmas in NEI and plasmas in ionization equilibrium. This paper is organized as follows: Section 2 presents models and assumptions, Section 3 details our simulations and analysis with discussions, and Section 4 provides the conclusions of the study.
## 2 Models and assumptions
In this study, we investigated the impact of NEI on emergent intensity by employing a coronal loop model heated by Alfven waves. We concurrently solved the dynamic heating process governed by MHD equations and the evolution of the ion fraction. Subsequently, we reconstructed UV radiations based on the obtained ion fractions, electron density, and temperature.
We employed a 1.5-dimensional numerical model for the coronal loop, building upon the pioneering works of Moriyasu et al. (2004) and Moriyasu & Shibata (2004). This model naturally reproduces warm loops as a consequence of Alfven wave injection from the photosphere. The hot corona is achieved through MHD shocks generated by nonlinear mode conversion from Alfven waves (Hollweg et al., 1982; Kudoh & Shibata, 1999). The model's properties have been extensively investigated, including parameter dependencies (Antolin & Shibata, 2010; Matsumoto & Shibata, 2010), and differences from the nanoflare model (Antolin et al., 2008).
The fundamental equations solved are as follows: The equation of mass conservation:
\[\frac{\partial\rho A}{\partial t}+\frac{\partial\rho v_{\rm s}A}{\partial s}=0, \tag{1}\]
the equation of momentum conservation along the field line:
\[\frac{\partial\rho v_{\rm s}A}{\partial t}+\frac{\partial}{\partial s}\left( \left[\rho v_{\rm s}^{2}+P_{\rm g}+\frac{B_{\phi}^{2}}{2}\right]A\right)= \left(P_{\rm g}+\frac{\rho v_{\rm\phi}^{2}}{2}\right)\frac{dA}{ds}-\rho g_{\rm s }A, \tag{2}\]
the equation of angular momentum conservation:
\[\frac{\partial\rho v_{\rm\phi}A^{3/2}}{\partial t}+\frac{\partial}{\partial s }\left(\left[\rho v_{\rm\phi}v_{\rm s}-B_{\rm\phi}B_{\rm s}\right]A^{3/2} \right)=\rho L_{\rm trq}A, \tag{3}\]
the induction equation:
\[\frac{\partial B_{\phi}A^{1/2}}{\partial t}+\frac{\partial}{\partial s}\left( \left[B_{\phi}v_{\rm s}-B_{\rm s}v_{\rm\phi}\right]A^{1/2}\right)=0, \tag{4}\]
and the equation of total energy conservation:
\[\frac{\partial\mathcal{E}A}{\partial t}+\frac{\partial}{\partial s}\left(\left[\left\{\mathcal{E}+P_{\rm g}+\frac{B_{\phi}^{2}}{2}\right\}v_{\rm s}-B_{\phi}B_{\rm s}v_{\rm\phi}\right]A\right)\] \[=-L_{\rm rad}A+Q_{\rm Cnd}A-\rho v_{\rm s}g_{\rm s}A+\rho v_{\rm\phi}L_{\rm trq}A^{1/2}. \tag{5}\]
In these equations, \(\rho\) and \(P_{\rm g}\) represent mass density and gas pressure, respectively; \(v_{\rm s}\) is the velocity along the field line; \(v_{\rm\phi}\) is the toroidal velocity; \(B_{\rm\phi}\) is the toroidal magnetic field strength normalized by \(\sqrt{4\pi}\). \(\mathcal{E}\) is the total energy density given by
\[\mathcal{E}=\frac{1}{2}\rho\left(v_{\rm s}^{2}+v_{\rm\phi}^{2}\right)+\frac{P _{\rm g}}{\gamma-1}+\frac{B_{\phi}^{2}}{2}, \tag{6}\]
where \(\gamma\) is the ratio of specific heats and was taken to be \(5/3\).
The variables \(g_{\rm s}\) and \(A\) represent gravitational acceleration along the field line and the cross-section of the flux tube, respectively. These variables depend on the shape of the flux tube, and we adopted the same shape as used in Moriyasu et al. (2004). The field line below the chromosphere is significantly inclined from the vertical direction, leading to p-mode leakage.
Equation 5 includes source terms accounting for radiation, thermal conduction, gravity, and torque. For radiation, we employed a composite model that considers both optically thick and thin radiative loss mechanisms (Shoda & Takasao, 2021). The radiative loss functions from optically thin plasma were estimated via CHIANTI, assuming photospheric element abundances. We did not account for the feedback of NEI in the radiative loss function, although it has been suggested that the impact of NEI on the radiative loss function is projected to be at most 5% in dense atmospheres after solar flares (MacNeice et al., 1984) or at most a factor of 2 to 4 in coronal loops (Mariska et al., 1982; Hansteen, 1993).
We selected the Spitzer-type thermal conduction. To reduce numerical diffusion near the lower transition region, we implemented a broadening technique for temperatures below 0.15 MK, denoted as \(T_{\rm c}\)(Lionello et al., 2009). This technique involves adjustments to both the thermal conduction coefficient and radiative cooling. By implementing this technique, the width of the transition region below \(T<T_{\rm c}\) will be broadened by a factor of \(\sim(T_{\rm c}/T)^{5/2}\). Consequently, we achieved a reduction in the relative temperature difference between adjacent grid points, \(\Delta\ln T\), to less than 5% for the current grid size, thereby ensuring the allocation of more than 20 grid points for the transition region. We will discuss the limitations of these modifications in subsequent sections.
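A minimal sketch of this broadening prescription (after Lionello et al. 2009): below \(T_{\rm c}\) the conductivity stops decreasing and the optically thin loss is scaled by \((T/T_{\rm c})^{5/2}\) so that their product is preserved. The Spitzer coefficient below is an approximate cgs value and an assumption of this sketch.

```python
import numpy as np

KAPPA0 = 1.0e-6      # Spitzer coefficient [erg cm^-1 s^-1 K^-7/2], approximate
T_C = 1.5e5          # broadening threshold, 0.15 MK

def kappa_modified(T):
    """Thermal conductivity: Spitzer above T_c, floored at kappa0*T_c^{5/2} below."""
    T = np.asarray(T, dtype=float)
    return KAPPA0 * np.where(T >= T_C, T, T_C) ** 2.5

def radloss_modified(T, Lambda):
    """Optically thin loss Lambda(T), reduced by (T/T_c)^{5/2} below T_c."""
    T = np.asarray(T, dtype=float)
    return np.where(T >= T_C, Lambda, Lambda * (T / T_C) ** 2.5)
```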
The amplitude of the torque at the footpoints is denoted as \(L_{\rm trq}\). This torque was enforced only at the footpoints, and its amplitude was determined such that the root mean square of the toroidal velocity amplitude reached \(\sim\) 1 km s\({}^{-1}\). This velocity amplitude is consistent with the horizontal velocities in the granular cells (van Ballegooijen et al., 1998; Matsumoto & Kitai, 2010; Chitta et al., 2012; Oba et al., 2017).
We adopted an approximated equation of state in our model to include the effect of a change in molecular weight (Matsumoto & Suzuki, 2014). Although this slightly modified the atmospheric structure below the chromosphere (\(T<10^{4}\) K) compared to constant molecular weight, the effects on the transition region and the corona should be subtle.
We investigated the impact of NEI for the elements C, N, O, Si, and S. In this regard, we solved a series of NEI equations for these elements, expressed as follows:
\[\frac{\partial N_{\rm i}}{\partial t}+\nabla\cdot\left(N_{\rm i}\mathbf{v} \right)=N_{\rm e}\left(S_{\rm i-1}N_{\rm i-1}+\alpha_{\rm i+1}N_{\rm i+1}- \left(S_{\rm i}+\alpha_{\rm i}\right)N_{\rm i}\right), \tag{7}\]
where \(N_{\rm i}\) denotes the number density of a specific element in the ith ionization stage while \(N_{\rm e}\) indicates the electron number density. The coefficients on the right-hand side, \(S_{\rm i}\) and \(\alpha_{\rm i}\), indicate temperature-dependent ionization and recombination rate coefficients obtained from CHIANTI atomic database 10.0.2 (Del Zanna et al., 2021). To focus solely on the effects of NEI, we assumed a constant electron density of \(5\times 10^{9}\) cm\({}^{-3}\) when calculating the coefficients, although
the density dependence removes some discrepancies between theory and observations on Li- and Na-like ions (Dufresne et al., 2023). Note that we did not include the feedback effects on MHD variables through the radiative cooling function.
We developed an original MHD code capable of conducting precise and stable simulations, even in the low beta region surrounding the transition region. To calculate numerical flux, we employed the HLL-approximated Riemann solver (Einfeld et al., 1991). Furthermore, conservative variables were reconstructed in each cell using a 3rd order weighted essentially non-oscillatory (WENO) scheme and subsequently integrated in time through the 3rd order Arbitrary Derivative Riemann Problem (ADER) scheme (Balsara et al., 2009). Considering that the time scale of thermal conduction is generally much shorter than that of dynamics, we adopted an operator split method, implicitly solving thermal conduction using the super time-stepping method (Meyer et al., 2012). Because the shortest time scale in equations (7) is often smaller than dynamical time scales, we implicitly solved the right-hand side of equations (7).
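The implicit treatment of the ionization/recombination source term can be sketched as a backward-Euler step applied to the tridiagonal rate matrix of equations (7); the advection term is handled separately by the MHD solver (operator splitting), and the rate arrays here are placeholders for the CHIANTI coefficients.

```python
import numpy as np

def nei_implicit_update(n_ion, S, alpha, n_e, dt):
    """One backward-Euler step of the local ionization balance, Eq. (7),
    for the k = Z+1 charge states of one element.

    S[i], alpha[i] are ionization/recombination rate coefficients of stage i
    (S of the highest stage and alpha of the lowest stage must be zero).
    """
    k = len(n_ion)
    M = np.zeros((k, k))                 # tridiagonal rate matrix
    for i in range(k):
        M[i, i] = -(S[i] + alpha[i])
        if i > 0:
            M[i, i - 1] = S[i - 1]       # ionization from the stage below
        if i < k - 1:
            M[i, i + 1] = alpha[i + 1]   # recombination from the stage above
    A = np.eye(k) - dt * n_e * M         # (I - dt Ne M) n_new = n_old
    return np.linalg.solve(A, np.asarray(n_ion, dtype=float))
```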
We applied the same boundary and initial conditions as outlined in Antolin & Shibata (2010). In brief, the initial conditions maintained hydrostatic equilibrium up to a height of 2 Mm, above which an artificially denser atmosphere was assumed to avoid a severe CFL constraint. The numerical domain spanned 103 Mm, including 2 Mm of subphotospheric layers at both ends to mitigate artificial oscillations arising from the boundaries. Grid spacing started at 10 km for the first 16 Mm from both boundaries and gradually increased to 100 km in the central region. We initially computed the entire evolution of coronal loops using this coarse grid spacing. Subsequently, we concentrated on three specific 20-minute time intervals and conducted additional calculations with finer grid sizes, reaching down to 1.25 km near the transition regions. This approach enabled us to reduce computational costs while effectively resolving the narrow transition region.
## 3 Results and discussions
In this study, we performed 1.5-dimensional MHD simulations of a coronal loop model subjected to Alfven wave heating. Simultaneously, we solved the NEI equations for selected elements. Subsequently, by tracking the dynamic evolution of temperature, electron density, and ionization fractions, we calculated the emergent intensity employing the CHIANTI database. Our results unveil that specific Li- and Na-like ions display higher intensities when contrasted with the predictions based on the coronal approximation.
### Properties of coronal model
The system underwent both the evaporation (\(t<\) 200 min) and condensation (\(200<t<\) 500 min) phases before attaining a quasi-steady state (\(t>\) 500 min) (Fig. 1). The coronal mass in Fig. 1(b) was normalized by the cross-sectional area at the base, denoted as \(A_{0}\), and defined as \(\int\rho A/A_{0}ds\), where integration was performed over the region where the temperature exceeded \(10^{5}\) K. In the quasi-steady state, the model reproduced a warm loop with an apex temperature of approximately 1.1 MK, featuring sharp transition regions between the chromosphere and the corona (the minimum temperature scale height of \(\sim\) 70 km on average). The average electron density at the apex was \(6.2\times 10^{8}\) cm\({}^{-3}\), while the average length of the coronal loop was 79 Mm. These characteristics aligned with those of warm loops that have been extensively studied since the work of Aschwanden et al. (1999).
Two significant dynamics pertaining to the transition region are noteworthy: the collision of shocks and the evaporation/condensation processes. Firstly, the transition region exhibited vertical motion in response to the interaction with MHD shocks emanating from the photosphere (see Fig. 2). The typical altitude of the transition region was approximately 9 Mm, with temporal variations of up to 3 Mm. These motions have previously been interpreted as spicule motion (Hollweg et al., 1982; Kudoh & Shibata, 1999). These spicular dynamics did not change significantly throughout the calculation after the formation of the hot corona. Secondly, the shock-induced heating coincided with evaporation and condensation processes (see Fig. 1b), resulting in the circulation of material between the corona and the chromosphere. These phenomena contributed significantly to the dynamic ionization and recombination processes occurring within the region, making them pivotal factors when examining emission originating from the transition region. Subsequent subsections delve into these aspects in greater detail.
### Ionization fractions in the evaporation phase
Significant departures from ionization equilibrium were observed in the middle of the corona as well as near the transition region. Figure 3 presented a snapshot of the ionization fractions of C ions during the evaporation phase. The spatial distribution of ionization fractions exhibited two notable features. First, in the middle corona, the spatial profile of ionization fractions appeared considerably smoother than that in the equilibrium state (Fig. 3a). This phenomenon can be attributed to the longer ionization time scale, which prevents the ionization fraction from promptly responding to temperature fluctuations induced by shocks and acoustic waves.
Figure 1: Temporal evolution of (a) the maximum temperature and (b) the coronal mass normalized by the cross sectional area at the photosphere.
Previous simulations with time-dependent heating rates have demonstrated that NEI effects can arise due to temperature fluctuations, even in the absence of mass motion (Mariska et al., 1982). Observations have also indicated that the variability time scales are often constrained by ionization processes, regardless of the underlying atmospheric dynamics (Golub et al., 1989). Second, the peak of the ionization fraction for Li- and Na-like ions (e.g., C iv in Fig. 3b) was frequently displaced to higher altitudes. Consequently, the distributions of C iii and C v were also shifted to higher altitudes.
The aforementioned deviations in the transition region were also evident in the probability distribution function (PDF) of the ionization fraction in temperature space (Fig. 4). Due to the effects of NEI, the ionization fraction did not solely depend on the local temperature, leading to a certain distribution with finite width at a specific temperature. We discretized the temperature space into bins of \(\Delta\log T=0.05\) and computed PDFs, denoted as \(F_{\rm i}\), for each temperature bin as follows:
\[F_{\rm i}(T;x)=P(T;n_{\rm i}\leq x), \tag{8}\]
where \(i\) serves as the index for ion species and \(P(n_{\rm i}\leq x)\) represents the probability that a particular ionization fraction, \(n_{\rm i}\), is smaller than \(x\) at temperature \(T\). To estimate the PDF in the evaporation phase, we used the data between \(t=135\) and \(t=145\) min. The solid lines in Fig. 4 depict the 1-sigma ranges of the PDF (\(F_{\rm i}\in[0.17,0.83]\)), whereas the dashed lines illustrate the ionization fraction in the equilibrium state. An over-fraction and a shift toward higher temperature were found for C iv, while the other C ions almost followed the ionization fraction in equilibrium.
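The per-temperature-bin bands can be computed from the simulation output as in the sketch below; the bin width and percentile levels follow the text, while the variable names and the example bin edges are illustrative.

```python
import numpy as np

def fraction_band(logT, frac, bin_edges, levels=(0.17, 0.83)):
    """Percentile band of an ionization fraction per temperature bin, analogous to eq. (8).
    logT, frac: flattened log10(T) and NEI-fraction samples over the chosen time window;
    returns (lower, upper) arrays with one value per bin."""
    lower = np.full(len(bin_edges) - 1, np.nan)
    upper = np.full(len(bin_edges) - 1, np.nan)
    which = np.digitize(logT, bin_edges) - 1
    for b in range(len(bin_edges) - 1):
        vals = frac[which == b]
        if vals.size:
            lower[b], upper[b] = np.quantile(vals, levels)
    return lower, upper

# Delta log T = 0.05 bins spanning, e.g., 10^4 - 10^6.5 K
edges = np.arange(4.0, 6.5 + 0.05, 0.05)
```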
The enhancement of the C iv fraction can be primarily attributed to evaporation from the chromosphere to the corona. In Figure 5, we depicted the evolution of a plasma parcel co-moving with the fluid to trace its temperature, electron density, position, and ionization fractions. From \(t=137.5\) min, the ionization fraction of C iv increased as the plasma temperature rose to its formation temperature. Starting at \(t=142.5\) min, the plasma parcel evaporated to coronal temperatures within 40 sec, and the ionization fraction then started to decrease.
Figure 4: Ionization fractions of C ions as a function of temperature. Solid lines depict 1 sigma intervals of PDF of the NEI fractions. Dashed lines represent the fractions in ionization equilibrium.
Figure 3: (a) A snapshot of ionization fractions of C ions as a function of \(s\) for the entire loop. (b) Zoomed-in view of (a) focused on the transition region. Solid lines depict the NEI fractions, while dashed lines represent the ionization fraction in equilibrium state.
Figure 2: Time-distance diagram of (a) temperature and (b) velocity along the field line. The black solid line in panel (b) indicates the contour at \(T=10^{9}\) K that represents the transition region.
Because the evaporation time scale of \(\sim 40\) sec was comparable to the recombination time scale, the ionization fraction stayed larger than that in equilibrium during the evaporation process. A similar over-fractionation can be found in the steady solution of a siphon flow (Noci et al., 1989; Spadaro et al., 1990). The over-fraction of C iv is also seen in the evaporation phase of nanoflare-heated loops (Hansteen, 1993). Throughout the evaporation process, this plasma parcel encountered shocks at \(t\sim 140\) min and 140.8 min. Upon interaction with the shocks, the C iv ionization fraction increased abruptly. Although these collisions occurred within a 10-second timeframe, which is shorter than the ionization time scales, the C iv ionization fraction closely tracked the equilibrium fraction. Considering that C iii and C v displayed over- and under-fractions, respectively, the C iv ionization fraction was likely maintained by an enhancement of ionization and a reduction in recombination during the shock passage. Nanoflare-generated waves entering the transition region from the corona could also modify the ionization fraction (Hansteen, 1993), although we did not observe this phenomenon, probably because our model lacks nanoflares.
### Ionization fractions in the condensation and quasi-steady phase
During the condensation phase, the C iv ions exhibited, on average, an under-fraction. Figure 6 displayed the pertinent physical characteristics of the condensing plasma parcel. Due to the finite duration required for the recombination process from C v to C iv, which exceeded the condensation timescale, the ionization fraction of C iv remained below that of the equilibrium state. This under-fraction of C iv is qualitatively consistent with the simulation by Hansteen (1993) during the condensation phase in a nanoflare-heated loop. Moreover, it was observed that the plasma parcel experienced a collision with a shock at \(t\sim 311\) min, during which the C iv ionization fraction once again closely followed the equilibrium fraction, as was noted during the evaporation phase. Despite the plasma undergoing repeated evaporation (e.g., at \(t\sim 307.8\) min) and condensation episodes during the whole condensation phase, the average ionization fraction of C iv was consistently smaller than that in the equilibrium state.
While there were fluctuations in the deviations from equilibrium, the average ionization fraction of C iv remained nearly consistent with that of the equilibrium state during the quasi-steady phase.
### Synthetic UV intensity
The synthetic UV intensities obtained using the coronal approximation were generally smaller than those derived with NEI by a maximum of 40% during the evaporation phase (Table 1). The UV intensity computation under the coronal approximation is governed by the following formula:
\[I(\lambda)=\int Ab(X)C(T,\lambda,N_{\rm e})N_{\rm e}N_{\rm H}ds. \tag{9}\]
Here, \(Ab(X)\) represents the abundance of element \(X\), assuming photospheric abundances. The function \(C(T,\lambda,N_{\rm e})\) corresponds to the contribution function of each spectral line, calculated using the CHIANTI software. For NEI plasma intensities, we modified the integrand in eq. (9) by multiplying it by the ionization fraction ratio, \(N_{\rm i;NEI}/N_{\rm i;EI}\). Among the computed spectral lines, we specifically highlighted those with a ratio \(I_{\rm EI}/I_{\rm NEI}\) less than 0.8 and an intensity greater than 10 erg cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\). While the effect of NEI significantly enhanced the intensity through the over-fraction during the evaporation phase, our model still exhibited discrepancies between \(R\) (defined as \(I_{\rm EI}/I_{\rm NEI}\)) and \(I_{\rm H}/I_{\rm obs}\), as observed in Judge et al. (1995) and Dufresne et al. (2023). The under-fraction observed during the condensation and quasi-steady phases resulted in ratios of at most \(R\sim 1.1\).
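A minimal sketch of how eq. (9) and its NEI-corrected counterpart can be evaluated along the loop is shown below; the contribution function is assumed to have been interpolated from a CHIANTI table onto the grid beforehand, and all names are illustrative.

```python
import numpy as np

def line_intensity(ab, contrib, n_e, n_H, ds, nei_ratio=None):
    """Synthetic line intensity following eq. (9).
    ab: elemental abundance; contrib: contribution function C(T, lambda, N_e) on the grid;
    n_e, n_H: electron and hydrogen number densities; ds: path-length elements;
    nei_ratio: optional N_i,NEI / N_i,EI correction factor per grid point."""
    integrand = ab * contrib * n_e * n_H
    if nei_ratio is not None:
        integrand = integrand * nei_ratio
    return np.sum(integrand * ds)

# Ratio listed in Table 1:
# R = line_intensity(ab, contrib, n_e, n_H, ds) / line_intensity(ab, contrib, n_e, n_H, ds, nei_ratio)
```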
### The effect of broadening technique of the transition region
In our study, we implemented a numerical broadening technique for the transition region to mitigate numerical diffusion effects near the lower transition region, which was identified as a factor reducing the impact of NEI. To investigate the impact of this technique, we conducted simulations with a broader transition region, specifically setting \(T_{\rm c}=0.2\) MK. The results of this simulation revealed an increase in the ratio \(R\), such as an increase from 0.6 to 0.8 for the C iv 384.17 Å line, and similar increases were observed for other Li- and Na-like ions. This increase in the intensity ratio can be attributed to the broadening of the lower transition region, which results in a longer dynamical time scale for traversing the layer compared to realistic conditions, thus potentially reducing the influence of NEI. Consequently, we consider the intensity ratio obtained under our parameter settings to represent an upper limit in this context.
### Mass circulation
The magnitude of the over-fraction evolved through the mass circulation process occurring between the corona and the chromosphere, as our model has unveiled distinct ionization fractions across the various phases.
\begin{table}
\begin{tabular}{l r r r r r} \hline Ion (Seq) & \(\lambda\) [Å] & \(I_{\rm EI}\) & \(I_{\rm NEI}\) & \(R\) & \(I_{\rm H}/I_{\rm obs}\) \\ \hline
C iv (Li) & 1548.19 & 3452.0 & 4328.2 & 0.8 & 0.197\({}^{a}\), 0.31-1.28\({}^{b}\) \\
C iv (Li) & 1550.7 & 1725.0 & 2162.2 & 0.8 & 0.34-1.01\({}^{b}\) \\
C iv (Li) & 384.17 & 27.0 & 43.0 & 0.6 & \\
C iv (Li) & 312.42 & 19.4 & 30.8 & 0.6 & \\
C iv (Li) & 419.71 & 25.1 & 39.2 & 0.6 & \\
C iv (Li) & 384.03 & 15.0 & 23.9 & 0.6 & \\
N v (Li) & 1238.82 & 417.2 & 599.7 & 0.7 & 0.198\({}^{a}\), 0.28\({}^{b}\) \\
N v (Li) & 1242.80 & 208.6 & 299.7 & 0.7 & 0.17-0.40\({}^{b}\) \\
N v (Li) & 247.71 & 10.5 & 15.6 & 0.7 & \\
N v (Li) & 209.27 & 7.2 & 10.7 & 0.7 & \\
N v (Li) & 266.38 & 7.5 & 11.4 & 0.7 & \\
O vi (Li) & 1031.91 & 3586.7 & 4675.5 & 0.8 & 0.524\({}^{a}\), 0.5-0.71\({}^{b}\) \\
O vi (Li) & 1037.61 & 1785.8 & 2326.5 & 0.8 & 0.41-0.76\({}^{b}\) \\
O vi (Li) & 173.08 & 143.9 & 185.6 & 0.8 & \\
O vi (Li) & 184.12 & 95.9 & 125.5 & 0.8 & \\
O vi (Li) & 172.94 & 80.1 & 103.4 & 0.8 & \\
O vi (Li) & 183.94 & 48.4 & 63.3 & 0.8 & \\
Si iv (Na) & 1128.34 & 33.5 & 41.7 & 0.8 & 0.16-0.19\({}^{b}\) \\
Si iv (Na) & 1122.48 & 18.8 & 23.4 & 0.8 & \\
S vi (Na) & 933.38 & 120.4 & 163.6 & 0.7 & 0.31-0.59\({}^{b}\) \\
S vi (Na) & 944.52 & 60.0 & 81.5 & 0.7 & 0.58-1.15\({}^{b}\) \\
S vi (Na) & 712.67 & 8.2 & 11.2 & 0.7 & 0.4\({}^{b}\) \\ \hline \end{tabular}
Note. Ion - emitting ion; Seq - isoelectronic sequence of the ion; \(\lambda\) - wavelength; \(I_{\rm EI}\) and \(I_{\rm NEI}\) - predicted intensities in EI and NEI [erg cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\)]; \(R\) - ratio of \(I_{\rm EI}\) to \(I_{\rm NEI}\); \(I_{\rm H}/I_{\rm obs}\) - ratio of theoretically predicted to observed intensity (\({}^{a}\) Judge et al. (1995); \({}^{b}\) Dufresne et al. (2023)).
\end{table}
Table 1: Comparison of intensities from ionization fractions in equilibrium and non-equilibrium during the evaporation phase.
Our model disclosed that evaporation occurred for \(t<200\) min, followed by the onset of condensation until approximately \(t\sim 500\) min. The duration of these mass cycles could be partly affected by the NEI effect on the radiative cooling function, which is suggested to increase the cooling time scale of the coronal loop (Bradshaw & Mason, 2003), if the feedback effects were included. Subsequently, the system maintained a quasi-steady state, at least until \(t=800\) min, though it is worth noting that the continuity of this quasi-steady state beyond that point is not guaranteed. Similar models with significantly extended computational durations have elucidated the cyclic evolution of coronal loops (Washinoue & Suzuki, 2019), a phenomenon that could be attributed to the thermal instability inherent in coronal loops (Kuin & Martens, 1982). While providing a quantitative estimate for the amplitude of the over-fraction remains challenging, it is reasonable to anticipate that the ratios \(R\) found during the evaporation phase (as detailed in Table 1) would increase when the condensation phase is taken into consideration.
## 4 Conclusions
In this study, we have conducted 1.5-dimensional MHD simulations of Alfven-wave-heated coronal loops while simultaneously solving a series of NEI equations for several ion species. After introducing the Alfvenic fluctuations at the footpoints of the loop, the system experiences the evaporation and condensation phases before it reaches the quasi-steady state. During the evaporation phase, the over-fractionation of Li- and Na-like ions results in intensity ratios, \(R\), as low as 0.6. Conversely, during the condensation and quasi-steady phases, the under-fractionation leads to \(R\) values as high as 1.1. These pronounced fluctuations in ionization fractions are primarily attributed to the processes of evaporation and condensation between the corona and the chromosphere. Interestingly, collisions with shocks do not drive the C iv ion fraction significantly away from ionization equilibrium.
While our model has successfully demonstrated a reduction in the intensity ratios \(R\) during the evaporation phase, a noticeable gap between the model and observational data still persists. Bridging this gap necessitates further investigations into atomic physics aspects, such as density effects, photoionization, and charge transfer, which collectively have the potential to align the theoretically predicted UV intensities of transition region spectral lines with observations (Dufresne et al., 2023). Furthermore, the consideration of first ionization potential effects may play a crucial role in narrowing the discrepancy between our model and observations, particularly in the case of Si iv. This hypothesis has been discussed in prior studies and has the potential to explain anomalous behaviors in the line ratio between Si iv and O iv (Olluri et al., 2015; Martinez-Sykora et al., 2016).
The observed over-fractionation of Li- and Na-like ions holds significant scientific interest, as it may serve as an indicator of mass motions closely linked to coronal heating mechanisms and mass loss processes. This phenomenon gains relevance in the context of the wealth of supporting evidence for impulsive heating events that drive the cycle of evaporation and condensation in the solar atmosphere (Klimchuk, 2006). Consequently, the extent of over-fractionation could potentially offer valuable constraints on the amplitude and frequency of such impulsive heating events (Bradshaw & Klimchuk, 2011).
Figure 5: (a) Temperature (dashed line), electron density (dotted line), position (solid line), and (b) ionization fractions of a traced particle in the evaporation phase. The solid and the dotted lines in panel (b) indicate the ionization fraction in non-equilibrium and equilibrium, respectively.
Furthermore, it is worth noting that the solar wind is known to expand the UV-observed solar atmosphere via the effects of NEI (Neupert, 1964; Mariska et al., 1978; Dupree et al., 1979). Consequently, the degree of over-fraction could be considered a metric for mass-loss processes in the Sun. Importantly, anomalies in the ionization states of Li- and Na-like ions have also been identified in stellar atmospheres (Del Zanna et al., 2002), suggesting that observations of these ions may provide valuable insights into the heating and mass-loss mechanisms of other stars.
Multi-dimensional simulations that incorporate NEI effects are crucial for conducting a quantitative comparison between theoretical models and complex observational data (Olluri et al., 2013, 2015; Martinez-Sykora et al., 2016). However, such simulations remain a formidable challenge, even with the current computational resources at our disposal. This challenge arises from the necessity to accurately resolve the narrow transition region when solving the advection equations for Li- and Na-like ions. The width of the transition region where those ions form, determined by the Field length \(\sqrt{\kappa T/|\rho\,\mathcal{L}|}\) (Begelman & McKee, 1990), sometimes decreased to \(\sim 20\) km at \(T\sim 10^{5}\) K in our simulation. Notably, this width is broadened by a factor of \((T_{\rm c}/10^{5}\,{\rm K})^{5/2}\sim 2.8\) by our broadening technique, implying a realistic width of approximately 7 km without the broadening. To achieve a high-resolution representation of this thin layer, which typically requires at least 10 grid points to mitigate numerical diffusion, it would be necessary to employ adaptive mesh refinement schemes, even in one-dimensional simulations (Bradshaw & Mason, 2003).
The extent of over-fractionation is likely contingent on the mass flux or the rate of mass exchange between the corona and the chromosphere. However, our current model does not comprehensively explore this particular parameter. To conduct a thorough investigation of these parameters, it would be advantageous to employ time-steady solutions (Noci et al., 1989; Spadaro et al., 1990; Gilly & Cranmer, 2020). By deriving the over- and under-fractionation behaviors of Li- and Na-like ions as functions of mass flux, we could potentially establish valuable constraints on coronal heating mechanisms and mass loss rates. These constraints would be particularly informative when compared with UV observations in the future.
## Acknowledgements
This work was supported by JSPS KAKENHI Grant Number JP23K03456. This study was carried out using the computational resources of the Center for Integrated Data Science, Institute for Space-Earth Environmental Research, Nagoya University, through the joint research program. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), the University of Cambridge (UK), and NASA Goddard Space Flight Center (USA).
## Data Availability
The data underlying this article will be shared upon reasonable request to the corresponding author.
|
2303.17809 | Never a Dull Moment: Distributional Properties as a Baseline for
Time-Series Classification | The variety of complex algorithmic approaches for tackling time-series
classification problems has grown considerably over the past decades, including
the development of sophisticated but challenging-to-interpret
deep-learning-based methods. But without comparison to simpler methods it can
be difficult to determine when such complexity is required to obtain strong
performance on a given problem. Here we evaluate the performance of an
extremely simple classification approach -- a linear classifier in the space of
two simple features that ignore the sequential ordering of the data: the mean
and standard deviation of time-series values. Across a large repository of 128
univariate time-series classification problems, this simple distributional
moment-based approach outperformed chance on 69 problems, and reached 100%
accuracy on two problems. With a neuroimaging time-series case study, we find
that a simple linear model based on the mean and standard deviation performs
better at classifying individuals with schizophrenia than a model that
additionally includes features of the time-series dynamics. Comparing the
performance of simple distributional features of a time series provides
important context for interpreting the performance of complex time-series
classification models, which may not always be required to obtain high
accuracy. | Trent Henderson, Annie G. Bryant, Ben D. Fulcher | 2023-03-31T05:55:54Z | http://arxiv.org/abs/2303.17809v1 | # Never a Dull Moment: Distributional Properties as a Baseline for Time-Series Classification
###### Abstract
The variety of complex algorithmic approaches for tackling time-series classification problems has grown considerably over the past decades, including the development of sophisticated but challenging-to-interpret deep-learning-based methods. But without comparison to simpler methods it can be difficult to determine when such complexity is required to obtain strong performance on a given problem. Here we evaluate the performance of an extremely simple classification approach--a linear classifier in the space of two simple features that ignore the sequential ordering of the data: the mean and standard deviation of time-series values. Across a large repository of 128 univariate time-series classification problems, this simple distributional moment-based approach outperformed chance on 69 problems, and reached 100% accuracy on two problems. With a neuroimaging time-series case study, we find that a simple linear model based on the mean and standard deviation performs better at classifying individuals with schizophrenia than a model that additionally includes features of the time-series dynamics. Comparing the performance of simple distributional features of a time series provides important context for interpreting the performance of complex time-series classification models, which may not always be required to obtain high accuracy.
Time series, Time-series classification, Benchmarking.
## 1 Introduction
Time-series classification is a key problem in the sciences and industry wherein time-varying data is used to distinguish labeled classes. The quantity and diversity of time-series classification algorithms is large and increasing, from simple linear decision boundaries in interpretable feature spaces [1] to complex methods based on deep neural networks, such as long short-term memory networks [2]. Complex new algorithms can yield impressive classification accuracy on challenging problems, but often at the expense of clear human interpretability [3].
The UEA/UCR univariate time-series classification repository, which currently contains 128 problems spanning a variety of domains [4], has been crucial for encouraging transparent reporting of the relative strengths and weaknesses of classification algorithms. It has also enabled the field to overcome key limitations in prior standard practice of reporting and comparison, including avoiding cherry-picking of optimistic datasets when reporting algorithm performance [5]. Systematic comparisons of time-series classification algorithms across this database have been essential for benchmarking the accuracy of state-of-the-art algorithms [6]. However, in settings ranging from policy-making [7] to healthcare [8], deriving interpretable understanding that can guide subsequent decision-making can be more important than raw classification accuracy. In such settings, simpler methods that are faster to train and clearer to interpret, are often preferred.
Recent work has shown that parsimonious and interpretable methods can match (or even outperform) more sophisticated ones, in settings ranging from sleep-stage classification [9] to earthquake detection [10]. A particularly striking example is the recent demonstration of a two-parameter logistic regression model with equivalent performance in earthquake aftershock forecasting to a deep neural network with thousands of parameters [11]. For such settings, in which simple methods can perform well, the choice to instead use overly complex models risks over-fitting, leading to poor generalizability on out-of-sample data. This effect has been demonstrated in a recent meta-analysis of deep
learning models for classifying autism spectrum disorder from resting-state neuroimaging data, which exhibited inferior performance to linear support vector machine (SVM) models on unseen data [12].
Defaulting to complex classification models and comparing their performance to chance-level accuracy--or even other complex methods--can thus result in classifiers that are over-complicated and difficult to interpret. This issue is well-illustrated by the time-series classification task of distinguishing 'epilepsy' from 'eyes-open' states from electroencephalogram (EEG) data [13]. While state-of-the-art approaches had applied complex algorithmic approaches, ranging from independent components analysis and discrete wavelet transforms to multi-layer neural networks [14], it was found that the time-series standard deviation alone could completely separate the two classes [15]. The strong performance of a simple threshold classifier on standard deviation thus suggests it as an interpretable and parsimonious alternative to overly complicated algorithmic approaches to this problem, highlighting the utility of starting with a simple and parsimonious model and building in complexity only when it yields clearly demonstrable benefits.
Given that the characteristic complexity of time-series classification tasks (compared to classification using non-sequential data) relates to the challenge of quantifying class-informative dynamical patterns, here we aimed to investigate the performance of an extremely simple benchmark: a linear classifier in the two-dimensional space of two extracted features that ignore the sequential ordering entirely: the mean and standard deviation. As these features characterize the distribution of time-series values (and are thus unrelated to the sequential ordering), high performance of this simple benchmark indicates problems for which trivial properties of the distribution are already sufficient to perform well, undermining the need for more complex approaches that aim to quantify informative temporal patterns [16]. While some previous work [1, 17] has \(z\)-scored all time series prior to analysis--thereby insuring mean and variance are uninformative, to focus on dynamical patterns--such normalization has not been applied consistently across all problems in the UEA/UCR Repository [6]. We also investigate the performance gains of additionally incorporating a simple set of time-series features that capture different types of dynamical patterns in the data, using the _catch22_ feature set [17]. None of these features are sensitive to the mean or standard deviation of the data; they instead capture more subtle properties of the time series, including properties of distributional shape; periodic patterns; temporal spacing of outliers; and linear and nonlinear autocorrelation [17]. Finally, to demonstrate the practical implications of our findings, we present a case study of classifying individuals with schizophrenia using functional neuroimaging time series.
## 2 Methods
This study aims to evaluate the performance of a simple baseline linear classifier for time series in the two-dimensional space of their mean (\(\mu\)) and standard deviation (\(\sigma\))--features related to the First Two Moments (FTM) of the distribution. We first discuss methods related to fitting the FTM and _catch22_ feature-based time-series classifiers across the UEA/UCR Repository [4] (in Sec. 2.1) and then describe specific methods related to our neuroimaging case study (in Sec. 2.2). Code for reproducing our results is available on GitHub1.
Footnote 1: [https://github.com/hendersontrent/mean-var-ts-classify](https://github.com/hendersontrent/mean-var-ts-classify)
### Feature-based classifier performance on the UEA/UCR Repository
In the UEA/UCR Time-Series Classification Repository [4], each of the 128 problems (as downloaded on 9 Feb 2023) is partitioned into designated train and test sets. These problems vary widely in the number of time series, the relative sizes of the train and test sets, time-series lengths, and the number of classes (see [4, 6] for details). For all time series, we computed the FTM feature set containing two features: the mean (\(\mu\)) and standard deviation (\(\sigma\)). For comparison to more sophisticated features of the time-series dynamics, we used the _catch22_ set of 22 time-series features [17]. Given a feature space, we fit and evaluated classification models following the resample-based procedure outlined in [6], using 30 resamples of train-test splits. In addition to the designated split, we generated 29 additional (seeded) resamples of the data that preserve the class proportions of the designated split, for a total of 30 train-test splits. Prior to fitting a linear SVM, features were normalized as a \(z\)-score by computing the mean and standard deviation of each feature in the train set, and using these values to rescale both the train and test sets (ensuring that test data was completely unseen). A linear SVM was fit on the train set and used to generate predictions for the test set. Classification accuracy was used as the performance metric, following prior work. To statistically compare the performance of FTM and the union set of 24 features from both _catch22_ and FTM (_catch22_ + FTM), we implemented a corrected test statistic [18] that adjusts the traditional \(T\)-statistic to account for the violation of the independence assumption incurred by resampling, using the _correctR_ package [19]. To compare the FTM and _catch22_ + FTM feature sets, we removed four problems (AllGestureWiimoteX, AllGestureWiimoteY, AllGestureWiimoteZ, and PLAID) due to the presence of time series that had \(<10\) real values before containing all or mostly missing values--the minimum threshold for calculations in _catch22_.
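A minimal sketch of this evaluation loop is shown below, assuming the time series of one problem are rows of an array `X` with labels `y`; for simplicity all 30 splits here are random stratified resamples, whereas in our procedure the first split is the designated one.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def ftm_features(X):
    """Mean and standard deviation of each time series (one row of X per series)."""
    return np.column_stack([X.mean(axis=1), X.std(axis=1)])

def resampled_accuracies(X, y, test_size, n_resamples=30, seed=0):
    """Linear SVM on the FTM features over stratified train/test resamples,
    normalising with train-set statistics only so the test data stays unseen."""
    feats = ftm_features(X)
    accs = []
    for r in range(n_resamples):
        Xtr, Xte, ytr, yte = train_test_split(
            feats, y, test_size=test_size, stratify=y, random_state=seed + r)
        scaler = StandardScaler().fit(Xtr)
        clf = SVC(kernel="linear").fit(scaler.transform(Xtr), ytr)
        accs.append(accuracy_score(yte, clf.predict(scaler.transform(Xte))))
    return np.array(accs)
```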
### Case study: Schizophrenia classification
As a case study, we investigated the performance of simple feature-based classifiers to distinguish adults with schizophrenia from cognitively healthy controls based on their whole-brain activity dynamics. We obtained resting-state functional magnetic resonance imaging (rs-fMRI) data from the University of California at Los Angeles Consortium for Neuropsychiatric Phenomics LA5c Study [20], which was pre-processed using the ICA-AROMA + 2P + GMR method, as described previously [21]. Blood oxygen level-dependent (BOLD) signals consisting of 152 time samples were extracted for each of 68 cortical [22] and 14 subcortical [23] regions per participant. Any participants with flatline time series across all brain regions after preprocessing were excluded, yielding a final sample of \(N=166\) participants (\(N=118\) control and \(N=48\) schizophrenia). The two groups differ in terms of age (Control \(=31.5\pm 8.8\) years; Schizophrenia \(=36.6\pm 9.0\) years; Wilcoxon rank sum test \(p<0.01\)) and sex (Control \(=46.6\%\) Female; Schizophrenia \(=25\%\) Female; \(\chi^{2}(1,N=166)=5.75\), \(p<0.05\)).
After feature extraction (using _theft_[24]), the dataset was in the form of an \(N\times R\times F\) matrix, for \(N\) subjects, \(R\) brain regions, and \(F\) features. We incorporated all combinations of \(82\) brain regions and features (either \(2\) FTM features or \(24\) FTM + _catch22_ features) as inputs to a regularized linear SVM classifier (from _scikit-learn_[25]), setting the regularization parameter \(\mathtt{C}=\mathtt{1}\), disabling the shrinking heuristic, and using balanced class weights. In other words, the FTM-only model had a total of \(2\times 82=164\) SVM input features and the FTM + _catch22_ model had a total of \(24\times 82=1968\) features.
Model performance was evaluated as balanced accuracy using \(10\)-fold cross-validation (CV) with 10 repeats (setting the random state such that all compared models received the same resampled sets of test folds). For each fold, the same \(z\)-score feature normalization was applied as described in Sec. 2.1. Balanced accuracy is reported as the mean \(\pm\) SD across the 10 repeats, where each repeat contains the mean balanced accuracy across the 10 CV folds. Statistical significance of a given balanced accuracy value was assessed using permutation testing with 1000 null samples obtained by shuffling class labels, pooling null samples across all combinations of regions and features for each feature set [26]. False positives were controlled at the \(\alpha=0.05\) level using Bonferroni correction.
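A sketch of the permutation-based significance test is shown below, assuming the null balanced accuracies have already been pooled into a single array; the one-sided p-value convention and the +1 smoothing are standard choices rather than details taken from the text.

```python
import numpy as np

def permutation_p_value(observed, null_samples):
    """One-sided p-value of an observed balanced accuracy against a pooled permutation null
    (class labels shuffled many times, null accuracies pooled across region/feature combinations)."""
    null_samples = np.asarray(null_samples)
    return (np.sum(null_samples >= observed) + 1) / (null_samples.size + 1)

def bonferroni_significant(p_values, alpha=0.05):
    """Family-wise error control at level alpha across the compared models."""
    p_values = np.asarray(p_values)
    return p_values < alpha / p_values.size
```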
## 3 Results
Results are structured as follows. First, in Sec. 3.1, we describe findings across the UEA/UCR Repository, where we demonstrate that the FTM classifier outperforms chance on the majority of problems. We then highlight the practical ramifications of these results using the example of a neuroimaging biomarker classification task in Sec. 3.2, in which the mean and standard deviation exhibit strong performance that is weakened by adding dynamical properties of the functional neuroimaging time series.
### Linear FTM classifier performance across the UEA/UCR Database
We first investigated how a linear classifier based on the two FTM features, \((\mu,\sigma)\), performs across the 128 problems from the UEA/UCR Repository. To determine the problems on which FTM beats chance, we calculated a \(p\)-value of the chance probability against the distribution of 30 resampled FTM accuracy values for each problem. We found that the FTM-based classifier statistically outperformed chance (\(p<0.05\)) on 69 of the 128 problems (i.e., on \(53.9\%\) of problems). FTM-based classification accuracies relative to chance level are plotted for these 69 problems in Fig. 1. We note that chance is often beaten by a substantial margin on these problems. Indeed, this simple classifier achieves 100% accuracy on two problems: InsectEPGRegularTrain and GunPointOldVersusYoung. These results demonstrate that, even on a repository devoted to time-series classification tasks, strong performance can be obtained using the two simplest distributional statistics that are unrelated to the sequential ordering of the data, due to the database containing problems in which labeled classes have distinctive levels (means) or scales (variances).
To better understand how the FTM classifier is behaving, we examined the two problems for which it achieved 100% accuracy. For GunPointOldVersusYoung, the classes are clearly distinguished in the \((\mu,\sigma)\) feature space (Fig. 2A), while for InsectEPGRegularTrain each class has a characteristic mean level (Fig. 2B). For the binary ('Young' vs 'Old') GunPointOldVersusYoung task, the time series is the \(x\)-axis coordinate of the centre of the actor's hand at each frame while moving it from rest position to a gun pose and back again. Figure 2A shows that the 'Young' actor exhibits characteristically lower mean and variation of their hand coordinate, consistent with them being shorter and having a shorter arm than the 'Old' actor. It is worth noting that, in contrast to features of the time-series dynamics (many of which are invariant to linear rescalings of the time-series values), \(\mu\) and \(\sigma\) are highly sensitive to the calibration of experimental measurement. For the GunPoint collection of problems, this makes these features less likely to generalize well to new examples, with small differences in the angle of the camera used to measure hand coordinates, or the distance from camera to actor, strongly affecting differences in \(\mu\) and \(\sigma\) and hence accurate classification [27].
Figure 1: **Mean and standard deviation as features in a linear SVM classifier statistically outperforms chance-level accuracy on 69 of 128 problems in the UEA/UCR time-series repository. Classification accuracy (\(\%\)) is displayed along the horizontal axis and the name of each problem is shown on the vertical axis. Chance accuracies are displayed as black crosses. Points indicate mean classification accuracy and error bars show \(\pm 1\) standard deviation (across train–test resamples).**
In InsectEPGRegularTrain, each class has a characteristic mean voltage level of the electrical circuit that connects insects with their food source, allowing the classes to be accurately distinguished by ignoring sequential patterns and simply focusing on this mean voltage.
Finally, we aimed to investigate the extent to which measuring the _catch22_ features in addition to the FTM features would improve performance. We compared the performance of the FTM feature set (2 features) to that of the _catch22_ + FTM feature set (24 features) across the 124 problems where _catch22_ features could be successfully calculated (four problems were excluded due to a high number of missing values, cf. Methods). Adding _catch22_ features resulted in an average absolute improvement in classification accuracy of \(33.7\%\) across the 124 problems, confirming the general importance of capturing dynamical properties for time-series classification problems. However, pairwise comparisons using the corrected test statistic revealed that there was no statistical difference between FTM and _catch22_ + FTM on \(45\) of the \(124\) (or \(36.4\%\) of) problems. This demonstrates that for many problems, simple distributional properties yield a surprisingly strong baseline against which to assess the benefits gained by using more complex approaches.
### Neuroimaging Biomarker Case Study
We next extended our investigation of simple baseline time-series classifiers to a schizophrenia classification problem using rs-fMRI time series, a setting in which complex, opaque classifiers are common [28, 29]. Given the strong performance of simple distributional statistics for some time-series classification problems above, we evaluated the performance of FTM alone (and with the addition of the _catch22_ feature set) in classifying individuals with schizophrenia from healthy controls.
As shown in Fig. 3, the model based on FTM features displayed a high balanced accuracy (\(67.5\pm 2.2\%\)), which sits well within the range of a recent meta-analysis of schizophrenia classification studies using rs-fMRI [30]. Adding _catch22_ features to the model decreased its performance to \(63.6\pm 1.0\%\) (corrected resampled \(T\)-statistic = \(3.63\), \(p=0.003\)). Consistent with the findings on some problems in the UEA/UCR Repository above, these results demonstrate the surprisingly strong performance of basic properties of the distribution of time-series values for this fMRI classification task. Surprisingly, for this rich and complex whole-brain time-series dataset, incorporating information about temporal patterns in the data (using _catch22_) yields models with inferior performance to the simple two-dimensional model based on mean and standard deviation.
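For context, the corrected resampled \(T\)-statistic quoted above can be computed from the per-split performance differences; the sketch below shows the standard Nadeau and Bengio style variance correction that packages such as _correctR_ implement, with the train/test proportions supplied by the caller (the exact variant used for repeated \(k\)-fold CV may differ in detail).

```python
import numpy as np
from scipy import stats

def corrected_resampled_t(diffs, n_train, n_test):
    """Corrected resampled t-test for per-split performance differences between two models
    evaluated on the same train/test resamples; corrects the variance for split overlap."""
    diffs = np.asarray(diffs, dtype=float)
    k = diffs.size
    mean_diff = diffs.mean()
    var_diff = diffs.var(ddof=1)
    t = mean_diff / np.sqrt((1.0 / k + n_test / n_train) * var_diff)
    p = 2.0 * stats.t.sf(abs(t), df=k - 1)
    return t, p
```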
Figure 2: **Simple distributional moments can perfectly separate classes in the** GunPointOldVersusYoung **and** InsectEPGRegularTrain **datasets.****A** Individual time series in **GunPointOldVersusYoung** are represented in the two-dimensional feature space of mean and standard deviation. Points are colored by class (as labeled) and class-level covariances are shown as shaded ellipses to guide the eye. **B** Histogram of time-series means by class in **InsectEPGRegularTrain**.
## 4 Discussion
With the growing sophistication of machine-learning algorithms, modern data analysts have increasingly adopted a methodological approach that defaults to more complex statistical methods, which can obscure their clear interpretation. Presenting such complex approaches without direct comparison to simpler alternatives makes it difficult to discern whether the methodological complexity is beneficial relative to simpler approaches, an assumption that has been challenged in some recent studies [9; 11; 10]. In this work, focusing on time-series classification problems, we demonstrate that a method at the extreme end of simplicity (using just two distributional-moment-based features, \(\mu\) and \(\sigma\), and a linear classifier) performs surprisingly well on many such problems. Despite being insensitive to the unique property of sequential data relative to an unordered vector--its ordering (in time)--this simple benchmark classifier statistically outperformed chance on approximately half of the problems in a prominent time-series classification archive, the UEA/UCR Repository [4], and even achieved \(100\%\) accuracy on two problems. Our results emphasize the importance of carefully considering the factors that contribute to model performance, assessing model gains relative to simpler models, and favoring parsimony by carefully building up model complexity incrementally from simple baselines.
We demonstrated the applicability of our findings to a time-series classification problem in neuroimaging, a setting in which many prior models have been highly complex models [28] but can fail to generalize [12]. Importantly, classification performance was stronger with FTM than with a model that also included _catch22_ features of the dynamics using an SVM classifier, which is generally the top-performing classifier type in rs-fMRI analysis for schizophrenia [30]. The combination of \(\sigma\) and \(\mu\) yielded a mean balanced accuracy of \(67.5\%\) while affording clear statistical and biological interpretations.
The strong performance of mean and standard deviation across the UEA/UCR Repository demonstrates that time series in these problems have not been consistently normalized, as prior work has highlighted [6]. If all time series were individually \(z\)-score transformed, there would be no class differences in mean or standard deviation for any problem (and the FTM-based classifier would exhibit null performance). Our results have implications for comparing time-series classification algorithms on problems for which labeled classes can be distinguished based on distributional properties alone. For example, consider the _catch22_ features, which are all insensitive to the mean and variance of the input time series [17]. Relative to _catch22_, the superior performance of an alternative time-series classifier (which _is_ sensitive to \(\mu\) or \(\sigma\) of the input series) could be driven entirely by a class-relevant difference in \(\mu\) or \(\sigma\): properties that are unrelated to dynamical patterns (and that _catch22_ cannot access). One approach to testing the ability of different classification algorithms to capture properties related to the dynamics of a uniformly sampled univariate time series would be to normalize time series such that there are no class differences in basic distributional properties.
Figure 3: **The FTM feature set shows strong performance at classifying individuals with schizophrenia from resting-state fMRI time series.** The distribution of mean balanced accuracy (across 10 repeats), using the combination of all brain regions and either the two _FTM_ features (green), or the 24 FTM + _catch22_ features (orange), is shown as a combined spaghetti plot and boxplot. Each gray line indicates the same resampled set of \(k=10\) folds evaluated for each feature set along the \(x\)-axis. \({}^{**}\): \(p=0.003\), corrected resampled \(T=3.63\).
A second approach would be to compare model performance to a benchmark in which only simple distributional properties are included, with gains relative to this benchmark then being attributable to class-relevant differences in more complex properties. In this latter approach, there is scope for extending the two simple (FTM) features used here by adding higher-order moments, or other types of distributional features (such as those included in _hctsa_ [31]).
This work also highlights the need for careful consideration of the generalizability of models that use features sensitive to the calibration of time-series measurements (like \(\mu\) and \(\sigma\)), relative to features that are invariant to linear rescalings of the input time series (like the _catch22_ feature set). This is because such measurement-scale-dependent features are highly sensitive to changes in the calibration of experimental measurements that may not be precisely maintained in new data. An illustrative example is shown here in the \(\mathtt{GunPointOldVersusYoung}\) dataset, for which our FTM-based method achieved 100% accuracy. However, this may be a deceptively high value, as it is reliant on the precise calibration of new data. A prior analysis of this dataset, focusing on early classification, has shown that classification accuracy can plummet with only slight variability in experimental calibration (to changes as small as a \(\approx 1.9^{\circ}\) tilt in the angle of the camera used to measure hand coordinates) [27]. In general, the decision of whether or not to include features that are sensitive to measurement scale (or focus on scale-invariant features of the dynamics or distribution shape, as in _catch22_) should be motivated by domain expertise to avoid overly optimistic classification results.
In summary, by highlighting many time-series classification problems for which simple distributional properties of a time series can achieve surprisingly high classification accuracy, our results raise important issues for the development and interpretation of time-series classification models. Future work on evaluating time-series classification algorithms may consider using simple benchmarks for comparison to aid interpretation and provide evidence for the contribution of model complexity to any performance advantage, particularly for problems highlighted here for which simple distributional features are highly informative of class differences.
#### Acknowledgements
The authors would like to thank Kevin Aquino for sharing preprocessed fMRI data [32] used in the case study, and Eamonn Keogh for providing useful feedback on a manuscript draft.
|
2309.04190 | SegmentAnything helps microscopy images based automatic and quantitative
organoid detection and analysis | Organoids are self-organized 3D cell clusters that closely mimic the
architecture and function of in vivo tissues and organs. Quantification of
organoid morphology helps in studying organ development, drug discovery, and
toxicity assessment. Recent microscopy techniques provide a potent tool to
acquire organoid morphology features, but manual image analysis remains a labor
and time-intensive process. Thus, this paper proposes a comprehensive pipeline
for microscopy analysis that leverages the SegmentAnything to precisely
demarcate individual organoids. Additionally, we introduce a set of
morphological properties, including perimeter, area, radius, non-smoothness,
and non-circularity, allowing researchers to analyze the organoid structures
quantitatively and automatically. To validate the effectiveness of our
approach, we conducted tests on bright-field images of human induced
pluripotent stem cells (iPSCs) derived neural-epithelial (NE) organoids. The
results obtained from our automatic pipeline closely align with manual organoid
detection and measurement, showcasing the capability of our proposed method in
accelerating organoids morphology analysis. | Xiaodan Xing, Chunling Tang, Yunzhe Guo, Nicholas Kurniawan, Guang Yang | 2023-09-08T08:03:42Z | http://arxiv.org/abs/2309.04190v4 | SegmentAnything helps microscopy images based automatic and quantitative organoid detection and analysis
###### Abstract
Organoids are self-organized 3D cell clusters that closely mimic the architecture and function of in vivo tissues and organs. Quantification of organoid morphology helps in studying organ development, drug discovery, and toxicity assessment. Recent microscopy techniques provide a potent tool to acquire organoid morphology features, but manual image analysis remains a labor and time-intensive process. Thus, this paper proposes a comprehensive pipeline for microscopy analysis that leverages the SegmentAnything to precisely demarcate individual organoids. Additionally, we introduce a set of morphological properties, including perimeter, area, radius, non-smoothness, and non-circularity, allowing researchers to analyze the organoid structures quantitatively and automatically. To validate the effectiveness of our approach, we conducted tests on bright-field images of human induced pluripotent stem cells (iPSCs) derived neural-epithelial (NE) organoids. The results obtained from our automatic pipeline closely align with manual organoid detection and measurement, showcasing the capability of our proposed method in accelerating organoids morphology analysis.
SegmentAnything, Microscopy image, Organoid Detection. Send correspondence to X. Xing ([email protected]) and G. Yang ([email protected]). Xiaodan and Chunling contributed equally to this paper.
## 1 Description of Purpose
Organoids are self-organized 3D tissues typically derived from stem cells, exhibiting key functional, structural, and biological complexity similar to organs [1]. Their close biological resemblance makes organoid culture analysis crucial for advancing biological studies, as it aids in understanding the extent to which organoids resemble their in vivo counterparts.
The analysis of organoid morphology is commonly performed by capturing images of the organoids grown in multi-well plates. However, existing methods have limitations since they aggregate cell growth information over an entire well, rather than providing information about individual organoids and their constituent cells [2]. Unfortunately, manually demarcating organoids in microscopy images poses significant challenges. The sheer number of organoids in a single whole slice microscopy image can reach thousands, making manual demarcation a laborious and time-consuming task.
To tackle this challenge, researchers have introduced deep learning algorithms, such as StarDist [3] and Cellos [2]. However, these deep-learning-based approaches demand a substantial amount of annotated data for effective algorithm training. Moreover, their limited scope in handling various modalities hinders their generalizability. For each distinct type of microscopy image, re-training the models becomes necessary, posing practical limitations on their applicability.
In this study, we explore the potential of SegmentAnything [4], a foundation model trained on an extensive dataset of 11 million images encompassing diverse modalities, to automate individual organoid detection in
microscopy images. Moreover, we have integrated comprehensive post-processing and analysis of morphological properties using the masks generated by SegmentAnything. The workflow is demonstrated in Fig. 1. Our main claim is that this proposed pipeline enables both automatic and accurate organoid detection, as well as fully automated organoid morphology analysis.
To further validate our hypothesis, we compared the outcomes of our research with those from a previously published peer-reviewed paper on hiPSC-derived NE organoids [5]. These NE organoids are generated to mimic neural tube development at the early embryogenesis stage. They have a defined round morphology but differ in size during culture in vitro. Briefly, our results show the robustness of SegmentAnything in detecting NE organoids in bright-field images. Furthermore, the automatic organoid size quantification closely aligns with the manually measured results from the publication, reinforcing the efficacy of our proposed approach. Thus, this successful validation indicates the reliability and precision of our automated methodology for organoid morphology analysis. We are the first to investigate the efficacy of SegmentAnything on microscopy images, and all code will be open-sourced at [https://github.com/XiaodanXing/SAM4organoid](https://github.com/XiaodanXing/SAM4organoid).
## 2 Method
**Data acquisition.** Bright-field images used in this paper were obtained under the protocol described in [5]. The images were captured using a Leica DMi8 microscope (Leica) equipped with a 10\(\times\)/0.32 objective lens. We obtained one whole-slide image from each group.
**SegmentAnything and post-processing.** In our research, we utilized the Python API for SegmentAnything and evaluated three pretrained models [4], namely ViT-B, ViT-H, and ViT-L, ultimately selecting the ViT-H model for inference due to its consistent performance across various microscopy analyses. However, we encountered challenges with the SegmentAnything-generated masks, as shown in Fig. 2, which required post-processing to achieve accurate cell identification. The first issue we encountered was that SegmentAnything sometimes misidentified the background as an object, resulting in non-zero indices for the background in the masks. Secondly, the high resolution of whole microscopy images necessitated the use of cropped patches for model fitting. However, this approach introduced incomplete organoids along the edges of the patches, leading to erroneous analysis of morphological properties. To address these concerns, we implemented an automated process in which the boundaries of the image patches were examined, and all objects located in these regions were excluded.
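The following sketch shows how the automatic mask generation and these first two filtering steps could look with the SegmentAnything Python API; the checkpoint path, border width, and the area threshold used to flag background masks are illustrative choices rather than values taken from our implementation.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def detect_organoids(image_rgb, checkpoint="sam_vit_h_4b8939.pth", border=5):
    """Generate candidate masks for one image patch with the ViT-H SAM model, then drop
    (i) very large masks that likely correspond to the background and
    (ii) masks touching the patch border (incomplete organoids at patch edges)."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    generator = SamAutomaticMaskGenerator(sam)
    proposals = generator.generate(image_rgb)   # list of dicts with 'segmentation', 'area', ...

    h, w = image_rgb.shape[:2]
    edge = np.zeros((h, w), dtype=bool)
    edge[:border, :] = True
    edge[-border:, :] = True
    edge[:, :border] = True
    edge[:, -border:] = True

    kept = []
    for p in proposals:
        seg = p["segmentation"]                 # boolean mask of shape (h, w)
        if p["area"] > 0.5 * h * w:             # background picked up as an object
            continue
        if np.any(seg & edge):                  # incomplete organoid at the patch edge
            continue
        kept.append(seg)
    return kept
```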
A third challenge was observed with organoids possessing a lumen structure, where the model inaccurately demarcated the regions into two separate objects. To rectify this problem, we computed the maximum boundary of each mask and unified all values within this boundary.
Lastly, debris might be erroneously identified as objects (organoids in this scenario) by the model. Unfortunately, we have not yet found an automated method to remove them. Thus, we manually marked these non-organoid structures and deleted them, which, when compared to manually identifying all organoid structures, proved to be a relatively simpler task.
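One simple way to realise the "unify all values within the maximum boundary" step for organoids with a lumen is sketched below, assuming the lumen appears as a hole fully enclosed by the organoid boundary; masks entirely contained inside another mask are then dropped as duplicates. This is only one possible implementation of that step.

```python
from scipy.ndimage import binary_fill_holes

def unify_lumen(masks):
    """Fill interior holes so an organoid with a lumen is treated as a single object,
    then discard any mask strictly contained in a larger one (the separately detected lumen)."""
    filled = [binary_fill_holes(m) for m in masks]
    keep = []
    for i, m in enumerate(filled):
        contained = any(
            j != i and (m & ~other).sum() == 0 and other.sum() > m.sum()
            for j, other in enumerate(filled)
        )
        if not contained:
            keep.append(m)
    return keep
```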
**Property Analysis**: We conducted a comprehensive analysis of each organoid, computing five distinct properties to characterize its morphology (a sketch of these computations is given after the list below):
Figure 1: Overview of the proposed method.
1. Perimeter: This property quantifies the total length of the organoid's boundary, providing a measure of its overall shape complexity.
2. Radius: To estimate the organoid's size, we calculated the average distance from the center of the cell to various points on its perimeter.
3. Area: This property corresponds to the number of pixels encompassed within the organoid, serving as a direct indicator of its size.
4. Non-smoothness: Non-smoothness reflects the local variation in radius lengths along the organoid boundary. A higher non-smoothness value indicates a more irregular and less smooth boundary. To compute this property, we fitted an ellipse to the organoid's boundary and determined the smoothness as the ratio of perimeters between the fitted ellipse and the original contour.
5. Non-circularity: We employed the following equation to evaluate the extent to which the organoid resembles a perfect circle: \[\text{Non-circularity}=\left|\frac{\text{Perimeter}^{2}}{4\pi\times\text{Area}}-1\right|\]
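The sketch below indicates how the five properties above can be computed for a single post-processed mask, assuming scikit-image; it is a simplified illustration rather than our exact analysis code, and the ellipse perimeter for the non-smoothness ratio is approximated with Ramanujan's formula.

```python
import numpy as np
from skimage import measure

def organoid_properties(mask):
    """Compute the five morphological properties for one boolean organoid mask."""
    props = measure.regionprops(mask.astype(np.uint8))[0]
    area = props.area                      # 3. number of pixels inside the organoid
    perimeter = props.perimeter            # 1. length of the boundary

    # 2. radius: mean distance from the centroid to the boundary points
    contour = measure.find_contours(mask.astype(float), 0.5)[0]
    cy, cx = props.centroid
    radius = np.mean(np.hypot(contour[:, 0] - cy, contour[:, 1] - cx))

    # 4. non-smoothness: perimeter ratio between the contour and the fitted ellipse
    a = props.major_axis_length / 2.0
    b = props.minor_axis_length / 2.0
    h = ((a - b) / (a + b)) ** 2
    ellipse_perimeter = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
    non_smoothness = perimeter / ellipse_perimeter

    # 5. non-circularity, as defined in the text
    non_circularity = abs(perimeter ** 2 / (4 * np.pi * area) - 1)

    return dict(perimeter=perimeter, radius=radius, area=area,
                non_smoothness=non_smoothness, non_circularity=non_circularity)
```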
## 3 Results
We conducted an analysis of bright-field (BF) images of NE organoids formed in the neural induction medium with 2% or 8% Geltrex, at day 7 and day 18. The findings are presented below.
We conducted a comparison of the mean average precision (mAP) between the organoid detection results obtained from our method and those obtained from the open-sourced StarDist method [3] in Fig. 3. Instead of training the StarDist model from scratch, we ran inference with the pretrained '2D_versatile_fluo' model using its default settings. The
Figure 3: (a) Comparison of the average detection scores between our method and StarDist. (c) and (d) are the segmentation results from our method and the StarDist algorithm, respectively.
Figure 2: Challenges in directly applying SegmentAnything in real-world organoid morphology analysis workflow.
mAP comparison results are depicted in Fig. 3(a), while the segmentation comparison results are presented in Fig. 3(c) and (d). To ensure a fair and unbiased comparison, we refrained from manually removing any wrongly segmented regions (as described in challenge 4) from our proposed method. The results clearly demonstrate that the StarDist method, without any training or fine-tuning on the test modality, failed to achieve accurate segmentation on organoids. We also analyzed the morphological features of organoids among the different groups in Fig. 4. Our results indicate that in the later stage of organoid formation (day 18), a higher concentration of Geltrex leads to smaller organoid sizes, which aligns with the hypothesis from [5] that Geltrex, being a hydrogel, solidifies at 37 degrees Celsius and thereby exerts pressure on the forming organoids. Furthermore, our results are in agreement with the manually annotated results, highlighting the capability of our proposed toolbox in facilitating biological studies. The consistency between our automated analysis and the manually derived findings demonstrates the reliability and effectiveness of our approach in cellular analysis, offering valuable insights for further research and experimentation.
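For reference, the StarDist baseline can be reproduced approximately as follows; this is a sketch rather than our exact evaluation script, and the random array stands in for a real bright-field patch.

```python
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

# Placeholder input: replace with a single-channel bright-field patch.
img = np.random.rand(512, 512).astype(np.float32)

# Pretrained model used with default settings, without training or fine-tuning.
model = StarDist2D.from_pretrained("2D_versatile_fluo")
labels, details = model.predict_instances(normalize(img, 1, 99.8))
print(f"detected {labels.max()} objects")
```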
## 4 Conclusions
In this paper, we utilized the SegmentAnything model for automatic organoid structure identification in microscopy images. The SegmentAnything model showed promising performance, and our post-processing steps were necessary to enhance the accuracy of organoid structure detection and to ensure reliable organoid morphology analysis. Overall, this research contributes to the field of organoid analysis in microscopy images by presenting an efficient approach for individual organoid detection and morphology analysis without any prerequisite data annotation. The automated pipeline offers promising avenues for accelerating and enhancing the characterization and quantification of organoid features, paving the way for further advancements in organoid research and related disciplines.
|
2310.00281 | On the Constants and Extremal Function and Sequence for Hardy
Inequalities in $L_p$ and $l_p$ | We study the behavior of the smallest possible constants $d(a,b)$ and $d_n$
in Hardy inequalities $$
\int_a^b\left(\frac{1}{x}\int_a^xf(t)dt\right)^p\,dx\leq d(a,b)\,\int_a^b
[f(x)]^p dx $$ and $$
\sum_{k=1}^{n}\Big(\frac{1}{k}\sum_{j=1}^{k}a_j\Big)^p\leq
d_n\,\sum_{k=1}^{n}a_k^p. $$ The exact rate of convergence of $d(a,b)$ and
$d_n$ is established and the ``almost extremal'' function and sequence are
found. | Ivan Gadjev | 2023-09-30T07:11:54Z | http://arxiv.org/abs/2310.00281v1 | On the constants and extremal function and sequence for Hardy inequalities in \(L_{p}\) and \(l_{p}\)
###### Abstract.
We study the behavior of the smallest possible constants \(d(a,b)\) and \(d_{n}\) in Hardy inequalities
\[\int_{a}^{b}\left(\frac{1}{x}\int_{a}^{x}f(t)dt\right)^{p}\,dx\leq d(a,b)\,\int_ {a}^{b}[f(x)]^{p}dx\]
and
\[\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_{j}\right)^{p}\leq d_{n}\,\sum _{k=1}^{n}a_{k}^{p}.\]
The exact rate of convergence of \(d(a,b)\) and \(d_{n}\) is established and the "almost extremal" function and sequence are found.
Key words and phrases:Hardy inequality, exact constant, extremal function, extremal sequence 2010 Mathematics Subject Classification: Primary 26D10, 26D15; Secondary 33C45, 15A42 Research supported by the Bulgarian National Research Fund through Contract KP-06-N62/4.
## 1. Introduction and statement of the results
Between 1919 and 1925, in the series of papers [1, 2, 3], G. H. Hardy established the following inequalities, which are nowadays known as the celebrated Hardy inequalities. Let \(p>1\); then the integral inequality states that
\[\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x}f(t)dt\right)^{p}\,dx\leq\left( \frac{p}{p-1}\right)^{p}\,\int_{0}^{\infty}f^{p}(x)dx \tag{1.1}\]
holds for every \(f\), such that \(f(x)\geq 0\) for \(x\in(0,\infty)\) and \(f^{p}\) is integrable over \((0,\infty)\).
The corresponding discrete version claims that
\[\sum_{k=1}^{\infty}\left(\frac{1}{k}\sum_{j=1}^{k}a_{j}\right)^{p}\leq\left( \frac{p}{p-1}\right)^{p}\,\sum_{k=1}^{\infty}a_{k}^{p} \tag{1.2}\]
for every sequence \(\{a_{k}\}\) of non-negative numbers, for which the series on the right-hand side converges.
In his 1920 paper [2] Hardy claimed that he and Marcel Riesz derived (1.2) independently, but in their results the larger constant \((p^{2}/(p-1))^{p}\) appears on the right-hand side. E. Landau, in the letter [4] dated 1921, published later in [5], was the first to establish (1.2) with the exact constant \((p/(p-1))^{p}\) in the sense that there is no smaller one for which (1.2) holds for every sequence of non-negative numbers \(a_{k}\). For the latter statement he considered the sequence \(a_{k}^{*}=k^{-1/p-\varepsilon}\), suggested earlier by Hardy, and showed that
\[\left(\frac{a_{1}^{*}+\cdots+a_{k}^{*}}{k}\right)^{p}>\left(\frac{p}{p-1} \right)^{p}\left((a_{k}^{*})^{p}-\frac{p}{k^{2-1/p}}\right).\]
Since \(\sum_{k=1}^{\infty}(a_{k}^{*})^{p}\to\infty\) as \(\varepsilon\to 0\), the summation of the latter inequalities implies the sharpness of \((p/(p-1))^{p}\) for (1.2). In the same letter Landau pointed out that equality in (1.2) occurs only for the trivial sequence, that is, when \(a_{k}=0\) for every \(k\in\mathbb{N}\). Similarly, equality in (1.1) occurs if and only if \(f(x)\equiv 0\) almost everywhere.
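The sharpness of the constant can also be illustrated numerically. The following sketch (not part of the original argument) evaluates the Hardy quotient for truncations of Hardy's sequence \(a_{k}^{*}=k^{-1/p-\varepsilon}\); the truncation length and the values of \(\varepsilon\) are arbitrary choices, and the quotient approaches \((p/(p-1))^{p}\) only slowly as \(\varepsilon\to 0\).

```python
import numpy as np

def hardy_quotient(a, p):
    """sum_k ((a_1 + ... + a_k) / k)^p divided by sum_k a_k^p."""
    k = np.arange(1, len(a) + 1, dtype=float)
    means = np.cumsum(a) / k
    return np.sum(means ** p) / np.sum(a ** p)

p = 3.0
N = 10**6                                   # truncation of the infinite sequence
k = np.arange(1, N + 1, dtype=float)
for eps in (0.1, 0.03, 0.01):
    a = k ** (-1.0 / p - eps)               # Hardy's sequence a_k^* = k^{-1/p-eps}
    print(f"eps={eps}: quotient={hardy_quotient(a, p):.4f}, "
          f"(p/(p-1))^p={(p / (p - 1)) ** p:.4f}")
```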
The lack of nontrivial extremizers and the fact that the above argument of Landau does not work for finite sequences motivates one to consider, for any \(a\) and \(b\) with \(-\infty\leq a<b\leq\infty\) and weight positive a.e. functions \(u(x),v(x)\), the so-called general Hardy's integral inequality
\[\int_{a}^{b}\left(\int_{a}^{x}f(t)dt\right)^{p}u(x)\,dx\leq d(a,b)\,\int_{a}^{ b}f^{p}(x)v(x)dx,\quad f\in L^{p}[v;a,b] \tag{1.3}\]
and its discrete counterpart
\[\sum_{k=1}^{n}\Big{(}\sum_{j=1}^{k}a_{j}\Big{)}^{p}u_{k}\leq d_{n}\,\sum_{k=1 }^{n}a_{k}^{p}v_{k},\qquad a_{k},u_{k},v_{k}\geq 0,\,\,\,k=1,2,...,n. \tag{1.4}\]
Obviously, inequalities (1.3) and (1.4) are the "finite versions" of (1.1) and (1.2). The natural questions are: what are the best constants \(d(a,b)\) and \(d_{n}\), and what are the corresponding extremizers? This is exactly our endeavor in this paper, mainly because of the importance of Hardy's inequalities, their far-reaching generalizations, especially to the so-called Hardy-Sobolev inequalities, and thus the necessity of understanding them more thoroughly. Answering the above questions in a satisfactory manner for arbitrary weight functions and sequences such that the inequalities (1.3) and (1.4) hold is impossible. In this paper we consider the important "unweighted" versions of inequalities (1.3) and (1.4), i.e.
\[\int_{a}^{b}\left(\frac{1}{x}\int_{a}^{x}f(t)dt\right)^{p}\,dx\leq d(a,b)\, \int_{a}^{b}[f(x)]^{p}dx,\qquad f(x)\geq 0,\,\,\,\,f\in L^{p}[a,b] \tag{1.5}\]
and
\[\sum_{k=1}^{n}\Big{(}\frac{1}{k}\sum_{j=1}^{k}a_{j}\Big{)}^{p}\leq d_{n}\, \sum_{k=1}^{n}a_{k}^{p},\qquad a_{k}\geq 0,\,\,\,k=1,2,...,n. \tag{1.6}\]
The behavior of the constant \(d(a,b)\) was studied in many papers - see, for instance, [6], [7], [8]. The best results about \(d(a,b)\) for \(p>1\) can be summarized in the following way (see, for instance, [9] or [10]). Let
\[B=\sup_{a<x<b}\big{\{}(x-a)^{p-1}\left(x^{1-p}-b^{1-p}\right)\big{\}}\]
then the constant \(d(a,b)\) satisfies the following estimates
\[\frac{1}{p-1}B\leq d(a,b)\leq\left(\frac{p}{p-1}\right)^{p}B.\]
It is easy to see that only the right-hand estimate gives the exact constant asymptotically (when \(a\to 0\) or \(b\to\infty\) or both), but not the rate of convergence.
In [11] we studied the inequality (1.5) for \(p=2\) and established the exact constant \(d(a,b)\) and the extremal function.
**Theorem 1.1**.: _[_11_]_ _Let \(a\) and \(b\) be any fixed numbers with \(0<a<b<\infty\). Then the inequality_
\[\int_{a}^{b}\left(\frac{1}{x}\int_{a}^{x}f(t)dt\right)^{2}dx\leq\frac{4}{1+4 \alpha^{2}}\,\int_{a}^{b}f^{2}(x)\,dx, \tag{1.7}\]
_where \(\alpha\) is the only solution of the equation_
\[\tan\left(\alpha\ln\frac{b}{a}\right)+2\alpha=0\ \ \mbox{in the interval}\ \ \left(\frac{\pi}{2\ln\frac{b}{a}},\frac{\pi}{\ln\frac{b}{a}}\right),\]
_holds for every \(f\in L^{2}[a,b]\). Moreover, equality in (1.7) is attained for_
\[f_{a,b}(x)=x^{-1/2}\left(2\alpha\cos(\alpha\ln x)+\sin(\alpha\ln x)\right).\]
The behavior of the constant \(d_{n}\) for \(p=2\) as a function of \(n\) was also studied extensively - see, for instance, [12], [13],[14], [15], [16]. In [14] Herbert S. Wilf established the exact rate of convergence of the constant \(d_{n}\) for \(p=2\)
\[d_{n}=4-\frac{16\pi^{2}}{\ln^{2}n}+O\left(\frac{\ln\ln n}{\ln^{3}n}\right).\]
In [17] F. Stampach gave a slightly better estimate, i.e.
\[d_{n}=4-\frac{16\pi^{2}}{\ln^{2}n}+\frac{32\pi^{2}(\gamma+6\log 2)}{\log^{3}n}+O \left(\frac{1}{\ln^{4}n}\right).\]
In [18] we also studied the asymptotic behavior of the constant \(d_{n}\) for \(p=2\). It was proved there that \(d_{n}\) can be expressed in terms of the smallest zero of a continuous dual Hahn polynomial of degree \(n\) (see [19]), for a specific choice of the parameters, in terms of which these polynomials are defined. Despite that nice interpretation of \(d_{n}\), it was only proved in [18, Theorem 1.1] that the following inequalities hold for every natural \(n\geq 3\)
\[4\Bigg{(}1-\frac{4}{\ln n+4}\Bigg{)}\leq d_{n}\leq 4\Bigg{(}1-\frac{8}{(\ln n+4) ^{2}}\Bigg{)}. \tag{1.8}\]
In all proofs of the above-mentioned estimates for the constant \(d_{n}\), the authors relied substantially on the special properties of the space \(l_{2}\). In [11] and [20] we applied a different approach, which allowed us to give a simpler proof of some of the mentioned estimates and to find an almost extremal sequence. We proved the following theorem.
**Theorem 1.2**.: _[_11_]_ _Let_
\[a_{k}=\int_{k}^{k+1}h(x)dx, \tag{1.9}\]
_where_
\[h(x)=x^{-1/2}\left(2\alpha\cos(\alpha\ln x)+\sin(\alpha\ln x)\right),\quad 1 \leq x\leq n+1,\]
_and \(\alpha\) is the only solution of the equation_
\[\tan(\alpha\ln(n+1))+2\alpha=0\ \ \mbox{in the interval}\ \ \left(\frac{\pi}{2\ln(n+1)},\frac{\pi}{\ln(n+1)}\right).\]
_Then_
\[\sum_{k=1}^{n}\Big{(}\frac{1}{k}\sum_{j=1}^{k}a_{j}\Big{)}^{2}\geq\frac{4}{1+ 4\alpha^{2}}\,\sum_{k=1}^{n}a_{k}^{2}. \tag{1.10}\]
By combining the results (1.8) and (1.10), the exact rate of convergence of \(\{d_{n}\}\) is established and the following very sharp estimates for \(d_{n}\) are obtained, i.e. the inequalities
\[4-\frac{16\pi^{2}}{\ln^{2}(n+1)}\leq d_{n}\leq 4-\frac{32}{(\ln n+4)^{2}}\]
hold for every natural \(n\geq 3\). Also, the sequence \(\{a_{k}\}_{1}^{n}\) defined in (1.9) is an "almost extremal" sequence.
In this paper we establish very sharp estimates for \(d(a,b)\) and \(d_{n}\) for \(2\leq p<\infty\), as well as obtain an "almost extremal" function for (1.5) and an "almost extremal" sequence for (1.6). Our main results are summarized in the following two theorems.
**Theorem 1.3**.: _Let \(2\leq p<\infty\) and \(0<a<b<\infty\). Then there exist positive constants \(c_{1}=c_{1}(p)\) and \(c_{2}=c_{2}(p)\), depending only on \(p\), such that the next estimates for the constant \(d(a,b)\) in (1.5) hold_
\[\left(\frac{p}{p-1}\right)^{p}\left(1-\frac{c_{1}}{\ln^{2}\frac{b}{a}}\right) \leq d(a,b)\,\leq\left(\frac{p}{p-1}\right)^{p}\left(1+\frac{c_{2}}{\ln^{2} \frac{b}{a}}\right)^{-1}. \tag{1.11}\]
_Moreover, the function_
\[f^{*}(x)=\frac{1}{x^{1/p}}\left(\frac{\alpha p}{p-1}\cos(\alpha\ln x)+\sin( \alpha\ln x)\right)\]
_where \(\alpha\) is the only solution of the equation_
\[\tan\left(\alpha\ln\frac{b}{a}\right)+\frac{\alpha p}{p-1}=0\]
_in the interval \(\left(\frac{\pi}{2\ln\frac{b}{a}},\frac{\pi}{\ln\frac{b}{a}}\right)\), is an "almost extremal" function in the sense that_
\[\int_{a}^{b}\left(\frac{1}{x}\int_{a}^{x}f^{*}(t)dt\right)^{p}\,dx\geq\left( \frac{p}{p-1}\right)^{p}\left(1-\frac{c_{1}}{\ln^{2}\frac{b}{a}}\right)\,\int _{a}^{b}[f^{*}(x)]^{p}dx.\]
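The lower bound in Theorem 1.3 can be checked numerically for concrete values of \(p\) and \(b\). The sketch below (not part of the proof) assumes SciPy, takes \(a=1\), finds \(\alpha\) by root-finding on the stated interval, and uses the substitution \(x=e^{u}\) together with \(\int_{1}^{x}f^{*}(t)dt=qx^{1/q}\sin(\alpha\ln x)\) to reduce both sides of (1.5) to one-dimensional integrals; the chosen \(p\) and \(b\) are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

p, b = 3.0, 1.0e6                 # illustrative choices, a = 1
q = p / (p - 1)
L = np.log(b)

# alpha: unique root of tan(alpha*ln b) + alpha*p/(p-1) = 0 in (pi/(2 ln b), pi/ln b)
alpha = brentq(lambda t: np.tan(t * L) + q * t,
               np.pi / (2 * L) + 1e-12, np.pi / L - 1e-12)

# With x = e^u:  int_1^b ((1/x) int_1^x f*)^p dx = q^p int_0^L sin(alpha*u)^p du
#                int_1^b (f*)^p dx = int_0^L (alpha*q*cos(alpha*u) + sin(alpha*u))^p du
lhs = q ** p * quad(lambda u: np.sin(alpha * u) ** p, 0.0, L)[0]
rhs = quad(lambda u: (alpha * q * np.cos(alpha * u) + np.sin(alpha * u)) ** p, 0.0, L)[0]

print("Hardy quotient for f*:", lhs / rhs)
print("(p/(p-1))^p          :", q ** p)
```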
An immediate consequence of Theorem 1.3 is the next corollary.
**Corollary 1.4**.: _When either of the limits relations \(a\to 0\), \(b\to\infty\), or both hold, i.e. \(\ln(b/a)\to\infty\), then_
\[d(a,b)\sim\left(\frac{p}{p-1}\right)^{p}-\frac{C}{\ln^{2}\frac{b}{a}}.\]
_More precisely there exist constants \(C_{1}(p)>0\) and \(C_{2}(p)>0\) depending only on \(p\) such that_
\[\left(\frac{p}{p-1}\right)^{p}-\frac{C_{1}}{\ln^{2}\frac{b}{a}}\leq d(a,b) \leq\left(\frac{p}{p-1}\right)^{p}-\frac{C_{2}}{\ln^{2}\frac{b}{a}}.\]
_Also, for the constant \(C_{1}\) the following estimate holds_
\[C_{1}\leq\left(\frac{p}{p-1}\right)^{p+1}p\pi^{2}\]
_which is the exact constant for \(p=2\)._
_Remark 1.5_.: Because of the close connection to the maximal function, it is also natural to consider the inequality
\[\int_{a}^{b}\left(\frac{1}{x-a}\int_{a}^{x}f(t)dt\right)^{p}\,dx\leq d(a,b)\, \int_{a}^{b}[f(x)]^{p}dx,\qquad f(x)\geq 0,\ \ f\in L^{p}[a,b]\]
which is equivalent (by change of variables) to
\[\int_{0}^{b}\left(\frac{1}{x}\int_{0}^{x}f(t)dt\right)^{p}\,dx\leq d(b)\,\int _{0}^{b}[f(x)]^{p}dx,\qquad f(x)\geq 0,\ \ f\in L^{p}[0,b].\]
Then from the above corollary we obtain that \(d(b)=\left(\frac{p}{p-1}\right)^{p}\) and the only function for which the equality is attained is \(f=0\) a.e.
**Theorem 1.6**.: _Let \(2\leq p<\infty\). Then there exist positive constants \(c_{3}=c_{3}(p)\) and \(c_{4}=c_{4}(p)\), depending only on \(p\), such that for every natural \(n\geq 2\) the next estimates for the constant \(d_{n}\) in (1.6) hold_
\[\left(\frac{p}{p-1}\right)^{p}-\frac{c_{3}}{\ln^{2}n}\leq d_{n}\,\leq\left( \frac{p}{p-1}\right)^{p}-\frac{c_{4}}{\ln^{2}n}. \tag{1.12}\]
_Moreover, the sequence_
\[a_{k}^{*}=\int_{k}^{k+1}f^{*}(x)\,dx,\quad k=1,2,...,n,\]
_where_
\[f^{*}(x)=\frac{1}{x^{1/p}}\left(\frac{\alpha p}{p-1}\cos(\alpha\ln x)+\sin( \alpha\ln x)\right),\quad 1\leq x\leq n+1 \tag{1.13}\]
_and \(\alpha\) is the only solution of the equation_
\[\tan(\alpha\ln(n+1))+\frac{\alpha p}{p-1}=0\]
_in the interval \(\left(\frac{\pi}{2\ln(n+1)},\frac{\pi}{\ln(n+1)}\right)\), is an "almost extremal" sequence in the sense that_
\[\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_{j}^{*}\right)^{p}\geq\left( \left(\frac{p}{p-1}\right)^{p}-\frac{c_{3}}{\ln^{2}n}\right)\sum_{k=1}^{n}[a_ {k}^{*}]^{p}. \tag{1.14}\]
_Also, letting \(n\to\infty\), the following estimate for the constant \(c_{3}\) holds_
\[c_{3}\leq\left(\frac{p}{p-1}\right)^{p+1}p\pi^{2}\]
_which is the exact constant for \(p=2\)._
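Analogously, the discrete quotient in (1.14) can be evaluated numerically. The sketch below (not part of the proof) builds the sequence \(a_{k}^{*}\) from the closed-form antiderivative \(\int_{1}^{x}f^{*}(t)dt=qx^{1/q}\sin(\alpha\ln x)\) and compares the discrete Hardy quotient with \((p/(p-1))^{p}\); the values of \(p\) and \(n\) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

p, n = 3.0, 10**5                 # illustrative choices
q = p / (p - 1)
L = np.log(n + 1)

# alpha: unique root of tan(alpha*ln(n+1)) + alpha*p/(p-1) = 0 in (pi/(2L), pi/L)
alpha = brentq(lambda t: np.tan(t * L) + q * t,
               np.pi / (2 * L) + 1e-12, np.pi / L - 1e-12)

def F(x):
    # antiderivative of f*: int_1^x f*(t) dt = q x^{1/q} sin(alpha ln x)
    return q * x ** (1.0 / q) * np.sin(alpha * np.log(x))

k = np.arange(1, n + 1, dtype=float)
a_star = F(k + 1) - F(k)          # a_k^* = int_k^{k+1} f*(x) dx

means = np.cumsum(a_star) / k
quotient = np.sum(means ** p) / np.sum(a_star ** p)
print("discrete Hardy quotient:", quotient, "  (p/(p-1))^p =", q ** p)
```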
_Remark 1.7_.: The constants \(c_{2}(p)\) and \(c_{4}(p)\) in Theorem 1.3 and Theorem 1.6 are by no means the best ones. They could be improved in a number of ways, but that would have made the proofs longer and more complicated. Our goal was to establish the exact rate of convergence and to keep the proofs as simple as possible.
Henceforth, \(c=c(p)\) will always denote a positive constant depending only on \(p\); it may be different at each occurrence. The relation \(O(f)\) means that there exists a positive constant \(c(p)\), depending only on \(p\), such that \(|O(f)|\leq c(p)|f|\).
The paper is organized as follows. In Section 2, some technical results are proved, in Section 3, the right inequality of Theorem 1.3 is proved, in Section 4, the left inequality of Theorem 1.3 is proved, in Section 5, the left inequality of Theorem 1.6 is proved and in Section 6, the right inequality of Theorem 1.6 is proved.
## 2. Auxiliary Results
We need some technical lemmas.
**Lemma 2.1**.: _For \(0\leq x\leq 1\) and \(\alpha\geq 0\) the next inequality is true:_
\[(1-x)^{\alpha}\leq 1-\alpha x+\frac{1}{2}(\alpha x)^{2}. \tag{2.1}\]
Proof.: From \(1-x\leq e^{-x}\) we have for \(\alpha\geq 0\) and \(\alpha x<1\)
\[(1-x)^{\alpha}\leq e^{-\alpha x}=1-\alpha x+\frac{(\alpha x)^{2}}{2}+\sum_{k= 3}^{\infty}(-1)^{k}\frac{(\alpha x)^{k}}{k!}\leq 1-\alpha x+\frac{1}{2}( \alpha x)^{2}.\]
For \(\alpha x\geq 1\) the function \(f(x)=1-\alpha x+\frac{1}{2}(\alpha x)^{2}-(1-x)^{\alpha}\) is increasing and consequently \(f(x)\geq f(0)=0\).
**Lemma 2.2**.: _For \(x\geq-1\), \(0\leq\alpha\leq 1\) the next inequality is true:_
\[(1+x)^{\alpha}\leq 1+\alpha x. \tag{2.2}\]
Proof.: Let \(\alpha=1/\beta\). Then (2.2) follows from the Bernoulli inequality.
**Lemma 2.3**.: _For \(x\geq 0\), \(\alpha\geq 0\) and \(\alpha x<1\) the next inequality is true:_
\[(1+x)^{\alpha}\leq 1+\alpha x+(\alpha x)^{2}. \tag{2.3}\]
Proof.: \[(1+x)^{\alpha}<e^{\alpha x}=1+\alpha x+\sum_{k=2}^{\infty}\frac{(\alpha x)^{k }}{k!}\leq 1+\alpha x+(\alpha x)^{2}\sum_{k=2}^{\infty}\frac{1}{k!}\leq 1+ \alpha x+(\alpha x)^{2}.\]
**Lemma 2.4**.: _Let \(b>1\), \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and_
\[g(x)=\frac{1}{x^{1/(pq)}}\left(\cos(\alpha\ln x)\right)^{1/q} \tag{2.4}\]
_where \(\alpha=\frac{1}{\ln b}\arctan\frac{1}{p}\). Then there exists a constant \(c=c(p)>0\), depending only on \(p\) such that for \(1\leq x\leq b\) the next inequality holds:_
\[\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^{2}q^{2}} \right)^{p/q}\leq-q\left(1+c\alpha^{2}\right)^{-1}x^{1+1/q}\left[(g(x))^{p} \right]^{\prime}. \tag{2.5}\]
Proof.: The above inequality (2.5) is equivalent to
\[\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha ^{2}q^{2}}\right)^{p/q}\] \[\leq\left(1+c\alpha^{2}\right)^{-1}\left(\cos(\alpha\ln x) \right)^{p/q-1}\left(\cos(\alpha\ln x)+\alpha p\sin(\alpha\ln x)\right)\]
or
\[\left(\frac{1+\alpha qy}{1+\alpha^{2}q^{2}}\right)^{p/q}\leq\frac{1+\alpha py }{1+c\alpha^{2}}\]
where \(y=\tan(\alpha\ln x)\). Obviously \(y=\tan(\alpha\ln x)\leq\tan(\alpha\ln b)=1/p\). We consider two cases.
**Case 1.**\(\alpha\geq 1\).
Since
\[\frac{1+\alpha qy}{1+\alpha^{2}q^{2}}\leq\frac{1+\alpha q/p}{1+\alpha^{2}} \leq\frac{1+\alpha}{1+\alpha^{2}}\leq 1\]
we have
\[\left(\frac{1+\alpha qy}{1+\alpha^{2}q^{2}}\right)^{p/q}=\frac{1+\alpha qy}{1+ \alpha^{2}q^{2}}\left(\frac{1+\alpha qy}{1+\alpha^{2}q^{2}}\right)^{p/q-1}\leq \frac{1+\alpha py}{1+\alpha^{2}q^{2}}.\]
**Case 2.**\(\alpha<1\).
From lemma 2.3
\[(1+\alpha qy)^{p/q}\leq 1+\alpha py+(\alpha py)^{2}\]
and from Bernoulli's inequality
\[\left(1+\alpha^{2}q^{2}\right)^{p/q}\geq 1+\alpha^{2}pq.\]
So, it is enough to prove that there exists a constant \(c=c(p)>0\) such that
\[1+\alpha py+(\alpha py)^{2}\leq\frac{1+\alpha^{2}pq}{1+c\alpha^{2}}(1+\alpha py)\]
which after some simplifications is
\[(py)^{2}\leq\frac{pq-c}{1+c\alpha^{2}}.\]
Since \(py\leq 1\) and \(\alpha<1\) the above inequality is true if we take, for instance, \(c=(pq-1)/2\).
By taking \(c=\min\{q^{2},(pq-1)/2\}\) we complete the proof of the lemma.
_Remark 2.5_.: The inequality (2.5) could be written in the following way
\[\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^{2}q^{2}} \right)^{p/q}\leq-q\left(1+\frac{c}{\ln^{2}b}\right)^{-1}x^{1+1/q}\left(g^{p}( x)\right)^{\prime} \tag{2.6}\]
where \(c=\arctan^{2}(1/p)\min\{q^{2},(pq-1)/2\}\).
**Lemma 2.6**.: _Let \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\), \(0<\epsilon<1\),_
\[b_{0}=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2}\}\epsilon}}\]
_and_
\[f^{*}(x)=\frac{1}{x^{1/p}}(\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x)) \tag{2.7}\]
_where \(\alpha\) is the only solution of the equation_
\[\tan\left(\alpha\ln b\right)+\alpha q=0\]
_in the interval \(\left(\frac{\pi}{2\ln b},\frac{\pi}{\ln b}\right)\). Then for every \(b>b_{0}\) the next inequality holds_
\[(\sin(\alpha\ln x))^{p/q}\geq-\frac{qx^{1+1/q}\left[(f^{*}(x))^{p/q}\right]^{ \prime}}{1+(pq+\epsilon)\alpha^{2}} \tag{2.8}\]
_Remark 2.7_.: The function \(f^{*}\) defined by (2.7) is well defined. Indeed, if \(\alpha\ln x\in\left(0,\frac{\pi}{2}\right]\) it is obvious. If \(\alpha\ln x\in\left(\frac{\pi}{2},\pi\right)\), then since the function \(h(x)=\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x)\) is decreasing there and \(h(b)=0\), it follows that \(h(x)>h(b)=0\), i.e. \(h(x)>0\) for \(1\leq x<b\), and consequently \(f^{*}(x)>0\).
Proof.: It is easy to see that for every \(b>b_{0}\)
\[\alpha\leq\sqrt{\min\left\{\frac{q}{(p-q)(pq+1)^{2}},\frac{4}{(pq)^{2}}\right\} \epsilon}.\]
Since
\[-\frac{qx^{1+1/q}}{1+\alpha^{2}pq}\left[(f^{*}(x))^{p/q}\right]^{\prime} \tag{2.9}\] \[=(\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x))^{p/q-1}\left(\sin( \alpha\ln x)-\frac{\alpha(p-q)}{1+\alpha^{2}pq}\cos(\alpha\ln x)\right)\]
we need to prove that for every \(b>b_{0}\) the next inequality is true
\[\frac{1+(pq+\epsilon)\alpha^{2}}{1+pq\alpha^{2}}(\sin(\alpha\ln x ))^{p/q} \tag{2.10}\] \[\geq(\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x))^{p/q-1}\left(\sin (\alpha\ln x)-\frac{\alpha(p-q)}{1+\alpha^{2}pq}\cos(\alpha\ln x)\right).\]
We consider two cases.
**Case 1.**\(\alpha\ln x\in\left[\frac{\pi}{2},\pi\right)\).
Let \(\alpha\ln x=\pi-\phi\), \(0<\phi\leq\pi/2\). Then (2.10) is equivalent to
\[\frac{1+(pq+\epsilon)\alpha^{2}}{1+pq\alpha^{2}}(\sin(\alpha\ln x ))^{p/q}\geq(-\alpha q\cos\phi+\sin\phi)^{p/q-1}\left(\sin\phi+\frac{\alpha(p -q)}{1+\alpha^{2}pq}\cos\phi\right)\]
or
\[\left[1+(pq+\epsilon)\alpha^{2}\right]y^{p/q}\geq(y-\alpha q)^{p/q-1}\left[ \left(1+\alpha^{2}pq\right)y+\alpha(p-q)\right] \tag{2.11}\]
where \(y=\tan\phi\). Since \(y>\alpha q\) in this case, we have from Bernoulli's inequality
\[\left(\frac{y}{y-\alpha q}\right)^{p/q}\geq\frac{y+\alpha(p-q)}{y-\alpha q}.\]
So, it is enough to prove that
\[\left[1+(pq+\epsilon)\alpha^{2}\right]\left[y+\alpha(p-q)\right]\geq\left(1+ \alpha^{2}pq\right)y+\alpha(p-q)\]
which is easy to verify.
**Case 2.**\(\alpha\ln x\in\left[0,\frac{\pi}{2}\right)\).
In this case (2.10) is equivalent to
\[\left(1+(pq+\epsilon)\alpha^{2}\right)y^{p/q}\geq(y+\alpha q)^{p/q-1}\left[ \left(1+\alpha^{2}pq\right)y-\alpha(p-q)\right] \tag{2.12}\]
where \(y=\tan(\alpha\ln x)\). If \(y\leq\frac{\alpha(p-q)}{1+\alpha^{2}pq}\) then (2.12) is obvious. Let \(y>\frac{\alpha(p-q)}{1+\alpha^{2}pq}\).
**Case 2.1.**\(2\leq p<3\)
(2.12) is equivalent to
\[1+(pq+\epsilon)\alpha^{2}\geq\left(1+\frac{\alpha q}{y}\right)^{p/q-1}\left(1 +\alpha^{2}pq-\frac{\alpha(p-q)}{y}\right).\]
Since
\[\left(1+\frac{\alpha q}{y}\right)^{p/q-1}\leq 1+\frac{\alpha(p-q)}{y}\]
it is enough to prove that
\[1+(pq+\epsilon)\alpha^{2}\geq\left(1+\frac{\alpha(p-q)}{y}\right)\left(1+ \alpha^{2}pq-\frac{\alpha(p-q)}{y}\right).\]
Simplifying it
\[\epsilon\geq\frac{\alpha pq(p-q)}{y}-\frac{(p-q)^{2}}{y^{2}}\]
which is true since \(\alpha pq<2\sqrt{\epsilon}\).
**Case 2.2.**\(p\geq 3\)
(2.12) is equivalent to
\[\left[1+(pq+\epsilon)\alpha^{2}\right]\left(\frac{y}{y+\alpha q}\right)^{p/q-1} \geq 1+\alpha^{2}pq-\frac{\alpha(p-q)}{y}.\]
From Bernoulli's inequality
\[\left(\frac{y}{y+\alpha q}\right)^{p/q-1}\geq 1-\frac{\alpha(p-q)}{y+\alpha q}.\]
So, it is enough to prove that
\[\left[1+(pq+\epsilon)\alpha^{2}\right]\left(1-\frac{\alpha(p-q)}{y+\alpha q} \right)\geq 1-\frac{\alpha(p-q)}{(1+\alpha^{2}pq)y}.\]
Simplifying it
\[\epsilon+\frac{q(p-q)}{y(y+\alpha q)}\geq\frac{(p-q)(pq+\epsilon)\alpha}{y+ \alpha q}.\]
If \(y\leq q[(pq+\epsilon)\alpha]^{-1}\) it is obvious. For \(y>q[(pq+\epsilon)\alpha]^{-1}\)
\[\frac{(p-q)(pq+\epsilon)\alpha}{y+\alpha q}<\frac{(p-q)(pq+\epsilon)\alpha}{q [(pq+\epsilon)\alpha]^{-1}+\alpha q}<\epsilon\]
since
\[\alpha^{2}<\frac{q\epsilon}{(p-q)(pq+1)^{2}}.\]
The lemma is proved.
**Lemma 2.8**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural numbers \(i\) and \(n\) such that \(i\leq n\) the next inequality is true:_
\[\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}\leq q \left(\frac{1}{i^{1/q}}-\frac{1}{(n+1)^{1/q}}\right). \tag{2.13}\]
Proof.: We will prove that there is a \(k_{0}\in\mathbb{N}\) such that for every \(k\geq k_{0}\) the next inequality is true:
\[\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}\leq q\left(\frac{1 }{k^{1/q}}-\frac{1}{(k+1)^{1/q}}\right). \tag{2.14}\]
From (2.1)
\[\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}\leq 1-\frac{1}{qk^{1/q}}+\frac{1}{2q^{2 }k^{2/q}}\]
so, it is enough to prove that there is a \(k_{0}\in\mathbb{N}\) such that for every \(k\geq k_{0}\)
\[\frac{1}{k^{1+1/q}}\left(1-\frac{1}{qk^{1/q}}+\frac{1}{2q^{2}k^{2/q}}\right) \leq q\left(\frac{1}{k^{1/q}}-\frac{1}{(k+1)^{1/q}}\right)\]
i.e.
\[1-\frac{1}{qk^{1/q}}+\frac{1}{2q^{2}k^{2/q}}\leq qk\left[1-\left(\frac{k}{k+1 }\right)^{1/q}\right].\]
But from (2.2)
\[\left(\frac{k}{k+1}\right)^{1/q}=\left(1-\frac{1}{k+1}\right)^{1/q}\leq 1- \frac{1}{q(k+1)}.\]
and then
\[qk\left[1-\left(\frac{k}{k+1}\right)^{1/q}\right]\geq\frac{k}{k+1}=1-\frac{1}{k+1}.\]
So, it is enough to prove that there is a \(k_{0}\in\mathbb{N}\) such that for every \(k\geq k_{0}\)
\[1-\frac{1}{qk^{1/q}}+\frac{1}{2q^{2}k^{2/q}}\leq 1-\frac{1}{k+1},\]
i.e.
\[\frac{1}{k+1}+\frac{1}{2q^{2}k^{2/q}}\leq\frac{1}{qk^{1/q}}\]
which is obviously true for \(k\) big enough. Actually, by considering the function \(f(x)=\frac{1}{xk^{1/x}}+\frac{1}{2x^{2}k^{2/x}}\) it is not difficult to prove that it is true for every \(k\geq 8\). From the above it follows that there is \(i_{0}\) such that for every \(i\geq i_{0}\) (2.14) is true and consequently (2.13) as well.
Now, suppose (2.13) is true for \(i+1\). We will prove that it is true for \(i\) as well. We have
\[\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}\leq \frac{1}{i^{1+1/q}}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}+q\left(\frac{1}{(i +1)^{1/q}}-\frac{1}{(n+1)^{1/q}}\right)\]
So, it is enough to prove that
\[\frac{1}{i^{1+1/q}}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}+\frac{q}{(i+1)^{1/ q}}\leq\frac{q}{i^{1/q}}\]
i.e.
\[\frac{1}{qi}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}+\left(\frac{i}{i+1} \right)^{1/q}\leq 1.\]
and from (2.2) it follows that it is enough to prove that
\[\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}\leq\frac{i}{i+1}.\]
From Bernoulli's inequality for \(\alpha\geq 1\) and \(0\leq x<1\)
\[(1-x)^{\alpha}\leq 1-\frac{\alpha x}{1+(\alpha-1)x}\]
we have
\[\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}\leq 1-\frac{p}{pi^{1/q}+p-q}.\]
So, it is enough to prove
\[\frac{1}{i+1}\leq\frac{p}{pi^{1/q}+p-q}\]
i.e.
\[i^{1/q}\leq\frac{i}{q}+\frac{1}{p}\]
which is easy to verify.
The proof of the lemma is complete.
**Lemma 2.9**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural numbers \(i\) and \(n\) such that \(i\leq n\) the next inequality is true:_
\[\sum_{k=i}^{n} \frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+1/q}} \tag{2.15}\] \[>\frac{q\left(\ln^{2}i+2q^{2}\right)}{i^{1/q}}+\frac{\ln^{2}i-2q \ln i+2q^{2}}{2i^{1+1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}.\]
Proof.: For the function
\[f(x)=\frac{\ln^{2}x-2q\ln x+2q^{2}}{x^{1+1/q}}\]
we have
\[f^{\prime}(x)=\frac{1}{x^{2+1/q}}\left[-\left(1+\frac{1}{q}\right)\ln^{2}x+2( q+2)\ln x-2q(q+2)\right]\]
and
\[f^{\prime\prime}(x)=\frac{1}{x^{3+1/q}}\left[\left(2+\frac{3}{q}+\frac{1}{q^ {2}}\right)\ln^{2}x-2(6+2q+\frac{3}{q})\ln x+4\left(q^{2}+3q+2\right)\right].\]
It is easy to see that \(f^{\prime}(x)<0\), \(f^{\prime\prime}(x)>0\) and consequently \(f^{\prime}(x)\) is increasing and \(|f^{\prime}(x)|\) is decreasing. From Euler's summation formula (see, for instance [21, p.149])
\[\sum_{k=i}^{n}f(k)=\int_{i}^{n}f(x)dx+\frac{f(i)+f(n)}{2}+\int_{i}^{n}\left(x- [x]-\frac{1}{2}\right)f^{\prime}(x)dx\]
where \([x]\) is the floor function.
\[\int_{i}^{n}\left(x-[x]-\frac{1}{2}\right)f^{\prime}(x)dx=\sum_{k=i}^{n-1} \left(\int_{k}^{k+1/2}+\int_{k+1/2}^{k+1}\right)>0\]
since
\[\int_{k}^{k+1/2}>0,\ \int_{k+1/2}^{k+1}<0\quad\text{and}\quad\int_{k}^{k+1/2}> \left|\int_{k+1/2}^{k+1}\right|.\]
Then
\[\sum_{k=i}^{n}f(k) >\int_{i}^{n}f(x)dx+\frac{f(i)+f(n)}{2}>\int_{i}^{n}f(x)dx+\frac{ \ln^{2}i-2q\ln i+2q^{2}}{2i^{1+1/q}}\] \[=\frac{q\left(\ln^{2}i+2q^{2}\right)}{i^{1/q}}+\frac{\ln^{2}i-2q \ln i+2q^{2}}{2i^{1+1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}.\]
The lemma is proved.
**Lemma 2.10**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural numbers \(i\) and \(n\) such that \(i\leq n\) the next inequality is true:_
\[\sum_{k=i}^{n}\frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+2/q}}<\frac{\ln^{2}i-2q\ln i +2q^{2}}{i^{1+2/q}}+\frac{q\left(\ln^{2}i-q\ln i+3/2q^{2}\right)}{2i^{2/q}}. \tag{2.16}\]
Proof.: For the function
\[g(x)=\frac{\ln^{2}x-2q\ln x+2q^{2}}{x^{1+2/q}}\]
we have
\[g^{\prime}(x)=\frac{1}{x^{2+2/q}}\left[-\left(1+\frac{2}{q}\right)\ln^{2}x+2(q+3 )\ln x-2q(q+3)\right]<0\]
and consequently \(g(x)\) is decreasing and
\[\sum_{k=i}^{n}g(k) <\frac{\ln^{2}i-2q\ln i+2q^{2}}{i^{1+2/q}}+\int_{i}^{n}g(x)dx\] \[<\frac{\ln^{2}i-2q\ln i+2q^{2}}{i^{1+2/q}}+\frac{q\left(\ln^{2}i-q \ln i+3/2q^{2}\right)}{2i^{2/q}}.\]
The lemma is proved.
**Lemma 2.11**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural numbers \(i\) and \(n\) such that \(i\leq n\) the next inequality is true:_
\[\sum_{k=i}^{n}\frac{1}{k^{p}}\left[\left(k^{1/q}\right)^{p/q-1}- \left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\right]\left[k^{1/q}\left(\ln^{2}k-2q \ln k+2q^{2}\right)-2q^{2}\right] \tag{2.17}\] \[<\frac{\ln^{2}i-2q\ln i+2q^{2}}{qi^{1+2/q}}+\frac{\ln^{2}i-q\ln i +3/2q^{2}}{2i^{2/q}}-\frac{2q^{2}}{3i^{3/q}}+\frac{2q^{2}}{3n^{3/q}}.\]
Proof.: Since
\[\left(1-\frac{1}{pk^{1/q}}\right)^{p/q-1}>\left(1-\frac{1}{pk^{1/q}}\right)^{p /q}>1-\frac{1}{qk^{1/q}}\]
we have
\[\left(k^{1/q}\right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}=\left(k ^{1/q}\right)^{p/q-1}\left[1-\left(1-\frac{1}{pk^{1/q}}\right)^{p/q-1}\right] <\frac{1}{q}\left(k^{1/q}\right)^{p/q-2}.\]
Then
\[\sum_{k=i}^{n}\frac{1}{k^{p}}\left[\left(k^{1/q}\right)^{p/q-1}- \left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\right]\left[k^{1/q}\left(\ln^{2}k-2q \ln k+2q^{2}\right)-2q^{2}\right]\] \[<\frac{1}{q}\sum_{k=i}^{n}\frac{1}{k^{1+3/q}}\left[k^{1/q}\left( \ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[=\frac{1}{q}\sum_{k=i}^{n}\frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+2/q} }-2q\sum_{k=i}^{n}\frac{1}{k^{1+3/q}}\] \[<\frac{1}{q}\sum_{k=i}^{n}\frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+2/q }}-2q\int_{i}^{n}\frac{dx}{x^{1+3/q}}\] \[=\frac{1}{q}\sum_{k=i}^{n}\frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+2/q }}-\frac{2q^{2}}{3i^{3/q}}+\frac{2q^{2}}{3n^{3/q}}\]
and the lemma follows from (2.16).
**Lemma 2.12**.: _For \(p\geq 2\) and \(\frac{1}{p}+\frac{1}{q}=1\), there is a constant \(c=c(p)>0\), depending only on \(p\), such that for every natural \(i\geq 2\) the next inequality is true:_
\[2q^{2}-\frac{3}{4}+\frac{2q}{3i^{2/q}}-\frac{2q}{i^{1+1/q}}-\frac{q^{2}}{i^{1/q }}-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{1/q}}>c(p).\]
Proof.: It is obvious that there is a \(i_{0}\) and a constant \(c(p,i_{0})\) such that for every \(i>i_{0}\) the above inequality is true. For \(i\leq i_{0}\) we have
\[2q^{2}-\frac{3}{4}+\frac{2q}{3i^{2/q}}-\frac{2q}{i^{1+1/q}}- \frac{q^{2}}{i^{1/q}}-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{1/q}}\] \[>2q^{2}-\frac{3}{4}-\frac{4q}{3i^{1+1/q}}-\frac{q^{2}}{i^{1/q}} -\frac{\left(\ln i-q/2\right)^{2}+5/4q^{2}}{2qi^{1/q}}\] \[=\frac{1}{i^{1/q}}\left[\left(2q^{2}-\frac{3}{4}\right)i^{1/q}- \frac{4q}{3i}-q^{2}-\frac{\left(\ln i-q/2\right)^{2}+5/4q^{2}}{2q}\right]\]
Now we will prove that for every \(i\leq i_{0}\)
\[\left(2q^{2}-\frac{3}{4}\right)i^{1/q}>q^{2}+\frac{4q}{3i}+\frac{\left(\ln i- q/2\right)^{2}+5/4q^{2}}{2q}.\]
We consider the cases \(i=2\) and \(i\geq 3\) separately.
**Case 1.**\(i=2\)
We need to prove that
\[\left(2q^{2}-\frac{3}{4}\right)2^{1/q}>q^{2}+\frac{2}{3}q+\frac{\left(\ln 2-q/ 2\right)^{2}+5/4q^{2}}{2q}.\]
Since \(\left(\ln 2-q/2\right)^{2}<0.1\) it is enough to prove
\[\left(2q^{2}-\frac{3}{4}\right)2^{1/q}>q^{2}+\frac{2}{3}q+\frac{1}{20q}+ \frac{5}{8}q=q^{2}+\frac{31}{24}q+\frac{1}{20q}.\]
Considering for \(1\leq x\leq 2\) the function
\[f(x)=\left(2x^{2}-\frac{3}{4}\right)2^{1/x}-x^{2}-\frac{31}{24}x-\frac{1}{20x}\]
we have
\[f^{\prime}(x) =4x2^{1/x}-\left(2-\frac{3}{4x^{2}}\right)2^{1/x}\ln 2-2x-\frac{31 }{24}+\frac{1}{20x^{2}}\] \[>4x2^{1/x}-\frac{7}{10}\left(2-\frac{3}{4x^{2}}\right)2^{1/x}-2x- \frac{31}{24}\] \[=\left(4x-\frac{7}{5}+\frac{21}{40x^{2}}\right)2^{1/x}-2x-\frac{3 1}{24}\] \[>4\sqrt{2}x-\frac{7\sqrt{2}}{5}-2x-\frac{31}{24}>4\sqrt{2}-\frac{ 7\sqrt{2}}{5}-2-\frac{31}{24}>0\]
and consequently the function \(f(x)\) is increasing and \(f(x)>f(1)>0\).
**Case 2.**\(i\geq 3\)
We have
\[i^{1/q}=e^{\ln i/q}>1+\frac{\ln i}{q}+\frac{\ln^{2}i}{2q^{2}}\quad\text{and} \quad\left(\ln i-q/2\right)^{2}<\ln^{2}i\]
so it is enough to prove that for \(3\leq i\) the next inequality is true
\[\left(2q^{2}-\frac{3}{4}\right)\left(1+\frac{\ln i}{q}+\frac{\ln^{2}i}{2q^{2}} \right)>q^{2}+\frac{4q}{3i}+\frac{\ln^{2}i}{2q}+\frac{5}{8}q\]
i.e.
\[q^{2}-\frac{3}{4}+\left(2q-\frac{3}{4q}\right)\ln i>\frac{4q}{3i}+\frac{5}{8}q\]
because
\[\frac{2q^{2}-\frac{3}{4}}{2q^{2}}>\frac{1}{2q}.\]
But \(2q-\frac{3}{4q}>5/4\) and
\[q^{2}-\frac{3}{4}+\left(2q-\frac{3}{4q}\right)\ln i>q^{2}-\frac{3}{4}+\frac{5} {4}\ln 3>q^{2}+\frac{1}{2}.\]
Also, \(\frac{4q}{3i}+\frac{5}{8}q<\frac{4q}{9}+\frac{5}{8}q=\frac{77}{72}q\) and since \(q^{2}+\frac{1}{2}>\frac{77}{72}q\) the lemma is proved.
**Lemma 2.13**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) there is a constant \(c=c(p)>0\) such that for every natural \(n\) the next inequality is true:_
\[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}-\ln^{2}2-2q\sum_ {k=2}^{n}\frac{1}{k^{1+2/q}}-\frac{\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2/q}}>c(p).\]
Proof.: We have
\[\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}=\frac{1}{2^{1+2/q}}+\sum_{k=3}^{n}\frac{1}{k^{1+2/q}}<\frac{1}{2^{1+2/q}}+\int_{2}^{\infty}\frac{dx}{x^{1+2/q}}=\frac{1+q}{2\cdot 2^{2/q}}\]
and
\[\frac{\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2/q}}=\frac{\left(\ln 2-q/2\right)^{2}+5/ 4q^{2}}{q2^{1+2/q}}<\frac{1}{20q2^{2/q}}+\frac{5q}{2^{3+2/q}}.\]
Since
\[\frac{2}{3}\frac{q}{2^{3/q}}>\frac{1}{20q2^{2/q}}\]
and
\[\left(1-\frac{1}{2^{1/q}}\right)\ln^{2}2<\frac{1}{4}\]
it follows
\[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}-\ln^ {2}2-2q\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}-\frac{\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2 /q}}\] \[>\frac{q}{2^{2/q}}\left[\left(2^{1+1/q}-1\right)q-\frac{13}{8} \right]-\frac{1}{4}>\frac{1}{4}\left[\left(2^{1+1/q}-1\right)q-\frac{21}{8} \right]>0\]
since the function \(f(x)=\left(2^{1+1/x}-1\right)x\) is increasing and consequently \(\left(2^{1+1/q}-1\right)q>f(1)=3\).
**Lemma 2.14**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural numbers \(i\) and \(n\) such that \(2\leq i\leq n\) the next inequality is true:_
\[\sum_{k=i}^{n}\frac{1}{k^{p}} \left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\left[k^{1/q}\left(\ln^{2} k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[>\frac{q\left(\ln^{2}i+2q^{2}\right)}{i^{1/q}}-\frac{q\left(\ln^ {2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k=i}^{n}\frac{1}{k^{1+2/q}} \tag{2.18}\] \[-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2i^{2/q}}+\frac{2q^{2}}{3i^{3/q} }-\frac{2q^{2}}{3n^{3/q}}.\]
Proof.: \[\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\left[k ^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]=J-I\]
where
\[J =\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}\right)^{p/q-1}\left[k ^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[=\sum_{k=i}^{n}\frac{\ln^{2}k-2q\ln k+2q^{2}}{k^{1+1/q}}-2q^{2} \sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\]
and
\[I=\sum_{k=i}^{n}\frac{1}{k^{p}}\left[\left(k^{1/q}\right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\right]\left[k^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right].\]
From (2.15)
\[J>\frac{q\left(\ln^{2}i+2q^{2}\right)}{i^{1/q}}+\frac{\ln^{2}i-2q\ln i+2q^{2} }{2i^{1+1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k=i}^ {n}\frac{1}{k^{1+2/q}}\]
and from (2.17)
\[I<\frac{\ln^{2}i-2q\ln i+2q^{2}}{qi^{1+2/q}}+\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2 i^{2/q}}-\frac{2q^{2}}{3i^{3/q}}+\frac{2q^{2}}{3n^{3/q}}.\]
For \(i\geq 2\) we have \(qi^{1/q}\geq 2\) and consequently
\[\frac{\ln^{2}i-2q\ln i+2q^{2}}{2i^{1+1/q}}\geq\frac{\ln^{2}i-2q\ln i+2q^{2}}{ qi^{1+2/q}}.\]
The lemma is proved.
**Lemma 2.15**.: _For \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and every natural \(n\) the next inequality is true:_
\[\sum_{k=1}^{n}\frac{1}{k^{p}} \left(k^{1/q}-\frac{1}{p}\right)^{p/q-1}\left[k^{1/q}\left(\ln^{2} k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[>\frac{q\left(\ln^{2}2+2q^{2}\right)}{2^{1/q}}-\frac{q\left(\ln^ {2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k=2}^{n}\frac{1}{k^{1+2/q}} \tag{2.19}\] \[-\frac{\ln^{2}2-q\ln 2+3/2q^{2}}{2^{1+2/q}}+\frac{2}{3}\frac{q^{2} }{2^{3/q}}-\frac{2q^{2}}{3n^{3/q}}.\]
Proof.: Since
\[\sum_{k=1}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/ q-1}\left[k^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[=\sum_{k=2}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{ p/q-1}\left[k^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]\]
the lemma follows from the previous one.
**Lemma 2.16**.: _Let \(p\geq 2\), \(\frac{1}{p}+\frac{1}{q}=1\) and \(A>1\). Then for every natural numbers \(i\) and \(n\) such that \(i\leq n\) the next equality is true:_
\[\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{1}{A^{j}\ln^{2j}(n+1)}\sum_{k=i}^{n}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}} \tag{2.20}\] \[=\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)i^{1/q}}\right)+O\left(\frac{1}{A^{2}n^{1/q}}\right). \tag{2.21}\]
Proof.: Let us denote by \(g\) the function
\[g(x)=\frac{\ln^{2j}x-2qj\ln^{2j-1}x}{x^{1+1/q}}.\]
Again, from Euler's summation formula
\[\sum_{k=i}^{n}g(k)=\int_{i}^{n}g(x)dx+\frac{g(i)+g(n)}{2}+\int_{i}^{n}\left(x-[x]-\frac{1}{2}\right)g^{\prime}(x)dx\]
where \([x]\) is the floor function. Now
\[|g(i)|=\left|\frac{\ln^{2j}i-2qj\ln^{2j-1}i}{i^{1+1/q}}\right|<\frac{c(p)j\ln ^{2j}i}{i^{1+1/q}}<\frac{c(p)j^{2}\ln^{2j-2}n}{i^{1/q}}\]
and
\[|g(n)|=\left|\frac{\ln^{2j}n-2qj\ln^{2j-1}n}{n^{1+1/q}}\right|<\frac{c(p)j\ln ^{2j}n}{n^{1+1/q}}<\frac{c(p)j^{2}\ln^{2j-2}n}{i^{1/q}}.\]
For \(g(x)\) we have
\[g^{\prime}(x)=\frac{\ln^{2j-2}x}{x^{2+1/q}}\left[-\left(1+\frac{1}{q}\right) \ln^{2}x+2j(q+2)\ln x-2qj(2j-1)\right]\]
and consequently
\[|g^{\prime}(x)|<\frac{c(p)j^{2}\ln^{2j}x}{x^{2+1/q}}<\frac{c(p)j^{2}\ln^{2j-2}x}{ x^{1+1/q}}<\frac{c(p)j^{2}\ln^{2j-2}n}{x^{1+1/q}}.\]
Then
\[\left|\int_{i}^{n}\left(x-[x]-\frac{1}{2}\right)g^{\prime}(x)dx\right|<c(p)j^{ 2}\ln^{2j-2}n\int_{i}^{n}\frac{1}{x^{1+1/q}}<\frac{c(p)j^{2}\ln^{2j-2}n}{i^{1/q }}.\]
Also
\[\int_{i}^{n}g(x)dx=\frac{q\ln^{2j}i}{i^{1/q}}-\frac{q\ln^{2j}n}{n^{1/q}}.\]
Consequently
\[\sum_{k=i}^{n}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}=\frac{q\ln^{2j}i}{i^{1 /q}}-\frac{q\ln^{2j}n}{n^{1/q}}+O\left(\frac{j^{2}\ln^{2j-2}n}{i^{1/q}}\right).\]
Then
\[\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{1}{A^{j}\ln^{2j}(n+ 1)}\sum_{k=i}^{n}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}\] \[=\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{ \ln^{2j}i}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)i^{1/q}}\right) +O\left(\frac{1}{A^{2}n^{1/q}}\right).\]
## 3. Proof of the right inequality in theorem 1.3
\[d(a,b)\,\leq\left(\frac{p}{p-1}\right)^{p}\left(1+\frac{c}{\ln^{2}\frac{b}{a} }\right)^{-1},\qquad c=c(p)>0.\]
By a simple change of variables and notation it is easy to see that it is enough to prove (1.11) for the interval \((1,b)\).
From Hölder's inequality we have, for every two functions \(f(x)\geq 0\) and \(g(x)>0\), \(x\in(1,b)\), such that \(f^{p}(x)\) and \(g^{q}(x)\) are integrable over \((1,b)\),
\[\left(\int_{1}^{x}f(t)dt\right)^{p}\leq\left(\int_{1}^{x}g^{q}(t)dt\right)^{p/ q}\left(\int_{1}^{x}\frac{f^{p}(t)}{g^{p}(t)}dt\right).\]
After multiplying both sides by \(x^{-p}\), integrating from 1 to \(b\) and changing the order of integration on the right-hand side we get
\[\int_{1}^{b}\left(\frac{1}{x}\int_{1}^{x}f(t)dt\right)^{p}dx\leq\int_{1}^{b} \left[\frac{1}{g^{p}(t)}\int_{t}^{b}\left(\int_{1}^{x}g^{q}(u)du\right)^{p/q} \frac{dx}{x^{p}}\right]f^{p}(t)dt.\]
Let us denote for brevity \(M(g,t)=g^{-p}(t)M^{*}(g,t)\) where
\[M^{*}(g,t)=\int_{t}^{b}\left(\int_{1}^{x}g^{q}(u)du\right)^{p/q}\frac{dx}{x^{p }}.\]
Then for every two functions \(f(x)\geq 0\) and \(g(x)>0\), \(1<x<b\) such that \(f^{p}(x)\) and \(g^{q}(x)\) are integrable the next upper estimation holds
\[\int_{1}^{b}\left(\frac{1}{x}\int_{1}^{x}f(t)dt\right)^{p}dx\leq\max_{1<t<b}M( g,t)\int_{1}^{b}f^{p}(t)dt\]
and consequently for every function \(g(x)>0\), \(1<x<b\)
\[d(1,b)\leq\max_{1<t<b}M(g,t).\]
Now we want to minimize
\[\max_{1<t<b}M(g,t)\]
over all functions \(g(x)>0\) on the interval \((1,b)\) or to find
\[\min_{g(x)>0}\,\max_{1<t<b}\frac{1}{g^{p}(t)}\int_{t}^{b}\left(\int_{1}^{x}g^{ q}(u)du\right)^{p/q}\frac{dx}{x^{p}}.\]
_Remark 3.1_.: For \(g(x)=x^{-1/(pq)}\) we obtain the original Hardy inequality. Indeed, we have
\[\int_{1}^{x}g^{q}(u)du=qx^{1/q}-q<qx^{1/q},\]
\[\int_{t}^{b}\left(\int_{1}^{x}g^{q}(u)du\right)^{p/q}\frac{dx}{x^{p}}<\int_{t} ^{b}\left(qx^{1/q}\right)^{p/q}\frac{dx}{x^{p}}=q^{p}\left(t^{-1/q}-b^{-1/q} \right)<q^{p}t^{-1/q}\]
for every \(1<t<b\). Consequently \(M(g,t)<q^{p}\) for every \(1<t<b\), which means that
\[\max_{1<t<b}M(g,t)<q^{p}\quad\text{i.e.}\quad d(1,b)\leq q^{p}.\]
Now, for the function \(g(x)\), defined by (2.4) we have
\[\int_{1}^{x}g^{q}(u)du =\frac{q}{1+\alpha^{2}q^{2}}\left[x^{1/q}(\cos(\alpha\ln x)+ \alpha q\sin(\alpha\ln x))-1\right]\] \[<\frac{qx^{1/q}}{1+\alpha^{2}q^{2}}\left[\cos(\alpha\ln x)+ \alpha q\sin(\alpha\ln x)\right]\]
and for every \(1<t<b\)
\[M^{*}(g,t)<q^{p/q}\int_{t}^{b}\left(\frac{\cos(\alpha\ln x)+\alpha q\sin( \alpha\ln x)}{1+\alpha^{2}q^{2}}\right)^{p/q}\frac{dx}{x^{1+1/q}}.\]
Then from (2.6) it follows that for every \(1<t<b\)
\[M^{*}(g,t)<-q^{p}\left(1+\frac{c}{\ln^{2}b}\right)^{-1}\int_{t}^{b}\left(g^{ p}(x)\right)^{\prime}dx\leq q^{p}\left(1+\frac{c}{\ln^{2}b}\right)^{-1}g^{p}(t)\]
and consequently
\[M(g,t)\leq q^{p}\left(1+\frac{c}{\ln^{2}b}\right)^{-1}=\left(\frac{p}{p-1} \right)^{p}\left(1+\frac{c}{\ln^{2}\frac{b}{a}}\right)^{-1}.\]
The last means that
\[\max_{1<t<b}M(g,t)\leq\left(\frac{p}{p-1}\right)^{p}\left(1+\frac{c}{\ln^{2} \frac{b}{a}}\right)^{-1}\]
i.e.
\[d(1,b)\leq\left(\frac{p}{p-1}\right)^{p}\left(1+\frac{c}{\ln^{2}\frac{b}{a}} \right)^{-1}.\]
## 4. Proof of the left inequality in theorem 1.3
\[d(a,b)\,\geq\left(\frac{p}{p-1}\right)^{p}-\frac{c}{\ln^{2}\frac{b}{a}}\,,\qquad c =c(p)>0.\]
By changing the order of integration, we write the left side of (1.5) for \(a=1\) and \(f(x)>0,\ 1<x<b\) in the following way
\[\int_{1}^{b}\left(\frac{1}{x}\int_{1}^{x}f(t)dt\right)^{p}\,dx=\int_{1}^{b}M(t )f^{p}(t)dt\]
where
\[M(t)=\frac{1}{[f(t)]^{p/q}}\int_{t}^{b}\left(\int_{1}^{x}f(u)du\right)^{p/q} \frac{dx}{x^{p}}.\]
Obviously
\[d(1,b)\geq\min_{1<t<b}M(t).\]
Then for the function \(f^{*}(x)\) defined in (2.7) we have
\[\int_{1}^{x}f^{*}(u)du=qx^{1/q}\sin(\alpha\ln x)\]
and
\[\int_{t}^{b}\left(\int_{1}^{x}f^{*}(u)du\right)^{p/q}\frac{dx}{x^{p}}=q^{p/q} \int_{t}^{b}(\sin(\alpha\ln x))^{p/q}\frac{dx}{x^{1+1/q}}. \tag{4.1}\]
Now let \(0<\epsilon<1\). Then from (4.1) and (2.8) it follows that for \(b>b_{0}\) where
\[b_{0}=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2}\}\epsilon}}\]
the next inequality holds
\[\int_{t}^{b}\left(\int_{1}^{x}f^{*}(u)du\right)^{p/q}\frac{dx}{x^{p}}\geq- \frac{q^{p}}{1+(pq+\epsilon)\alpha^{2}}\int_{t}^{b}\left[(f^{*}(x))^{p/q} \right]^{\prime}dx=\frac{q^{p}[f^{*}(t)]^{p/q}}{1+(pq+\epsilon)\alpha^{2}}.\]
Consequently for every \(b>b_{0}\)
\[M(t)\geq\frac{q^{p}}{1+(pq+\epsilon)\alpha^{2}}\geq q^{p}\left(1-(pq+\epsilon )\alpha^{2}\right)\geq q^{p}-\frac{q^{p}(pq+\epsilon)\pi^{2}}{\ln^{2}b}\]
i.e.
\[d(1,b)\geq q^{p}-\frac{q^{p}(pq+\epsilon)\pi^{2}}{\ln^{2}b}.\]
Since for \(b\leq b_{0}\) we have
\[d(1,b)\geq q^{p}-\frac{q^{p}\ln^{2}b_{0}}{\ln^{2}b}\]
by taking \(c=\max\{q^{p}(pq+\epsilon)\pi^{2},q^{p}\ln^{2}b_{0}\}\) we complete the proof.
## 5. Proof of the left inequality in theorem 1.6
\[d_{n}\geq\left(\frac{p}{p-1}\right)^{p}-\frac{c}{\ln^{2}n}\,,\qquad c=c(p)>0.\]
By changing the order of summation we write the left side of (1.6) in the following way
\[\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_{j}\right)^{p}=\sum_{i=1}^{n} \left[\frac{1}{a_{i}^{p/q}}\sum_{k=i}^{n}\frac{1}{k^{p}}\left(\sum_{j=1}^{k}a_ {j}\right)^{p/q}\right]a_{i}^{p}=\sum_{i=1}^{n}M_{i}a_{i}^{p}\]
where
\[M_{i}=\frac{1}{a_{i}^{p/q}}M_{i}^{*}\quad\text{and}\quad M_{i}^{*}=\sum_{k=i} ^{n}\frac{1}{k^{p}}\left(\sum_{j=1}^{k}a_{j}\right)^{p/q}.\]
Then
\[\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_{j}\right)^{p}\geq\min_{1\leq i \leq n}M_{i}\sum_{k=1}^{n}a_{i}^{p}\]
and consequently
\[d_{n}\geq\min_{1\leq i\leq n}M_{i}.\]
Now, we will prove that the sequence \(a_{k}^{*}\) defined in Theorem 1.6 is an "almost extremal" sequence, i.e. the inequality (1.14) holds. We have
\[\sum_{j=1}^{k}a_{j}^{*}=\int_{1}^{k+1}f^{*}(x)dx=q(k+1)^{1/q}\sin(\alpha\ln(k +1))\]
where \(f^{*}\) is the function defined by (1.13). It is easy to see that the function \(x^{p-1-1/q}\left(\sin(\alpha\ln x)\right)^{p/q}=\left(x^{1/q}\sin(\alpha\ln x )\right)^{p/q}\) is increasing and consequently
\[M_{i}^{*} =q^{p/q}\sum_{k=i}^{n}\frac{(k+1)^{p-1-1/q}}{k^{p}}\left(\sin( \alpha\ln(k+1))\right)^{p/q} \tag{5.1}\] \[\geq q^{p/q}\int_{i}^{n+1}(\sin(\alpha\ln x))^{p/q}\frac{dx}{x^{1 +1/q}}.\]
Since the function \(f^{*}(x)\) is continuous, by the mean value theorem for integrals there exists a point \(\eta_{i}\in[i,i+1]\) such that \(a_{i}^{*}=f^{*}(\eta_{i})\). Now let \(0<\epsilon<1\). Then from (5.1) and Lemma 2.6 it follows that for every integer
\[n>n_{0}=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2}\}\epsilon}}\]
\[M_{i}^{*}\geq-\frac{q^{p}}{1+(pq+\epsilon)\alpha^{2}}\int_{\eta_{i}}^{n+1}\left[(f^{*}(x))^{p/q}\right]^{\prime}dx=\frac{q^{p}\left[f^{*}(\eta_{i})\right]^{p/q}}{1+(pq+\epsilon)\alpha^{2}}\]
and consequently
\[M_{i}\geq\frac{q^{p}}{1+(pq+\epsilon)\alpha^{2}}\geq q^{p}\left(1-(pq+\epsilon )\alpha^{2}\right)\geq q^{p}-\frac{q^{p}(pq+\epsilon)\pi^{2}}{\ln^{2}(n+1)}\]
i.e.
\[d_{n}\geq q^{p}-\frac{q^{p}(pq+\epsilon)\pi^{2}}{\ln^{2}(n+1)}.\]
Since for \(n\leq n_{0}\) we have
\[d_{n}\geq q^{p}-\frac{q^{p}\ln^{2}n_{0}}{\ln^{2}n}\]
by taking \(c=\max\{q^{p}(pq+\epsilon)\pi^{2},q^{p}\ln^{2}n_{0}\}\) we complete the proof.
## 6. Proof of the right inequality in theorem 1.6
\[d_{n}<\left(\frac{p}{p-1}\right)^{p}-\frac{c}{\ln^{2}n}\,,\qquad c=c(p)>0.\]
From Hölder's inequality we have for every two sequences \(\mu_{i}>0\) and \(\eta_{i}\geq 0\), \(i=1,\ldots,n\),
\[\sum_{i=1}^{k}\mu_{i}\eta_{i}\leq\left(\sum_{i=1}^{k}\eta_{i}^{p}\right)^{1/p} \left(\sum_{i=1}^{k}\mu_{i}^{q}\right)^{1/q}\]
or
\[\left(\frac{1}{k}\sum_{i=1}^{k}\mu_{i}\eta_{i}\right)^{p}\leq\frac{1}{k^{p}} \left(\sum_{i=1}^{k}\eta_{i}^{p}\right)\left(\sum_{i=1}^{k}\mu_{i}^{q}\right) ^{p/q}.\]
Denoting \(a_{i}=\mu_{i}\eta_{i}\) and after changing the order of summation we get
\[\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{i=1}^{k}a_{i}\right)^{p}\leq\sum_{i=1}^{ n}M_{i}a_{i}^{p}\leq\left(\max_{1\leq i\leq n}M_{i}\right)\sum_{i=1}^{n}a_{i}^{p},\]
where
\[M_{i}=\frac{1}{\mu_{i}^{p}}M_{i}^{*}\,,\quad M_{i}^{*}=\sum_{k=i}^{n}\frac{1}{ k^{p}}\left(\sum_{j=1}^{k}\mu_{j}^{q}\right)^{p/q}.\]
Obviously
\[d_{n}\leq\max_{1\leq i\leq n}M_{i},\quad\mbox{so we want to minimize}\quad\max_{1\leq i\leq n}M_{i}\]
over all sequences \(\mu=\{\mu_{i}>0\}\), \(i=1,2,...,n\), i.e. to find
\[\min_{\mu>0}\,\max_{1\leq i\leq n}M_{i}\]
or, at least, to make it as small as possible.
_Remark 6.1_.: By choosing, for instance,
\[\mu_{k}=k^{-1/(pq)},\quad k=1,2,\,...\,n\]
we obtain Hardy's inequality with \(d_{n}=\left(\frac{p}{p-1}\right)^{p}\).
Indeed,
\[\sum_{j=1}^{k}\mu_{j}^{q}=1+\sum_{j=2}^{k}\frac{1}{j^{1/p}}<1+\int_{1}^{k}\frac {dx}{x^{1/p}}=qk^{1/q}-q+1=qk^{1/q}\left(1-\frac{1}{pk^{1/q}}\right) \tag{6.1}\]
and from (2.13) of Lemma 2.8
\[M_{i}^{*}\leq q^{p/q}\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/ q}}\right)^{p/q}\leq\left(\frac{p}{p-1}\right)^{p}i^{-1/q}.\]
Consequently
\[M_{i}\leq\left(\frac{p}{p-1}\right)^{p}\quad\text{and}\quad d_{n}\leq\left(\frac{ p}{p-1}\right)^{p}.\]
But in order to prove the right inequality of (1.12) we need to make a more complicated choice of the sequence \(\mu_{k}\). Let
\[\mu_{k}=\left(\frac{A}{k^{1/p}}-\frac{1}{\ln^{2}(n+1)}\int_{k}^{k+1}\frac{\ln^ {2}x}{x^{1/p}}dx\right)^{1/q}\]
where \(A=A(p)>2\) is a constant which depends only on \(p\) and will be chosen later. It is obvious that the sequence \(\mu_{k}\), \(k=1,...,n\) is well defined. Then for every \(i\) such that \(1\leq i\leq n\)
\[\mu_{i}^{p}<\frac{A^{p/q}}{i^{1/q}}=\frac{c(p)}{i^{1/q}} \tag{6.2}\]
and
\[\mu_{i}^{p} >\left(\frac{A}{i^{1/p}}-\frac{\ln^{2}(i+1)}{i^{1/p}\ln^{2}(n+1) }\right)^{p/q}=\frac{A^{p/q}}{i^{1/q}}\left(1-\frac{\ln^{2}(i+1)}{A\ln^{2}(n +1)}\right)^{p/q} \tag{6.3}\] \[=\frac{A^{p/q}}{i^{1/q}}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{j} \left(\frac{\ln^{2}(i+1)}{A\ln^{2}(n+1)}\right)^{j}.\]
Now
\[\sum_{j=1}^{k}\mu_{j}^{q} =A\sum_{j=1}^{k}\frac{1}{j^{1/p}}-\frac{1}{\ln^{2}(n+1)}\int_{1}^ {k+1}\frac{\ln^{2}x}{x^{1/p}}dx\] \[<A\sum_{j=1}^{k}\frac{1}{j^{1/p}}-\frac{1}{\ln^{2}(n+1)}\int_{1}^ {k}\frac{\ln^{2}x}{x^{1/p}}dx.\]
Since
\[\int_{1}^{k}\frac{\ln^{2}x}{x^{1/p}}dx=qk^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2} \right)-2q^{3}\]
and from (6.1) we have
\[\sum_{j=1}^{k}\mu_{j}^{q} <Aq\left(k^{1/q}-\frac{1}{p}\right)-\frac{q}{\ln^{2}(n+1)}\left[ k^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}\right]\] \[=Aq\left(k^{1/q}-\frac{1}{p}\right)(1-S(k))\]
where for brevity we denoted by
\[S(k)=\frac{k^{1/q}\left(\ln^{2}k-2q\ln k+2q^{2}\right)-2q^{2}}{A\left(k^{1/q }-\frac{1}{p}\right)\ln^{2}(n+1)}.\]
We have
\[(Aq)^{-p/q}M_{i}^{*} <(Aq)^{-p/q}\sum_{k=i}^{n}\frac{1}{k^{p}}\left[Aq\left(k^{1/q}-\frac {1}{p}\right)\right]^{p/q}(1-S(k))^{p/q}\] \[=\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/ q}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{j}S^{j}(k)\] \[=\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/ q}-\frac{p}{q}\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/ q}S(k) \tag{6.4}\] \[+\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/ q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}S^{j}(k)=L_{1}-L_{2}+L_{3}.\]
From (2.13) of Lemma 2.8
\[L_{1}=\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/q}\leq q \left(\frac{1}{i^{1/q}}-\frac{1}{(n+1)^{1/q}}\right). \tag{6.5}\]
From (2.18) of Lemma 2.14 we have for \(2\leq i\leq n\)
\[L_{2}(i\geq 2) =\frac{p}{q}\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p }\right)^{p/q}S(k)\] \[>\frac{p}{qA\ln^{2}(n+1)}\Big{[}\frac{q\left(\ln^{2}i+2q^{2} \right)}{i^{1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k =i}^{n}\frac{1}{k^{1+2/q}} \tag{6.6}\] \[-\frac{\ln^{2}i-q\ln i+3q^{2}/2}{2i^{2/q}}+\frac{2q^{2}}{3i^{3/q} }-\frac{2q^{2}}{3n^{3/q}}\Big{]}\]
and since \(S(1)=0\)
\[L_{2}(i=1) =\frac{p}{q}\sum_{k=1}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p }\right)^{p/q}S(k)\] \[>\frac{p}{qA\ln^{2}(n+1)}\Big{[}\frac{q\left(\ln^{2}2+2q^{2} \right)}{2^{1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k =2}^{n}\frac{1}{k^{1+2/q}} \tag{6.7}\] \[-\frac{\ln^{2}2-q\ln 2+3q^{2}/2}{2^{1+2/q}}+\frac{2}{3}\frac{q^{2} }{2^{3/q}}-\frac{2q^{2}}{3n^{3/q}}\Big{]}\]
for \(i=1\).
For \(k\geq 2\) from (2.1) of Lemma 2.1 and since
\[\left|\frac{2q}{\ln k}-\frac{2q^{2}-2q^{2}k^{-1/q}}{\ln^{2}k}\right|<\frac{c(p )}{\ln k}\]
it follows that for every natural \(j\geq 2\)
\[S^{j}(k) =\left[\frac{k^{1/q}\ln^{2}k}{A\left(k^{1/q}-\frac{1}{p}\right)\ln^ {2}(n+1)}\right]^{j}\left[1-\frac{2qj}{\ln k}+\frac{\left(2q^{2}-2q^{2}k^{-1/q} \right)j}{\ln^{2}k}+O\left(\frac{j^{2}}{\ln^{2}k}\right)\right]\] \[=\left[\frac{k^{1/q}\ln^{2}k}{A\left(k^{1/q}-\frac{1}{p}\right) \ln^{2}(n+1)}\right]^{j}\left[1-\frac{2qj}{\ln k}+O\left(\frac{j^{2}}{\ln^{2}k }\right)\right]\] \[=\left[\frac{k^{1/q}}{A\left(k^{1/q}-\frac{1}{p}\right)\ln^{2}(n +1)}\right]^{j}\left[\ln^{2j}k-2qj\ln^{2j-1}k\right]\] \[+\left[\frac{k^{1/q}\ln^{2}k}{A\left(k^{1/q}-\frac{1}{p}\right) \ln^{2}(n+1)}\right]^{j}O\left(\frac{j^{2}}{\ln^{2}k}\right)\] \[=\left[\frac{k^{1/q}}{A\left(k^{1/q}-\frac{1}{p}\right)\ln^{2}(n +1)}\right]^{j}\left[\ln^{2j}k-2qj\ln^{2j-1}k\right]+\left(\frac{2}{A}\right)^ {j}O\left(\frac{j^{2}}{\ln^{2}(n+1)}\right).\]
Then for \(i\geq 2\)
\[L_{3} =\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p /q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}S^{j}(k)\] \[=\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p /q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{k^{j/q}\left(\ln^{2j}k-2qj \ln^{2j-1}k\right)}{A^{j}\left(k^{1/q}-\frac{1}{p}\right)^{j}\ln^{2j}(n+1)}\] \[+\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{ p/q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\left(\frac{2}{A}\right)^{j}O \left(\frac{j^{2}}{\ln^{2}(n+1)}\right)=L_{31}+L_{32}.\]
Now
\[\left|L_{32}\right| =\left|\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\left(\frac{2}{A}\right) ^{j}O\left(\frac{j^{2}}{\ln^{2}(n+1)}\right)\right| \tag{6.8}\] \[<\frac{c(p)}{\ln^{2}(n+1)}\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1 /q}\right)^{p/q}\sum_{j=2}^{\infty}\left|\binom{p/q}{j}\right|\left(\frac{2}{A }\right)^{j}j^{2}\] \[=\frac{c(p)}{A^{2}\ln^{2}(n+1)}\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}< \frac{c(p)}{i^{1/q}A^{2}\ln^{2}(n+1)}=\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2} \ln^{2}(n+1)}\right).\]
For \(\alpha\geq 1\) and \(0\leq x\leq 1/2\) the next inequality holds
\[\frac{1}{(1-x)^{\alpha}}\leq 1+\alpha 2^{\alpha+1}x\]
which is easy to verify. Then for \(p\geq 2\) and \(j\geq 1\) we have
\[\frac{k^{j/q}}{\left(k^{1/q}-\frac{1}{p}\right)^{j}}=\frac{1}{\left(1-\frac{1}{ pk^{1/q}}\right)^{j}}=1+O\left(\frac{2^{j}j}{k^{1/q}}\right)=1+O\left(\frac{2^{j}j}{ \ln^{2}k}\right).\]
Consequently
\[\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{k^{j/q}\left(\ln^{2 j}k-2qj\ln^{2j-1}k\right)}{A^{j}\left(k^{1/q}-\frac{1}{p}\right)^{j}\ln^{2j}(n+1)}\] \[=\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{\ln^{2j}k-2qj\ln^ {2j-1}k}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right).\]
Then by (2.20) of Lemma 2.16
\[L_{31} =\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p /q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^ {j}\ln^{2j}(n+1)}\] \[+\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p /q}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)\] \[=\sum_{k=i}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p /q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^ {j}\ln^{2j}(n+1)}\] \[+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)\] \[=\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\sum_{j=2}^{\infty}(-1)^{j} \binom{p/q}{j}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^{j}\ln^{2j}(n+1)}O\left(\frac {1}{k^{1/q}}\right)\] \[+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)\] \[=\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\sum_{j=2}^{\infty}(-1)^{j} \binom{p/q}{j}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^{j}\ln^{2j}(n+1)}\] \[+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)\] \[=\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{1}{A^{j}\ln^{2j}( n+1)}\sum_{k=i}^{n}\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}\] \[+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)\] \[=\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{ \ln^{2j}i}{A^{j}\ln^{2j}(n+1)}+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}( n+1)}\right)+O\left(\frac{1}{A^{2}n^{1/q}}\right) \tag{6.9}\]
because
\[\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/q}=\frac{1}{k^{1+1/q}}\left(1- \frac{1}{pk^{1/q}}\right)^{p/q}=\frac{1}{k^{1+1/q}}\left(1+O\left(\frac{1}{k^{1/ q}}\right)\right),\]
\[\left|\binom{p/q}{j}\frac{(-1)^{j}\left(\ln^{2j}k-2qj\ln^{2j-1}k\right)}{A^{j} \ln^{2j}(n+1)}O\left(\frac{1}{k^{1/q}}\right)\right|<\frac{cj^{p/q+1}\ln^{2}k} {A^{j}\ln^{2}(n+1)k^{1/q}}<\frac{cj^{p/q+1}}{A^{j}\ln^{2}(n+1)}\]
and
\[\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\sum_{j=2}^{\infty}\frac{cj^{p/q+1}}{A^{j} \ln^{2}(n+1)}<\frac{c}{A^{2}\ln^{2}(n+1)}\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}< \frac{c}{A^{2}\ln^{2}(n+1)i^{1/q}}.\]
Consequently for \(i\geq 2\)
\[L_{3}(i\geq 2)=\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j} \frac{\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2} \ln^{2}(n+1)}\right)+O\left(\frac{1}{A^{2}n^{1/q}}\right). \tag{6.10}\]
Since \(S(1)=0\) we have for \(i=1\)
\[\begin{split} L_{3}(i=1)&=\sum_{k=1}^{n}\frac{1}{k ^{p}}\left(k^{1/q}-\frac{1}{p}\right)^{p/q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p /q}{j}S^{j}(k)\\ &=\sum_{k=2}^{n}\frac{1}{k^{p}}\left(k^{1/q}-\frac{1}{p}\right)^ {p/q}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}S^{j}(k)\\ &=\frac{q}{2^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j} \frac{\ln^{2j}2}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right) +O\left(\frac{1}{A^{2}n^{1/q}}\right)\\ &=\frac{q}{2^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j} \frac{\ln^{2j}2}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right) \end{split} \tag{6.11}\]
We consider the cases \(i=1\) and \(i\geq 2\) separately.
**Case \(i=1\).**
From (6.5), (6.6) and (6.11) we obtain
\[(Aq)^{-p/q}M_{1}^{*}\] \[<q-\frac{p}{A\ln^{2}(n+1)}\left[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+ \frac{2}{3}\frac{q}{2^{3/q}}-2q\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}-\frac{\ln^{2}2 -q\ln 2+3q^{2}/2}{q2^{1+2/q}}\right]\] \[\quad+\frac{q}{2^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}{p/q\choose j} \frac{\ln^{2j}2}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)-T\] \[=q\sum_{j=0}^{\infty}(-1)^{j}{p/q\choose j}\frac{\ln^{2j}2}{A^{j} \ln^{2j}(n+1)}-q\left(1-\frac{1}{2^{1/q}}\right)\sum_{j=2}^{\infty}(-1)^{j}{p/ q\choose j}\frac{\ln^{2j}2}{A^{j}\ln^{2j}(n+1)}\] \[-\frac{p}{A\ln^{2}(n+1)}\left[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+ \frac{2}{3}\frac{q}{2^{3/q}}-\ln^{2}2-2q\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}-\frac {\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2/q}}\right]\] \[+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)-T\] \[=q\sum_{j=0}^{\infty}(-1)^{j}{p/q\choose j}\frac{\ln^{2j}2}{A^{j} \ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)-T\] \[-\frac{p}{A\ln^{2}(n+1)}\left[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+ \frac{2}{3}\frac{q}{2^{3/q}}-\ln^{2}2-2q\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}-\frac {\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2/q}}\right]\]
where
\[T=\frac{q}{(n+1)^{1/q}}-\frac{p}{A\ln^{2}(n+1)}\left[\frac{\ln^{2}n+2q^{2}}{n ^{1/q}}+\frac{2q}{3n^{3/q}}\right].\]
It is obvious that by taking \(A=A(p)\) big enough we can make \(T\) positive. By Lemma 2.13 there is a constant \(c(p)\) such that
\[\frac{\ln^{2}2+2q^{2}}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}-\ln^{2}2-\sum_{k= 2}^{n}\frac{2q}{k^{1+2/q}}-\frac{\ln^{2}2-q\ln 2+3/2q^{2}}{q2^{1+2/q}}>c(p).\]
Then
\[(Aq)^{-p/q}M_{1}^{*}<q\sum_{j=0}^{\infty}(-1)^{j}{p/q\choose j}\frac{\ln^{2j} 2}{A^{j}\ln^{2j}(n+1)}+O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)-\frac{pc(p)}{ A\ln^{2}(n+1)}.\]
Again, by taking \(A=A(p)\) big enough we have
\[(Aq)^{-p/q}M_{1}^{*}<q\sum_{j=0}^{\infty}(-1)^{j}{p/q\choose j}\frac{\ln^{2j} 2}{A^{j}\ln^{2j}(n+1)}-\frac{c(p)}{A\ln^{2}(n+1)}\]
and from (6.3) and (6.11) we get
\[M_{1}<\left(\frac{p}{p-1}\right)^{p}-\frac{c(A,p)}{\ln^{2}(n+1)}=\left(\frac{p }{p-1}\right)^{p}-\frac{c(p)}{\ln^{2}(n+1)}.\]
**Case \(i\geq 2\).**
From (6.5), (6.6), (6.10) we obtain
\[(Aq)^{-p/q}M_{i}^{*}<q\left[\frac{1}{i^{1/q}}-\frac{1}{(n+1)^{1/q}}\right]\] \[-\frac{p}{qA\ln^{2}(n+1)}\Big{[}\frac{q\left(\ln^{2}i+2q^{2} \right)}{i^{1/q}}-\frac{q\left(\ln^{2}n+2q^{2}\right)}{n^{1/q}}-2q^{2}\sum_{k=i }^{n}\frac{1}{k^{1+2/q}}\] \[-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2i^{2/q}}+\frac{2q^{2}}{3i^{3/q} }-\frac{2q^{2}}{3n^{3/q}}\Big{]}+O\left(\frac{1}{A^{2}n^{1/q}}\right)\] \[+\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac {\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}( n+1)}\right)\] \[=\frac{q}{i^{1/q}}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{j}\frac {\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}-\frac{q}{(n+1)^{1/q}}\] \[-\frac{p}{A\ln^{2}(n+1)}\Big{[}\frac{2q^{2}}{i^{1/q}}-\frac{\ln^{ 2}n+2q^{2}}{n^{1/q}}-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}+\frac{2q}{3i^{3/q}}- \frac{2q}{3n^{3/q}}\] \[-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{2/q}}\Big{]}+\frac{1}{i^{1/ q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)+O\left(\frac{1}{A^{2}n^{1/q}}\right)\] \[=\frac{q}{i^{1/q}}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{j}\frac {\ln^{2j}(i+1)}{A^{j}\ln^{2j}(n+1)}-\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{ j}\binom{p/q}{j}\frac{\ln^{2j}(i+1)-\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}\] \[-\frac{p}{A\ln^{2}(n+1)}\Big{[}\frac{2q^{2}-\ln^{2}(i+1)+\ln^{2} i}{i^{1/q}}-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\] \[-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{2/q}}+\frac{2q}{3i^{3/q}} \Big{]}+\frac{1}{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right)-T\]
where
\[T=\frac{q}{(n+1)^{1/q}}-\frac{p}{A\ln^{2}(n+1)}\left[\frac{\ln^{2}n+2q^{2}}{n ^{1/q}}+\frac{2q}{3n^{3/q}}\right]+O\left(\frac{1}{A^{2}n^{1/q}}\right).\]
By taking \(A=A(p)\) big enough we can make \(T\) positive. Now
\[\left|\frac{q}{i^{1/q}}\sum_{j=2}^{\infty}(-1)^{j}\binom{p/q}{j}\frac{\ln^{2j} (i+1)-\ln^{2j}i}{A^{j}\ln^{2j}(n+1)}\right|=\frac{1}{i^{1/q}}O\left(\frac{1}{A ^{2}\ln^{2}(n+1)}\right).\]
Also we have
\[\ln^{2}(i+1)-\ln^{2}i<\frac{3}{4}\]
and
\[\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\leq\frac{1}{i^{1+2/q}}+\int_{i}^{n}\frac{dx} {x^{1+2/q}}=\frac{1}{i^{1+2/q}}+\frac{q}{2i^{2/q}}\]
and consequently
\[\frac{2q^{2}-\ln^{2}(i+1)+\ln^{2}i}{i^{1/q}}-2q\sum_{k=i}^{n}\frac{1} {k^{1+2/q}}-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{2/q}}+\frac{2q}{3i^{3/q}}\] \[>\frac{1}{i^{1/q}}\left[2q^{2}-\frac{3}{4}+\frac{2q}{3i^{2/q}}- \frac{2q}{i^{1+1/q}}-\frac{q^{2}}{i^{1/q}}-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^ {1/q}}\right].\]
By Lemma 2.12 there is a constant \(c(p)>0\) such that
\[\frac{2q^{2}-\ln^{2}(i+1)+\ln^{2}i}{i^{1/q}}-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q} }-\frac{\ln^{2}i-q\ln i+3/2q^{2}}{2qi^{2/q}}+\frac{2q}{3i^{3/q}}>\frac{c(p)}{i ^{1/q}}.\]
Then
\[(Aq)^{-p/q}M_{i}^{*}\] \[<\frac{q}{i^{1/q}}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{j}\frac {\ln^{2j}(i+1)}{A^{j}\ln^{2j}(n+1)}-\frac{pc(p)}{A\ln^{2}(n+1)i^{1/q}}+\frac{1 }{i^{1/q}}O\left(\frac{1}{A^{2}\ln^{2}(n+1)}\right).\]
By taking \(A=A(p)\) big enough we obtain
\[M_{i}^{*}<\frac{A^{p/q}q^{p}}{i^{1/q}}\sum_{j=0}^{\infty}(-1)^{j}\binom{p/q}{ j}\frac{\ln^{2j}(i+1)}{A^{j}\ln^{2j}(n+1)}-\frac{c(A,p)}{i^{1/q}\ln^{2}(n+1)}\]
and from (6.3) and (6.11) we get
\[M_{i}<q^{p}-\frac{c(A,p)}{\ln^{2}(n+1)}=\left(\frac{p}{p-1}\right)^{p}-\frac{ c(A,p)}{\ln^{2}(n+1)}=\left(\frac{p}{p-1}\right)^{p}-\frac{c(p)}{\ln^{2}(n+1)}.\]
|
2309.14397 | Predicting environment effects on breast cancer by implementing machine
learning | The biggest Breast cancer is increasingly a major factor in female
fatalities, overtaking heart disease. While genetic factors are important in
the growth of breast cancer, new research indicates that environmental factors
also play a substantial role in its occurrence and progression. The literature
on the various environmental factors that may affect breast cancer risk,
incidence, and outcomes is thoroughly reviewed in this study report. The study
starts by looking at how lifestyle decisions, such as eating habits, exercise
routines, and alcohol consumption, may affect hormonal imbalances and
inflammation, two important factors driving the development of breast cancer.
Additionally, it explores the part played by environmental contaminants such
pesticides, endocrine-disrupting chemicals (EDCs), and industrial emissions,
all of which have been linked to a higher risk of developing breast cancer due
to their interference with hormone signaling and DNA damage. Algorithms for
machine learning are used to express predictions. Logistic Regression, Random
Forest, KNN Algorithm, SVC and extra tree classifier. Metrics including the
confusion matrix correlation coefficient, F1-score, Precision, Recall, and ROC
curve were used to evaluate the models. The best accuracy among all the
classifiers is Random Forest with 0.91% accuracy and ROC curve 0.901% of
Logistic Regression. The accuracy of the multiple algorithms for machine
learning utilized in this research was good, which is important and indicates
that these techniques could serve as replacement forecasting techniques in
breast cancer survival analysis, notably in the Asia region. | Muhammad Shoaib Farooq, Mehreen Ilyas | 2023-09-25T15:54:03Z | http://arxiv.org/abs/2309.14397v1 | # Predicting environment effects on breast cancer by implementing machine learning
###### Abstract
Breast cancer is increasingly a major factor in female fatalities, overtaking heart disease. While genetic factors are important in the development of breast cancer, recent research indicates that environmental factors also play a substantial role in its occurrence and progression. This study thoroughly reviews the literature on the various environmental factors that may affect breast cancer risk, incidence, and outcomes. It starts by looking at how lifestyle decisions, such as eating habits, exercise routines, and alcohol consumption, may affect hormonal imbalances and inflammation, two important factors driving the development of breast cancer. Additionally, it explores the part played by environmental contaminants such as pesticides, endocrine-disrupting chemicals (EDCs), and industrial emissions, all of which have been linked to a higher risk of developing breast cancer through their interference with hormone signaling and through DNA damage. Machine learning algorithms are used to make the predictions: Logistic Regression, Random Forest, the KNN algorithm, SVC, and the extra trees classifier. Metrics including the confusion matrix, Matthews correlation coefficient, F1-score, precision, recall, and the ROC curve were used to evaluate the models. The best accuracy among all the classifiers is achieved by Random Forest, with an accuracy of 0.91, and the best ROC-AUC of 0.901 is achieved by Logistic Regression. The accuracy of the multiple machine learning algorithms used in this research was good, which indicates that these techniques could serve as alternative forecasting techniques in breast cancer survival analysis, notably in the Asia region.
**Keywords:** Pollution and Breast Cancer, Disease Prediction, SVC, Chemical Toxicants, Machine Learning Models, Breast Cancer Prediction
## I Introduction
A tumor is an irregular growth of cells that frequently spreads to other parts of the body. Knowing how and at what stage a cancer formed in a given patient is important, since there are many different varieties of cancer, each of which is divided into numerous classes and categories [1]. Humans are now more susceptible than ever to developing several types of cancer. Cancer is a primary cause of death globally, accounting for roughly one out of every six fatalities. In terms of new cases, breast cancer is the most widespread cancer. Over 40,920 women died from breast cancer alone in 2018. The World Health Organization (WHO) estimates that 2.90 million women worldwide receive a breast cancer diagnosis each year. The term cancer covers more than 100 diseases that affect various parts of the human body [2]. While it is well known that genetic variables can increase a woman's risk of developing breast cancer, recent studies have shown how important environmental factors are to the progression of the condition. Numerous environmental factors, such as exposure to air pollution, endocrine-disrupting chemicals, socioeconomic status, and geographic location, affect the chance of developing breast cancer. Scientists have discovered a variety of approaches to predict the efficacy of therapy in the early stages of cancer, and the advancement of medicine and healthcare technology has produced a plethora of data on this matter. Here, we propose a machine-learning approach centered on patient data previously gathered from a large number of patients. In recent years, there has been a surge in applying machine learning techniques in the healthcare area as a means of making accurate diagnoses and classifying patients' conditions. More recently, machine learning strategies have been employed in healthcare to better aid in the detection and treatment of cancer. Tumors may now often be diagnosed by screenings and diagnostics before the patient even realizes anything is wrong.
This section looks at the particular environmental elements that machine learning algorithms take into account, the difficulties in gathering correct environmental data, and techniques to assess how much each element contributes to the prediction of breast cancer risk [11, 12]. Machine learning algorithms have recently shown outstanding ability in evaluating big datasets and spotting complex patterns. Early detection and tailored treatment could be transformed by using these effective tools to forecast breast cancer risk based on changes in environmental factors. This study examines the integration of machine learning algorithms with environmental data to predict breast cancer risk, offering insight into how these cutting-edge methods can help identify high-risk populations and enhance preventive measures [9, 10]. Machine learning algorithms have the potential to completely change the way we provide care for patients with breast cancer. These computational techniques can examine enormous datasets that cover a wide range of environmental characteristics, such as air quality, socioeconomic status, accessibility to healthcare services, and more. By using these algorithms to analyze such data, we can find hidden patterns, correlations, and risk factors that may have eluded detection using only conventional statistical techniques. This newly discovered knowledge of how the environment influences breast cancer survival
improves therapeutic decision-making and provides hope for focused interventions and individualized treatment regimens, ultimately benefiting the lives of numerous breast cancer patients. Predictions are generated using learning-based classification models, and their performance is measured against test data; this conclusion was reached by comparing and analyzing several classification methods. Random forest algorithms are a great tool for accurate machine learning because of their flexibility and simplicity of use, and they allow several classifiers to be combined quickly in an ensemble analysis even on large data collections. In this study, we suggest a patient-specific use of machine learning to increase the survival rate for breast cancer patients. The performance of the suggested algorithms was assessed through precision, sensitivity, specificity, and AUC. Here, five machine learning techniques, the support vector classifier (SVC), the KNN algorithm, the extra trees (ET) classifier, logistic regression (LR), and Random Forest, were built to identify whether a patient will survive or not. These five algorithms were selected because of their superior performance in identifying cancer from medical datasets in prior work.
The novelty of using machine learning algorithms to analyze the influence of the environment on breast cancer patient survival lies in their potential to transform both clinical practice and academic healthcare research. Traditional epidemiological studies frequently have difficulty capturing the complex and nuanced interactions between environmental factors and serious diseases like breast cancer. Machine learning, on the other hand, enables the integration of many variables, even those with non-linear and subtle interactions, providing a more thorough viewpoint. By examining these intricate relationships, we can discover previously unknown risk factors, protective components that are unique to particular patients or subpopulations, and environmental factors that affect the individual patient. This not only improves our awareness of the complex dynamics at play but also opens the door for tailored interventions, early detection techniques, and more efficient treatment regimens, advancing breast cancer research and improving patient outcomes.
The rest of this article is structured as follows: related work is discussed in Section 2; the material and methods are briefly described in Section 3; a summary of the experimental results is presented in Section 4; the results are discussed in Section 5; and Section 6 concludes our work. The paper ends with a list of references.
## 2 Related Work
Mohammad Nazmul Haque et al. [4] built a breast cancer patient record from the SEER Program of the National Cancer Institute, which provides information about cancer in the general population and was updated in November 2017. After excluding patients with unknown tumor size, unknown numbers of examined or positive regional lymph nodes, and those with an overall survival of less than one month, 4024 patients remained. Eight popular machine-learning techniques for forecasting mortality from breast cancer recurrence were evaluated on this database: logistic regression, decision tree, random forest, K-nearest neighbor, support vector machine, gradient boosting classifier, AdaBoost classifier, and voting classifier. The random forest classifier was the most accurate of all the algorithms (94.64 percent). Logistic regression had an accuracy of 81%, and the support vector machine 85%. The voting classifier model successfully determined 88% of the traits connected to breast cancer, giving an accuracy of 88%, which is superior to logistic regression and the support vector machine. With an accuracy of 89%, the decision tree classifier outperforms LR, SVM, and the voting classifier. The accuracy of the K-NN model is 84%, which is higher than that of logistic regression. The gradient boosting model has a 92 percent accuracy rate, and the AdaBoost model matches the decision tree classifier at 89%.
Aditi Kajala et al. [5] use the SEER (Surveillance, Epidemiology, and End Results) dataset. The collection contains 4024 records of breast cancer participants with 16 attributes: 3408 records of alive cases and 616 records of deaths. During data cleaning, label encoding is employed to transform all categorical attributes into numerical values. Interpreting the predictions of the machine learning algorithms helps to understand how different attributes affect the prediction; this can be visualized as a partial-dependence plot or a summary plot of SHAP values. Data sampling techniques expand or remove some minority or majority samples while maintaining the necessary relevant information: oversampling replicates samples from minority classes, while under-sampling deletes samples from the dominant class, both in order to balance the dataset. In this investigation, four oversampling techniques (SMOTE, ADASYN, random oversampling, and borderline SMOTE) together with the AllKNN under-sampling technique were used to balance the dataset. Four machine learning methods, Decision Tree, Random Forest, KNN, and SVM, are compared for performance. The effectiveness of these models was assessed by calculating the precision and AUC score on the dataset used. The results demonstrated that the SVM model with a random oversampler outperformed all others, obtaining precision and AUC scores of 1 and 0.9935, respectively.
Aqsa Rahim et al. [6] review the numerous machine learning models that various authors have presented and their successes. In the method outlined in their work, feature selection first eliminates highly correlated features: if several characteristics are highly correlated, only one of them is kept. This results in a smaller set of 16 features. To reduce the number of features even further, recursive feature elimination is applied, so that the classification models receive the top 11 features for classification. Random forest achieves the highest level of accuracy. The suggested approach tested the impact of the feature space through feature selection: the feature space was condensed to 11 features using a mix of recursive feature elimination (RFE) and correlation-based feature selection, which is especially crucial to address the issue of overfitting in machine learning. The results of several proposed models are contrasted, using techniques such as SVM, Random Forest, Gradient Boosting, Artificial Neural Network, and a Multilayer Perceptron model. The Multilayer Perceptron model, with an accuracy of 99.12%, was the best algorithm.
Leili Tapak et al. [7] used a dataset comprising the medical records of 550 breast cancer patients. AdaBoost, Support Vector Machine (SVM), Least-squares SVM (LSSVM), Naive Bayes (NB), Random Forest (RF), Adabag, Logistic Regression (LR), and Linear Discriminant Analysis (LDA) were utilized to forecast survival and metastasis of breast cancer. Overall accuracy, likelihood ratio, sensitivity, and specificity were used to gauge how well the approaches worked. 850 patients were still living, and 85% of them did not experience metastases. In comparison to other techniques, the SVM and LDA have a higher sensitivity (73%), with an average overall specificity across all methodologies of 94%. The SVM and LDA also outperform the other algorithms in terms of overall accuracy (93%). The LR and LDA provided the greatest overall accuracy (86%) for identifying metastasis, whereas the maximum specificity (98%) and sensitivity (36%) were achieved by the NB and RF, respectively.
Mogana Darshini Gangagayah et al. [8] use a sizable hospital-based dataset of 8942 breast cancer patient records provided by the Breast Cancer Registry of the University Malaya Medical Centre (UMMC), Kuala Lumpur, Malaysia. The entire dataset (referred to as "all data"), which included 8066 observations and 23 survival-rate predictors, was subjected to modeling. Six techniques were compared: support vector machine, decision tree, random forest, neural networks, extreme gradient boosting, and logistic regression (e1071). For model evaluation with all the techniques, the dataset was split into a training set and a testing set. Accuracy, sensitivity, specificity, and precision were measured for each model, and the Matthews correlation coefficient and the area under the receiver operating characteristic curve (AUC) were also computed. In terms of model accuracy and the calibration measure, all methods produced close results, with decision trees producing the lowest accuracy (79.8%) and random forests the highest (82.7%).
## 3 Material and Method
This section is divided into three parts. The first covers the patient dataset description, preprocessing, and feature selection. In the second stage, prediction models based on different ML algorithms are evaluated in order to find the model that performs best. Lastly, once the performance of each model has been assessed using the evaluation criteria, the best predictive model is selected. Figure 1 depicts how these elements are organized for selecting the optimal model.
### Dataset Description:
\begin{tabular}{|l|l|l|} \hline Sr.No. & Attribute & Range \\ \hline
1 & Age & 30-69 \\
2 & Race & White, black, other \\
3 & Marital Status & Divorced, married, single, widow, separated \\
4 & T stage & T1, T2, T3, T4 \\
5 & N stage & N1, N2, N3 \\
6 & 6th stage & 1A, 2B, 3A, 3B, 3C \\
7 & Differentiate & Well, moderately, poorly, undifferentiated \\
8 & Grade & 1, 2, 3, 4 grade \\
9 & Stage & Regional, distant \\
10 & Tumor size & \textless{}36mm, \textgreater{}105mm \\
11 & Estrogen status & Positive, negative \\
12 & Progesterone status & Positive, negative \\
13 & Regional nodes examined & Total examined \\
14 & Regional nodes positive & Positive examined \\
15 & Survival months & \\
16 & Status & Alive, dead \\ \hline \end{tabular}
The collection includes many attributes related to the patients' medical information, and research and analysis were carried out on this set of data. The information was obtained from the Kaggle website and contains both discrete and continuous characteristics. Table 1 provides a detailed description of the data.
### Dataset preprocessing:
Prior to classification, the dataset is pre-processed. Preprocessing is used to clean the data and improve its quality for value prediction; cleaning and normalizing are the main preprocessing tasks. During data cleaning, label encoding is employed to transform all categorical attributes into numerical values. No attribute in this dataset contains missing values. Because several measuring units are used, a data normalization strategy (bringing values between 0 and 1) has been applied.
\(\blacktriangleright\) The data set is made up of 16 columns and 4024 rows.
\(\blacktriangleright\) The dataset contains both continuous and discrete attributes, and the problem is one of classification.
\(\blacktriangleright\) Categorical attributes are encoded numerically.
\(\blacktriangleright\) Null values are checked for.
\(\blacktriangleright\) Outliers are examined.
\(\blacktriangleright\) The correlations between the variables are verified.
\(\blacktriangleright\) The data is divided into dependent and independent variables.
\(\blacktriangleright\) The target is categorized into two classes (1 = alive, 2 = dead).
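A minimal sketch of these preprocessing steps in Python is given below. The file path and the exact column names (e.g. `Status`, taken from Table 1) are assumptions for illustration and may differ from the actual Kaggle file; the encoding and scaling choices are not necessarily the authors' exact code.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split

# Load the SEER-based breast cancer dataset (path is hypothetical).
df = pd.read_csv("breast_cancer_seer.csv")

# Label-encode every categorical column into numerical values.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

# Separate independent variables (X) from the dependent variable (y = Status).
X = df.drop(columns=["Status"])
y = df["Status"]  # two classes: alive / dead

# Normalize features to the [0, 1] range.
X = pd.DataFrame(MinMaxScaler().fit_transform(X), columns=X.columns)

# 80/20 train/test split, as used in the experiments.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```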
#### 3.2.1 Classifiers:
An essential element of machine learning and data analysis is the classifier. It performs the role of an intelligent decision-maker, tasked with classifying instances or data points into specified classes or categories in accordance with their traits and attributes.
Because they are among the most popular classification algorithms for healthcare diagnosis and can produce good results, the following machine learning techniques are used in this section to predict the survival status of patients: the KNN algorithm, logistic regression, random forest, SVC, and the extra trees classifier.
#### 3.2.1.1 Logistic regression:
Logistic regression is a typical machine learning classification technique used to predict the probability of classes given a collection of independent variables. Logistic regression models generally compute a weighted sum of the input features to arrive at a probability estimate for the output. For a binary classification problem, the output of logistic regression is always in the range (0, 1).
The parameters \(\theta\) are learned from the training data through optimization, and a sample is assigned to the positive class when the model output is close to 1.
\[h_{\theta}(x)=\frac{1}{1+e^{-\theta^{T}x}}\]
If we want optimal performance on our work, we must use a loss function (also known as a cost or objective function). It is conventional to use the log-likelihood loss function in logistic regression.
Here, \(m\) is the number of samples in the training data,
\(y^{(i)}\) is the label of the \(i\)-th sample, and
\(p^{(i)}\) is the predicted value for the \(i\)-th sample (Jessica, 2022).
\[J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\left(p^{(i)}\right)+\left(1-y^{(i)}\right)\log\left(1-p^{(i)}\right)\right]\]
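As an illustrative (not author-provided) implementation, the logistic function and cross-entropy loss above can be written directly, and the scikit-learn estimator can be fitted on the split from the preprocessing sketch (`X_train`, `X_test`, `y_train`, `y_test` are assumed to come from there):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def sigmoid(z):
    # h_theta(x) = 1 / (1 + exp(-theta^T x))
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y_true, p_pred, eps=1e-12):
    # J(theta) = -(1/m) * sum(y * log(p) + (1 - y) * log(1 - p))
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("LR accuracy:", accuracy_score(y_test, lr.predict(X_test)))
print("LR log-loss:", cross_entropy(np.asarray(y_test), lr.predict_proba(X_test)[:, 1]))
```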
#### 3.2.1.2 Extra tree:
For feature selection, the suggested methodology incorporates an embedded technique known as the extra trees classifier, which acts as a bridge between feature extraction and classification. Feature selection is the process of automatically identifying the significant features that provide the most information about the prediction variable. Processing irrelevant features reduces the accuracy of the model and also lengthens computation time. In this study, the duration of the classification process has been significantly decreased by feature selection using the extra trees classifier [3].
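A minimal sketch of embedded feature selection with an extra-trees ensemble is shown below; the estimator settings are illustrative assumptions, and `X_train`/`y_train` come from the earlier preprocessing sketch.

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

# Rank features by importance with an extra-trees ensemble.
et = ExtraTreesClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# Keep only the features whose importance exceeds the mean importance.
selector = SelectFromModel(et, prefit=True)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)
print("Selected features:", list(X_train.columns[selector.get_support()]))
```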
#### 3.2.1.3 Random Forest:
The RF algorithm is based on bagging. Whereas traditional decision trees are trained on the same attributes and the same subset of the data, RF randomly picks both. Each trained tree produces its own prediction for the same input, and the final result is obtained by combining the trees' outputs, frequently via plurality voting or the mean. The random distribution of features increases the variety of the individual classifiers, improving the generalizability of the model.
Figure 1: Flow chart diagram of classifiers
Figure 3: Diagram of Extra Tree
Figure 2: Diagram of logistic regression
To convert the feature importances to values between 0 and 1, each one is divided by the total importance of all features:
\[normfi_{i}=\frac{fi_{i}}{\sum_{j\in\text{all features}}fi_{j}}\]
The importance of each feature is then estimated by averaging its normalized importance across all of the trees in the Random Forest, i.e., by summing the per-tree values and dividing by the total number of trees:
\[RFfi_{i}=\frac{\sum_{j\in\text{all trees}}normfi_{ij}}{T}\]
where
\(T\) = total number of trees,
\(RFfi_{i}\) = the Random Forest model's estimate of the importance of feature \(i\), computed over all trees,
\(normfi_{ij}\) = the normalized importance of feature \(i\) in tree \(j\).
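In scikit-learn this averaged, normalized importance is exposed directly; a brief sketch (reusing `X_train`/`y_train` from the preprocessing sketch, with illustrative hyperparameters) follows.

```python
from sklearn.ensemble import RandomForestClassifier

# Train a random forest and read off the feature importances.
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# feature_importances_ corresponds to RFfi_i: per-tree normalized importances
# averaged over all trees, summing to 1 across features.
for name, imp in sorted(zip(X_train.columns, rf.feature_importances_),
                        key=lambda t: t[1], reverse=True):
    print(f"{name}: {imp:.3f}")
```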
#### 3.2.1.4 K-Nearest Neighbor:
Many classification problems can be solved using the supervised machine learning technique known as the k-nearest neighbor method (KNN). KNN computes the distance between a query and each data sample to find the K most similar instances, and then uses majority voting (or an average of the labels) to decide on the final classification. Equation 1 gives the Euclidean distance between a query point and a training sample.
\[d(x,y)=\sqrt{\sum_{l=1}^{k}(x_{l}-y_{l})^{2}} \tag{1}\]
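A short sketch of this distance and the majority-vote rule (an illustration, not the authors' implementation):

```python
import numpy as np

def euclidean(x, y):
    # Equation (1): d(x, y) = sqrt(sum_l (x_l - y_l)^2)
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def knn_predict(query, X_ref, y_ref, k=5):
    # Majority vote among the k training samples closest to the query.
    dists = [euclidean(query, x) for x in np.asarray(X_ref)]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(y_ref)[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```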
#### 3.2.1.5 Support Vector Classifier:
The support vector classifier (SVC) is one of the top supervised machine learning approaches for classification, with the benefit of being less prone to overfitting. In the vast majority of instances, it also gives improved classification accuracy. Because the basic SVM is binary, multi-class classification is done by applying the "one-versus-one" method: if there are \(n\) classes, SVC creates \(n(n-1)/2\) classifiers, each of which handles data from two of the classes. Given an input feature vector, the SVC constructs a decision surface [9].
## 4 Experiment and Result:
A variety of approaches are used to analyze and forecast breast cancer caused by environmental effects. Analytics-based strategies can provide an accurate prediction for a specific disease by grouping individuals with similar symptoms.

To study breast cancer patient survival, we evaluated the KNN, SVC, Logistic Regression, Extra Trees Classifier, and RF machine learning classification algorithms. The dataset is split into 80% training data and 20% test data.
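A compact sketch of this comparison is given below; the hyperparameters are illustrative assumptions rather than the authors' settings, and the split variables come from the earlier preprocessing sketch.

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Extra Trees": ExtraTreesClassifier(n_estimators=100, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVC": SVC(probability=True, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: acc={accuracy_score(y_test, pred):.3f}, "
          f"f1={f1_score(y_test, pred):.3f}, auc={roc_auc_score(y_test, proba):.3f}")
```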
### Evaluation Matrix:
Accuracy, recall, precision, F1, MCC, and ROC AUC are some of the helpful metrics that may be applied to both positive and negative sample predictions. The accuracy score indicates the percentage of cases and controls for which the outcome (survival status) was accurately identified. The difference between precision and recall is that precision measures what fraction of the instances predicted as positive truly belong to the positive class, while recall measures how much of the true positive class was recovered. Formulas to compute these metrics are provided below.
Figure 4: Diagram of Random forest
Figure 5: Diagram of K-nearest neighbor
Figure 6: Diagram of Support Vector Classifier
**ACCURACY:**
The degree to which a system can make accurate predictions is one metric by which to evaluate it: the reliability of our model is determined by how often its predictions are correct. The following formula may be used to evaluate the accuracy of a yes/no prediction.
\[\text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}\]
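These metrics can be computed with scikit-learn from a fitted model's predictions; a minimal sketch, reusing the random forest `rf` and the `X_test`/`y_test` split from the earlier sketches, is shown below.

```python
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, matthews_corrcoef, roc_curve, auc)

pred = rf.predict(X_test)
proba = rf.predict_proba(X_test)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print("Accuracy :", (tp + tn) / (tp + tn + fp + fn))
print("Precision:", precision_score(y_test, pred))
print("Recall   :", recall_score(y_test, pred))
print("F1-score :", f1_score(y_test, pred))
print("MCC      :", matthews_corrcoef(y_test, pred))

fpr, tpr, _ = roc_curve(y_test, proba)
print("ROC AUC  :", auc(fpr, tpr))
```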
For logistic regression, the true-positive entry of the confusion matrix is 6.7e+02, the false negative rate (FNR) is 22%, the false positive rate (FPR) is 11%, and the true negative rate (TNR) is 53%.
For the extra trees classifier, the true-positive entry is 6.7e+02, the false negative rate (FNR) is 67%, the false positive rate (FPR) is 16%, and the true negative rate (TNR) is 53%.
Figure 8: **Models Evaluation Based on AUC-ROC Curves**

In our study, we employed several machine-learning methods to forecast the patient survival rate for breast cancer. ROC curves are widely used to visually depict the trade-off between clinical sensitivity and specificity for each proposed cut-off of a test or combination of tests, and the usefulness of the test in question is summarized by the area under the ROC curve, which always falls between 0 and 1. The extra trees classifier produced a ROC AUC of 0.901, and the Logistic Regression classifier likewise achieves the highest ROC AUC of 0.901; the graph displays the other ROC results.
## 5 Discussion
The examination of environmental influences on breast cancer using machine learning models is a cutting-edge line of research: machine learning is a potent technique for analyzing the complex connections between the environment and breast cancer incidence, prognosis, and treatment response. This discussion is based on the experimental work detailed above. The accuracy of a classifier is a first measure of its classification quality, and over the years several such classification schemes have been proposed and implemented. The "accuracy" of our trained algorithm measures how often it correctly guesses which category each sample belongs to. Classifier accuracy is compared in Table 2. Our experiments indicate that, when used for this classification task, random forest outperforms the other methods (accuracy of 0.91). Among the characteristics that set random forests apart: they often achieve high accuracy, they handle massive amounts of data well, they provide importance estimates for a number of useful attributes that may be used to group items, and the trained forests can be stored for later reuse. When there are more explanatory factors than noise variables, logistic regression is effective; otherwise, a random forest should be utilized. Logistic regression uses path analysis to define the directional connections between a collection of variables, whereas a random forest is a top-down induction model, like an ensemble of decision trees (CARTs), used for classification and prediction.
## 6 Conclusion and Future Work
This paper has examined the promising combination of breast cancer data and machine learning algorithms to forecast the risk of breast cancer as the environment changes. Understanding the interactions between environmental factors and breast cancer risk is essential for enhancing prevention and early detection measures, since breast cancer remains a serious worldwide health concern. By incorporating a variety of environmental parameters, including air pollution, endocrine-disrupting substances, socioeconomic situation, and geographic location, researchers can develop prediction models. Compared to the other methods, the random forest produced somewhat higher evaluation accuracy. In this study, the accuracies of the RF, ET, K-NN, SVM, and LR algorithms were 0.91, 0.89, 0.90, and 0.89, respectively, so all of the algorithms' accuracies appeared to be close. This study identified the efficiency of the model and the critical environmental factors affecting breast cancer patients' survival rates, which may be used in clinical practice, particularly in the context of Asia.
|
2308.16871 | The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender
Characterisation in 55 Languages | Gender biases in language generation systems are challenging to mitigate. One
possible source for these biases is gender representation disparities in the
training and evaluation data. Despite recent progress in documenting this
problem and many attempts at mitigating it, we still lack shared methodology
and tooling to report gender representation in large datasets. Such
quantitative reporting will enable further mitigation, e.g., via data
augmentation. This paper describes the Gender-GAP Pipeline (for Gender-Aware
Polyglot Pipeline), an automatic pipeline to characterize gender representation
in large-scale datasets for 55 languages. The pipeline uses a multilingual
lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it to report gender representation in WMT training data and
development data for the News task, confirming that current data is skewed
towards masculine representation. Having unbalanced datasets may indirectly
optimize our systems towards outperforming one gender over the others. We
suggest introducing our gender quantification pipeline in current datasets and,
ideally, modifying them toward a balanced representation. | Benjamin Muller, Belen Alastruey, Prangthip Hansanti, Elahe Kalbassi, Christophe Ropers, Eric Michael Smith, Adina Williams, Luke Zettlemoyer, Pierre Andrews, Marta R. Costa-jussà | 2023-08-31T17:20:50Z | http://arxiv.org/abs/2308.16871v1 | # The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline
###### Abstract
Gender biases in language generation systems are challenging to mitigate. One possible source for these biases is gender representation disparities in the training and evaluation data. Despite recent progress in documenting this problem and many attempts at mitigating it, we still lack shared methodology and tooling to report gender representation in large datasets. Such quantitative reporting will enable further mitigation, e.g., via data augmentation. This paper describes the Gender-GAP Pipeline (for **G**ender-**A**ware **P**olyglot Pipeline), an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages. The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text. We showcase it to report gender representation in WMT1 training data and development data for the News task, confirming that current data is skewed towards masculine representation. Having unbalanced datasets may indirectly optimize our systems towards outperforming one gender over the others. We suggest introducing our gender quantification pipeline in current datasets and, ideally, modifying them toward a balanced representation.2
Footnote 1: [http://www.2.statmt.org/wmt23/](http://www.2.statmt.org/wmt23/)
Footnote 2: The Gender-GAP pipeline is available at [https://github.com/facebookresearch/ResponsibleNLP/tree/main/gender_gap_pipeline](https://github.com/facebookresearch/ResponsibleNLP/tree/main/gender_gap_pipeline)
## 1 Introduction
Despite their widespread adoption, Natural Language Processing (NLP) systems are typically trained on data with social and demographic biases. Such biases inevitably propagate to our models and their generated outputs, e.g., by over-representing some demographic groups and under-representing others. It is, therefore, critical to measure, report, and design methods to mitigate these biases, before they can be encoded or even amplified during training (Foulds et al., 2020; Wang and Russakovsky, 2021).
This paper focuses on quantifying gender representation in highly multilingual data (see Figure 1), in particular, for the task of machine translation. Gender is a complex concept that can be defined in many ways depending on the field of study, language or culture (Chandra et al., 1981; Hellinger and Bussmann, 2001; Kramer, 2020). We discuss and define gender in Section 3.1. However, briefly, we define gender bias as the systematic unequal treatment based on one's gender (Blodgett et al., 2020; Stanczak and Augenstein, 2021). Gender bias, when it impacts training data, may decrease the performance of the system on certain gender groups (Hovy et al., 2020). When impacting evaluation data, it may push the system designers to deploy a system that causes harm by favoring one group over others (Mehrabi et al., 2021). For example, a system that translates text that includes feminine nouns more poorly than text with masculine nouns may lead the end users to miss important information or misunderstand the sentence (Savoldi et al., 2021). A system that inaccurately translates a gender-neutral sentence in English e.g. _they are professors_ to a sentence with a masculine noun _ils sont professeurs_ in French may also lead to serious representational harm.
Figure 1: The Gender-GAP Pipeline works by identifying gendered lexical terms and reporting statistics on these lexical matches.

We propose the Gender-GAP pipeline to quantify gender representation bias of multilingual texts using lexical matching as a proxy. Our pipeline can be seen as two main modules.
First, we build a multilingual gender lexicon: starting from a list of about 30 English nouns extracted from the HolisticBias dataset (Smith et al., 2022), split into 3 gendered classes--masculine, feminine, and unspecified. We manually translate them and reassign them to the appropriate gender class for each target language (e.g. "grandfathers", masculine in English, becomes "abuelos", masculine and unspecified in Spanish). Our list is restricted to nouns that refer to people (e.g. man, woman, individual) or to kinship relationships (e.g. dad, mom, parent). Most languages, including genderless languages (Prewitt-Freilino et al., 2012) (e.g. Finnish, Turkish) encode genders through kinship relationships and person terms (Savoldi et al., 2021). For this reason, focusing on a restricted list of kinship and person nouns allow us to scale our lexicon to 55 languages.
Second, we arrive at a straightforward and easily comparable gender distribution by using a word matching counter. Based on our newly collected multilingual lexicon, our pipeline segments each input sentence at the word-level using Stanza (Qi et al., 2020), a state-of-the-art word segmentation tool, and counts the number of occurrences of words in each gender class. As a result, we obtain a gender distribution across 55 languages.
In summary, our contribution is threefold:
* We build an aligned multilingual lexicon that can support measurement of the representation of genders in 55 languages.
* We introduce the Gender-Aware Polyglot pipeline (Gender-GAP), a lexical matching pipeline, and describe the gender distribution observed in popular machine translation training and evaluation data. On average, all three analyzed datasets are biased toward the masculine gender. We find the gender representations to be domain- and language-specific. Additionally, using the Gender-GAP pipeline, we can discover sentences that have been translated with a gender bias.
* We release our pipeline and recommend the reporting of gender representations in machine translation training and evaluation datasets to improve awareness on potential gender biases.
## 2 Related work
The study of biases in text has become more important in recent years, with Large Language Models (LLMs) displaying bias against people depending on their demographics and identity. As a testament to the importance of this topic, many recent papers, including those introducing GPT-3 and 4 (Brown et al., 2020; OpenAI, 2023), PaLM 1 and 2 (Chowdhery et al., 2022; Anil et al., 2023), LLaMa 1 and 2 (Touvron et al., 2023, 2020), analyze how such biases affect their model outputs. Some works even discuss frequencies of gendered terms in their pre-training corpora (Anil et al., 2023; Touvron et al., 2023), as this can affect downstream generation. Despite this acknowledgment of the issue, general purpose tools to measure demographic biases are still fairly rare, and so far have mainly been in English.
However, some have begun to measure demographic biases beyond English. Smith et al. (2022) built a comprehensive analysis dataset covering 13 demographic groups and Costa-jussa et al. (2023) extended it to the multilingual setting. Specific to Machine Translation, Savoldi et al. (2021) discussed best practices in reporting gender bias. Several works (Stanovsky et al., 2019; Prates et al., 2020; Renduchintala et al., 2021; Renduchintala and Williams, 2022) have explored metrics for exposing failures in automatically translating pronoun and occupations, and some have even explored MT model training (Escude Font and Costa-jussa, 2019; Stafanovics et al., 2020) or fine-tuning (Saunders et al., 2020; Corral and Saralegi, 2022; Costa-jussa and de Jorge, 2020) or both (Choubey et al., 2021) to lessen the effect of gender-related biases. More than this, there are initiatives that provide toolkits to generate multilingual balanced datasets in terms of gender (Costa-jussa et al., 2019) from Wikipedia and even balanced in gender within occupations (Costa-jussa et al., 2022).
However, despite the progress made, most of these resources only cover a handful of languages--the community still lacks easy to use, open-source toolkits to measure biases across a large number of languages. In this work, we address this need by showcasing, Gender-GAP, a lexical matching pipeline to measure gender distribution across 55 languages.
## 3 Proposed Data Collection and Pipeline
### Defining Gender
Gender is a complex topic that can be defined in many different ways depending on the field of studies and the context Hellinger and Bussmann (2001). We approach gender from two perspectives:
First, linguistic gender Corbett (2013); Cao and Daume III (2020); Kramer (2020); Stanczak and Augenstein (2021) corresponds to the classification of linguistic units, such as words, into categories based on the gender information they provide. Linguistic gender refers to overlapping notions, such as _grammatical_ and _semantic_ gender, depending on the properties of the language. Grammatical gender implies the classification of nouns, adjectives, and other parts of speech into categories based on their morphosyntactic properties (e.g., in Russian, masculine nouns typically end in -й, feminine nouns typically end in -а or -я, and neuter nouns usually end in -о, -е, or -ё). In many languages, grammatical gender morphology appears on all nouns, regardless of whether they refer to persons, animals, plants, or inanimate objects (e.g., "il libro" _the book_ is a masculine noun in Italian). Semantic gender Corbett (1991) refers to the existence of lexical units whose meaning is associated with a specific cultural notion of peoples' gender(s). For instance, in English, the word "men" is associated with masculine traits, "woman" with feminine ones, etc. Semantic gender may then be present in languages that do not morphologically mark grammatical gender, such as English, Turkish, or Mandarin Chinese. In languages that do mark grammatical gender, grammatical and semantic gender do not always match: for example, in German, the word for girl, "Mädchen", is grammatically neuter, but refers to a person who would fall into our 'feminine' class based on its meaning. For our purposes, we use semantic gender classes in our multilingual lexicon, since we are interested in gender representation.
Our goal is to build and foster inclusive NLP technologies that do not carry, replicate, or amplify social gender biases, which can impact end users and societies negatively by affecting representations of specific groups. However, there are social meanings of gender that are not readily accessible in text, so, we use semantic gender on human words as a proxy for social gender.
Social gender refers to gender as a social construct based on cultural norms and identity Ackerman (2019); Cao and Daume III (2020); Stanczak and Augenstein (2021); Duignan (2023). As highlighted in Ackerman (2019), social gender is defined as the internal gender experienced by a given human individual. For this reason, data-driven analysis of genders in large corpora can only relate to social gender indirectly through linguistic notions of gender(s).3 We assume for our purposes that a list of gendered words can be used to approximate some important aspects of social gender for the purposes of measuring representation disparities.
Footnote 3: We recall that gender is distinct from sex which refers to collections of biological properties of individuals such as genes (e.g., chromosomes), phenotypes (e.g., anatomy) Council of Europe (2023). See Butler (2011) for a discussion of additional factors that complicate this view.
### Aligned Gendered Multilingual Lexicon
To measure gender distribution across 55 languages, we first build a multilingual lexicon. We want this lexicon to be as aligned as possible across languages while also encoding language-specific gender linguistic phenomena.
**Languages.** Our lexicon is available in 55 typologically and phylogenetically diverse languages such as English, Finnish, Zulu, Vietnamese, Ganda, Japanese or Lithuanian, spanning 15 distinct scripts. We report the complete list of languages in Figure 6.
Figure 2: Distribution of the words in our proposed dataset across different languages, gender classes, and number.

**Gender Classes.** We define three semantic gender classes: masculine, feminine and unspecified. The unspecified class aggregates nouns of different sorts. It mainly captures nouns that do not explicitly encode any particular gender (e.g. "person" is considered unspecified in English). For this reason, "unspecified" can be seen as aggregating masculine, feminine and non-binary genders Herdt (2020).
While there exist more complex gender lexica as discussed in Stanczak and Augenstein (2021), they are focused on English and are not always easily translated. Because our goal is to provide a methodology that can be used to evaluate bias across multiple languages, we take a more pared down lexical approach.
**Lexicon creation.** We start by defining a list of about ten high-frequency person nouns per gender class in English. Each noun is included in both its singular and plural form. To keep the list of nouns as universal as possible, we restrict it to person nouns such as masculine "man", feminine "woman", and "person" and its synonyms (e.g. "individual"), which we complement with kinship terms classified by gender (e.g., masculine "father", feminine "mother", neutral "parent"). Our list corresponds to the one defined in the previous work of HolisticBias Smith et al. (2022), which is only available in English.4
Footnote 4: We use the gender noun list v1.1 from HolisticBias
We then translate these nouns into the other languages by reassigning them to the appropriate gender class. A noun in a given gender class may be part of another class (or multiple other classes) in another language. For instance "grandparents" (masculine, plural) becomes "abuelos" in Spanish which is both masculine and unspecified genders.
The English-language source list is passed on to translators who are native speakers of the target language, with language proficiency at CEFR5 level C2 in the source language. For all languages, translators are asked to provide equivalent singular and plural terms in their respective native language, except if any of the source concepts do not exist in the language. For example, not all languages use a distinctive, gender-agnostic term such as the English term _sibling_, distinct from either _brother_ or _sister_. We also consider that the reverse can be true (i.e. that the target language may have more than one term to translate one of the English terms in the source list), and give the translators the possibility to provide additional translations in such cases. For instance, when we translate _women_ into Korean we obtain two distinct terms.
Footnote 5: [https://coe.int/en/web/common-european-framework-reference-languages/level-descriptions](https://coe.int/en/web/common-european-framework-reference-languages/level-descriptions) retrieved 2023-07-24
Additionally, translators are asked to consider the terms in the source list as lemmas (or headwords in dictionary entries) and, if applicable to the given language, to provide relevant morphologically derived forms, including cases and gendered forms. Finally, translators are also encouraged to provide terms covering all language registers, which is necessary because some languages (e.g., Thai or Korean, among others) use several different terms at various levels of formality.
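To make the structure concrete, the resulting aligned lexicon can be thought of as a nested mapping from language to gender class to word forms. The sketch below is a simplified illustration only; the entries and the actual file format of the released lexicon may differ.

```python
# Simplified, illustrative structure for the aligned multilingual lexicon.
GENDER_LEXICON = {
    "eng": {
        "feminine":    {"woman", "women", "mother", "mothers", "grandmother"},
        "masculine":   {"man", "men", "father", "fathers", "grandfather"},
        "unspecified": {"person", "people", "individual", "parent", "parents"},
    },
    "spa": {
        "feminine":    {"mujer", "mujeres", "madre", "madres", "abuela"},
        "masculine":   {"hombre", "hombres", "padre", "padres", "abuelo"},
        # A form may belong to several classes, e.g. "abuelos" is both
        # masculine and unspecified ("grandfathers" / "grandparents").
        "unspecified": {"persona", "personas", "abuelos", "padres"},
    },
}
```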
Figure 3: Diagram of the Gender-GAP pipeline. In the first stage, we process each sentence of the 55 supported languages of the dataset and count the word matches for each category. Once this step is completed, we compute a gender-class score which corresponds to the proportion of gendered nouns matched within all the words in the dataset.

We are cognizant of the fact that this approach presents several limitations. The first limitation occurs when a term could be said to fall into both the unspecified and one of the gendered categories. For example, the Spanish term _padres_ can be used to mean both _fathers_ or _parents_. Some speakers also use the singular form to mean _parent_ (and not necessarily _father_). The second limitation applies to languages that are closer to the synthetic end of the analytic-synthetic spectrum; i.e. languages that are agglutinative or highly fusional (e.g., Zulu, Uzbek, Estonian). This approach may not allow for the detection of many agglutinated or fused word forms. Finally, due to the templated, context-free nature of the lexicon, one term was particularly difficult to disambiguate: _veteran_, which can be used to refer to a soldier or a seasoned professional.6 Cultural differences also had to be considered in addition to the above ambiguity; for example, Japanese translators mentioned the fact that the Japanese equivalent of the term was infrequently used with the first meaning cited above.7
Footnote 6: [https://www.merriam-webster.com/dictionary/veteran](https://www.merriam-webster.com/dictionary/veteran) retrieved 2023-07-24
Footnote 7: See [https://en.wikipedia.org/wiki/Article_9_of_the_Japanese_Constitution](https://en.wikipedia.org/wiki/Article_9_of_the_Japanese_Constitution) retrieved 2023-07-24
Footnote 8: [https://pythonlinp.github.io/docs/2.0/api/tokenize.html](https://pythonlinp.github.io/docs/2.0/api/tokenize.html)
**Lexicon statistics.** In Figure 2 we can see the obtained data distribution across number and gender for the different languages. We notice a few outliers. As described above, translators are asked to provide relevant morphologically derived forms. This makes the number of nouns in Estonian about 7 times larger than the average. For instance, "woman" is translated into _naine_ "a woman", _naise_ "of a woman", _naisele_ "to a woman", etc.
### Proposed Pipeline
Figure 3 shows a diagram of the Gender-GAP pipeline. In the first stage, the counts collection, we work at the sentence level for NTREX and FLORES-200 and at the document level for Common Crawl. We segment each sample at the word level using the Stanza tokenizer available for the given language Qi et al. (2020), except for Cantonese (yue), for which we reuse the model available for Simplified Chinese (zh-hans), and Thai, for which we use PyThaiNLP.8 For the remaining languages we use the simple nltk9 typographic tokenizer (based on whitespace and punctuation marks). We then increment a gender-class counter every time we match a word from the list of words representative of that class. For instance, for the sentence "my mother was a nurse" the pipeline adds +1 to the feminine counter (due to the lexical match of "mother").
Footnote 9: [https://www.nltk.org/api/nltk.tokenize.html](https://www.nltk.org/api/nltk.tokenize.html)
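As a rough illustration of this language-dependent tokenization step, the following Python sketch dispatches between Stanza, PyThaiNLP, and NLTK. It is a simplified approximation of the setup described above: the set of languages routed to Stanza and the exact model options are assumptions made for the example.

```python
import stanza
from nltk.tokenize import wordpunct_tokenize
from pythainlp.tokenize import word_tokenize as thai_word_tokenize

# Illustrative subset of languages for which a Stanza tokenizer is assumed to exist.
STANZA_LANGS = {"en", "es", "fr", "de", "et", "zh-hans"}


def tokenize(text: str, lang: str) -> list[str]:
    """Word-level segmentation mirroring the dispatch described in the text."""
    if lang == "th":
        # Thai has no whitespace word boundaries, so a dedicated tokenizer is used.
        return thai_word_tokenize(text)
    if lang == "yue":
        lang = "zh-hans"  # Cantonese reuses the simplified-Chinese model.
    if lang in STANZA_LANGS:
        # In practice the pipeline object would be built once and cached per language.
        nlp = stanza.Pipeline(lang=lang, processors="tokenize", verbose=False)
        return [word.text for sent in nlp(text).sentences for word in sent.words]
    # Fallback: simple typographic tokenization on whitespace and punctuation marks.
    return wordpunct_tokenize(text)
```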
Once this process has been completed for each sentence in the dataset, we move to the second stage, the reporting of gender proportions, where we define a score for each gender class by dividing the gender-class count by the total number of words in the dataset. As a result, the final gender score does not depend on any linguistic macro-unit such as sentence or document length, but only on the word-level tokenization.
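Putting the two stages together, a minimal sketch of the per-dataset computation is given below. It assumes the hypothetical `GENDER_LEXICON` and `tokenize` helpers sketched earlier and is an approximation of the described procedure; the released implementation may differ, e.g., in how scores are scaled for reporting.

```python
from collections import Counter


def gender_gap_scores(samples: list[str], lang: str) -> dict[str, float]:
    """Stage 1: count lexicon matches per gender class over all samples.
    Stage 2: report each count as a proportion of all words in the dataset."""
    lexicon = GENDER_LEXICON[lang]  # hypothetical lexicon sketched above
    counts = Counter(feminine=0, masculine=0, unspecified=0)
    total_words = 0
    covered_samples = 0  # samples with at least one noun match (the %doc. column)

    for sample in samples:
        words = [w.lower() for w in tokenize(sample, lang)]
        total_words += len(words)
        matched = False
        for word in words:
            for gender_class, nouns in lexicon.items():
                if word in nouns:
                    counts[gender_class] += 1
                    matched = True
        covered_samples += matched

    # The per-class score depends only on the word-level tokenization, not on
    # sentence or document lengths.
    scores = {c: counts[c] / total_words if total_words else 0.0 for c in counts}
    scores["%doc"] = 100 * covered_samples / len(samples) if samples else 0.0
    return scores
```

For the example sentence "my mother was a nurse", only the feminine counter is incremented (by the match on "mother"), and the sample counts toward the coverage figure.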
### Datasets

We run the pipeline on the WMT evaluation datasets FLORES-200 and NTREX-128. FLORES-200 consists of sentences translated from English into 200 languages. NTREX-128 is made of 1997 sentences from news documents originally collected for WMT 2019 Barrault et al. (2019) and translated from English into 128 languages. Both datasets are part of the corpora provided by the WMT shared task. In addition, we run the pipeline on a sample of Common Crawl.10 Common Crawl is a snapshot of crawlable web data that is widely used in the NLP community thanks to the release of the CCNET corpora Wenzek et al. (2020), the OSCAR corpus Ortiz Suarez et al. (2019) and the C4 corpus Raffel et al. (2019). It is used to train NLP systems such as language and machine translation models. We run the pipeline on 100k documents for each language. Our pipeline supports 55 languages, and we run it on the intersection of these datasets with the set of supported languages.
Footnote 10: [https://commoncrawl.org/](https://commoncrawl.org/)
## 5 Analysis
### Quantitative Analysis
We report the average coverage and gender distribution in Table 1, along with the complete tables for the 55 languages in Tables 3-5.
**Coverage** We first look at the number of samples for which at least one noun is found (cf. %doc. in Table 1). We find that, on average, about 10% of samples match at least one noun (between 10.1% and 13.4% depending on the dataset). Coverage is largest for Vietnamese (with up to 45.7% of samples matched) and Thai (28.9% of samples matched) and smallest for Korean (between 1.7% and 2.5% depending on the dataset). This shows that even though our lexicon is restricted to person nouns and kinship relationships, we still cover a very large number of samples on which to measure gender representation.
**Gender Distribution** Table 1 shows the gender representation for the masculine, feminine, and unspecified classes. For better visualization, Figures 4, 5, and 6 report the masculine and feminine representation as a percentage of the total tokens in FLORES, NTREX, and Common Crawl, respectively.
On average, the masculine gender is more represented than the feminine in all three datasets. Accounting for uncertainty, using the standard deviation to define a confidence interval,11 we find that 30/45 languages are skewed toward the masculine gender for NTREX. This includes languages like English, Arabic, French, Spanish, Vietnamese, and Panjabi. The remaining languages are either balanced between masculine and feminine (i.e., \(\Delta\)(|Fem.-Masc.|) is less than two times the confidence interval length) or skewed toward the feminine gender. In addition, we find 16/54 languages skewed toward the masculine gender for all three datasets, suggesting an inherent gender bias in these languages. This includes several Romance languages such as Spanish, French, Catalan, and Italian, along with Belarusian, Indonesian, and Panjabi.
Footnote 11: For a given language, we consider that there is a gender gap between the masculine and feminine genders when \(\Delta\)(|Fem.-Masc.|) is higher than two times the standard error. Otherwise, we consider the dataset to be gender balanced.
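To make this decision rule explicit, a small sketch is given below; the threshold of two standard errors follows the footnote, while the interface (passing the error term in directly) is an assumption made for illustration.

```python
def gender_skew(fem_score: float, masc_score: float, std_err: float) -> str:
    """Decision rule from the footnote: a dataset is considered skewed toward a
    gender class only when the absolute gap exceeds twice the standard error;
    otherwise it is treated as gender balanced."""
    gap = abs(fem_score - masc_score)
    if gap <= 2 * std_err:
        return "balanced"
    return "masculine" if masc_score > fem_score else "feminine"
```

For instance, with the English NTREX row from Table 1 (Fem. 0.166, Masc. 0.203, taking the reported 0.0003 as the error term), `gender_skew(0.166, 0.203, 0.0003)` returns `"masculine"`, consistent with the skew reported above.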
**Impact of Domains** We find 14/55 languages for which the gender representation changes drastically across the different datasets. For instance, the gender differences are much larger in NTREX than in Common Crawl data. More specifically, in Lithuanian the distribution is skewed toward the masculine class for NTREX data, while it is skewed toward the feminine class for Common Crawl data. For Danish, the gender representation is balanced for NTREX but skewed toward the feminine class for Common Crawl data. This shows that the domain highly impacts gender representation: NTREX is based on news data, while Common Crawl covers a large diversity of Web domains.
**Comparing Genders across Languages** In addition, we find a large variability across languages. Some languages like Belarusian (bel) and Swedish (swe) are highly skewed toward the masculine gender class, while other languages, such as Mandarin Chinese (cmn) or Hindi (hin), are much more balanced.

| **Lang** | **Fem.** | **Masc.** | **Uns.** | **Δ(Fem.-Masc.)** | **%doc.** |
| --- | --- | --- | --- | --- | --- |
| _Flores DevTest_ | | | | | |
| eng | 0.121 | 0.065 | **0.379** | 0.056 (0.0003) | 11.2 |
| avg. | 0.128 | 0.144 | **0.302** | 0.097 (0.0003) | 10.1 |
| _NTREX_ | | | | | |
| eng | 0.166 | 0.203 | **0.379** | 0.037 (0.0003) | 15.5 |
| avg. | 0.180 | 0.224 | **0.329** | 0.099 (0.0003) | 13.4 |

Table 1: % Gender Distribution in the WMT evaluation datasets. We report the English distribution and the average across all languages (standard deviation in parentheses). The full tables are available in the appendix, Tables 3-5. We **bold** the most represented gender class and underline the second most represented gender class. The gender gap Δ is defined as the absolute difference between the Feminine and Masculine scores. %doc. refers to Coverage.
We note that gender distribution cannot be compared across languages quantitatively. First, our lexicon is by design based on nouns that are not entirely parallel across languages. Second, our metric highly depends on the number of words in each dataset, which is not comparable across all languages due to differences in morphology and syntax. However, as discussed below (§ 5.2), our pipeline allows us to highlight qualitative differences in how gender is encoded in different languages.
### Qualitative Analysis: Gender representation variation in parallel data
To understand the cause of these gender representation differences across languages, we present several examples in Table 2.
[MISSING_PAGE_POST]
representation. As reported in the previous section, for some languages the gender scores vary highly depending on the domain (e.g., news vs. Web-crawled data). This suggests that when we analyze non-parallel data, the domain may be a prevalent factor explaining gender representation differences across languages. Third, as we observe when analyzing parallel data, gender representation differences may come from biases in the translation itself. For instance, in Sentence 1, the translation explicitly encodes the masculine gender in Spanish and Catalan while being gender-unspecified in English; other translations could have preserved the unspecified gender. Fourth, the way gender is encoded is, at least partly, unique to each language. Some languages are inherently biased toward the masculine gender (e.g., "padres", which may mean both _fathers_ and _parents_ in Spanish). Other languages do not always have genderless nouns. For instance, _siblings_ can only be translated into Lithuanian as "broliai ir seserys" (_brothers and sisters_).
## 6 Conclusion
In this work, we presented Gender-GAP, a large-scale multilingual pipeline to compute gender distribution across 55 languages. We find that broadly used datasets are biased toward the masculine gender. Based on this finding, our primary recommendation for multilingual NLP practitioners is to report the gender distribution along with performance scores. This allows readers and system adopters to be aware of these biases and to take them into account when deploying systems. Secondly, based on our multilingual lexicon, many directions could be taken to mitigate biases in system performance (due to biases in the data). Qian et al. (2022) developed a perturbation-based technique to build NLP systems that are less biased toward specific groups. We envision using our multilingual lexicon to adapt this technique beyond English.
### Limitations and Ethical Statement
**English-centric** We designed the list of gendered nouns starting from the English language and then scaled it to multiple languages. This means that our approach may cover the gender-related nuances of different language families only partially and from an English-centric perspective.
**Non-Binary Gender Modeling** To favor scalability across 55 languages, we chose a lexicon with three gender classes. However, this restricts our approach to binary genders (masculine and feminine), and we only imperfectly measure the distribution of non-binary genders Haynes et al. (2001); Herdt (2020) through the "unspecified" class. We leave for future work the refinement of our lexical categories in order to measure gender across languages at a finer granularity.
**Lexical Matching** The core assumption of this work is that the predefined lexicon described in Section 3.2 gives us a proxy for gender distributions in large datasets. Although our lexicon is obviously not exhaustive, it is simple enough to scale to highly multilingual settings. Future work could consider other types of nouns (beyond family relations or persons), such as gendered occupation nouns, pronouns, etc.
## Acknowledgements
We thank Mark Tygert for his help and feedback on the statistical analysis, Carleigh Wood for her help with the translation, and Gabriel Mejia Gonzalez for his help with the linguistic analysis.